1.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges of implementing computation. Human–computer interaction, for example, considers the challenges in making computers and computations useful and usable.

The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity, and algorithms for performing computations have likewise existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623, and in 1673 Gottfried Leibniz demonstrated a digital mechanical calculator called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. Charles Babbage started developing his Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched-card system derived from the Jacquard loom, making the machine infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as Babbage's dream come true.
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953; the first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and the now well-known IBM brand formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers; still, working with these machines was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has since seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.

2.
Binary number
–
The base-2, or binary, system is a positional notation with a radix of 2. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used internally by almost all modern computers; each binary digit is referred to as a bit. The modern binary number system was devised by Gottfried Leibniz in 1679 and appears in his article Explication de l'Arithmétique Binaire. Systems related to binary numbers had appeared earlier in multiple cultures, including ancient Egypt and China; Leibniz was specifically inspired by the Chinese I Ching.

The scribes of ancient Egypt used two different systems for their fractions, Egyptian fractions and Horus-Eye fractions, and the method used for ancient Egyptian multiplication is also closely related to binary numbers. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus. The I Ching dates from the 9th century BC in China. The binary notation in the I Ching is used to interpret its quaternary divination technique and is based on the Taoist duality of yin and yang. Eight trigrams and a set of 64 hexagrams, analogous to three-bit and six-bit binary numerals, were in use at least as early as the Zhou Dynasty of ancient China, and the Song Dynasty scholar Shao Yong rearranged the hexagrams in a format that resembles modern binary numbers. The Indian scholar Pingala developed a binary system for describing prosody, using binary numbers in the form of short and long syllables; Pingala's Hindu classic Chandaḥśāstra describes the formation of a matrix in order to give a unique value to each meter. The binary representations in Pingala's system increase towards the right. The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450, slit drums with binary tones are used to encode messages across Africa, and sets of binary combinations similar to the I Ching have also been used in traditional African divination systems such as Ifá, as well as in medieval Western geomancy.
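The connection between ancient Egyptian multiplication and binary numbers can be made concrete with a short sketch. The following Python function (an illustrative example, not from the original text) multiplies by repeated halving and doubling: the doubled values that get summed are exactly those where the halved factor has a binary digit of 1.

```python
def egyptian_multiply(a, b):
    """Multiply a and b in the style of the Rhind Papyrus:
    repeatedly halve one factor and double the other,
    summing the doubled values wherever the halved factor
    is odd, i.e. wherever its binary digit is 1."""
    total = 0
    while b > 0:
        if b % 2 == 1:      # current binary digit of b is 1
            total += a
        a *= 2              # double one column
        b //= 2             # halve the other
    return total
```

For example, egyptian_multiply(13, 238) sums the doublings of 13 selected by the binary digits of 238, giving the ordinary product.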
The base-2 system utilized in geomancy had long been applied in sub-Saharan Africa. Leibniz's system uses 0 and 1, like the modern binary numeral system. Leibniz was first introduced to the I Ching through his contact with the French Jesuit Joachim Bouvet, who visited China in 1685 as a missionary, and he saw the I Ching hexagrams as an affirmation of the universality of his own beliefs as a Christian. Binary numerals were central to Leibniz's theology: he believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo, or creation out of nothing, writing that a concept that is not easy to impart to the pagans is the creation ex nihilo through God's almighty power. In 1854, British mathematician George Boole published a paper detailing an algebraic system of logic that would become known as Boolean algebra.
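Leibniz's notation, using only the symbols 0 and 1 in positional form, is exactly what repeated division by two produces. A minimal Python sketch (illustrative, not part of the original text):

```python
def to_binary(n):
    """Return the base-2 digits of a non-negative integer,
    using only the symbols 0 and 1 as in Leibniz's system."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder gives the next bit
        n //= 2
    return "".join(reversed(digits))
```

So to_binary(6) yields "110", matching Python's built-in format(6, "b").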

3.
Hexadecimal
–
In mathematics and computing, hexadecimal is a positional numeral system with a radix, or base, of 16. It uses sixteen distinct symbols, most often the symbols 0–9 to represent values zero to nine and A–F (or a–f) to represent values ten to fifteen. Hexadecimal numerals are widely used by computer system designers and programmers: as each hexadecimal digit represents four binary digits, it allows a more human-friendly representation of binary-coded values. One hexadecimal digit represents a nibble, which is half of an octet or byte. For example, a byte can have values ranging from 00000000 to 11111111 in binary form, which corresponds to 00 through FF in hexadecimal.

In a non-programming context, a subscript is typically used to give the radix, while several notations are used to support hexadecimal representation of constants in programming languages, usually involving a prefix or suffix. The prefix 0x is used in C and related languages, where the hexadecimal value 2AF3 would be denoted as 0x2AF3. In contexts where the base is not clear, hexadecimal numbers can be ambiguous and confused with numbers expressed in other bases. There are several conventions for expressing values unambiguously. A numerical subscript can give the base explicitly: 159₁₀ is decimal 159, while 159₁₆ is hexadecimal 159, which is equal to 345₁₀. Some authors prefer a text subscript, such as 159decimal and 159hex, or 159d and 159h. In URIs, character codes are written as hexadecimal pairs prefixed with %, as in example.com/name%20with%20spaces, where %20 is the space character. In XML and XHTML, characters can be expressed as hexadecimal numeric character references; thus &#x2019; represents the right single quotation mark, Unicode code point number 2019 in hex (8217 in decimal). In the Unicode standard, a character value is represented with U+ followed by the hex value. Color references in HTML, CSS, and X Window can be expressed with six hexadecimal digits prefixed with #, as in #FFFFFF for white; CSS also allows 3-hexdigit abbreviations with one hexdigit per component, so #FA3 abbreviates #FFAA33. *nix shells, AT&T assembly language, and likewise the C programming language use the 0x prefix; to output an integer as hexadecimal with the printf function family, the format conversion code %X or %x is used.
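The nibble correspondence and the %x conversion described above can be illustrated in Python, whose % string operator mirrors C's printf conversion codes (an illustrative sketch, not from the original text):

```python
value = 0x2AF3            # hexadecimal literal, 10995 in decimal

# Each hex digit corresponds to one nibble (four bits):
# 2 = 0010, A = 1010, F = 1111, 3 = 0011
assert value == 0b0010_1010_1111_0011

# printf-style conversion codes, as in C's printf("%X", value):
print("%X" % value)       # prints 2AF3 (upper-case hex digits)
print("%x" % value)       # prints 2af3 (lower-case hex digits)

# A byte spans 00..FF in hex, i.e. 0..255 in decimal:
print("%02X" % 255)       # prints FF
```

The width-and-zero-pad form %02X is the conventional way to render one byte as exactly two hex digits.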
In Intel-derived assembly languages and Modula-2, hexadecimal is denoted with a suffixed H or h, and some assembly languages use the notation H'ABCD'. Ada and VHDL enclose hexadecimal numerals in based numeric quotes, as in 16#5A3#; for bit vector constants VHDL uses the notation x"5A3". Verilog represents hexadecimal constants in the form 8'hFF, where 8 is the number of bits in the value. The Smalltalk language uses the prefix 16r, as in 16r5A3. PostScript and the Bourne shell and its derivatives denote hex with the prefix 16#, as in 16#5A3; for PostScript, binary data can also be expressed as unprefixed consecutive hexadecimal pairs. In early systems, when a Macintosh crashed, one or two lines of hexadecimal code would be displayed under the Sad Mac to tell the user what went wrong. Common Lisp uses the prefixes #x and #16r; setting the variables *read-base* and *print-base* to 16 can also be used to switch the reader and printer of a Common Lisp system to hexadecimal number representation, so that hexadecimal numbers can be read and printed without the #x or #16r prefix. MSX BASIC, QuickBASIC, FreeBASIC, and Visual Basic prefix hexadecimal numbers with &H, as in &H5A3, while BBC BASIC and Locomotive BASIC use & for hex. The TI-89 and 92 series use a 0h prefix, as in 0h5A3, and ALGOL 68 uses the prefix 16r to denote hexadecimal numbers; binary, quaternary, and octal numbers can be specified similarly.
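Whatever surface notation a language uses, the underlying value is the same, and Python's built-in int(text, 16) parses the bare digits in base 16. The helper below is hypothetical and illustrative: it strips a few of the prefix and suffix conventions listed above before parsing.

```python
def parse_hex(text):
    """Parse a hex constant written in a few of the notations above.
    Illustrative helper only; real parsers are language-specific."""
    t = text.strip()
    for prefix in ("0x", "16#", "16r", "&H", "0h", "#x"):
        if t.lower().startswith(prefix.lower()):
            t = t[len(prefix):].rstrip("#")  # Ada/VHDL close with #
            break
    if t and t[-1] in "Hh":                  # Intel-style suffix
        t = t[:-1]
    return int(t, 16)                        # bare digits, base 16
```

With this sketch, "0x5A3", "16#5A3#", "&H5A3", "#x5A3", and "5A3h" all parse to the same integer, 1443.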

4.
ASCII
–
ASCII, abbreviated from American Standard Code for Information Interchange, is a character encoding standard. ASCII codes represent text in computers, telecommunications equipment, and other devices, and most modern character-encoding schemes are based on ASCII, although they support many additional characters. ASCII was developed from telegraph code, and its first commercial use was as a seven-bit teleprinter code promoted by Bell data services. Work on the ASCII standard began on October 6, 1960; the first edition of the standard was published in 1963, underwent a major revision during 1967, and experienced its most recent update during 1986. Compared to earlier telegraph codes, the proposed Bell code and ASCII were both ordered for more convenient sorting of lists and added features for devices other than teleprinters.

Originally based on the English alphabet, ASCII encodes 128 specified characters into seven-bit integers as shown by the ASCII chart above. The characters encoded are the numbers 0 to 9, lowercase letters a to z, uppercase letters A to Z, basic punctuation symbols, and control codes that originated with Teletype machines; for example, lowercase j becomes binary 1101010 and decimal 106. Of the 128 characters ASCII defines, 33 are non-printing control characters that affect how text and space are processed, and 95 are printable characters. The IANA encourages use of the name US-ASCII for Internet uses of ASCII.

The ASA (American Standards Association) became the United States of America Standards Institute and ultimately the American National Standards Institute. There was some debate at the time over whether there should be more control characters rather than a lowercase alphabet. The X3.2.4 task group voted its approval for the change to ASCII at its May 1963 meeting, and the X3 committee made other changes, including adding new characters, renaming some control characters, and moving or removing others. ASCII was subsequently updated as USAS X3.4-1967, then USAS X3.4-1968, then ANSI X3.4-1977. The committee also proposed a 9-track standard for magnetic tape and attempted to deal with some punched card formats.

The X3.2 subcommittee designed ASCII based on the earlier teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit patterns and character symbols, which allows digital devices to communicate with each other and to process, store, and communicate character-oriented information. Before ASCII was developed, the encodings in use included 26 alphabetic characters and 10 numerical digits; some of them, such as ITA2, were in turn based on the 5-bit telegraph code Émile Baudot invented in 1870 and patented in 1874. The committee debated the possibility of a shift function, which would allow more than 64 codes to be represented by a six-bit code. In a shifted code, some character codes determine choices between options for the following character codes; this allows compact encoding, but is less reliable for data transmission, since an error in a shift code can garble a long stretch of the transmission. The standards committee decided against shifting, and so ASCII required at least a seven-bit code. The committee also considered an eight-bit code, since eight bits would allow two four-bit patterns to efficiently encode two digits with binary-coded decimal.
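The encoding of lowercase j mentioned above can be checked directly in Python (an illustrative sketch, not from the original text):

```python
code = ord("j")               # ASCII code point of lowercase j
print(code)                   # prints 106 (decimal value)
print(format(code, "07b"))    # prints 1101010 (seven-bit binary)
assert chr(code) == "j"       # decoding round-trips
```

The "07b" format spec pads to seven binary digits, matching ASCII's seven-bit design.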
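The eight-bit idea the committee considered, packing two four-bit binary-coded-decimal digits into one byte, can be sketched as follows (an illustrative example, not part of the standard):

```python
def pack_bcd(tens, ones):
    """Pack two decimal digits into one byte, one nibble each,
    as in binary-coded decimal (BCD)."""
    assert 0 <= tens <= 9 and 0 <= ones <= 9
    return (tens << 4) | ones    # high nibble, low nibble

def unpack_bcd(byte):
    """Recover the two decimal digits from a BCD byte."""
    return byte >> 4, byte & 0x0F
```

For example, the digits 4 and 2 pack into the byte 0x42, and unpacking recovers them, which is the efficiency the committee had in mind.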