1.
Roman numerals
–
The numeric system represented by Roman numerals originated in ancient Rome and remained the usual way of writing numbers throughout Europe well into the Late Middle Ages. Numbers in this system are represented by combinations of letters from the Latin alphabet; Roman numerals, as used today, are based on seven symbols. The use of Roman numerals continued long after the decline of the Roman Empire. The numbers 1 to 10 are usually expressed in Roman numerals as follows: I, II, III, IV, V, VI, VII, VIII, IX, X. Numbers are formed by combining symbols and adding the values, so II is two and XIII is thirteen. Symbols are placed left to right in order of value. For example, 2014 is written as MMXIV, the year of the games of the XXII Olympic Winter Games.

The standard forms described above reflect typical modern usage rather than a universally accepted convention. Usage in ancient Rome varied greatly and remained inconsistent in medieval times. Roman inscriptions, especially in official contexts, seem to show a preference for additive forms such as IIII and VIIII over subtractive forms such as IV and IX. Both methods appear in documents from the Roman era, even within the same document; double subtractives also occur, such as XIIX or even IIXX instead of XVIII. Sometimes V and L are not used, yielding additive forms such as IIIIII for six. Such variation and inconsistency continued through the medieval period and into modern times. Clock faces that use Roman numerals normally show IIII for four o'clock but IX for nine o'clock; however, this is far from universal: for example, the clock on the Palace of Westminster in London uses IV. Similarly, at the beginning of the 20th century, different representations of 900 appeared in several inscribed dates. For instance, 1910 is shown on Admiralty Arch, London, as MDCCCCX rather than MCMX. Although Roman numerals came to be written with letters of the Roman alphabet, they were originally independent symbols, as in the numerals used by the Etruscans.
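The modern subtractive convention described above (IV for four, CM for nine hundred) can be sketched as a small conversion routine. This is an illustrative implementation of the standard modern forms only, not of the variant additive forms; the function name to_roman is ours.

```python
# Value/symbol pairs in descending order; the subtractive combinations
# (CM, CD, XC, XL, IX, IV) are listed explicitly so greedy matching works.
PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    """Build the numeral left to right, largest values first."""
    out = []
    for value, symbol in PAIRS:
        count, n = divmod(n, value)
        out.append(symbol * count)
    return "".join(out)

print(to_roman(2014))  # MMXIV
print(to_roman(1910))  # MCMX
```

Running the examples from the text: 2014 yields MMXIV and 1910 yields MCMX (rather than the additive MDCCCCX found on Admiralty Arch).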
2.
Unix
–
Many Unix-like operating systems have arisen over the years, of which Linux is the most popular. Among these is Apple's macOS, the Unix version with the largest installed base as of 2014. Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmer users. The system grew larger as it spread in academic circles and users added their own tools. Unix was designed to be portable, multi-tasking, and multi-user in a time-sharing configuration; these concepts are collectively known as the Unix philosophy. By the early 1980s users began seeing Unix as a potential universal operating system.

Under Unix, the system consists of many utilities along with the master control program, the kernel. To mediate access to the hardware, the kernel has special rights, reflected in the division between user space and kernel space. The microkernel concept was introduced in an effort to reverse the trend towards larger kernels and return to a system in which most tasks were completed by smaller utilities. In an era when a standard computer consisted of a disk for storage and a data terminal for input and output, the Unix file model worked well. However, modern systems include networking and other new devices, and as graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. In the 1980s, non-blocking I/O was added and the set of inter-process communication mechanisms was augmented with Unix domain sockets, shared memory, message queues, and semaphores. In microkernel implementations, functions such as network protocols could be moved out of the kernel.

Multics introduced many innovations, but had many problems. Frustrated by the size and complexity of Multics, but not by its aims, the last researchers to leave the Multics project, Ken Thompson, Dennis Ritchie, M. D. McIlroy, and J. F. Ossanna, decided to redo the work on a much smaller scale.
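One of the inter-process communication mechanisms mentioned above, the Unix domain socket, can be demonstrated in a few lines. This sketch assumes a Unix-like system; it uses socketpair to create two connected in-process endpoints rather than a named socket on the filesystem.

```python
import socket

# Create a pair of connected Unix domain stream sockets.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

parent.sendall(b"ping")   # bytes written on one end...
msg = child.recv(4)       # ...appear on the other
child.sendall(b"pong")
reply = parent.recv(4)

parent.close()
child.close()
print(msg, reply)
```

In real programs the two endpoints would typically be held by a parent and a forked child process; the same byte-stream semantics apply.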
The name Unics, a pun on Multics, was suggested for the project in 1970. Peter H. Salus credits Peter Neumann with the pun, while Brian Kernighan claims the coining for himself. In 1972, Unix was rewritten in the C programming language. Bell Labs produced several versions of Unix that are collectively referred to as Research Unix. In 1975, the first source license for UNIX was sold to faculty at the University of Illinois Department of Computer Science; UIUC graduate student Greg Chesson was instrumental in negotiating the terms of this license. During the late 1970s and early 1980s, the influence of Unix in academic circles led to its adoption by commercial startups, giving rise to a range of similar but often incompatible systems such as HP-UX, Solaris, and AIX, along with offerings from vendors such as Sequent. In the late 1980s, AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4. In the 1990s, Unix-like systems grew in popularity as Linux and BSD distributions were developed through collaboration by a worldwide network of programmers.
3.
Computing
–
Computing is any goal-oriented activity requiring, benefiting from, or creating a mathematical sequence of steps known as an algorithm, for example through computers. The field of computing includes computer engineering, software engineering, computer science, and information systems. The ACM Computing Curricula 2005 defined computing as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers." For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult. Because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The fundamental question underlying all computing is: what can be automated?

The term computing is also synonymous with counting and calculating. In earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. Computing is intimately tied to the representation of numbers, but long before abstractions like the number arose, there were mathematical concepts to serve the purposes of civilization. These concepts include one-to-one correspondence and comparison to a standard. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC. Its original style of usage was by lines drawn in sand with pebbles; abaci of a more modern design are still used as calculation tools today. This was the first known computer and the most advanced system of calculation known to date, preceding Greek methods by 2,000 years. The first recorded idea of using electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams.
Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has a form that the computer can use directly to execute the instructions; the same program in its source code form enables a programmer to study and develop the underlying algorithm. Because the instructions can be carried out on different types of computers, a single set of source instructions can be translated into machine instructions for each type of CPU. The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer, and they trigger sequences of simple actions on the executing machine. Those actions produce effects according to the semantics of the instructions.

Computer software, or just software, is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more programs and data held in the storage of the computer for some purpose. In other words, software is a set of programs, procedures, algorithms, and their documentation. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software.
4.
X86-64
–
X86-64 is the 64-bit version of the x86 instruction set. It supports vastly larger amounts of virtual memory and physical memory than is possible on its 32-bit predecessors. X86-64 also provides 64-bit general-purpose registers and numerous other enhancements, and it is fully backward compatible with 16-bit and 32-bit x86 code. The original specification, created by AMD and released in 2000, has been implemented by AMD, Intel, and VIA. The AMD K8 processor was the first to implement the architecture; this was the first significant addition to the x86 architecture designed by a company other than Intel. Intel was forced to follow suit and introduced a modified NetBurst family which was fully software-compatible with AMD's design. VIA Technologies introduced x86-64 in their VIA Isaiah architecture, with the VIA Nano.

The x86-64 specification is distinct from the Intel Itanium architecture, which is not compatible on the native instruction set level with the x86 architecture. AMD64 was created as an alternative to the radically different IA-64 architecture; the first AMD64-based processor, the Opteron, was released in April 2003. AMD's processors implementing the AMD64 architecture include Opteron, Athlon 64, Athlon 64 X2, Athlon 64 FX, Athlon II, Turion 64, Turion 64 X2, Sempron, Phenom, Phenom II, FX, Fusion, and Ryzen.

The primary defining characteristic of AMD64 is the availability of 64-bit general-purpose processor registers and 64-bit integer arithmetic and logical operations, but the designers took the opportunity to make other improvements as well. Some of the most significant changes are described below. Pushes and pops on the stack default to 8-byte strides, and pointers are 8 bytes wide. Additional registers: in addition to increasing the size of the general-purpose registers, the number of named general-purpose registers is increased from 8 to 16; even so, AMD64 still has fewer registers than many common RISC instruction sets or VLIW-like machines such as the IA-64.
However, an AMD64 implementation may have far more internal registers than the number of architectural registers exposed by the instruction set. Additional XMM registers: similarly, the number of 128-bit XMM registers is also increased from 8 to 16. Larger virtual address space: the AMD64 architecture defines a 64-bit virtual address format, of which the low-order 48 bits are used in current implementations; this allows up to 256 TB of virtual address space. The architecture definition allows this limit to be raised in future implementations to the full 64 bits. This compares to just 4 GB for 32-bit x86. It means that very large files can be operated on by mapping the entire file into the process's address space, rather than having to map regions of the file into and out of the address space. Larger physical address space: the original implementation of the AMD64 architecture implemented 40-bit physical addresses; current implementations extend this to 48-bit physical addresses and can therefore address up to 256 TB of RAM. The architecture permits extending this to 52 bits in the future. For comparison, 32-bit x86 processors are limited to 64 GB of RAM in Physical Address Extension mode, or 4 GB of RAM without PAE. Any implementation therefore allows the same physical address limit as under long mode.
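The address-space figures quoted above follow directly from the bit widths, which a few lines of arithmetic make explicit; this is a check of the stated sizes, not architecture-specific code.

```python
# Address-space sizes implied by the bit widths in the text.
GB = 2**30
TB = 2**40

virtual_48bit  = 2**48  # 48 usable bits of the 64-bit virtual format
physical_40bit = 2**40  # original AMD64 physical addressing
physical_48bit = 2**48  # current implementations
legacy_x86     = 2**32  # plain 32-bit x86, no PAE
pae_x86        = 2**36  # 36-bit addressing under PAE

print(virtual_48bit // TB, "TB virtual")    # 256 TB
print(physical_48bit // TB, "TB physical")  # 256 TB
print(legacy_x86 // GB, "GB without PAE")   # 4 GB
print(pae_x86 // GB, "GB with PAE")         # 64 GB
```

The 2^48 = 256 TB and 2^32 = 4 GB values match the limits given in the paragraph above.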
5.
Greek numerals
–
Greek numerals are a system of writing numbers using the letters of the Greek alphabet. These alphabetic numerals are also known as Ionic, Ionian, or Milesian numerals. In modern Greece, they are used for ordinal numbers; for ordinary cardinal numbers, however, Greece uses Arabic numerals. Attic numerals, which were later adopted as the basis for Roman numerals, were the first alphabetic set. They were acrophonic, derived from the first letters of the names of the numbers represented: they ran Ι = 1, Π = 5, Δ = 10, Η = 100, Χ = 1000, and Μ = 10000. The numbers 50, 500, 5000, and 50000 were represented by the letter Π with minuscule powers of ten written in the top right corner. The same system was used outside of Attica, but the symbols varied with the local alphabets; in Boeotia, a different letter was used for 1000.

The present system probably developed around Miletus in Ionia; 19th-century classicists placed its development in the 3rd century BC, the occasion of its first widespread use. The present system uses the 24 letters adopted by Euclid as well as three Phoenician and Ionic ones that were not carried over: digamma, koppa, and sampi. The position of these characters within the numbering system implies that the first two were still in use while the third was not. Greek numerals are decimal, based on powers of 10. The units from 1 to 9 are assigned to the first nine letters of the old Ionic alphabet, from alpha to theta. Each multiple of one hundred from 100 to 900 was then assigned its own separate letter as well. This alphabetic system operates on the additive principle, in which the numeric values of the letters are added together to obtain the total. For example, 241 was represented as σμα (200 + 40 + 1). In ancient and medieval manuscripts, these numerals were eventually distinguished from letters using overbars: α, β, γ, etc.
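The additive principle above can be sketched as a lookup-and-concatenate routine for numbers up to 999. The letter assignments follow the standard Ionic table (with digamma/stigma, koppa, and sampi at 6, 90, and 900); the function name to_ionic is ours, and keraia marks are omitted for simplicity.

```python
# Ionic (Milesian) letter values: units, tens, and hundreds each
# have their own letter, and a number is the concatenation of them.
GREEK = {
    1: "α", 2: "β", 3: "γ", 4: "δ", 5: "ε", 6: "ϛ", 7: "ζ", 8: "η", 9: "θ",
    10: "ι", 20: "κ", 30: "λ", 40: "μ", 50: "ν", 60: "ξ", 70: "ο", 80: "π", 90: "ϟ",
    100: "ρ", 200: "σ", 300: "τ", 400: "υ", 500: "φ", 600: "χ", 700: "ψ", 800: "ω", 900: "ϡ",
}

def to_ionic(n: int) -> str:
    """Decompose n (1-999) into hundreds, tens, units and concatenate."""
    parts = []
    for place in (100, 10, 1):
        digit = (n // place) % 10
        if digit:
            parts.append(GREEK[digit * place])
    return "".join(parts)

print(to_ionic(241))  # σμα, i.e. 200 + 40 + 1
print(to_ionic(666))  # χξϛ, the number of the Beast
```

Note that the same routine reproduces both examples from the text: 241 as σμα and 666 as χξϛ.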
In medieval manuscripts of the Book of Revelation, the number of the Beast, 666, is written as χξϛ. Although the Greek alphabet began with only majuscule forms, surviving papyrus manuscripts from Egypt show that uncial and cursive minuscule forms began early. These new letter forms sometimes replaced the earlier ones, especially in the case of the obscure numerals. The old Q-shaped koppa began to be broken up and simplified, and the numeral for 6 changed several times. During antiquity, the original letter form of digamma came to be avoided in favor of a special numerical one. By the Byzantine era, the letter was known as episemon; this eventually merged with the sigma-tau ligature stigma. In modern Greek, a number of further changes have been made.
6.
Hexadecimal
–
In mathematics and computing, hexadecimal is a positional numeral system with a radix, or base, of 16. It uses sixteen distinct symbols, most often the symbols 0–9 to represent values zero to nine and A–F (or a–f) to represent values ten to fifteen. Hexadecimal numerals are widely used by computer system designers and programmers: as each hexadecimal digit represents four binary digits, it allows a more human-friendly representation of binary-coded values. One hexadecimal digit represents a nibble, which is half of an octet or byte. For example, a byte can have values ranging from 00000000 to 11111111 in binary form, which can be written as 00 to FF in hexadecimal.

In a non-programming context, a subscript is typically used to give the radix. Several notations are used to support hexadecimal representation of constants in programming languages, usually involving a prefix or suffix. The prefix 0x is used in C and related languages, where a value such as 2AF3 would be denoted 0x2AF3. In contexts where the base is not clear, hexadecimal numbers can be ambiguous and confused with numbers expressed in other bases. There are several conventions for expressing values unambiguously. A numerical subscript can give the base explicitly: 159₁₀ is decimal 159, while 159₁₆ is hexadecimal 159, which is equal to 345₁₀. Some authors prefer a text subscript, such as 159decimal and 159hex, or 159d and 159h.

In URLs, special characters are percent-encoded in hexadecimal, as in example.com/name%20with%20spaces, where %20 is the space character. In HTML and XML, numeric character references can use hexadecimal: thus &#x2019; represents the right single quotation mark, Unicode code point number 2019 in hex (8217 in decimal). In the Unicode standard, a character value is represented with U+ followed by the hex value. Color references in HTML, CSS, and X Window can be expressed with six hexadecimal digits prefixed with #; white, for example, is #FFFFFF. CSS also allows 3-hexdigit abbreviations with one hexdigit per component: #FA3 abbreviates #FFAA33. *nix shells, AT&T assembly language, and likewise the C programming language use the 0x prefix for hexadecimal constants; to output an integer as hexadecimal with the printf function family, the format conversion code %X or %x is used.
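Several of these conventions can be exercised directly in Python, itself a 0x-prefix language whose printf-style formatting supports the %X and %x conversion codes mentioned above.

```python
value = 0x2AF3                 # C-style prefix notation
print("%X" % value)            # printf-style, uppercase digits: 2AF3
print("%x" % 255)              # lowercase digits: ff

# The subscript example: 159 in base 16 equals 345 in base 10.
print(int("159", 16))          # 345

# One hex digit per nibble: a byte spans 00 to FF.
byte = 0b11111111
print(format(byte, "02X"))     # FF
```

The int(text, base) call parses a numeral in an explicit radix, mirroring the 159₁₆ = 345₁₀ subscript notation used in prose.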
In Intel-derived assembly languages and Modula-2, hexadecimal is denoted with a suffixed H or h; some assembly languages use the notation H'ABCD'. Ada and VHDL enclose hexadecimal numerals in based numeric quotes: 16#5A3#. For bit vector constants VHDL uses the notation x"5A3". Verilog represents hexadecimal constants in the form 8'hFF, where 8 is the number of bits in the value. The Smalltalk language uses the prefix 16r: 16r5A3. PostScript and the Bourne shell and its derivatives denote hex with the prefix 16#: 16#5A3. For PostScript, binary data can also be expressed as unprefixed consecutive hexadecimal pairs. In early systems, when a Macintosh crashed, one or two lines of hexadecimal code would be displayed under the Sad Mac to tell the user what went wrong.

Common Lisp uses the prefixes #x and #16r; setting the variables *read-base* and *print-base* to 16 can also be used to switch the reader and printer of a Common Lisp system to hexadecimal number representation, so that hexadecimal numbers can be read and printed without the #x or #16r prefix. MSX BASIC, QuickBASIC, FreeBASIC, and Visual Basic prefix hexadecimal numbers with &H: &H5A3. BBC BASIC and Locomotive BASIC use & for hex. The TI-89 and 92 series use a 0h prefix: 0h5A3. ALGOL 68 uses the prefix 16r to denote hexadecimal numbers; binary, quaternary, and octal numbers can be specified similarly.
7.
Factorization
–
In mathematics, factorization or factoring is the decomposition of an object into a product of other objects, or factors, which when multiplied together give the original. For example, the number 15 factors into primes as 3 × 5. In all cases, a product of simpler objects is obtained. The aim of factoring is usually to reduce something to “basic building blocks”, such as numbers to prime numbers. Factoring integers is covered by the fundamental theorem of arithmetic and factoring polynomials by the fundamental theorem of algebra. Viète's formulas relate the coefficients of a polynomial to its roots. The opposite of polynomial factorization is expansion, the multiplying together of polynomial factors to yield an “expanded” polynomial, written as just a sum of terms.

Integer factorization for large integers appears to be a difficult problem: there is no known method to carry it out quickly. Its complexity is the basis of the security of some public key cryptography algorithms. A matrix can also be factorized into a product of matrices of special types; one major example of this uses an orthogonal or unitary matrix together with a triangular matrix. There are different types: QR decomposition, LQ, QL, RQ. This situation is generalized by factorization systems.

By the fundamental theorem of arithmetic, every integer greater than 1 has a unique prime factorization. Given an algorithm for integer factorization, one can factor any integer down to its constituent primes by repeated application of this algorithm; for very large numbers, no efficient classical algorithm is known. Modern techniques for factoring polynomials are fast and efficient, but use sophisticated mathematical ideas; these techniques are used in the construction of computer routines for carrying out polynomial factorization in computer algebra systems. This article is concerned with classical techniques.
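The "repeated application" idea above can be sketched with trial division, the simplest classical algorithm: peel off prime factors smallest-first until only a prime remains. It is complete but far too slow for the large integers used in cryptography.

```python
def prime_factors(n: int) -> list[int]:
    """Factor n > 1 into primes by trial division, smallest first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime as often as it occurs
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(15))    # [3, 5]
print(prime_factors(360))   # [2, 2, 2, 3, 3, 5]
```

The loop only tests divisors up to √n, since a composite n must have a factor no larger than its square root.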
While the general notion of factoring just means writing an expression as a product of simpler expressions, when factoring polynomials this means that the factors are to be polynomials of smaller degree. Thus, while x² − y² = (x − y)(x + y) is a factorization of the expression, another issue concerns the coefficients of the factors. It is not always possible to factor with coefficients of a given type, and a polynomial that cannot be factored in this way is said to be irreducible over that type of coefficient. Thus, x² − 2 is irreducible over the integers and x² + 4 is irreducible over the reals. In the first example, the integer coefficients 1 and −2 can also be thought of as real numbers, and if they are, then x² − 2 = (x − √2)(x + √2) shows that this polynomial factors over the reals. Similarly, since the integers 1 and 4 can be thought of as real and hence complex numbers, x² + 4 splits over the complex numbers, i.e. x² + 4 = (x − 2i)(x + 2i). The fundamental theorem of algebra can be stated as: every polynomial of degree n with complex number coefficients splits completely into n linear factors.
8.
Number
–
Numbers that answer the question "How many?" are 0, 1, 2, 3, and so on; when used to indicate position in a sequence they are ordinal numbers. To the Pythagoreans and the Greek mathematician Euclid, the numbers were 2, 3, 4, 5, and so on; Euclid did not consider 1 to be a number. Numbers like 3 + 1/7 = 22/7, expressible as fractions in which the numerator and denominator are whole numbers, are rational numbers; these make it possible to measure such quantities as two and a quarter gallons and six and a half miles. What we today would consider a proof that a number is irrational, Euclid called a proof that two lengths arising in geometry have no common measure, or are incommensurable; Euclid included proofs of incommensurability of lengths arising in geometry in his Elements.

In the Rhind Mathematical Papyrus, a pair of legs walking forward marked addition. The ancient Chinese were the first known civilization to use negative numbers. Negative numbers came into widespread use as a result of their utility in accounting; they were used by late medieval Italian bankers. By 1740 BC, the Egyptians had a symbol for zero in accounting texts. In the Maya civilization, zero was a numeral with a shell shape as its symbol. The ancient Egyptians represented all fractions in terms of sums of fractions with numerator 1; for example, 2/5 = 1/3 + 1/15. Such representations are known as Egyptian fractions or unit fractions.

The earliest written approximations of π are found in Egypt and Babylon. In Babylon, a clay tablet dated 1900–1600 BC has a geometrical statement that, by implication, treats π as 25/8 = 3.1250. In Egypt, the Rhind Papyrus, dated around 1650 BC, treats π as 256/81 ≈ 3.1605. Astronomical calculations in the Shatapatha Brahmana use a fractional approximation of 339/108 ≈ 3.139. Other Indian sources by about 150 BC treat π as √10 ≈ 3.1622. The first references to the constant e were published in 1618 in the table of an appendix of a work on logarithms by John Napier.
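Representations like 2/5 = 1/3 + 1/15 can be produced by the greedy (Fibonacci–Sylvester) method: repeatedly subtract the largest unit fraction that fits. This is one classical way to construct an Egyptian fraction, not necessarily the method the Egyptians themselves used.

```python
from fractions import Fraction
import math

def egyptian(frac: Fraction) -> list[int]:
    """Greedy expansion of a positive fraction into unit-fraction denominators."""
    denominators = []
    while frac > 0:
        d = math.ceil(1 / frac)       # smallest d with 1/d <= frac
        denominators.append(d)
        frac -= Fraction(1, d)
    return denominators

print(egyptian(Fraction(2, 5)))   # [3, 15], i.e. 2/5 = 1/3 + 1/15
```

For 2/5, the largest unit fraction not exceeding it is 1/3; the remainder 2/5 − 1/3 = 1/15 is already a unit fraction, reproducing the example from the text.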
However, this did not contain the constant itself, but simply a list of logarithms calculated from the constant; it is assumed that the table was written by William Oughtred. The discovery of the constant itself is credited to Jacob Bernoulli. The first known use of the constant, represented by the letter b, was in correspondence from Gottfried Leibniz to Christiaan Huygens in 1690 and 1691. Leonhard Euler introduced the letter e as the base for natural logarithms; Euler started to use the letter e for the constant in 1727 or 1728, in an unpublished paper on explosive forces in cannons, and the first appearance of e in a publication was Euler's Mechanica. While in the subsequent years some researchers used the letter c, e became more common. The first known numeral system is the Babylonian system, which has a base of 60; it was introduced around 3100 BC and is the first known positional numeral system.
9.
100 (number)
–
100 or one hundred is the natural number following 99 and preceding 101. In medieval contexts, it may be described as the short hundred or five score in order to differentiate it from the Germanic long hundred of six score, or 120. The standard SI prefix for a hundred is hecto-. 100 is the basis of percentages, with 100% being a full amount.

100 is the sum of the first nine prime numbers, as well as the sum of pairs of prime numbers, e.g. 3 + 97, 11 + 89, 17 + 83, 29 + 71, 41 + 59. 100 is the sum of the cubes of the first four integers; this is related by Nicomachus's theorem to the fact that 100 also equals the square of the sum of the first four integers: 100 = 10² = (1 + 2 + 3 + 4)². Since 2⁶ + 6² = 100, 100 is a Leyland number. It is divisible by the number of primes below it, 25 in this case. It cannot be expressed as the difference between any integer and the total of coprimes below it, making it a noncototient, and it can be expressed as a sum of some of its divisors, making it a semiperfect number. 100 is a Harshad number in base 10, and also in base 4. There are exactly 100 prime numbers whose digits are in strictly ascending order. 100 is the smallest number whose common logarithm is a prime number.

One hundred is the atomic number of fermium, an actinide. On the Celsius scale, 100 degrees is the boiling temperature of pure water at sea level. The Kármán line lies at an altitude of 100 kilometres above the Earth's sea level and is used to define the boundary between Earth's atmosphere and outer space. There are 100 blasts of the Shofar heard in the service of Rosh Hashana, and a religious Jew is expected to utter at least 100 blessings daily. In the Hindu epic Mahabharata, Dhritarashtra had 100 sons, known as the Kauravas. The United States Senate has 100 Senators. Most currencies are divided into 100 subunits; for example, one euro is one hundred cents. The 100 euro banknotes feature a picture of a Rococo gateway on the obverse. The U.S.
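Several of the arithmetic claims above are easy to verify mechanically; the snippet below checks the prime sum, the cube/square identity behind Nicomachus's theorem, the Leyland form, and the count of primes below 100.

```python
# Verifying a few of the stated properties of 100.
primes9 = [2, 3, 5, 7, 11, 13, 17, 19, 23]        # the first nine primes
assert sum(primes9) == 100

assert sum(k**3 for k in range(1, 5)) == 100      # 1^3 + 2^3 + 3^3 + 4^3
assert sum(range(1, 5)) ** 2 == 100               # (1 + 2 + 3 + 4)^2

assert 2**6 + 6**2 == 100                         # Leyland form x^y + y^x

# Divisible by the number of primes below it (25 primes < 100).
primes_below = [p for p in range(2, 100)
                if all(p % q for q in range(2, p))]
assert len(primes_below) == 25 and 100 % 25 == 0

print("all claims verified")
```

Each assertion corresponds one-to-one with a sentence in the paragraph above.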
hundred-dollar bill has Benjamin Franklin's portrait; the "Benjamin" is the largest U.S. bill in print. American savings bonds of $100 have Thomas Jefferson's portrait, while American $100 treasury bonds have Andrew Jackson's portrait. One hundred is also the number of years in a century and the number of pounds in an American short hundredweight. In Greece, India, Israel, and Nepal, 100 is the police telephone number. In Belgium, 100 is the ambulance and firefighter telephone number. In the United Kingdom, 100 is the operator telephone number.
10.
Integer overflow
–
The most common result of an overflow is that the least significant representable bits of the result are stored; the result is said to wrap around the maximum. An overflow condition gives incorrect results and, particularly if the possibility has not been anticipated, can compromise a program's reliability and security. The register width of a processor determines the range of values that can be represented. In particular, multiplying or adding two integers may result in a value that is unexpectedly small, and subtracting from a small integer may cause a wrap to a large positive value. If the variable has a signed integer type, a program may make the assumption that the variable always contains a positive value; an integer overflow can cause the value to wrap and become negative.

Most computers have two dedicated processor flags to check for overflow conditions. The carry flag is set when the result of an addition or subtraction, considering the operands and result as unsigned numbers, does not fit in the given number of bits; this indicates an overflow with a carry or borrow from the most significant bit. The overflow flag indicates that an overflow has occurred and that the result, represented in two's complement form, would not fit in the given number of bits.

Handling: if it is anticipated that overflow may occur, tests can be inserted into the program to detect when it happens. CPUs generally have a way of detecting this to support addition of numbers larger than their register size, typically using a status bit. Propagation: if a value is too large to be stored, it can be assigned a special value indicating that overflow has occurred; the problem can then be checked for once at the end of a long calculation rather than after each step. This is often supported in floating-point hardware (FPUs). A run-time overflow detection implementation, AddressSanitizer, is also available for C compilers. Languages that provide arbitrary-precision or checked integer arithmetic avoid wraparound by design; using such languages may thus be helpful to mitigate this issue. However, in some such languages, situations are still possible where an integer overflow can occur.
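The wraparound behaviour and its checked alternative can be modelled explicitly. Python's own integers are arbitrary-precision and never overflow, so the sketch below simulates a 32-bit two's-complement word; checked_add mirrors the style of Java 8's Math.addExact.

```python
BITS = 32
MASK = (1 << BITS) - 1
INT_MIN, INT_MAX = -(1 << (BITS - 1)), (1 << (BITS - 1)) - 1

def wrap_add(a: int, b: int) -> int:
    """Two's-complement wraparound, as unchecked hardware addition does."""
    s = (a + b) & MASK                       # keep the low 32 bits
    return s - (1 << BITS) if s > INT_MAX else s

def checked_add(a: int, b: int) -> int:
    """Raise instead of wrapping, in the spirit of Math.addExact."""
    s = a + b
    if not INT_MIN <= s <= INT_MAX:
        raise OverflowError("integer overflow")
    return s

print(wrap_add(INT_MAX, 1))   # wraps to INT_MIN, a negative value
```

Adding 1 to the maximum signed value wraps to the minimum: exactly the "wrap and become negative" failure described above.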
An example is the explicit optimization of a code path which is considered a bottleneck by the profiler. In the case of Common Lisp, this is possible by using a declaration to type-annotate a variable to a machine-size word. In Java 8, there are overloaded methods such as Math.addExact, which throw an exception on overflow. In computer graphics or signal processing, it is typical to work on data that ranges from 0 to 1 or from −1 to 1; an example of this is a grayscale image where 0 represents black and 1 represents white. One operation that one may want to support is brightening the image by multiplying every pixel by a constant. Unanticipated arithmetic overflow is a fairly common cause of program errors. Such overflow bugs may be hard to discover and diagnose because they may manifest themselves only for very large input data sets.