1.
Computing
–
Computing is any goal-oriented activity requiring, benefiting from, or creating a mathematical sequence of steps known as an algorithm, e.g. through computers. The field of computing includes computer engineering, software engineering, computer science, and information systems. The ACM Computing Curricula 2005 defined computing as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers." For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult. Because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The fundamental question underlying all computing is "What can be automated?" The term computing is also synonymous with counting and calculating; in earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. Computing is intimately tied to the representation of numbers, but long before abstractions like the number arose, there were mathematical concepts to serve the purposes of civilization. These concepts include one-to-one correspondence and comparison to a standard. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC. Its original style of usage was by lines drawn in sand with pebbles; abaci of a more modern design are still used as calculation tools today. This was the first known computer and the most advanced system of calculation known to date, preceding Greek methods by 2,000 years. The first recorded idea of using electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams.
Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has a form that the computer can use directly to execute the instructions; the same program in its source code form enables a programmer to study the algorithm it implements. Because the instructions can be carried out in different types of computers, the execution process carries out the instructions of a computer program: instructions express the computations performed by the computer, and they trigger sequences of simple actions on the executing machine. Those actions produce effects according to the semantics of the instructions. Computer software, or just software, is a collection of computer programs and related data that provides the instructions telling a computer what to do and how to do it. Software refers to one or more programs and data held in the storage of the computer for some purpose. In other words, software is a set of programs, procedures, algorithms, and its documentation. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software.
2.
Binary file
–
A binary file is a computer file that is not a text file; the term is often used simply to mean "non-text file". Binary files are usually thought of as being a sequence of bytes, and they typically contain bytes that are intended to be interpreted as something other than text characters. Compiled computer programs are typical examples; indeed, compiled applications are sometimes referred to as binaries, particularly by programmers. But binary files can also contain images, sounds, compressed versions of other files, etc. — in short, any type of file content. Some binary files contain headers: blocks of metadata used by a program to interpret the data in the file. The header often contains a signature or magic number which can identify the format. For example, a GIF file can contain multiple images, and headers are used to identify and describe each block of image data. The leading bytes of the header contain text like GIF87a or GIF89a that identifies the binary as a GIF file. If a binary file does not contain any headers, it may be called a flat binary file. To send binary files through certain systems that do not allow all data values, they are often translated into a plain-text representation; encoding the data has the disadvantage of increasing the file size during the transfer, as well as requiring translation back into binary after receipt. See binary-to-text encoding for more on this subject. A hex editor or viewer may be used to view file data as a sequence of hexadecimal values for the corresponding bytes of a binary file. If a binary file is opened in a text editor, each group of eight bits will typically be translated as a single character. Other types of viewers simply replace the non-printable characters with spaces, revealing only the human-readable text. This type of view is useful for inspecting a binary file in order to find passwords in games or hidden text in non-text files.
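As a sketch of how a program might use such a magic number, the following helper (the function name is ours, not part of any standard library) checks the leading bytes of a file against the two GIF signatures mentioned above:

```python
def looks_like_gif(path):
    """Return True if the file begins with a GIF87a or GIF89a signature."""
    with open(path, "rb") as f:
        header = f.read(6)  # the GIF signature occupies the first six bytes
    return header in (b"GIF87a", b"GIF89a")
```

A file that fails this check may still be a valid image in some other format; the signature only identifies the container, which is exactly why headers matter for interpreting binary data.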
It can even be used to inspect suspicious files for unwanted effects; for example, the user can see any URL or email address to which the suspect software may attempt to connect in order to upload unapproved data, without having to treat the file as an executable and run it. Standards are very important to binary files. For example, a binary file interpreted by the ASCII character set will result in text being displayed, while a custom application can interpret the file differently: a byte may be a sound, or a pixel, or even an entire word. Binary itself is meaningless until an executed algorithm defines what should be done with each bit, byte, word, or block. Thus, just examining the binary and attempting to match it against known formats can lead to the wrong conclusion as to what it actually represents. This fact can be used in steganography, where an algorithm interprets a binary data file differently to reveal hidden content; without the algorithm, it is impossible to tell that hidden content exists.
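A minimal sketch of the kind of view such a hex viewer produces (the function name is illustrative, not a standard API): each output line shows an offset, the hex value of each byte, and a printable-text column with non-printable bytes replaced.

```python
def hex_dump(data: bytes, width: int = 16) -> str:
    """Render bytes as offset, hex pairs, and a printable-text column,
    the way a typical hex viewer presents a binary file."""
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        # non-printable bytes are shown as '.', mirroring common hex viewers
        text_part = "".join(chr(b) if 0x20 <= b < 0x7f else "." for b in chunk)
        lines.append(f"{offset:08x}  {hex_part:<{width * 3}} {text_part}")
    return "\n".join(lines)
```

Running this over the first bytes of a GIF file would show the hex values 47 49 46 38 39 61 alongside the readable text "GIF89a" — the human-readable fragments the text describes.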
3.
Floating-point arithmetic
–
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision. A number is, in general, represented approximately to a fixed number of significant digits and scaled using an exponent in some fixed base. For example, 1.2345 = 12345 × 10^−4, where 12345 is the significand, 10 is the base, and −4 is the exponent. The term floating point refers to the fact that a number's radix point can "float"; that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated by the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A consequence of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers; however, since the 1990s the most commonly encountered representation is that defined by the IEEE 754 Standard. A floating-point unit is a part of a computer system designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits, and there are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the digit string can be of any length; if the radix point is not specified, the string implicitly represents an integer. In fixed-point systems, a position in the string is specified for the radix point, so a fixed-point scheme might be to use a string of 8 decimal digits with the radix point in the middle. In scientific notation, the scaling factor, as a power of ten, is indicated separately at the end of the number. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of: a signed digit string of a given length in a given base.
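The decomposition in the example above (1.2345 = 12345 × 10^−4) can be reproduced with Python's decimal module; the helper name is ours, not a standard one:

```python
from decimal import Decimal

def decompose(literal: str):
    """Split a decimal literal into an integer significand and a base-10 exponent."""
    sign, digits, exponent = Decimal(literal).as_tuple()
    significand = int("".join(map(str, digits)))
    return (-significand if sign else significand), exponent

# 1.2345 is the significand 12345 scaled by 10 to the power -4
```

Multiplying the significand back by the base raised to the exponent recovers the original value, which is exactly the scaling relationship the text describes.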
This digit string is referred to as the significand, mantissa, or coefficient; the length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand — often just after or just before the most significant digit — and this article generally follows the convention that the radix point is set just after the most significant digit. The second component is a signed integer exponent, which modifies the magnitude of the number. Using base 10 as an example, the number 152853.5047, which has ten decimal digits of precision, is represented as the significand 1528535047 together with 5 as the exponent. In storing such a number, the base need not be stored, since it will be the same for the entire range of supported numbers. Symbolically, this value is s ÷ b^(p−1) × b^e, where s is the significand, p is the precision, b is the base, and e is the exponent.
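Under the convention just described (radix point just after the most significant digit), the formula s ÷ b^(p−1) × b^e can be checked numerically; this is a sketch of the arithmetic, not any particular hardware encoding:

```python
def fp_value(s: int, p: int, b: int, e: int) -> float:
    """Value of a floating-point number with integer significand s,
    precision p, base b and exponent e: s / b**(p-1) * b**e."""
    return s * b ** (e - (p - 1))

# the worked example: significand 1528535047, precision 10, base 10, exponent 5
```

Dividing by b^(p−1) places the radix point just after the most significant digit (1.528535047), and multiplying by b^e then scales it to 152853.5047.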
4.
Computer memory
–
In computing, memory refers to the computer hardware devices used to store information for immediate use in a computer; it is synonymous with the term primary storage. Computer memory operates at a high speed, for example random-access memory (RAM), as a distinction from storage that provides slow-to-access program and data storage. If needed, contents of the computer memory can be transferred to secondary storage. An archaic synonym for memory is store. There are two main kinds of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are flash memory and ROM, PROM, EPROM and EEPROM memory. Most semiconductor memory is organized into memory cells or bistable flip-flops, each storing one bit; flash memory organization includes both one bit per cell and multiple bits per cell. The memory cells are grouped into words of fixed word length. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. This implies that processor registers normally are not considered as memory, since they only store one word. Typical secondary storage devices are hard disk drives and solid-state drives. In the early 1940s, memory technology often permitted a capacity of only a few bytes. The next significant advance in computer memory came with acoustic delay line memory, developed by J. Presper Eckert in the early 1940s. Delay line memory was limited to a capacity of up to a few hundred thousand bits to remain efficient. Two alternatives to the delay line, the Williams tube and the Selectron tube, originated in 1946, both using electron beams in glass tubes as a means of storage. Using cathode ray tubes, Fred Williams invented the Williams tube, which proved more capacious than the Selectron tube and less expensive, but also frustratingly sensitive to environmental disturbances. Efforts began in the late 1940s to find non-volatile memory. Jay Forrester, Jan A. Rajchman and An Wang developed magnetic core memory, which allowed for recall of memory after power loss. Magnetic core memory became the dominant form of memory until the development of transistor-based memory in the late 1960s. Developments in technology and economies of scale have made possible so-called Very Large Memory computers. The term memory, when used with reference to computers, generally refers to Random Access Memory, or RAM. Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). SRAM retains its contents as long as the power is connected and is easy to interface, but uses six transistors per bit. SRAM is generally not worthwhile for desktop system memory, where DRAM dominates, but is commonplace in small embedded systems, which might only need tens of kilobytes or less. Forthcoming volatile memory technologies that aim to replace or compete with SRAM and DRAM include Z-RAM and A-RAM. Non-volatile memory is computer memory that can retain the stored information even when not powered.
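The 2^N relationship between address width and capacity described above can be sketched directly (the function name is illustrative):

```python
def memory_capacity(address_bits: int, word_bytes: int):
    """An N-bit binary address can select 2**N distinct words;
    total capacity is that word count times the word size in bytes."""
    words = 2 ** address_bits
    return words, words * word_bytes
```

A 16-bit address, for instance, reaches 65,536 words; with 2-byte words that is 131,072 bytes (128 KiB). Each additional address bit doubles the number of addressable words.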
5.
IEEE 754
–
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point computation established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE). The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units now use the IEEE 754 standard. The international standard ISO/IEC/IEEE 60559:2011 has been approved for adoption through JTC1/SC25 under the ISO/IEEE PSDO Agreement and published. The binary formats in the original standard are included in the new standard along with three new basic formats. To conform to the current standard, an implementation must implement at least one of the basic formats as both an arithmetic format and an interchange format. As of September 2015, the standard was being revised to incorporate clarifications. An IEEE 754 format is a set of representations of numerical values and symbols; a format may also include how the set is encoded. A format comprises, first, finite numbers, which may be either base 2 or base 10. Each finite number is described by three integers: s = a sign (zero or one), c = a significand, and q = an exponent. The numerical value of a finite number is (−1)^s × c × b^q, where b is the base, also called the radix. For example, if the base is 10, the sign is 1 (indicating negative), the significand is 12345, and the exponent is −3, the value is −12.345. A format also comprises two kinds of NaN: a quiet NaN and a signaling NaN. A NaN may carry a payload intended for diagnostic information indicating the source of the NaN; the sign of a NaN has no meaning, but it may be predictable in some circumstances. In the decimal32 format, for example, the smallest non-zero positive number that can be represented is 1 × 10^−101 and the largest is 9999999 × 10^90. The numbers −b^(1−emax) and b^(1−emax) are the smallest normal numbers; non-zero numbers between these smallest numbers are called subnormal numbers. Zero values are finite values with significand 0; these are signed zeros, and the sign bit specifies whether a zero is +0 or −0.
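The (−1)^s × c × b^q rule above can be evaluated directly; the helper below and its example values are illustrative, not part of the standard's text:

```python
def finite_value(s: int, c: int, q: int, b: int = 10) -> float:
    """Numerical value of an IEEE 754 finite number described by
    sign s (0 or 1), integer significand c, and exponent q in base b."""
    return (-1) ** s * c * b ** q
```

With sign 1, significand 12345, and exponent −3 in base 10, this evaluates to −12.345 (up to binary floating-point rounding, since Python's float is itself an IEEE 754 binary64).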
Some numbers may have several representations in the model that has just been described. For instance, if b = 10 and p = 7, −12.345 can be represented by −12345 × 10^−3, −123450 × 10^−4, and −1234500 × 10^−5. However, for most operations, such as arithmetic operations, the result does not depend on the representation of the inputs. For the decimal formats, any representation is valid, and the set of these representations is called a cohort; when a result can have several representations, the standard specifies which member of the cohort is chosen. For the binary formats, the representation is made unique by choosing the smallest representable exponent. For numbers with an exponent in the normal range, the leading bit of the significand will always be 1. Consequently, the leading 1 bit can be implied rather than explicitly present in the memory encoding; this rule is called the leading bit convention, the implicit bit convention, or the hidden bit convention.
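The hidden bit convention can be observed by unpacking a binary64 value into its stored fields; this sketch uses Python's struct module to reinterpret the bits:

```python
import struct

def binary64_fields(x: float):
    """Split an IEEE 754 binary64 value into its sign bit, stored
    (biased) exponent, and 52-bit fraction. For normal numbers the
    leading 1 of the significand is implicit and not stored."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF
    fraction = bits & ((1 << 52) - 1)
    return sign, exponent, fraction
```

For 1.0 the stored fraction is 0: the significand's leading 1 is entirely carried by the hidden bit, and the biased exponent 1023 encodes 2^0.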
6.
3dfx Interactive
–
3dfx Interactive was a company headquartered in San Jose, California, founded in 1994, that specialized in the manufacturing of 3D graphics processing units and, later, graphics cards. It was a pioneer in the field from the late 1990s until 2000. The company's flagship product was the Voodoo Graphics, an add-in card that accelerated 3D graphics. The hardware accelerated only 3D rendering, relying on the PC's existing video card for 2D support. Despite this limitation, the Voodoo Graphics product and its follow-up, Voodoo2, were popular, and it became standard for 3D games to offer support for the company's Glide API. 3dfx declined rapidly beginning in the late 1990s and was acquired by Nvidia, mostly for its intellectual property rights, after going bankrupt in 2002. Founded in 1994 by Ross Smith, Gary Tarolli and Scott Sellers with backing from Gordie Campbell's TechFarm, 3dfx released its Voodoo Graphics chip in 1996. 3dfx gained initial fame in the arcade market; the first arcade machine in which 3dfx Voodoo Graphics hardware was used was ICE's Home Run Derby, a game released in 1996. Later that year it was featured in popular titles such as Atari's San Francisco Rush. Prior to affordable 3D hardware, games such as Doom and Quake had compelled video game players to move from their 80386s to 80486s, and then to the Pentium. A typical Voodoo Graphics PCI expansion card consisted of a DAC and a frame buffer processor; the RAM and graphics processors operated at 50 MHz. It provided only 3D acceleration, so the computer also needed a video controller for conventional 2D software. A pass-through VGA cable daisy-chained the video controller to the Voodoo. The method used to engage the Voodoo's output circuitry varied between cards, with some using mechanical relays while others utilized purely electronic components; the mechanical relays emitted an audible clicking sound when they engaged and disengaged. The Voodoo's primary competition was from PowerVR and Rendition.
PowerVR produced a similar 3D-only add-on card with capable 3D support, while 3dfx also saw intense competition in the market from cards that offered the combination of 2D and 3D acceleration. While these cards, such as the Matrox Mystique, S3 ViRGE and ATI 3D Rage, offered inferior 3D acceleration, their lower cost made them attractive to many buyers; in practice, however, the abstraction layer's overhead crippled their performance. While there were many games that used Glide, the killer application for Voodoo Graphics was the MiniGL driver, developed to allow hardware acceleration of the game Quake by id Software. MiniGL implemented only the subset of OpenGL used by Quake. In August 1997, 3dfx released the Voodoo Rush chipset, combining a Voodoo chip with a 2D chip that lay on the same circuit board, eliminating the need for a separate VGA card. The Rush had the same specifications as Voodoo Graphics but did not perform as well, because the Rush chipset had to share memory bandwidth with the CRTC of the 2D chip. Furthermore, the Rush chipset was not directly present on the PCI bus but had to be programmed through linked registers of the 2D chip. The typical performance hit was around 10% compared to Voodoo Graphics, and even worse in windowed mode.
7.
Nvidia
–
Nvidia Corporation is an American technology company based in Santa Clara, California. It designs graphics processing units (GPUs) for the gaming and professional markets, as well as system-on-a-chip units for the mobile computing market. Its primary GPU product line, labeled GeForce, is in competition with Advanced Micro Devices' Radeon products. Nvidia expanded its presence in the gaming industry with its handheld SHIELD Portable and SHIELD Tablet. Since 2014, Nvidia has shifted to become a platform company focused on four markets: gaming, professional visualization, data centers, and automotive. In addition to GPU manufacturing, Nvidia provides parallel processing capabilities to researchers, and its products are deployed in supercomputing sites around the world. More recently, it has moved into the mobile computing market. In addition to AMD, its competitors include Intel and Qualcomm, and Nvidia is now also focused on artificial intelligence. The name of the company comes from Invidia in Roman mythology, who corresponds to Nemesis. The RIVA TNT in 1998 solidified Nvidia's reputation for capable hardware. Autumn 1999 saw the release of the GeForce, most notably introducing on-board transformation and lighting. Running at 120 MHz and featuring four pixel pipelines, it implemented advanced video acceleration, motion compensation, and hardware sub-picture alpha blending. The GeForce outperformed existing products by a wide margin. Due to the success of its products, Nvidia won the contract to develop the graphics hardware for Microsoft's Xbox game console, which earned Nvidia a $200 million advance. However, the project drew the time of many of its best engineers away from other projects; in the short term this did not matter, and the GeForce2 GTS shipped in the summer of 2000. In December 2000, Nvidia reached an agreement to acquire the assets of its one-time rival 3dfx; the acquisition process was finalized in April 2002. In July 2002, Nvidia acquired Exluna for an undisclosed sum.
Exluna made software rendering tools, and the personnel were merged into the Cg project. In August 2003, Nvidia acquired MediaQ for approximately US$70 million. On April 22, 2004, Nvidia acquired iReady, also a provider of high-performance TCP/IP and iSCSI controllers. In December 2004, it was announced that Nvidia would assist Sony with the design of the graphics processor in the PlayStation 3 game console. In March 2006, it emerged that Nvidia would deliver the RSX to Sony as an IP core; under the agreement, Nvidia would provide ongoing support to port the RSX to Sony's fabs of choice, as well as die shrinks to 65 nm. This practice contrasted with its business arrangement with Microsoft, in which Nvidia managed production. Meanwhile, in May 2005 Microsoft chose to license a design by ATI and to make its own manufacturing arrangements for the Xbox 360 graphics hardware, as had Nintendo for the Wii console.
8.
Microsoft
–
Microsoft's best-known software products are the Microsoft Windows line of operating systems, the Microsoft Office suite, and the Internet Explorer and Edge web browsers. Its flagship hardware products are the Xbox video game consoles and the Microsoft Surface tablet lineup. As of 2016, it was the world's largest software maker by revenue, and one of the world's most valuable companies. Microsoft was founded by Paul Allen and Bill Gates on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800. It rose to dominate the personal computer operating system market with MS-DOS in the mid-1980s, followed by Microsoft Windows. The company's 1986 initial public offering, and subsequent rise in its share price, created three billionaires and an estimated 12,000 millionaires among Microsoft employees. Since the 1990s, it has increasingly diversified from the operating system market and has made a number of corporate acquisitions. In May 2011, Microsoft acquired Skype Technologies for $8.5 billion. In June 2012, Microsoft entered the personal computer production market for the first time, with the launch of the Microsoft Surface, a line of tablet computers. The word "Microsoft" is a portmanteau of microcomputer and software. Paul Allen and Bill Gates, childhood friends with a passion for computer programming, sought to make a successful business utilizing their shared skills. In 1972 they founded their first company, named Traf-O-Data, which offered a computer that tracked and analyzed automobile traffic data. Allen went on to pursue a degree in computer science at Washington State University. The January 1975 issue of Popular Electronics featured Micro Instrumentation and Telemetry Systems's (MITS) Altair 8800 microcomputer. Allen suggested that they could program a BASIC interpreter for the device; after a call from Gates claiming to have a working interpreter, MITS requested a demonstration. Since they didn't actually have one, Allen worked on a simulator for the Altair while Gates developed the interpreter. They officially established Microsoft on April 4, 1975, with Gates as the CEO.
Allen came up with the name "Micro-Soft", as recounted in a 1995 Fortune magazine article. In August 1977 the company formed an agreement with ASCII Magazine in Japan, resulting in its first international office. The company moved to a new home in Bellevue, Washington in January 1979. Microsoft entered the operating system business in 1980 with its own version of Unix, called Xenix; however, it was MS-DOS that solidified the company's dominance. For this deal, Microsoft purchased a CP/M clone called 86-DOS from Seattle Computer Products, branding it as MS-DOS. Following the release of the IBM PC in August 1981, Microsoft retained ownership of MS-DOS. Since IBM copyrighted the IBM PC BIOS, other companies had to reverse-engineer it in order for non-IBM hardware to run as IBM PC compatibles. Due to various factors, such as MS-DOS's available software selection, Microsoft became the leading PC operating system vendor. The company expanded into new markets with the release of the Microsoft Mouse in 1983, as well as with a publishing division named Microsoft Press. Paul Allen resigned from Microsoft in 1983 after developing Hodgkin's disease. While jointly developing a new OS with IBM in 1984, OS/2, Microsoft released Microsoft Windows, a graphical extension for MS-DOS, on November 20, 1985. Once Microsoft informed IBM of Windows NT, the OS/2 partnership deteriorated. In 1990, Microsoft introduced its office suite, Microsoft Office.
9.
GeForce FX series
–
The GeForce FX or GeForce 5 series is a line of graphics processing units from the manufacturer NVIDIA. NVIDIA's GeForce FX series is the fifth generation of the GeForce line. With GeForce 3, NVIDIA introduced programmable shader functionality into their 3D architecture, and the GeForce 4 Ti was an enhancement of the GeForce 3 technology. With real-time 3D graphics technology continually advancing, the release of DirectX 9.0 brought further refinement of programmable pipeline technology with the arrival of Shader Model 2.0. The GeForce FX series is NVIDIA's first generation of Direct3D 9-compliant hardware. The series was manufactured on TSMC's 130 nm fabrication process. It is compliant with Shader Model 2.0/2.0A, allowing more flexibility in complex shader/fragment programs and much higher arithmetic precision. It supports a number of new memory technologies, including DDR2, GDDR2 and GDDR3. The anisotropic filtering implementation has potentially higher quality than previous NVIDIA designs, and anti-aliasing methods have been enhanced, with additional modes available compared to GeForce 4. Memory bandwidth and fill-rate optimization mechanisms have been improved, and some members of the series offer double fill-rate in z-buffer/stencil-only passes. The series also brought improvements to NVIDIA's video processing hardware, in the form of the Video Processing Engine; the primary addition, compared to previous NVIDIA video processors, was per-pixel video deinterlacing. The initial version of the GeForce FX was one of the first cards to come equipped with a large dual-slot cooling solution, called Flow FX. The cooler was very large in comparison to ATI's small, single-slot cooler on the 9700 series, and was jokingly referred to as the "Dustbuster" due to its level of fan noise.
The advertising campaign for the GeForce FX featured the Dawn technology demo; NVIDIA touted it as "The Dawn of Cinematic Computing", while critics noted that this was the strongest case yet of using sex appeal to sell graphics cards. NVIDIA debuted a new campaign to motivate developers to optimize their titles for NVIDIA hardware at the Game Developers Conference in 2002, and developers also had extensive access to NVIDIA engineers, who helped produce code optimized for NVIDIA products. Hardware based on the NV30 project didn't launch until near the end of 2002. GeForce FX is an architecture designed with DirectX 7, 8 and 9 software in mind. Its weak performance in processing Shader Model 2 programs is caused by several factors: the NV3x design has less overall parallelism and calculation throughput than its competitors, and when 32-bit shader code is used, performance is severely hampered. Proper instruction ordering and instruction composition of shader code is critical for making the most of the available computational resources. NVIDIA's initial release, the GeForce FX 5800, was intended as a high-end part; at the time, there were no GeForce FX products for the other segments of the market.
10.
Industrial Light & Magic
–
Industrial Light & Magic (ILM) is an American motion picture visual effects company that was founded in May 1975 by George Lucas. It is a division of the film production company Lucasfilm, which Lucas founded, and is also the original company of the animation studio Pixar. ILM originated in Van Nuys, California, then moved to San Rafael in 1978. In 2012, The Walt Disney Company acquired ILM as part of its purchase of Lucasfilm. Lucas wanted his 1977 film Star Wars to include visual effects that had never been seen on film before. After discovering that the effects department at 20th Century Fox was no longer operational, Lucas approached Douglas Trumbull, famous for the effects on 2001: A Space Odyssey. Trumbull declined, as he was committed to working on Steven Spielberg's film Close Encounters of the Third Kind, but suggested his assistant John Dykstra. Dykstra brought together a team of college students, artists, and engineers, and Lucas named the group Industrial Light and Magic, which became the Special Visual Effects department on Star Wars. Alongside Dykstra, other leading members of the original ILM team were Ken Ralston, Richard Edlund, Dennis Muren, Joe Johnston, Phil Tippett, Steve Gawley, Lorne Peterson, and Paul Huston. In late 1978, while in pre-production for The Empire Strikes Back, Lucas reformed most of the team into Industrial Light & Magic in Marin County. ILM went on to produce effects for films such as E.T. the Extra-Terrestrial, *batteries not included, The Abyss, and Flubber, and also provided work for Avatar, alongside Weta Digital. In addition to their work for George Lucas, ILM also collaborates with Steven Spielberg on most films that he directs; Dennis Muren has acted as Visual Effects Supervisor on many of these films. After the success of the first Star Wars movie, Lucas became interested in using computer graphics on the sequel, so he contacted Triple-I, known for their early computer effects in movies like Westworld and Futureworld, but he found it to be too expensive and returned to handmade models.
But the test had shown him it was possible, and he decided he would create his own computer graphics department instead. One of Lucas's employees was given the task of finding the right people to hire; his search led him to NYIT, where he found Edwin Catmull. Catmull and others accepted Lucas's job offer, and a new computer division at ILM was created in 1979 with the hiring of Ed Catmull as the first NYIT employee to join Lucasfilm. John Lasseter, who was hired a few years later, worked on computer animation as part of ILM's contribution to Young Sherlock Holmes. The Graphics Group was later sold to Steve Jobs and named Pixar. In 2000, ILM created the OpenEXR format for high-dynamic-range imaging.
11.
Dynamic range
–
Dynamic range, abbreviated DR, DNR, or DYR, is the ratio between the largest and smallest values that a certain quantity can assume. It is often used in the context of signals, like sound and light, and is measured either as a ratio or as a base-10 or base-2 logarithmic value of the ratio between the smallest and largest signal values. The human senses of sight and hearing have a high dynamic range. A human is capable of hearing anything from a quiet murmur in a room to the sound of the loudest heavy metal concert; such a difference can exceed 100 dB, which represents a factor of 100,000 in amplitude. A human cannot, however, perform these feats of perception at both extremes of the scale at the same time. The eyes take time to adjust to different light levels, and the instantaneous dynamic range of human audio perception is similarly subject to masking, so that, for example, a whisper cannot be heard in loud surroundings. In practice, it is difficult to achieve the full dynamic range experienced by humans using electronic equipment. For example, a good-quality LCD has a dynamic range of around 1000:1, and paper reflectance can achieve a dynamic range of about 100:1. A professional ENG camcorder such as the Sony Digital Betacam achieves a dynamic range of greater than 90 dB in audio recording. A nighttime scene will usually contain duller colours and will often be lit with blue lighting. The dynamic range of human hearing is roughly 140 dB, varying with frequency, from the threshold of hearing to the threshold of pain; the dynamic range of music as normally perceived in a concert hall does not exceed 80 dB. The dynamic range differs from the ratio of the maximum to minimum amplitude a given device can record, as a properly dithered recording device can record signals well below the noise RMS amplitude. Digital audio with undithered 20-bit digitization is theoretically capable of 120 dB of dynamic range; 24-bit digital audio calculates to 144 dB of dynamic range.
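The decibel figures quoted above follow from the standard 20·log10 amplitude-ratio formula; a quick numerical check (the helper names are ours):

```python
import math

def db_from_amplitude_ratio(ratio: float) -> float:
    """Dynamic range in decibels for a given amplitude ratio."""
    return 20 * math.log10(ratio)

def db_from_bits(bits: int) -> float:
    """Theoretical dynamic range of undithered n-bit quantization,
    treating the largest/smallest amplitude ratio as 2**bits."""
    return db_from_amplitude_ratio(2 ** bits)
```

An amplitude factor of 100,000 gives exactly 100 dB, while 20 bits give roughly 120 dB and 24 bits roughly 144 dB, matching the figures in the text (each bit contributes about 6.02 dB).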
Multiple noise processes determine the noise floor of a system; noise can be picked up from microphone self-noise, preamp noise, wiring and interconnection noise, media noise, etc. Early 78 rpm phonograph discs had a dynamic range of up to 40 dB, soon reduced to 30 dB. Ampex tape recorders in the 1950s achieved 60 dB in practical usage, and the peak of professional analog magnetic recording tape technology reached 90 dB of dynamic range in the midband frequencies at 3% distortion, or about 80 dB in practical broadband applications. Dolby SR noise reduction gave a further 20 dB of increased range, resulting in 110 dB in the midband frequencies at 3% distortion. Specialized bias and record-head improvements by Nakamichi and Tandberg, combined with Dolby C noise reduction, yielded a 72 dB dynamic range for the cassette.
12.
Silicon Graphics
–
Silicon Graphics, Inc. (SGI) was an American high-performance computing manufacturer, producing computer hardware and software. Early systems were based on the Geometry Engine that Jim Clark and Marc Hannah had developed at Stanford University. For much of its history, the company focused on 3D imaging and was a major supplier of both hardware and software in this market. It reincorporated as a Delaware corporation in January 1990. Through the mid-to-late 1990s, the rapidly improving performance of commodity Wintel machines began to erode SGI's stronghold in the 3D market; the porting of Maya to other platforms is an event in this process. In the mid-2000s the company repositioned itself as a supercomputer vendor. Ed McCracken was CEO of Silicon Graphics from 1984 to 1997; during those years, SGI grew from revenues of $5.4 million to $3.7 billion. The addition of 3D graphics capabilities to PCs, and the ability of clusters of Linux- and BSD-based PCs to take on many of the tasks of larger SGI servers, ate into SGI's core markets. The porting of Maya to Linux, Mac OS X, and Microsoft Windows further eroded the low end of SGI's product line. In response to challenges faced in the marketplace and a falling share price, Ed McCracken was fired and SGI brought in Richard Belluzzo to replace him. Under Belluzzo's leadership a number of initiatives were taken which are considered to have accelerated the corporate decline. One such initiative was trying to sell workstations running Windows NT, called Visual Workstations, instead of just ones which ran IRIX, the company's version of UNIX. This put the company in more direct competition with the likes of Dell. The product line was unsuccessful and abandoned a few years later. SGI's premature announcement of its migration from MIPS to Itanium and its abortive ventures into IA-32 architecture systems damaged SGI's credibility in the market.
In 1999, in an attempt to clarify its market position as more than a graphics company, Silicon Graphics Inc. changed its corporate identity to SGI. The new logo drew criticism for wasting the recognition associated with the previous cube logo. SGI continued to use the Silicon Graphics name for its product line. In November 2005, SGI announced that it had been delisted from the New York Stock Exchange because its common stock had fallen below the minimum share price for listing on the exchange. SGI's market capitalization dwindled from a peak of seven billion dollars in 1995 to just $120 million at the time of delisting. In February 2006, SGI noted that it could run out of cash by the end of the year. In mid-2005, SGI had hired Alix Partners to advise it on returning to profitability and received a new line of credit. SGI announced it was postponing its scheduled annual December stockholders meeting until March 2006, and it proposed a reverse stock split to deal with the delisting from the New York Stock Exchange
13.
SIGGRAPH
–
SIGGRAPH is the name of the annual conference on computer graphics convened by the ACM SIGGRAPH organization. The first SIGGRAPH conference was held in 1974, and the conference is attended by tens of thousands of computer professionals. Past SIGGRAPH conferences have been held in Los Angeles, Dallas, New Orleans, Boston, and Vancouver, among other cities. Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention; most of the companies are in the engineering, graphics, or motion picture industries, and there are also many booths for schools which specialize in computer graphics or interactivity. Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research; the recent paper acceptance rate for SIGGRAPH has been less than 26%. The submitted papers are peer-reviewed in a single-blind process. There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress. Since 2003, the papers accepted for presentation at SIGGRAPH have been printed in an issue of the ACM Transactions on Graphics journal; prior to 1992, SIGGRAPH papers were printed as part of the Computer Graphics publication. In addition to the papers, there are numerous panels of industry experts set up to discuss a wide variety of topics, from computer graphics to machine interactivity to education. SIGGRAPH also offers many full- and half-day courses in computer graphics topics, as well as shorter sketch presentations by artists. In 1984, under the Lucasfilm Computer Group, John Lasseter's first computer-animated short debuted; Pixar's first computer-animated short, Luxo Jr., debuted in 1986.
Pixar has debuted numerous shorts at the conference since. SIGGRAPH has several awards programs to recognize outstanding contributions to computer graphics. The most prestigious is the Steven Anson Coons Award for Outstanding Creative Contributions to Computer Graphics; it has been awarded every two years since 1983 to recognize an individual's lifetime achievement in computer graphics. The following conference areas were scheduled for SIGGRAPH 2012. Courses: attendees learn from experts in the field and gain inside knowledge that is critical to career advancement. Exhibition: presents the newest hardware systems and software tools. Posters: presenting student, in-progress, and late-breaking work, a showcase for the latest trends and techniques for pushing the boundaries of interactive visuals. Sandbox: provides an opportunity to get hands-on with the latest, most innovative real-time projects produced over the last 12 months. SIGkids: engages local youngsters with outreach and on-site programs to excite and cultivate the next-next-generation. SIGGRAPH Business Symposium. SIGGRAPH Dailies: each presenter has one minute to present an animation and describe the work. Studio: a place for making and creating at SIGGRAPH. Talks: presentations on recent achievements in all areas of computer graphics and interactive techniques, including art, design, animation, visual effects, interactivity, research, and engineering
14.
Computer graphics
–
Computer graphics are pictures and films created using computers. Usually, the term refers to computer-generated image data created with help from specialized hardware and software. It is a vast and relatively recent area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, though sometimes referred to as CGI. The overall methodology depends heavily on the sciences of geometry and optics. Computer graphics is responsible for displaying art and image data effectively and meaningfully to the user, and it is also used for processing image data received from the physical world. Computer graphics development has had a significant impact on many types of media and has revolutionized animation, movies, advertising, and video games. The term computer graphics has been used in a broad sense to describe almost everything on computers that is not text or sound. Such imagery is found in and on television, newspapers, and weather reports; a well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media such graphs are used to illustrate papers, reports, and theses, and many tools have been developed to visualize data. Computer-generated imagery can be categorized into different types: two-dimensional, three-dimensional, and animated graphics. As technology has improved, 3D computer graphics have become more common. Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Screens could display art since the Lumière brothers' use of mattes to create effects for the earliest films dating from 1895. New kinds of displays were needed to process the wealth of information resulting from such projects; early projects like the Whirlwind and SAGE projects introduced the CRT as a viable display and interaction interface and introduced the light pen as an input device.
Douglas T. Ross of the Whirlwind SAGE system performed an experiment in 1954 in which a small program he wrote captured the movement of his finger. Electronics pioneer Hewlett-Packard went public in 1957 after incorporating the decade prior, and established ties with Stanford University through its founders. This began the transformation of the southern San Francisco Bay Area into the world's leading computer technology hub, now known as Silicon Valley. The field of computer graphics developed with the emergence of computer graphics hardware, and further advances in computing led to greater advancements in interactive computer graphics. In 1959, the TX-2 computer was developed at MIT's Lincoln Laboratory; the TX-2 integrated a number of new man-machine interfaces
15.
JPEG XR
–
JPEG XR is a still-image compression standard and file format for continuous-tone photographic images, based on technology originally developed and patented by Microsoft under the name HD Photo. It supports both lossy and lossless compression, and is the image format for Ecma-388 Open XML Paper Specification documents. Microsoft first announced Windows Media Photo at WinHEC 2006 and then renamed it to HD Photo in November of that year. In July 2007, the Joint Photographic Experts Group and Microsoft announced HD Photo to be under consideration to become a JPEG standard known as JPEG XR. On 16 March 2009, JPEG XR was given final approval as ITU-T Recommendation T.832, and on 19 June 2009 it passed an ISO/IEC Final Draft International Standard ballot, resulting in final approval as International Standard ISO/IEC 29199-2. The ITU-T updated its publication with a corrigendum approved in December 2009. In 2010, after completion of the image coding specification, the ITU-T and ISO/IEC also published a motion format specification, a conformance test set, and reference software for JPEG XR. In 2011, they published a report describing the workflow architecture for the use of JPEG XR images in applications. Lossless compression: JPEG XR also supports lossless compression; the signal processing steps in JPEG XR are the same for both lossless and lossy coding. This makes the lossless mode simple to support and enables the trimming of some bits from a compressed image to produce a lossy compressed image. Tile structure support: a JPEG XR coded image can be segmented into tile regions, and the data for each region can be decoded separately. This enables rapid access to parts of an image without needing to decode the entire image. When a type of tiling referred to as soft tiling is used, the tile region structuring can be changed without fully decoding the image and without introducing additional distortion.
For support of RGB color spaces, JPEG XR includes an internal conversion to the YCgCo color space. 16-bit and 32-bit fixed-point color component codings are supported in JPEG XR. Moreover, 16-bit and 32-bit floating-point color component codings are supported; in these cases the image is interpreted as floating-point data, although the JPEG XR encoding and decoding steps are all performed using only integer operations. The shared-exponent floating-point color format known as RGBE is also supported. In addition to RGB and CMYK formats, JPEG XR also supports grayscale and multi-channel color encodings with an arbitrary number of channels. The color representations, in most cases, are transformed to an internal color representation. The transformation is reversible, so this color transformation step does not introduce distortion. Transparency map support: an alpha channel may be present to represent transparency. Full decoding is also unnecessary for certain editing operations such as cropping, horizontal or vertical flips, or cardinal rotations, and the tile structure for access to image regions can also be changed without full decoding. Metadata support: a JPEG XR image file may optionally contain an embedded ICC color profile, to achieve consistent color representation across multiple devices
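The reversible luma/chroma conversion mentioned above can be illustrated with a YCoCg-R style integer lifting transform. This is a sketch of the general technique only, not the exact transform defined in the JPEG XR specification:

```python
def rgb_to_ycocg_r(r, g, b):
    """Forward lossless YCoCg-R transform (integer lifting steps)."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """Inverse transform; exactly undoes the forward lifting steps."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

# The round trip is exact for any integer RGB triple,
# which is what makes a lossless coding mode possible.
print(ycocg_r_to_rgb(*rgb_to_ycocg_r(100, 150, 200)))  # (100, 150, 200)
```

Because each lifting step only adds or subtracts integer quantities, the inverse can subtract them back in reverse order with no rounding loss, mirroring the "reversible, so this step does not introduce distortion" property described above.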
16.
GIMP
–
GIMP /ɡɪmp/ is a free and open-source raster graphics editor used for image retouching and editing, free-form drawing, converting between different image formats, and more specialized tasks. GIMP is released under GPLv3+ licenses and is available for Linux, macOS, and Microsoft Windows. GIMP was originally released as the General Image Manipulation Program. In 1995 Spencer Kimball and Peter Mattis began developing GIMP as a project at the University of California, Berkeley. In 1996 the first publicly available release of GIMP was issued. In the following year Richard Stallman visited UC Berkeley, where Spencer Kimball and Peter Mattis asked if they could change General to GNU. Richard Stallman approved, and the definition of the acronym GIMP was changed to the GNU Image Manipulation Program; this reflected its new existence as Free Software developed as a part of the GNU Project. The number of architectures and operating systems supported has expanded significantly since the first release. The first release supported UNIX systems such as Linux and SGI IRIX; a port to Microsoft Windows was started by Tor Lillqvist in 1997 and was supported in the GIMP 1.1 release. Following the first release GIMP was quickly adopted and a community of contributors formed; the community began developing tutorials and artwork and shared better workflows and techniques. A GUI toolkit called GTK was developed to facilitate the development of GIMP; GTK was replaced by its successor GTK+ after being redesigned using object-oriented programming techniques. The development of GTK+ has been attributed to Peter Mattis becoming disenchanted with the Motif toolkit GIMP originally used. GIMP is primarily developed by volunteers as a free software project associated with both the GNU and GNOME projects. Development takes place in a public git source code repository and on mailing lists. New features are held in public separate source code branches and merged into the main branch when the GIMP team is sure they won't damage existing functions.
Sometimes this means that features that appear complete do not get merged, or take months or years before they become available in GIMP. GIMP itself is released as source code; after a source code release, installers and packages are made for different operating systems by parties who might not be in contact with the maintainers of GIMP. The version number used in GIMP is expressed in a three-part format, with each number carrying a specific meaning. Each year GIMP applies for several positions in the Google Summer of Code; from 2006 to 2009 nine GSoC projects were listed as successful, although not all successful projects have been merged into GIMP yet. Several of the GSoC projects were completed in 2008 but have not been merged into a stable GIMP release. The current development version is 2.9.4, with many deep improvements, and the next stable version in the roadmap is 2.10. The user interface of GIMP is designed by a dedicated design and usability team
17.
OpenGL
–
Open Graphics Library (OpenGL) is a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit, to achieve hardware-accelerated rendering. OpenGL is managed by the non-profit technology consortium Khronos Group. The OpenGL specification describes an abstract API for drawing 2D and 3D graphics; although it is possible for the API to be implemented entirely in software, it is designed to be implemented mostly or entirely in hardware. The API is defined as a set of functions which may be called by the client program, alongside a set of named integer constants. Although the function definitions are superficially similar to those of the programming language C, they are language-independent. In addition to being language-independent, OpenGL is also cross-platform. The specification says nothing on the subject of obtaining and managing an OpenGL context, leaving this as a detail of the underlying windowing system. For the same reason, OpenGL is purely concerned with rendering, providing no APIs related to input, audio, or windowing. New versions of the OpenGL specification are regularly released by the Khronos Group, each of which extends the API to support various new features. In addition to the features required by the core API, graphics processing unit vendors may provide additional functionality in the form of extensions. Extensions may introduce new functions and new constants, and may relax or remove restrictions on existing OpenGL functions. Vendors can use extensions to expose custom APIs without needing support from other vendors or the Khronos Group as a whole. All extensions are collected in, and defined by, the OpenGL Registry. Each extension is associated with a short identifier, based on the name of the company which developed it. For example, Nvidia's identifier is NV, which is part of the extension name GL_NV_half_float and the constant GL_HALF_FLOAT_NV. If multiple vendors agree to implement the same functionality using the same API, a shared extension may be released, using the identifier EXT.
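A client typically decides at run time whether an extension is available by inspecting the list of extension names the driver reports. As a sketch of that check (the extension list below is made up for illustration; a real application would obtain it from the driver, e.g. via glGetString or glGetStringi):

```python
# A hypothetical extension list, as a space-separated string of names.
reported = "GL_ARB_multitexture GL_NV_half_float GL_EXT_texture_filter_anisotropic"

def has_extension(extension_string, name):
    """Check for an exact extension name in a space-separated list.

    Splitting first avoids false positives from substring matches,
    e.g. GL_NV_half matching inside GL_NV_half_float.
    """
    return name in extension_string.split()

print(has_extension(reported, "GL_NV_half_float"))  # True
print(has_extension(reported, "GL_NV_half"))        # False
```

The vendor identifier embedded in each name (NV, EXT, ARB) tells the client which party defined the extension, as described above.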
In such cases, it can happen that the Khronos Group's Architecture Review Board gives the extension its explicit approval. The features introduced by each new version of OpenGL are typically formed from the combined features of several widely implemented extensions. OpenGL's popularity is partly due to the quality of its official documentation. The OpenGL Architecture Review Board released a series of manuals along with the specification which have been updated to track changes in the API; these are almost universally known by the colors of their covers. The Red Book: OpenGL Programming Guide, 8th Edition (ISBN 0-321-77303-9), a tutorial and reference book. The Orange Book: OpenGL Shading Language, 3rd edition (ISBN 0-321-63763-1), a tutorial and reference book for GLSL. Historic books include The Green Book: OpenGL Programming for the X Window System (ISBN 978-0-201-48359-8), a book about X11 interfacing and the OpenGL Utility Toolkit, and The Blue Book: OpenGL Reference Manual, 4th edition
18.
Shadow
–
A shadow is a dark area where light from a light source is blocked by an opaque object. It occupies all of the three-dimensional volume behind an object with light in front of it. The cross-section of a shadow is a silhouette, or a reverse projection of the object blocking the light. A point source of light casts only a simple shadow, called an umbra. For a non-point or extended source of light, the shadow is divided into the umbra, penumbra, and antumbra; the wider the light source, the more blurred the shadow becomes. If two penumbras overlap, the shadows appear to attract and merge. This is known as the shadow blister effect. The outlines of the shadow zones can be found by tracing the rays of light emitted by the outermost regions of the extended light source. The umbra region does not receive any direct light from any part of the light source; a viewer located in the umbra region cannot directly see any part of the light source. By contrast, the penumbra is illuminated by some parts of the light source; a viewer located in the penumbra region will see the light source, but it is partially blocked by the object casting the shadow. If there is more than one light source, there will be several shadows, with the overlapping parts darker. The more diffuse the lighting is, the softer and more indistinct the shadow outlines become; the lighting of an overcast sky produces few visible shadows. The absence of diffusing atmospheric effects in the vacuum of outer space produces shadows that are stark. For a person or object touching the surface where the shadow is projected, the shadows converge at the point of contact. A shadow shows, apart from distortion, the same image as the silhouette when looking at the object from the sun-side. The names umbra, penumbra, and antumbra are often used for the shadows cast by astronomical objects, though they are sometimes used to describe levels of darkness.
An astronomical object casts human-visible shadows when its apparent magnitude is equal to or lower than −4. Currently the only astronomical objects able to produce visible shadows on Earth are the Sun, the Moon and, in the right conditions, Venus or Jupiter. A shadow cast by the Earth on the Moon is a lunar eclipse; conversely, a shadow cast by the Moon on the Earth is a solar eclipse. The Sun casts shadows which change dramatically through the day. The length of a shadow cast on the ground is proportional to the cotangent of the Sun's elevation angle, its angle θ relative to the horizon. Near sunrise and sunset, when θ = 0° and cot θ = ∞, shadows can be extremely long. If the Sun passes directly overhead, then θ = 90°, cot θ = 0, and shadows are cast directly underneath objects
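The cotangent relationship above is easy to check numerically; this small sketch (the function name is illustrative) computes shadow length from object height and solar elevation:

```python
import math

def shadow_length(height, elevation_deg):
    """Shadow length = height * cot(elevation) = height / tan(elevation)."""
    theta = math.radians(elevation_deg)
    return height / math.tan(theta)

# At 45 degrees of elevation the shadow equals the object's height.
print(shadow_length(2.0, 45.0))  # ~2.0

# As elevation falls toward 0 degrees, the shadow grows without bound.
print(shadow_length(2.0, 1.0) > shadow_length(2.0, 10.0))  # True
```

At θ = 90° the tangent is infinite and the computed length goes to zero, matching the "directly underneath" case described above.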
19.
Single-precision floating-point format
–
Single-precision floating-point format is a computer number format that occupies 4 bytes (32 bits) in computer memory and represents a wide dynamic range of values by using a floating point. In IEEE 754-2008 the 32-bit base-2 format is referred to as binary32; it was called single in IEEE 754-1985. In older computers, different floating-point formats of 4 bytes were used, e.g. GW-BASIC's single-precision data type was the 32-bit MBF floating-point format. One of the first programming languages to provide single- and double-precision floating-point data types was Fortran. Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the computer manufacturer and computer model. Single-precision binary floating-point is used due to its wider range over fixed point, at the cost of precision. A signed 32-bit integer can have a maximum value of 2^31 − 1 = 2,147,483,647. As an example of the limited precision, the 32-bit integer 2,147,483,647 converts to 2,147,483,648 in IEEE 754 single-precision form. Single precision is termed REAL in Fortran, float in C, C++, C#, and Java, Float in Haskell, and Single in Object Pascal, Visual Basic, and MATLAB. However, float in Python, Ruby, PHP, and OCaml refers to double precision. In most implementations of PostScript, the only real precision is single. The IEEE 754 standard specifies a binary32 as having: sign bit, 1 bit; exponent width, 8 bits; significand precision, 24 bits. This gives from 6 to 9 significant decimal digits of precision. The sign bit determines the sign of the number, which is the sign of the significand as well. The exponent field is an 8-bit unsigned integer from 0 to 255 in biased form, which is the accepted form in the IEEE 754 binary32 definition: the stored value is the actual exponent plus a bias of 127. Actual exponents range from −126 to +127 because the values −127 and +128 (stored values 0 and 255) are reserved for special numbers. The true significand includes 23 fraction bits to the right of the binary point and an implicit leading bit with value 1, unless the exponent is stored with all zeros.
Thus only 23 fraction bits of the significand appear in the memory format. For example, consider the encoding with sign 0, stored exponent 124, and fraction bits b22…b0 = 01000000000000000000000 (only b21 set). The true significand is 1 + Σ (i = 1 to 23) b(23−i) · 2^(−i) = 1 + 1·2^(−2) = 1.25, which lies in the interval [1, 2). Thus value = (−1)^0 × 1.25 × 2^(124−127) = 1.25 × 2^(−3) = +0.15625. Note: 1 + 2^(−23) ≈ 1.000000119; 2 − 2^(−23) ≈ 1.999999881; 2^(−126) ≈ 1.17549435 × 10^(−38); 2^(+127) ≈ 1.70141183 × 10^(+38). The single-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 127. The stored exponents 00H and FFH are interpreted specially. The minimum positive normal value is 2^(−126) ≈ 1.18 × 10^(−38) and the minimum positive (subnormal) value is 2^(−149) ≈ 1.4 × 10^(−45). In general, refer to the IEEE 754 standard itself for the conversion of a real number into its equivalent binary32 format
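The 0.15625 example can be verified by pulling the three bit fields out of the stored 32-bit pattern; this sketch uses Python's standard struct module to reinterpret the bytes:

```python
import struct

# Pack 0.15625 as a big-endian IEEE 754 binary32, then view it as an integer.
bits = struct.unpack(">I", struct.pack(">f", 0.15625))[0]

sign = bits >> 31                      # 1 sign bit
stored_exponent = (bits >> 23) & 0xFF  # 8 exponent bits, biased by 127
fraction = bits & 0x7FFFFF             # 23 fraction bits

print(sign)             # 0
print(stored_exponent)  # 124, i.e. actual exponent -3
print(hex(fraction))    # 0x200000, i.e. significand 1 + 0.25 = 1.25

# Reassemble the value from the decoded fields.
print((1 + fraction / 2**23) * 2 ** (stored_exponent - 127))  # 0.15625
```

The same decomposition works for any normal binary32 value; only the all-zeros and all-ones stored exponents need the special handling described above.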
20.
Decimal32 floating-point format
–
In computing, decimal32 is a decimal floating-point computer numbering format that occupies 4 bytes in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial computations; like the binary16 format, it is intended for memory-saving storage. Decimal32 supports 7 decimal digits of significand and an exponent range of −95 to +96. Because the significand is not normalized, most values with less than 7 significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal32 floating point is a relatively new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as with ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative representation methods for decimal32 values, and the standard does not specify how to signify which representation is used. In one representation method, based on binary integer decimal, the significand is represented as a binary-coded positive integer. The other, alternative, representation method is based on densely packed decimal for most of the significand. Both alternatives provide exactly the same range of representable numbers: 7 digits of significand and 3×2^6 = 192 possible exponent values. The remaining combinations encode infinities and NaNs. The binary-integer format uses a binary significand from 0 to 10^7 − 1 = 9999999 = 98967F₁₆ = 100110001001011001111111₂. The encoding can represent binary significands up to 10×2^20 − 1 = 10485759 = 9FFFFF₁₆ = 100111111111111111111111₂. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 8-bit exponent field is shifted 2 bits to the right; in this case there is an implicit leading 3-bit sequence 100 in the true significand, comparable to the implicit leading 1 in the significand of normal values in the binary formats.
Note also that the 2-bit sequences 00, 01, or 10 after the sign bit are part of the exponent field. In the densely packed decimal representation, the leading digit is between 0 and 9, and the rest of the significand uses the densely packed decimal encoding. The six bits after that are the exponent continuation field, providing the less-significant bits of the exponent, and the last 20 bits are the significand continuation field, consisting of two 10-bit declets. Each declet encodes three decimal digits using the DPD encoding. The DPD/3BCD transcoding for the declets is given by a standard table, where b9…b0 are the bits of the DPD and d2…d0 are the three BCD digits. The 8 decimal values whose digits are all 8s or 9s have four codings each. The bits marked x in the table are ignored on input, but will always be 0 in computed results
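The "multiple possible representations" property of unnormalized decimal significands can be observed with Python's decimal module, which implements the same IEEE 754 decimal arithmetic model (though not the 4-byte storage encoding itself):

```python
from decimal import Decimal

# 100 and 1E+2 are numerically equal but carry different
# (significand, exponent) pairs, just as 100x10^0 and 1x10^2
# are distinct encodings of the same decimal32 value.
a = Decimal("100")   # digits (1, 0, 0), exponent 0
b = Decimal("1E+2")  # digits (1,),     exponent 2

print(a == b)                         # True: the values are equal
print(a.as_tuple() == b.as_tuple())   # False: the representations differ
```

This distinction matters in decimal arithmetic because the standard specifies which of the equal representations (the "preferred exponent") each operation must produce.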
21.
Double-precision floating-point format
–
Double-precision floating-point format is a computer number format that occupies 8 bytes (64 bits) in computer memory and represents a wide, dynamic range of values by using a floating point. Double-precision floating-point format usually refers to binary64, as specified by the IEEE 754 standard. In older computers, different floating-point formats of 8 bytes were used, e.g. GW-BASIC's double-precision data type was the 64-bit MBF floating-point format. Double-precision binary floating-point is a commonly used format on PCs, due to its wider range over single-precision floating point, in spite of its performance cost. As with single-precision floating-point format, it lacks precision on integer numbers when compared with an integer format of the same size. It is commonly known simply as double. The IEEE 754 standard specifies a binary64 as having: sign bit, 1 bit; exponent, 11 bits; significand precision, 53 bits. This gives 15–17 significant decimal digits of precision. If an IEEE 754 double-precision number is converted to a decimal string with at least 17 significant digits and then converted back to double-precision representation, the final result matches the original number. The format is written with the significand having an implicit integer bit of value 1; with the 52 bits of the fraction significand appearing in the memory format, the total precision is therefore 53 bits. Between 2^52 and 2^53, the representable numbers are exactly the integers. For the next range, from 2^53 to 2^54, everything is multiplied by 2, so the representable numbers are the even ones. Conversely, for the range from 2^51 to 2^52, the spacing is 0.5. The spacing as a fraction of the numbers in the range from 2^n to 2^(n+1) is 2^(n−52); the maximum relative rounding error when rounding a number to the nearest representable one is therefore 2^(−53). The 11-bit width of the exponent allows the representation of numbers between 10^(−308) and 10^308, with full 15–17 decimal digit precision. By compromising precision, the subnormal representation allows even smaller values down to about 5 × 10^(−324). The double-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 1023.
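Both the 17-digit round-trip guarantee and the spacing of representable values are easy to demonstrate, since Python's float is an IEEE 754 binary64:

```python
import math

x = 0.1 + 0.2          # a value with no short exact decimal form
s = format(x, ".17g")  # decimal string with 17 significant digits

print(s)               # 0.30000000000000004
print(float(s) == x)   # True: the round trip through 17 digits is exact

# Spacing of representable doubles, as described above:
print(math.ulp(2.0**52))  # 1.0  (only integers between 2**52 and 2**53)
print(math.ulp(2.0**51))  # 0.5
```

math.ulp (Python 3.9+) returns the gap to the next representable value, which is exactly the 2^(n−52) spacing formula for a value in [2^n, 2^(n+1)).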
All bit patterns are valid encodings. Except for the above exceptions, the entire double-precision number is described by (−1)^sign × 2^(exponent − 1023) × 1.fraction. In the case of subnormals (stored exponent zero), the double-precision number is described by (−1)^sign × 2^(−1022) × 0.fraction. Because there have been many floating-point formats with no network standard representation for them, the XDR standard uses big-endian IEEE 754 as its representation. It may therefore appear strange that the widespread IEEE 754 floating-point standard does not specify endianness; theoretically, this means that even standard IEEE floating-point data written by one machine might not be readable by another. One area of computing where double-precision performance is a particular issue is parallel code running on GPUs; for example, when using NVIDIA's CUDA platform on video cards designed for gaming, doubles execute with much lower throughput than singles. Doubles are implemented in many programming languages in different ways, such as the following
22.
Decimal64 floating-point format
–
In computing, decimal64 is a decimal floating-point computer numbering format that occupies 8 bytes in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial computations. Decimal64 supports 16 decimal digits of significand and an exponent range of −383 to +384, i.e. ±0.000000000000000×10^(−383) to ±9.999999999999999×10^384. In contrast, the binary64 format, which is the most commonly used floating-point type, has an approximate range of ±0.000000000000001×10^(−308) to ±1.797693134862315×10^308. Because the significand is not normalized, most values with less than 16 significant digits have multiple representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal64 floating point is a relatively new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as with ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative representation methods for decimal64 values. Both alternatives provide exactly the same range of representable numbers: 16 digits of significand. In both cases, the most significant 4 bits of the significand are combined with the most significant 2 bits of the exponent to use 30 of the 32 possible values of a 5-bit field; the remaining combinations encode infinities and NaNs. In the cases of Infinity and NaN, all other bits of the encoding are ignored; thus, it is possible to initialize an array to Infinities or NaNs by filling it with a single byte value. The binary-integer format uses a binary significand from 0 to 10^16 − 1 = 9999999999999999 = 2386F26FC0FFFF₁₆ = 100011100001101111001001101111110000001111111111111111₂. The encoding, completely stored on 64 bits, can represent binary significands up to 10×2^50 − 1 = 11258999068426239 = 27FFFFFFFFFFFF₁₆, but values larger than 10^16 − 1 are illegal. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 10-bit exponent field is shifted 2 bits to the right.
In this case there is an implicit leading 3-bit sequence 100 for the most significant bits of the true significand, comparable to the implicit 1-bit prefix in the significand of normal values in the binary formats. Note also that the 2-bit sequences 00, 01, or 10 after the sign bit are part of the exponent field. Note that the leading bits of the significand field do not encode the most significant decimal digit; the highest valid significand is 9999999999999999, whose stored trailing bits are 011100001101111001001101111110000001111111111111111₂, with the leading 100 implicit. In the densely packed decimal representation, the leading digit is between 0 and 9, and the rest of the significand uses the densely packed decimal encoding. The eight bits after that are the exponent continuation field, providing the less-significant bits of the exponent, and the last 50 bits are the significand continuation field, consisting of five 10-bit declets
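Python's decimal module can be configured to match decimal64's arithmetic parameters (16 digits, exponents −383 to +384). This emulates the rounding behavior only, not the 8-byte storage encoding:

```python
from decimal import Context, Decimal

# Arithmetic parameters of IEEE 754 decimal64; the mapping here covers
# precision and exponent range only, not the bit-level format.
decimal64 = Context(prec=16, Emin=-383, Emax=384)

# Exact decimal rounding, the use case named above: 0.1 + 0.2 == 0.3,
# unlike the same computation in binary64 floats.
print(decimal64.add(Decimal("0.1"), Decimal("0.2")) == Decimal("0.3"))  # True
print(0.1 + 0.2 == 0.3)                                                 # False
```

This contrast is the core motivation for decimal formats in financial work: decimal fractions like 0.1 have exact representations, so rounding follows the decimal rules people expect.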
23.
Quadruple-precision floating-point format
–
That kind of gradual evolution towards wider precision was already in view when IEEE Standard 754 for Floating-Point Arithmetic was framed. In IEEE 754-2008 the 128-bit base-2 format is referred to as binary128. The IEEE 754 standard specifies binary128 as having: a sign bit of 1 bit, an exponent width of 15 bits, and a significand precision of 113 bits. This gives from 33 to 36 significant decimal digits of precision. The format is written with an implicit lead bit with value 1 unless the exponent is stored with all zeros; thus only 112 bits of the significand appear in the memory format, but the total precision is 113 bits. (By comparison, a binary256 would have a significand precision of 237 bits.) The stored exponents 0000₁₆ and 7FFF₁₆ are interpreted specially. The minimum strictly positive (subnormal) value is 2^−16494 ≈ 10^−4965 and has a precision of only one bit. The minimum positive normal value is 2^−16382 ≈ 3.3621 × 10^−4932 and has a precision of 113 bits. The maximum representable value is 2^16384 − 2^16271 ≈ 1.1897 × 10^4932. Examples of such values are conventionally given as the bit representation, in hexadecimal, of the floating-point value, including the sign, exponent, and significand; in one such rounding example, the bits beyond the rounding point are 0101…, which is less than 1/2 of a unit in the last place, so the value rounds down. A common software technique to implement nearly quadruple precision using pairs of double-precision values is sometimes called double-double arithmetic: a value q is stored as a pair of doubles whose sum approximates q. Double-double arithmetic has the following special characteristics: as the magnitude of the value decreases, the amount of extra precision also decreases, so the smallest number in the range is narrower than double precision. The smallest number with full precision is 1000…0₂ × 2^−1074, and numbers whose magnitude is smaller than 2^−1021 will not have additional precision compared with double precision.
The actual number of bits of precision in double-double can vary: in general, the magnitude of the low-order part of the number is no greater than half a ULP of the high-order part, and it may be much smaller. Because the number of significand bits is not fixed, certain algorithms that rely on having a fixed number of bits in the significand can fail when using such 128-bit long double numbers. For the same reason, it is possible to represent values like 1 + 2^−1074, which a single binary128 value cannot. Extending the idea, numbers can be represented as a sum of three or four double-precision values; these can represent operations with at least 159/161 and 212/215 bits respectively. A similar technique can be used to produce double-quad arithmetic, which represents a number as a sum of two quadruple-precision values and can represent operations with at least 226 bits. Quadruple precision is often implemented in software by such techniques, since direct hardware support for quadruple precision is, as of 2016, less common.
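The double-double pairing described above can be sketched with the classic error-free two-sum transformation (due to Møller and Knuth); the helper name here is my own. The high part carries the rounded sum, and the low part carries exactly the rounding error, so the pair together represents the sum exactly:

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    """Return (hi, lo) with hi + lo == a + b exactly; hi is the rounded sum."""
    hi = a + b
    t = hi - a                       # recover b's contribution to hi
    lo = (a - (hi - t)) + (b - t)    # what was lost to rounding
    return hi, lo

# 1 + 2^-100 is not representable in one double, but is as a (hi, lo) pair:
hi, lo = two_sum(1.0, 2.0**-100)
print(hi, lo)
```

Here hi comes out as 1.0 and lo as 2^−100, illustrating why values such as 1 + 2^−1074 are representable in double-double even though they need far more than 113 bits in a single significand.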
24.
Decimal128 floating-point format
–
In computing, decimal128 is a decimal floating-point computer number format that occupies 16 bytes (128 bits) in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial computations. Decimal128 supports 34 decimal digits of significand and an exponent range of −6143 to +6144, i.e. ±0.000000000000000000000000000000000×10^−6143 to ±9.999999999999999999999999999999999×10^6144. Decimal128 therefore has the greatest range of values of the IEEE basic floating-point formats. Because the significand is not normalized, most values with fewer than 34 significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal128 floating point is a relatively new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as in ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative representation methods for decimal128 values, and the standard does not specify how to signify which representation is used. In one method, based on binary integer decimal (BID), the significand is represented as a binary-coded positive integer. The other, alternative method is based on densely packed decimal (DPD) for most of the significand. Both alternatives provide exactly the same range of representable numbers: 34 digits of significand and 3×2^12 = 12288 possible exponent values. In both cases, the most significant 4 bits of the significand are combined with the most significant 2 bits of the exponent to use 30 of the 32 possible values of the 5-bit combination field; the remaining combinations encode infinities and NaNs. In the case of infinity and NaN, all other bits of the encoding are ignored, so it is possible to initialize an array to infinities or NaNs by filling it with a single byte value.
The encoding can represent binary significands up to 10×2^110 − 1 = 12980742146337069071326240823050239, but values larger than 10^34 − 1 are illegal. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or are 8 or 9. If the 2 bits after the sign bit are 11, then the 14-bit exponent field is shifted 2 bits to the right, and there is an implicit leading 3-bit sequence 100 in the true significand; compare the implicit 1 in the significand of normal values in the binary formats. The 2-bit sequences 00, 01, or 10 after the sign bit are instead part of the exponent field. For the decimal128 format, all of the significands that would require this implicit-prefix form are out of the valid range and are thus decoded as zero, but the pattern is the same as in decimal32 and decimal64. In the densely packed decimal encoding, the leading digit is between 0 and 9, and the rest of the significand uses the densely packed decimal encoding. The twelve bits after the combination field are the exponent continuation field, providing the less-significant bits of the exponent, and the last 110 bits are the significand continuation field, consisting of eleven 10-bit declets. Each declet encodes three decimal digits using the DPD encoding. The DPD-to-BCD transcoding for the declets is given by a standard table, in which b9…b0 are the bits of the DPD declet and d2…d0 are the three BCD digits; the 8 decimal values whose digits are all 8s or 9s have four codings each.
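The declet transcoding mentioned above can be sketched directly from the DPD bit patterns. This is an illustrative helper written for this article (the name is my own) that expands one 10-bit declet into its three BCD digits, following the standard decode table:

```python
def dpd_to_digits(declet: int) -> tuple[int, int, int]:
    """Decode a 10-bit densely-packed-decimal declet into digits (d2, d1, d0)."""
    b = [(declet >> i) & 1 for i in range(10)]   # b[0] is the least significant bit
    if b[3] == 0:                                # three small digits (0-7)
        d2 = b[9] * 4 + b[8] * 2 + b[7]
        d1 = b[6] * 4 + b[5] * 2 + b[4]
        d0 = b[2] * 4 + b[1] * 2 + b[0]
    else:
        sel = b[2] * 2 + b[1]                    # selects which digit(s) are 8 or 9
        if sel == 0b00:                          # d0 large
            d2 = b[9] * 4 + b[8] * 2 + b[7]
            d1 = b[6] * 4 + b[5] * 2 + b[4]
            d0 = 8 + b[0]
        elif sel == 0b01:                        # d1 large
            d2 = b[9] * 4 + b[8] * 2 + b[7]
            d1 = 8 + b[4]
            d0 = b[6] * 4 + b[5] * 2 + b[0]
        elif sel == 0b10:                        # d2 large
            d2 = 8 + b[7]
            d1 = b[6] * 4 + b[5] * 2 + b[4]
            d0 = b[9] * 4 + b[8] * 2 + b[0]
        else:                                    # two or three large, via b6 b5
            sel2 = b[6] * 2 + b[5]
            if sel2 == 0b00:
                d2, d1, d0 = 8 + b[7], 8 + b[4], b[9] * 4 + b[8] * 2 + b[0]
            elif sel2 == 0b01:
                d2, d1, d0 = 8 + b[7], b[9] * 4 + b[8] * 2 + b[4], 8 + b[0]
            elif sel2 == 0b10:
                d2, d1, d0 = b[9] * 4 + b[8] * 2 + b[7], 8 + b[4], 8 + b[0]
            else:
                d2, d1, d0 = 8 + b[7], 8 + b[4], 8 + b[0]
    return d2, d1, d0

print(dpd_to_digits(0b0000000101))   # three small digits
print(dpd_to_digits(0b1111111111))   # one of the four codings of 999
```

The all-ones declet decodes to 999, consistent with the note that values whose digits are all 8s or 9s have multiple codings.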
25.
Octuple-precision floating-point format
–
In computing, octuple precision is a binary floating-point computer number format that occupies 32 bytes (256 bits) in computer memory. This 256-bit octuple-precision format is for applications requiring results in higher than quadruple precision; it is rarely used and very little hardware or software supports it. The format has a significand precision of 237 bits; because of the implicit lead bit, only 236 bits of the significand appear in the memory format. The stored exponents 00000₁₆ and 7FFFF₁₆ are interpreted specially. The minimum strictly positive (subnormal) value is 2^−262378 ≈ 10^−78984 and has a precision of only one bit. The minimum positive normal value is 2^−262142 ≈ 2.4824 × 10^−78913. The maximum representable value is 2^262144 − 2^261907 ≈ 1.6113 × 10^78913. Examples of such values are conventionally given as the bit representation, in hexadecimal, of the floating-point value, including the sign, exponent, and significand; in one such rounding example, the bits beyond the rounding point are 0101…, which is less than 1/2 of a unit in the last place, so the value rounds down. Octuple precision is rarely implemented since usage of it is extremely rare. Apple Inc. had an implementation of addition, subtraction, and multiplication of octuple-precision numbers with a 224-bit two's complement significand. One can use general arbitrary-precision arithmetic libraries to obtain octuple precision; there is little to no hardware support for it, and octuple-precision arithmetic is too impractical for most commercial uses. See also: IEEE Standard for Floating-Point Arithmetic; ISO/IEC 10967, Language-independent arithmetic; primitive data type.
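The extreme values quoted above follow directly from the binary256 parameters (1 sign bit, 19-bit exponent, 237-bit significand precision). This sketch recomputes them with exact integer arithmetic; the variable names are my own:

```python
from fractions import Fraction

W, P = 19, 237                  # binary256: exponent width, significand precision
BIAS = 2**(W - 1) - 1           # 262143
EMAX = BIAS                     # the all-ones stored exponent 7FFFF is reserved
EMIN = 1 - EMAX                 # -262142

max_value = (2 - Fraction(1, 2**(P - 1))) * Fraction(2)**EMAX
min_normal = Fraction(2)**EMIN
min_subnormal = Fraction(2)**(EMIN - (P - 1))

assert max_value == 2**262144 - 2**261907       # matches the quoted maximum
assert min_normal == Fraction(1, 2**262142)     # 2^-262142
assert min_subnormal == Fraction(1, 2**262378)  # 2^-262378
```

The same three formulas reproduce the binary128 extremes when W = 15 and P = 113, which is one way to sanity-check the quoted figures for both formats.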
26.
Extended precision
–
Extended precision refers to floating-point number formats that provide greater precision than the basic floating-point formats. Extended precision formats support a basic format by minimizing roundoff and overflow errors in intermediate values of expressions on the base format. In contrast to extended precision, arbitrary-precision arithmetic refers to implementations of much larger numeric types using special software. The IBM 1130 offered two floating-point formats: a 32-bit standard precision format and a 40-bit extended precision format. Standard precision format contained a 24-bit two's complement significand, while extended precision utilized a 32-bit two's complement significand; the latter format could make full use of the CPU's 32-bit integer operations. The characteristic in both formats was an 8-bit field containing the power of two biased by 128. Floating-point arithmetic operations were performed by software, and double precision was not supported at all. The extended format occupied three 16-bit words, with the extra space simply ignored. The IBM System/360 supports a 32-bit short floating-point format and a 64-bit long floating-point format. The 360/85 and follow-on System/370 added support for a 128-bit extended format; these formats are still supported in the current design, where they are now called the hexadecimal floating-point formats. The IEEE 754 floating-point standard recommends that implementations provide extended precision formats; the standard specifies the minimum requirements for an extended format but does not specify an encoding, which is the implementor's choice. The IA-32, x86-64, and Itanium processors support an 80-bit double extended precision format with a 64-bit significand.
The Intel 8087 math coprocessor was the first x86 device which supported floating-point arithmetic in hardware. It was designed to support a 32-bit single precision format and a 64-bit double precision format for encoding and interchanging floating-point numbers. To mitigate roundoff and overflow in intermediate computations, the internal registers in the 8087 were designed to hold intermediate results in an 80-bit extended precision format, and the floating-point unit on all subsequent x86 processors has supported this format. As a result, software can be developed which takes advantage of the higher precision provided by this format; that kind of gradual evolution towards wider precision was already in view when IEEE Standard 754 for Floating-Point Arithmetic was framed. The Motorola 6888x math coprocessors and the Motorola 68040 and 68060 processors support this same 64-bit significand extended precision type; the follow-on ColdFire processors do not support this 96-bit extended precision format. The x87 and Motorola 68881 80-bit formats meet the requirements of the IEEE 754 double extended format. This 80-bit format uses one bit for the sign of the significand, 15 bits for the exponent field, and 64 bits for the significand. The exponent field is biased by 16383, meaning that 16383 has to be subtracted from the value in the exponent field to compute the power of 2. An exponent field value of 32767 is reserved so as to enable the representation of special states such as infinity. If the exponent field is zero, the value is a denormal number. In contrast to the single and double-precision formats, this format does not utilize an implicit/hidden bit; rather, bit 63 contains the integer part of the significand and bits 62–0 hold the fractional part (together forming the m field of the format's layout).
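The 80-bit layout described above (1 sign bit, a 15-bit exponent biased by 16383, and a 64-bit significand whose bit 63 is the explicit integer bit) can be sketched as a decoder. This is an illustrative helper of my own; it handles finite values only and ignores infinities, NaNs, and the other special states:

```python
from fractions import Fraction

def decode_x87_extended(bits: int) -> Fraction:
    """Decode an 80-bit x87 double-extended value exactly (finite cases only)."""
    sign = -1 if (bits >> 79) & 1 else 1
    exponent = (bits >> 64) & 0x7FFF        # 15-bit biased exponent
    significand = bits & ((1 << 64) - 1)    # bit 63 is the explicit integer bit
    if exponent == 0:                       # denormal: exponent acts as 1 - 16383
        exponent = 1
    # value = significand * 2^(exponent - bias - 63)
    return sign * significand * Fraction(2)**(exponent - 16383 - 63)

# 1.0: biased exponent 16383, significand 1.000...0 (explicit integer bit set)
one = (16383 << 64) | (1 << 63)
print(decode_x87_extended(one))
# -2.5: exponent one above the bias, significand bits 1.01 (1.25)
minus_2_5 = (1 << 79) | (16384 << 64) | (0b101 << 61)
print(decode_x87_extended(minus_2_5))
```

Because the integer bit is explicit, a pattern with a nonzero exponent but a clear bit 63 ("unnormal") is possible in this format, which the single and double formats cannot express; the sketch above simply decodes such patterns by the same formula.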
27.
Minifloat
–
In computing, minifloats are floating-point values represented with very few bits. Predictably, they are not well suited for general-purpose numerical calculations; they are used for special purposes, most often in computer graphics, where iterations are small and precision has aesthetic effects. They are additionally encountered as a pedagogical tool in computer science courses to demonstrate the properties and structures of floating-point arithmetic. Minifloats with 16 bits are half-precision numbers; there are also minifloats with 8 bits or even fewer. Minifloats can be designed following the principles of the IEEE 754 standard; in this case they must obey the rules for the frontier between subnormal and normal numbers, they must have special patterns for infinity and NaN, and normalized numbers are stored with a biased exponent. The 2008 revision of the standard, IEEE 754-2008, includes 16-bit binary minifloats. The Radeon R300 and R420 GPUs used an FP24 floating-point format with 7 bits of exponent and 16 bits of mantissa; "Full Precision" in Direct3D 9.0 is a proprietary 24-bit floating-point format. Microsoft's D3D9 graphics API initially supported both FP24 and FP32 as "Full Precision", as well as FP16 as "Partial Precision", for shader computations. In computer graphics, minifloats are sometimes used to represent only integral values. If at the same time subnormal values should exist, the least subnormal number has to be 1; this requirement can be used to calculate the bias value. The following example demonstrates the calculation as well as the underlying principles: a minifloat in one byte with one sign bit, four exponent bits, and three mantissa bits is to be used to represent integral values. All IEEE 754 principles should be valid; the only free parameter is the exponent bias, which will come out as −2. The unknown numerical exponent is for the moment called x; numbers in a base other than ten are marked with that base as a subscript (e.g. 0.001₂).
The bit patterns below have spaces to visualize their parts (sign, exponent, mantissa):

0 0000 000 = 0
0 0000 001 = 0.001₂ × 2^x = 0.125 × 2^x = 1 (the mantissa of a subnormal is extended with "0.")
…
0 0000 111 = 0.111₂ × 2^x = 0.875 × 2^x = 7
0 0001 000 = 1.000₂ × 2^x = 1 × 2^x = 8 (the mantissa of a normal number is extended with "1.")
0 0001 001 = 1.001₂ × 2^x = 1.125 × 2^x = 9
…
0 0010 000 = 1.000₂ × 2^(x+1) = 1 × 2^(x+1) = 16
0 0010 001 = 1.001₂ × 2^(x+1) = 1.125 × 2^(x+1) = 18
…
0 1110 000 = 1.000₂ × 2^(x+13) = 1 × 2^(x+13) = 65536
0 1110 001 = 1.001₂ × 2^(x+13) = 1.125 × 2^(x+13) = 73728

For the least subnormal pattern to equal 1, we need 0.125 × 2^x = 1, so x = 3. Therefore the bias has to be −2; that is, every stored exponent has to be decreased by −2, i.e. increased by 2, to get the numerical exponent.
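The 1-4-3 example above, with bias −2, can be sketched as a tiny decoder (a helper of my own) to confirm that the byte patterns decode to the integers claimed:

```python
def decode_minifloat_143(byte: int) -> float:
    """Decode a 1-4-3 minifloat (1 sign, 4 exponent, 3 mantissa bits, bias -2)."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exponent = (byte >> 3) & 0xF
    mantissa = byte & 0x7
    bias = -2
    if exponent == 0:                    # subnormal: 0.mmm * 2^(1 - bias)
        return sign * (mantissa / 8) * 2.0**(1 - bias)
    if exponent == 0xF:                  # all-ones exponent: infinity and NaN
        return sign * float('inf') if mantissa == 0 else float('nan')
    return sign * (1 + mantissa / 8) * 2.0**(exponent - bias)

for pattern in (0b00000001, 0b00000111, 0b00001000, 0b00010001, 0b01110001):
    print(f'{pattern:08b} -> {decode_minifloat_143(pattern)}')
```

The loop reproduces the values 1, 7, 8, 18, and 73728 from the pattern table, confirming the derived bias of −2.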
28.
Microsoft Binary Format
–
In computing, Microsoft Binary Format (MBF) was a format for floating-point numbers used in Microsoft's BASIC language products, including MBASIC, GW-BASIC, and QuickBasic prior to version 4.00. In 1975, Bill Gates and Paul Allen were working on Altair BASIC. One thing still missing was code to handle floating-point numbers, needed to support calculations with very big and very small numbers, which would be particularly useful for science and engineering; one of the uses of the Altair was as a scientific calculator. At a dinner at Currier House, a residential house at Harvard, Gates brought up the problem. One of the other students, Monte Davidoff, told them he had written floating-point routines before, and convinced Gates to let him write the package. At the time there was no standard for floating-point numbers, so Davidoff had to come up with his own; he decided 32 bits would allow enough range and precision. When Allen had to demonstrate the interpreter to MITS, it was the first time it ran on an actual Altair. But it worked, and when he entered "PRINT 2+2", Davidoff's adding routine gave the right answer. The source code for Altair BASIC was thought to have been lost to history, but resurfaced in 2000; it had been sitting behind the file cabinet of Gates's former tutor and dean Harry Lewis. A comment in the source credits Davidoff as the writer of Altair BASIC's math package. Altair BASIC took off, and soon most early home computers ran some form of Microsoft BASIC. The BASIC port for the 6502 CPU, such as used in the Commodore PET, took up more space due to the lower code density of the 6502; because of this it would not fit in a single ROM chip together with the machine-specific input and output code. Since an extra chip was necessary, extra space was available, and it was used to extend the format from 32 to 40 bits. Not long afterwards the Z80 ports, such as Level II BASIC for the TRS-80, introduced the 64-bit, double-precision format as a separate data type from the 32-bit, single-precision format.
Even so, for a while MBF remained the de facto floating-point format on microcomputers, to the point where people still occasionally encounter legacy files using it. As early as 1976, Intel was starting the development of a floating-point coprocessor; Intel hoped to be able to sell a chip containing good implementations of all the operations found in the widely varying math software libraries. John Palmer, who managed the project, contacted William Kahan. The first VAX, the VAX-11/780, had just come out in late 1977, and its floating point was highly regarded; however, seeking to market their chip to the broadest possible market, Intel wanted the best floating point possible. When rumours of Intel's new chip reached its competitors, they started a standardization effort, called IEEE 754, to prevent Intel from gaining too much ground. Kahan got Palmer's permission to participate; he was allowed to explain Intel's design decisions and their underlying reasoning. The VAX's floating-point formats differed from MBF only in that the sign was in the most significant bit. It turned out that for double-precision numbers an 8-bit exponent isn't wide enough for some desired operations, so both Kahan's proposal and a counter-proposal by DEC used 11 bits, like the time-tested 60-bit floating-point format of the CDC 6600 from 1965. The next year DEC had a study done in an attempt to demonstrate that gradual underflow was a bad idea. In 1985 the standard was ratified, but it had already become the de facto standard a year earlier, implemented by many manufacturers.
29.
Exponentiation
–
Exponentiation is a mathematical operation, written as b^n, involving two numbers, the base b and the exponent n. The exponent is usually shown as a superscript to the right of the base. Some common exponents have their own names: the exponent 2 is called the square of b, or b squared; the exponent 3 is called the cube of b, or b cubed. The exponent −1 of b, or 1/b, is called the reciprocal of b. When n is a positive integer and b is not zero, b^−n is naturally defined as 1/b^n, preserving the property b^n × b^m = b^(n+m). The definition of exponentiation can be extended to any real or complex exponent, and exponentiation by integer exponents can also be defined for a variety of algebraic structures. The term power was used by the Greek mathematician Euclid for the square of a line. Archimedes discovered and proved the law of exponents 10^a × 10^b = 10^(a+b), necessary to manipulate powers of 10. In the late 16th century, Jost Bürgi used Roman numerals for exponents. Early in the 17th century, the first form of our modern exponential notation was introduced by René Descartes in his text titled La Géométrie; there, the notation is introduced in Book I. Nicolas Chuquet had used a form of exponential notation in the 15th century. The word exponent was coined in 1544 by Michael Stifel, and Samuel Jeake introduced the term indices in 1696. In the 16th century Robert Recorde used the terms square, cube, zenzizenzic, sursolid, zenzicube, and second sursolid for successive powers. Biquadrate has been used to refer to the fourth power as well. Some mathematicians used exponents only for powers greater than two, preferring to represent squares as repeated multiplication; thus they would write polynomials, for example, as ax + bxx + cx^3 + d. Another historical synonym, involution, is now rare and should not be confused with its more common meaning.
In 1748 Leonhard Euler wrote: "consider exponentials or powers in which the exponent itself is a variable. It is clear that quantities of this kind are not algebraic functions, since in those the exponents must be constant." With this introduction of transcendental functions, Euler laid the foundation for the introduction of the natural logarithm as the inverse function of y = e^x. The expression b^2 = b ⋅ b is called the square of b because the area of a square with side-length b is b^2. The expression b^3 = b ⋅ b ⋅ b is called the cube of b because the volume of a cube with side-length b is b^3. The exponent indicates how many copies of the base are multiplied together. For example, 3^5 = 3 ⋅ 3 ⋅ 3 ⋅ 3 ⋅ 3 = 243: the base 3 appears 5 times in the multiplication, because the exponent is 5.
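The repeated-multiplication reading of b^n, and the law b^n × b^m = b^(n+m) used above, can be sketched as follows (the helper name is my own):

```python
def power(base: int, exponent: int) -> int:
    """Compute base**exponent by repeated multiplication (exponent >= 0)."""
    result = 1
    for _ in range(exponent):
        result *= base          # one copy of the base per unit of the exponent
    return result

assert power(3, 5) == 243                            # 3*3*3*3*3
assert power(10, 2) * power(10, 3) == power(10, 5)   # the law of exponents
assert power(7, 0) == 1                              # the empty product is 1
```

The last assertion reflects the convention b^0 = 1: multiplying together zero copies of the base leaves the multiplicative identity.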