Computer programming is the process of designing and building an executable computer program for accomplishing a specific computing task. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and implementing algorithms in a chosen programming language. The source code of a program is written in one or more languages that are intelligible to programmers, rather than in machine code, which is directly executed by the central processing unit. The purpose of programming is to find a sequence of instructions that will automate the performance of a task on a computer in order to solve a given problem. The process of programming thus requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic. Tasks accompanying and related to programming include testing, source code maintenance, implementation of build systems, and management of derived artifacts, such as the machine code of computer programs.
These might be considered part of the programming process, but the term software development is used for this larger process, with the terms programming, implementation, or coding reserved for the actual writing of code. Software engineering combines engineering techniques with software development practices. Reverse engineering is the opposite process. A hacker is any skilled computer expert who uses their technical knowledge to overcome a problem, though in common language the term often means a security hacker. Programmable devices have existed at least as far back as 1206 AD, when the automata of Al-Jazari were programmable, via pegs and cams, to play various rhythms and drum patterns. However, the first computer program is dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. Women would continue to dominate the field of computer programming until the mid-1960s. In the 1880s Herman Hollerith invented the concept of storing data in machine-readable form.
A control panel added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s unit record equipment such as the IBM 602 and IBM 604 were programmed by control panels in a similar way. However, with the concept of the stored-program computer, introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory. Machine code was the language of early programs, written in the instruction set of the particular machine in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format, with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets have different assembly languages. Kathleen Booth created one of the first assembly languages in 1950 for various computers at Birkbeck College. High-level languages allow the programmer to write programs in terms that are syntactically richer and more capable of abstracting the code, making it targetable to varying machine instruction sets via compilation declarations and heuristics.
The first compiler for a programming language was developed by Grace Hopper. When Hopper went to work on UNIVAC in 1949, she brought the idea of using compilers with her. Compilers harness the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula using infix notation, for example. FORTRAN, the first widely used high-level language to have a functional implementation, which permitted the abstraction of reusable blocks of code, came out in 1957. In 1951 Frances E. Holberton developed the first sort-merge generator, which ran on the UNIVAC I. Another woman working at UNIVAC, Adele Mildred Koss, developed a program that was a precursor to report generators. In the USSR, Kateryna Yushchenko developed the Address programming language for the MESM in 1955. The idea for the creation of COBOL started in 1959 when Mary K. Hawes, who worked for Burroughs Corporation, set up a meeting to discuss creating a common business language; she invited six people, including Grace Hopper.
Hopper was involved in developing COBOL as a business language and creating "self-documenting" programming. Hopper's contribution to COBOL was based on her programming language, FLOW-MATIC. In 1961, Jean E. Sammet developed FORMAC; she later published Programming Languages: History and Fundamentals, which went on to be a standard work on programming languages. Programs were still entered using punched cards or paper tape (see computer programming in the punch card era). By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Frances Holberton created a code to allow keyboard inputs while she worked at UNIVAC. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. Sister Mary Kenneth Keller worked on developing the programming language BASIC while she was a graduate student at Dartmouth in the 1960s. One of the first object-oriented programming languages, Smalltalk, was developed by seven programmers, including Adele Goldberg, in the 1970s.
In 1985, Radia Perlman developed the Spanning Tree Protocol, which is fundamental to the operation of network bridges.
MOS Technology 6502
The MOS Technology 6502 is an 8-bit microprocessor designed by a small team led by Chuck Peddle for MOS Technology. When it was introduced in 1975, the 6502 was, by a considerable margin, the least expensive microprocessor on the market. It sold for less than one-sixth the cost of competing designs from larger companies such as Motorola and Intel, and it caused rapid decreases in pricing across the entire processor market. Along with the Zilog Z80, it sparked a series of projects that resulted in the home computer revolution of the early 1980s. Popular home video game consoles and computers, such as the Atari 2600, Atari 8-bit family, Apple II, Nintendo Entertainment System, Commodore 64, Atari Lynx, BBC Micro and others, used the 6502 or variations of the basic design. Soon after the 6502's introduction, MOS Technology was purchased outright by Commodore International, who continued to sell the microprocessor and licenses to other manufacturers. In the early days of the 6502, it was second-sourced by Rockwell and Synertek, and later licensed to other companies.
In its CMOS form, developed by the Western Design Center, the 6502 family continues to be used in embedded systems, with estimated production volumes in the hundreds of millions. The 6502 was designed by many of the same engineers that had designed the Motorola 6800 microprocessor family. Motorola started the 6800 microprocessor project in 1971 with Tom Bennett as the main architect. The chip layout began in late 1972, the first 6800 chips were fabricated in February 1974, and the full family was released in November 1974. John Buchanan was the designer of the 6800 chip, and Rod Orgill, who later did the 6501, assisted Buchanan with circuit analyses and chip layout. Bill Mensch joined Motorola in June 1971 after graduating from the University of Arizona; his first assignment was helping define the peripheral ICs for the 6800 family, and he was the principal designer of the 6820 Peripheral Interface Adapter. Motorola's engineers could run digital simulations on an IBM 370/165 mainframe computer. Bennett hired Chuck Peddle in 1973 to do architectural support work on the 6800 family products in progress.
He contributed in many areas, including the design of the 6850 ACIA. Motorola's target customers were established electronics companies such as Hewlett-Packard, Tektronix, TRW, and Chrysler. In May 1972, Motorola's engineers began visiting select customers and sharing the details of their proposed 8-bit microprocessor system with ROM, RAM, parallel and serial interfaces. In early 1974, they provided engineering samples of the chips so that customers could prototype their designs. Motorola's "total product family" strategy did not focus on the price of the microprocessor but on reducing the customer's total design cost: they offered development software on a timeshare computer, the "EXORciser" debugging system, onsite training, and field application engineer support. Both Intel and Motorola had announced a $360 price for a single microprocessor; the actual price for production quantities was much less. Motorola offered a design kit containing the 6800 with six support chips for $300. Peddle, who would accompany the salespeople on customer visits, found that customers were put off by the high cost of the microprocessor chips.
To lower the price, the IC chip size would have to shrink so that more chips could be produced on each silicon wafer. This could be done by removing inessential features of the 6800 and using a newer fabrication technology, "depletion-mode" MOS transistors. Peddle and other team members started outlining the design of an improved-feature, reduced-size microprocessor. At that time, Motorola's new semiconductor fabrication facility in Austin, Texas was having difficulty producing MOS chips, and mid-1974 was the beginning of a year-long recession in the semiconductor industry. Many of the Mesa, Arizona employees were displeased with the upcoming relocation to Austin. Motorola Semiconductor Products Division's management was overwhelmed with problems and showed no interest in Peddle's low-cost microprocessor proposal. Chuck Peddle was frustrated with Motorola's management for missing this new opportunity. In a November 1975 interview, Motorola's chairman, Robert Galvin, agreed, saying, "We did not choose the right leaders in the Semiconductor Products division."
The division was reorganized and the management replaced. New group vice-president John Welty said, "The semiconductor sales organization lost its sensitivity to customer needs and couldn't make speedy decisions." Peddle began looking for a source of funding for this new project and found a small semiconductor company in Pennsylvania. In August 1974, Chuck Peddle, Bill Mensch, Rod Orgill, Harry Bawcom, Ray Hirt, Terry Holdt and Wil Mathys left Motorola to join MOS Technology. Of the seventeen chip designers and layout people on the 6800 team, seven left. There were also 30 to 40 application engineers and system engineers on the 6800 team. That December, Gary Daniels transferred into the 6800 microprocessor group. Tom Bennett did not want to leave the Phoenix area, so Daniels took over the microprocessor development in Austin; his first project was a "depletion-mode" version of the 6800. The faster parts were available in July 1976; this was followed by the 6802, which added 128 bytes of RAM and an on-chip clock oscillator circuit.
MOS Technology was formed in 1969 by three executives from General Instrument (Mort Jaffe, Don McLaughlin, and John Pavinen) to produce metal-oxide-semiconductor integrated circuits. Allen-Br
A binary code represents text, computer processor instructions, or any other data using a two-symbol system. The two-symbol system used is "0" and "1" from the binary number system. The binary code assigns a pattern of binary digits, known as bits, to each character, instruction, etc. For example, a binary string of eight bits can represent any of 256 possible values and can therefore represent a wide variety of different items. In computing and telecommunications, binary codes are used for various methods of encoding data, such as character strings, into bit strings; those methods may use fixed-width or variable-width strings. In a fixed-width binary code, each letter, digit, or other character is represented by a bit string of the same length. There are many character sets and many character encodings for them. A bit string, interpreted as a binary number, can be translated into a decimal number. For example, the lowercase a, if represented by the bit string 01100001 (as it is in the standard ASCII code), can also be represented as the decimal number 97. The modern binary number system, the basis for binary code, was invented by Gottfried Leibniz in 1689 and appears in his article Explication de l'Arithmétique Binaire.
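To make the bit-string-to-decimal relationship above concrete, here is a minimal C++ sketch (an illustration added for clarity, not part of the historical account) that prints the eight-bit pattern for the lowercase letter "a" and its decimal value:

    #include <bitset>
    #include <iostream>

    int main() {
        char c = 'a';  // encoded as 01100001 in ASCII
        std::bitset<8> bits(static_cast<unsigned char>(c));
        std::cout << bits << " = " << static_cast<int>(c) << '\n';  // prints "01100001 = 97"
        return 0;
    }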
The full title is translated into English as the "Explanation of the binary arithmetic, which uses only the characters 1 and 0, with some remarks on its usefulness, and on the light it throws on the ancient Chinese figures of Fu Xi". Leibniz's system uses 0 and 1, like the modern binary numeral system. Leibniz encountered the I Ching through French Jesuit Joachim Bouvet, noted with fascination how its hexagrams correspond to the binary numbers from 0 to 111111, and concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired. Leibniz saw the hexagrams as an affirmation of the universality of his own religious belief. Binary numerals were central to Leibniz's theology; he believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo, or creation out of nothing. Leibniz was trying to find a system that converts logic's verbal statements into a pure mathematical one. After his ideas were ignored, he came across a classic Chinese text called I Ching, or "Book of Changes", which used a type of binary code.
The book had confirmed his theory that life could be simplified or reduced down to a series of straightforward propositions. He created a system consisting of rows of zeros and ones. During this time period, Leibniz had not yet found a use for this system. Binary systems predating Leibniz existed in the ancient world; the aforementioned I Ching that Leibniz encountered dates from the 9th century BC in China. The binary system of the I Ching, a text for divination, is based on the duality of yin and yang. Slit drums with binary tones are used to encode messages across Africa and Asia. The Indian scholar Pingala developed a binary system for describing prosody in his Chandashutram. The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450. In the 11th century, the scholar and philosopher Shao Yong developed a method for arranging the hexagrams which corresponds, albeit unintentionally, to the sequence 0 to 63, as represented in binary, with yin as 0, yang as 1 and the least significant bit on top.
The ordering is also the lexicographical order on sextuples of elements chosen from a two-element set. In 1605 Francis Bacon discussed a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text. Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of a twofold difference only". George Boole published a paper in 1847 called "The Mathematical Analysis of Logic" that describes an algebraic system of logic, now known as Boolean algebra. Boole's system was based on binary, a yes-no, on-off approach that consisted of the three most basic operations: AND, OR, and NOT. This system was not put into use until a graduate student from the Massachusetts Institute of Technology, Claude Shannon, noticed that the Boolean algebra he had learned was similar to an electric circuit. Shannon wrote his thesis in 1937. Shannon's thesis became a starting point for the use of binary code in practical applications such as computers, electric circuits, and more.
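As a small illustrative sketch (added here for clarity rather than drawn from the historical account), the following C++ fragment enumerates the truth table of Boole's three basic operations over the two binary values:

    #include <cstdio>

    int main() {
        // Truth table for the three basic Boolean operations over 0 and 1.
        for (int a = 0; a <= 1; ++a) {
            for (int b = 0; b <= 1; ++b) {
                std::printf("a=%d b=%d  AND=%d  OR=%d  NOT a=%d\n",
                            a, b, a & b, a | b, !a);
            }
        }
        return 0;
    }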
The bit string is not the only type of binary code. A binary system, in general, is any system that allows only two choices, such as a switch in an electronic system or a simple true or false test. Braille is a type of binary code used by the blind to read and write by touch, named for its creator, Louis Braille. This system consists of grids of six dots each, three per column, in which each dot has two states: raised or not raised. The different combinations of raised and flattened dots are capable of representing all letters and punctuation signs. The bagua are diagrams used in feng shui, Taoist cosmology and I Ching studies. The ba gua consists of 8 trigrams; the same word is used for the 64 guà. Each figure combines three lines that are either broken or unbroken. The relationships between the trigrams are represented in two arrangements: the primordial, "Earlier Heaven" or "Fuxi" bagua, and the manifested, "Later Heaven" or "King Wen" bagua. (See the King Wen sequence.)
C++ is a general-purpose programming language developed by Bjarne Stroustrup as an extension of the C language, or "C with Classes". It has imperative, object-oriented and generic programming features, while also providing facilities for low-level memory manipulation. It is almost always implemented as a compiled language, and many vendors provide C++ compilers, including the Free Software Foundation, Intel, and IBM, so it is available on many platforms. C++ was designed with a bias toward system programming and embedded, resource-constrained software and large systems, with performance and flexibility of use as its design highlights. C++ has also been found useful in many other contexts, with key strengths being software infrastructure and resource-constrained applications, including desktop applications and performance-critical applications. C++ is standardized by the International Organization for Standardization, with the latest standard version ratified and published by ISO in December 2017 as ISO/IEC 14882:2017, informally known as C++17.
The C++ programming language was initially standardized in 1998 as ISO/IEC 14882:1998, which was then amended by the C++03, C++11 and C++14 standards. The current C++17 standard supersedes these with an enlarged standard library. Before the initial standardization in 1998, C++ had been developed by Danish computer scientist Bjarne Stroustrup at Bell Labs since 1979 as an extension of the C language. C++20 is the next planned standard, in keeping with the current trend of a new version every three years. In 1979, Stroustrup began work on "C with Classes", the predecessor to C++. The motivation for creating a new language originated from his experience in programming for his Ph.D. thesis. Stroustrup found that Simula had features that were helpful for large software development, but the language was too slow for practical use, while BCPL was fast but too low-level to be suitable for large software development. When Stroustrup started working at AT&T Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing.
Remembering his Ph.D. experience, Stroustrup set out to enhance the C language with Simula-like features. C was chosen because it was general-purpose, fast, and widely used. As well as C and Simula's influences, other languages also influenced C++, including ALGOL 68, Ada, CLU and ML. Stroustrup's "C with Classes" added features to the C compiler, including classes, derived classes, strong typing and default arguments. In 1983, "C with Classes" was renamed to "C++", adding new features that included virtual functions, function name and operator overloading, constants, type-safe free-store memory allocation, improved type checking, and BCPL-style single-line comments with two forward slashes. Furthermore, it included the development of a standalone compiler for C++, Cfront. In 1985, the first edition of The C++ Programming Language was released, which became the definitive reference for the language, as there was not yet an official standard. The first commercial implementation of C++ was released in October of the same year.
In 1989, C++ 2.0 was released, followed by the updated second edition of The C++ Programming Language in 1991. New features in 2.0 included multiple inheritance, abstract classes, static member functions, const member functions, and protected members. In 1990, The Annotated C++ Reference Manual was published; this work became the basis for the future standard. Later feature additions included templates, namespaces, new casts, and a boolean type. After the 2.0 update, C++ evolved slowly until, in 2011, the C++11 standard was released, adding numerous new features, enlarging the standard library further, and providing more facilities to C++ programmers. After a minor C++14 update released in December 2014, various new additions were introduced in C++17, with further changes planned for 2020. As of 2017, C++ remains the third most popular programming language, behind Java and C. On January 3, 2018, Stroustrup was announced as the 2018 winner of the Charles Stark Draper Prize for Engineering, "for conceptualizing and developing the C++ programming language".
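The following short sketch, with hypothetical names chosen purely for illustration, shows several of the features mentioned above working together: classes with a virtual function, a default argument, operator overloading, a boolean-producing comparison via a function template, and BCPL-style single-line comments. It is a minimal example, not code from any standard or reference:

    #include <algorithm>
    #include <iostream>

    // A small class hierarchy using a virtual function.
    class Shape {
    public:
        virtual double area() const { return 0.0; }  // virtual function
        virtual ~Shape() = default;
    };

    class Rectangle : public Shape {
    public:
        // Default argument: a single value builds a square.
        Rectangle(double w, double h = 0.0) : w_(w), h_(h == 0.0 ? w : h) {}
        double area() const override { return w_ * h_; }
        // Operator overloading: place two rectangles side by side.
        Rectangle operator+(const Rectangle& other) const {
            return Rectangle(w_ + other.w_, std::max(h_, other.h_));
        }
    private:
        double w_, h_;
    };

    // A function template, one of the later feature additions.
    template <typename T>
    T twice(const T& x) { return x + x; }

    int main() {
        Rectangle square(2.0);                      // uses the default argument
        Rectangle strip(3.0, 1.0);
        const Shape* p = &strip;
        std::cout << p->area() << '\n';             // virtual dispatch: prints 3
        std::cout << twice(square).area() << '\n';  // operator+ via the template: prints 8
        return 0;
    }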
According to Stroustrup, "the name signifies the evolutionary nature of the changes from C". This name is credited to Rick Mascitti and was first used in December 1983. When Mascitti was questioned informally in 1992 about the naming, he indicated that it was given in a tongue-in-cheek spirit. The name comes from C's ++ operator and a common naming convention of using "+" to indicate an enhanced computer program. During C++'s development period, the language had been referred to as "new C" and "C with Classes" before acquiring its final name. Throughout C++'s life, its development and evolution has been guided by a set of principles: it must be driven by actual problems, and its features should be useful in real-world programs; every feature should be implementable; programmers should be free to pick their own programming style, and that style should be supported by C++; allowing a useful feature is more important than preventing every possible misuse of C++; it should provide facilities for organising programs into separate, well-defined parts, and provide facilities for combining separately developed parts.
No implicit violations of the type system (but allow explicit violations, that is, those explicitly requested by the programmer).
A microprocessor is a computer processor that incorporates the functions of a central processing unit on a single integrated circuit, or at most a few integrated circuits. The microprocessor is a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output. Microprocessors contain sequential digital logic. Microprocessors operate on symbols represented in the binary number system. The integration of a whole CPU onto a single chip or a few integrated circuits reduced the cost of processing power. Integrated circuit processors are produced in large numbers by automated processes, resulting in a low unit price. Single-chip processors increase reliability because there are many fewer electrical connections that could fail. As microprocessor designs improve, the cost of manufacturing a chip stays the same, according to Rock's law. Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits.
Microprocessors combined this into a few large-scale ICs. Continued increases in microprocessor capacity have since rendered other forms of computers almost completely obsolete, with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers. The complexity of an integrated circuit is bounded by physical limitations on the number of transistors that can be put onto one chip, the number of package terminations that can connect the processor to other parts of the system, the number of interconnections it is possible to make on the chip, and the heat that the chip can dissipate. Advancing technology makes more powerful chips feasible to manufacture. A minimal hypothetical microprocessor might include only an arithmetic logic unit (ALU) and a control logic section. The ALU performs addition and logical operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation.
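The following is a rough, hypothetical sketch (not modeled on any particular chip) of the arrangement just described: a tiny 8-bit ALU that performs addition, AND and OR, and sets flags in a status register after each operation. The names Status and alu are invented for this example:

    #include <cstdint>
    #include <cstdio>

    // Hypothetical status flags updated by every ALU operation.
    struct Status {
        bool zero;   // result was zero
        bool carry;  // addition carried out of the top bit
    };

    // A minimal 8-bit ALU supporting ADD ('+'), AND ('&') and OR ('|').
    uint8_t alu(char op, uint8_t a, uint8_t b, Status& st) {
        uint16_t wide = 0;
        switch (op) {
            case '+': wide = static_cast<uint16_t>(a) + b; break;
            case '&': wide = a & b; break;
            case '|': wide = a | b; break;
        }
        uint8_t result = static_cast<uint8_t>(wide);
        st.carry = wide > 0xFF;
        st.zero = (result == 0);
        return result;
    }

    int main() {
        Status st{};
        uint8_t r = alu('+', 200, 100, st);  // 300 wraps to 44, carry flag set
        std::printf("result=%u zero=%d carry=%d\n",
                    static_cast<unsigned>(r), st.zero, st.carry);
        return 0;
    }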
The control logic retrieves instruction codes from memory and initiates the sequence of operations required for the ALU to carry out the instruction. A single operation code might affect many individual data paths and other elements of the processor. As integrated circuit technology advanced, it became feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger. Additional features were added to the processor architecture. Floating-point arithmetic, for example, was not available on 8-bit microprocessors, but had to be carried out in software. Integration of the floating-point unit, first as a separate integrated circuit and later as part of the same microprocessor chip, sped up floating-point calculations. Physical limitations of integrated circuits made such practices as a bit-slice approach necessary. Instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, for example, 32-bit words using integrated circuits with a capacity for only four bits each.
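As a hedged illustration of the bit-slice idea (an explanatory sketch, not a description of any actual bit-slice processor; the function names are invented for this example), the following C++ fragment builds a 32-bit addition out of eight 4-bit slices, passing the carry from one slice to the next:

    #include <cstdint>
    #include <cstdio>

    // Add two 4-bit slices plus an incoming carry; report the outgoing carry.
    uint8_t add_slice(uint8_t a, uint8_t b, uint8_t carry_in, uint8_t& carry_out) {
        uint8_t sum = static_cast<uint8_t>((a & 0x0F) + (b & 0x0F) + (carry_in & 1));
        carry_out = (sum > 0x0F) ? 1 : 0;  // carry out of the 4-bit slice
        return sum & 0x0F;
    }

    // Add two 32-bit words using eight 4-bit slices in sequence.
    uint32_t add32_bitslice(uint32_t a, uint32_t b) {
        uint32_t result = 0;
        uint8_t carry = 0;
        for (int slice = 0; slice < 8; ++slice) {
            uint8_t next_carry = 0;
            uint8_t nibble = add_slice((a >> (4 * slice)) & 0x0F,
                                       (b >> (4 * slice)) & 0x0F,
                                       carry, next_carry);
            result |= static_cast<uint32_t>(nibble) << (4 * slice);
            carry = next_carry;
        }
        return result;
    }

    int main() {
        // prints 1111111110
        std::printf("%u\n", static_cast<unsigned>(add32_bitslice(123456789u, 987654321u)));
        return 0;
    }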
The ability to put large numbers of transistors on one chip makes it feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory and increases the processing speed of the system for many applications. Processor clock frequency has increased more rapidly than external memory speed, so cache memory is necessary if the processor is not to be delayed by slower external memory. A microprocessor is a general-purpose entity. Several specialized processing devices have followed: a digital signal processor is specialized for signal processing; graphics processing units are processors designed for realtime rendering of images; other specialized units exist for video processing and machine vision. Microcontrollers integrate a microprocessor with peripheral devices in embedded systems. Systems on chip integrate one or more microprocessor or microcontroller cores. Microprocessors can be selected for differing applications based on their word size, which is a measure of their complexity.
Longer word sizes allow each clock cycle of a processor to carry out more computation, but correspond to physically larger integrated circuit dies with higher standby and operating power consumption. 4-, 8- or 12-bit processors are integrated into microcontrollers operating embedded systems. Where a system is expected to handle larger volumes of data or require a more flexible user interface, 16-, 32- or 64-bit processors are used. An 8- or 16-bit processor may be selected over a 32-bit processor for system-on-a-chip or microcontroller applications that require low-power electronics, or are part of a mixed-signal integrated circuit with noise-sensitive on-chip analog electronics such as high-resolution analog-to-digital converters, or both. Running 32-bit arithmetic on an 8-bit chip could end up using more power, as the chip must execute software with multiple instructions. Thousands of items that were traditionally not computer-related inc
In computing, memory refers to the computer hardware integrated circuits that store information for immediate use in a computer. Computer memory operates at a high speed (for example, random-access memory), in contrast to storage, which provides slower-to-access information but offers higher capacities. If needed, contents of the computer memory can be transferred to secondary storage. An archaic synonym for memory is store. The term "memory", meaning "primary storage" or "main memory", is associated with addressable semiconductor memory, i.e. integrated circuits consisting of silicon-based transistors, used for example as primary storage but also for other purposes in computers and other digital electronic devices. There are two main kinds of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are ROM, PROM, EPROM and EEPROM memory. Examples of volatile memory are primary storage, which is typically dynamic random-access memory (DRAM), and fast CPU cache memory, which is typically static random-access memory (SRAM), fast but energy-consuming and offering lower memory areal density than DRAM.
Most semiconductor memory is organized into memory cells or bistable flip-flops, each storing one bit. Flash memory organization includes multiple bits per cell. The memory cells are grouped into words of fixed word length, for example 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. This implies that processor registers are not considered as memory, since they only store one word and do not include an addressing mechanism. Typical secondary storage devices are solid-state drives. In the early 1940s, memory technology permitted a capacity of a few bytes. The first electronic programmable digital computer, the ENIAC, using thousands of octal-base radio vacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits which were held in the vacuum tube accumulators. The next significant advance in computer memory came with acoustic delay line memory, developed by J. Presper Eckert in the early 1940s.
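As a small, purely illustrative sketch of the relationship noted above between address width and capacity, the following C++ fragment prints how many words an N-bit address can select:

    #include <cstdio>

    int main() {
        // An N-bit binary address can select one of 2^N words.
        for (int n : {8, 16, 32}) {
            std::printf("%2d-bit address -> %llu addressable words\n", n, 1ULL << n);
        }
        return 0;
    }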
Through the construction of a glass tube filled with mercury and plugged at each end with a quartz crystal, delay lines could store bits of information in the form of sound waves propagating through the mercury, with the quartz crystals acting as transducers to read and write bits. Delay line memory would be limited to a capacity of up to a few hundred thousand bits to remain efficient. Two alternatives to the delay line, the Williams tube and Selectron tube, originated in 1946, both using electron beams in glass tubes as means of storage. Using cathode ray tubes, Fred Williams invented the Williams tube, which would be the first random-access computer memory. The Williams tube would prove less expensive, but it would also prove to be frustratingly sensitive to environmental disturbances. Efforts began in the late 1940s to find non-volatile memory. Jay Forrester, Jan A. Rajchman and An Wang developed magnetic-core memory, which allowed for recall of memory after power loss. Magnetic-core memory would become the dominant form of memory until the development of transistor-based memory in the late 1960s.
Developments in technology and economies of scale have made possible so-called Very Large Memory computers. The term "memory", when used with reference to computers, most often refers to random-access memory. Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). SRAM retains its contents as long as the power is connected and is easy for interfacing, but uses six transistors per bit. Dynamic RAM is more complicated for interfacing and control, needing regular refresh cycles to prevent losing its contents, but uses only one transistor and one capacitor per bit, allowing it to reach much higher densities and much cheaper per-bit costs. SRAM is not worthwhile for desktop system memory, where DRAM dominates, but is used for their cache memories. SRAM is commonplace in small embedded systems. Forthcoming volatile memory technologies that aim to replace or compete with SRAM and DRAM include Z-RAM and A-RAM. Non-volatile memory is computer memory that can retain the stored information even when not powered.
Examples of non-volatile memory include read-only memory, flash memory, most types of magnetic computer storage devices, optical discs, and early computer storage methods such as paper tape and punched cards. Forthcoming non-volatile memory technologies include FERAM, CBRAM, PRAM, STT-RAM, SONOS, RRAM, racetrack memory, NRAM, 3D XPoint, and millipede memory. A third category of memory is "semi-volatile"; the term is used to describe a memory which has some limited non-volatile duration after power is removed, but then data is ultimately lost. A typical goal when using a semi-volatile memory is to provide the high performance and durability associated with volatile memories, while providing some benefits of a true non-volatile memory. For example, some non-volatile memory types can wear out, where a "worn" cell has increased volatility but otherwise continues to work. Data locations which are written