A microcontroller is a small computer on a single integrated circuit. In modern terminology, it is less sophisticated than a system on a chip. A microcontroller contains one or more CPUs along with memory and programmable input/output peripherals. Program memory in the form of ferroelectric RAM, NOR flash or OTP ROM is often included on chip, as well as a small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general-purpose applications consisting of various discrete chips. Microcontrollers are used in automatically controlled products and devices, such as automobile engine control systems, implantable medical devices, remote controls, office machines, power tools and other embedded systems. By reducing the size and cost compared to a design that uses a separate microprocessor and input/output devices, microcontrollers make it economical to digitally control more devices and processes. Mixed-signal microcontrollers are common, integrating the analog components needed to control non-digital electronic systems.
In the context of the internet of things, microcontrollers are an economical and popular means of collecting data and actuating the physical world as edge devices. Some microcontrollers may use four-bit words and operate at frequencies as low as 4 kHz for low power consumption; they can retain functionality while waiting for an event such as a button press or other interrupt. Other microcontrollers may serve performance-critical roles, where they may need to act more like a digital signal processor, with higher clock speeds and power consumption. The first microprocessor is claimed to be the 4-bit Intel 4004, released in 1971. It was followed by the 4-bit Intel 4040, the 8-bit Intel 8008 and the 8-bit Intel 8080. All of these processors required several external chips to implement a working system, including memory and peripheral interface chips; as a result, the total system cost was several hundred dollars, making it impossible to economically computerize small appliances. MOS Technology introduced sub-$100 microprocessors, the 6501 and 6502, with the chief aim of addressing this economic obstacle, but these microprocessors still required external support and peripheral chips, which kept the total system cost in the hundreds of dollars.
One book credits TI engineers Gary Boone and Michael Cochran with the successful creation of the first microcontroller in 1971. The result of their work was the TMS 1000, which became commercially available in 1974. It combined read-only memory, read/write memory, processor and clock on one chip and was targeted at embedded systems. In response to the existence of the single-chip TMS 1000, Intel developed a computer system on a chip optimized for control applications, the Intel 8048, with commercial parts first shipping in 1977. It combined RAM and ROM on the same chip with a microprocessor. Among numerous applications, this chip would find its way into over one billion PC keyboards. At that time Intel's president, Luke J. Valenter, stated that the microcontroller was one of the most successful products in the company's history, and he expanded the microcontroller division's budget by over 25%. Most microcontrollers at this time had concurrent variants. One had EPROM program memory, with a transparent quartz window in the lid of the package to allow it to be erased by exposure to ultraviolet light.
These erasable chips were often used for prototyping. The other variant was either a mask-programmed ROM or a PROM variant, programmable only once. For the latter, the designation OTP was sometimes used, standing for "one-time programmable". In an OTP microcontroller, the PROM was of an identical type to the EPROM, but the chip package had no quartz window. Because the erasable versions required ceramic packages with quartz windows, they were more expensive than the OTP versions, which could be made in lower-cost opaque plastic packages. For the erasable variants, quartz was required, instead of less expensive glass, for its transparency to ultraviolet light—to which glass is opaque—but the main cost differentiator was the ceramic package itself. In 1993, the introduction of EEPROM memory allowed microcontrollers to be electrically erased without the expensive package required for EPROM, allowing both rapid prototyping and in-system programming. The same year, Atmel introduced the first microcontroller using Flash memory, a special type of EEPROM.
Other companies followed suit, with both memory types. Nowadays microcontrollers are cheap and readily available for hobbyists, with large online communities around certain processors. On 21 June 2018 the "world's smallest computer" was announced by the University of Michigan. The device is a "0.04 mm3 16 nW wireless and batteryless sensor system with integrated Cortex-M0+ processor and optical communication for cellular temperature measurement." It "measures just 0.3 mm to a side—dwarfed by a grain of rice." In addition to the RAM and photovoltaics, the new computing device includes a processor and wireless transmitters and receivers.
Random-access memory is a form of computer data storage that stores data and machine code currently being used. A random-access memory device allows data items to be read or written in the same amount of time irrespective of the physical location of the data inside the memory. In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older magnetic tapes and drum memory, the time required to read and write data items varies depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement. RAM contains multiplexing and demultiplexing circuitry to connect the data lines to the addressed storage for reading or writing the entry. Often more than one bit of storage is accessed by the same address; RAM devices therefore often have multiple data lines and are said to be "8-bit" or "16-bit", etc., devices. In today's technology, random-access memory takes the form of integrated circuits. RAM is normally associated with volatile types of memory, where stored information is lost if power is removed, although non-volatile RAM has also been developed.
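The defining property above, access time independent of address, can be contrasted with a sequential medium in a few lines. The sketch below is purely illustrative: it models RAM as an indexed array (one lookup per access) and a tape-like medium as something that must be stepped through cell by cell.

```python
# Toy contrast between random access and sequential access (illustrative only).

def ram_read(mem: bytearray, addr: int) -> int:
    # One indexed lookup: cost is independent of addr.
    return mem[addr]

def tape_read(tape: list, addr: int) -> tuple:
    # Sequential medium: must step past every earlier cell,
    # so cost grows with the target's physical position.
    steps = 0
    for i, value in enumerate(tape):
        steps += 1
        if i == addr:
            return value, steps
    raise IndexError(addr)

mem = bytearray([7, 13, 42, 99])
print(ram_read(mem, 3))          # one lookup regardless of address
print(tape_read(list(mem), 3))   # four steps needed to reach cell 3
```

Real access-time differences on rotating media come from mechanical seek and rotation delays, as the paragraph notes; the step counter here only stands in for that growing cost.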
Other types of non-volatile memories exist that allow random access for read operations, but either do not allow write operations or have other kinds of limitations on them. These include most types of ROM and a type of flash memory called NOR-Flash. Integrated-circuit RAM chips came onto the market in the early 1970s, with the first commercially available DRAM chip, the Intel 1103, introduced in October 1970. Early computers used relays, mechanical counters or delay lines for main memory functions. Ultrasonic delay lines could only reproduce data in the order in which it was written. Drum memory could be expanded at low cost, but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of vacuum tube triodes, and later out of discrete transistors, were used for smaller and faster memories such as registers. Such registers were relatively large and too costly to use for large amounts of data. The first practical form of random-access memory was the Williams tube, starting in 1947.
It stored data as electrically charged spots on the face of a cathode ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller and more power-efficient than using individual vacuum tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored program was implemented in the Manchester Baby computer, which first ran a program on 21 June 1948. In fact, rather than the Williams tube memory being designed for the Baby, the Baby was a testbed to demonstrate the reliability of the memory. Magnetic-core memory was developed up until the mid-1970s and became a widespread form of random-access memory. By changing the sense of each ring's magnetization, data could be stored, with one bit per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible.
Magnetic core memory was the standard form of computer memory until displaced by solid-state memory in integrated circuits, starting in the early 1970s. Dynamic random-access memory (DRAM) allowed replacement of a 4- or 6-transistor latch circuit by a single transistor for each memory bit, increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor and had to be periodically refreshed every few milliseconds before the charge could leak away. The Toshiba Toscal BC-1411 electronic calculator, introduced in 1965, used a form of DRAM built from discrete components. DRAM was developed by Robert H. Dennard in 1968. Prior to the development of integrated read-only memory circuits, permanent random-access memory was often constructed using diode matrices driven by address decoders, or specially wound core rope memory planes. The two widely used forms of modern RAM are static RAM (SRAM) and dynamic RAM. In SRAM, a bit of data is stored using the state of a six-transistor memory cell.
This form of RAM is more expensive to produce, but is generally faster and requires less dynamic power than DRAM. In modern computers, SRAM is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together make up a DRAM cell. The capacitor holds a high or low charge, and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers. Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writeable variants of ROM share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment. These persistent forms of semiconductor ROM include USB flash drives, memory cards for cameras and portable devices, and solid-state drives.
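The refresh requirement described above can be illustrated with a toy model of a single DRAM cell: the stored charge leaks a little each time step, and a refresh reads the (weakened) value and rewrites it at full strength. The charge levels, leak rate and read threshold here are arbitrary illustrative numbers, not device physics.

```python
# Toy model of a DRAM cell: charge leaks over time, so stored bits
# must be refreshed periodically. All constants are illustrative.

class DramCell:
    FULL = 1.0
    THRESHOLD = 0.5   # below this, a stored 1 reads back as 0
    LEAK = 0.1        # charge lost per time step

    def __init__(self):
        self.charge = 0.0

    def write(self, bit: int):
        self.charge = self.FULL if bit else 0.0

    def read(self) -> int:
        return 1 if self.charge > self.THRESHOLD else 0

    def tick(self):
        # One unit of time passes; some charge leaks away.
        self.charge = max(0.0, self.charge - self.LEAK)

    def refresh(self):
        # Read the possibly weakened value and rewrite it at full strength.
        self.write(self.read())

cell = DramCell()
cell.write(1)
for _ in range(4):
    cell.tick()
cell.refresh()            # refreshed in time: the bit survives
assert cell.read() == 1
for _ in range(6):        # without refresh, charge decays past threshold
    cell.tick()
assert cell.read() == 0
```

A real DRAM controller performs this refresh row by row, every few milliseconds, which is exactly the periodic overhead the paragraph attributes to dynamic RAM.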
ECC memory includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error correction codes.
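As a sketch of the error-correction idea behind ECC memory, the example below implements a Hamming(7,4) code: four data bits are protected by three parity bits, and the parity-check syndrome locates any single flipped bit so it can be corrected. Real ECC DIMMs use wider SECDED codes (e.g. 64 data bits plus 8 check bits), so this is the principle, not the production scheme.

```python
# Hamming(7,4): single-error-correcting code in the spirit of ECC memory.

def encode(nibble: int) -> list:
    d = [(nibble >> i) & 1 for i in range(4)]          # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]                            # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                            # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                            # covers positions 4,5,6,7
    # Codeword bit order (positions 1..7): p1 p2 d0 p3 d1 d2 d3
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(code: list) -> int:
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)     # position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1                  # correct the single-bit fault
    d = [c[2], c[4], c[5], c[6]]
    return sum(bit << i for i, bit in enumerate(d))

word = encode(0b1011)
word[4] ^= 1                 # inject a single-bit fault
assert decode(word) == 0b1011
```

The syndrome is simply the binary position of the erroneous bit, which is why the parity bits sit at the power-of-two positions of the codeword.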
The Motorola 68000 is a 16/32-bit CISC microprocessor, designed and marketed by Motorola Semiconductor Products Sector, which implements a 32-bit instruction set, with 32-bit registers and a 32-bit internal data bus, but with a 16-bit data ALU, two 16-bit arithmetic ALUs and a 16-bit external data bus. Introduced in 1979 with HMOS technology as the first member of the successful 32-bit Motorola 68000 series, it is software forward-compatible with the rest of the line despite being limited to a 16-bit wide external bus. After 39 years in production, the 68000 architecture is still in use. The 68000 grew out of the MACSS project, begun in 1976 to develop a new architecture without backward compatibility. It would be a higher-powered sibling complementing the existing 8-bit 6800 line rather than a compatible successor. In the end, the 68000 did retain a bus protocol compatibility mode for existing 6800 peripheral devices, and a version with an 8-bit data bus was produced. However, the designers focused on the future, or forward compatibility, which gave the 68000 design a head start against later 32-bit instruction set architectures.
For instance, the CPU registers are 32 bits wide, though few self-contained structures in the processor itself operate on 32 bits at a time. The MACSS team drew on the influence of minicomputer processor designs, such as the PDP-11 and VAX systems, which were microcode-based. In the mid-1970s, the 8-bit microprocessor manufacturers raced to introduce the 16-bit generation. National Semiconductor had been first with its IMP-16 and PACE processors in 1973–1975, but these had issues with speed. Intel had worked on their advanced 16/32-bit Intel iAPX 432 since 1975 and their Intel 8086 since 1976. Arriving late to the 16-bit arena afforded the new processor more transistors, 32-bit macroinstructions and acclaimed general ease of use. The original MC68000 was fabricated using an HMOS process with a 3.5 µm feature size. Formally introduced in September 1979, initial samples were released in February 1980, with production chips available over the counter in November. Initial speed grades were 4, 6 and 8 MHz. 10 MHz chips became available during 1981, and 12.5 MHz chips by June 1982.
The 16.67 MHz "12F" version of the MC68000, the fastest version of the original HMOS chip, was not produced until the late 1980s. The 68k instruction set was well suited to implementing Unix, and the 68000 and its successors became the dominant CPUs for Unix-based workstations, including Sun workstations and Apollo/Domain workstations. The 68000 was also used for mass-market computers such as the Apple Lisa, Macintosh and Atari ST, and in Microsoft Xenix systems, as well as an early NetWare Unix-based server. The 68000 was used in the first generation of desktop laser printers, including the original Apple LaserWriter and the HP LaserJet. In 1982, the 68000 received an update to its instruction set architecture allowing it to support virtual memory and to conform to the Popek and Goldberg virtualization requirements; the updated chip was called the 68010. A further extended version, which exposed 31 bits of the address bus, was produced in small quantities as the 68012. To support lower-cost systems and control applications with smaller memory sizes, Motorola introduced the 8-bit-compatible MC68008, also in 1982.
This was a 68000 with an 8-bit data bus and a smaller address bus. After 1982, Motorola devoted more attention to the 68020 and 88000 projects. Several other companies were second-source manufacturers of the HMOS 68000; these included Hitachi, who shrank the feature size to 2.7 µm for their 12.5 MHz version, as well as Rockwell, Thomson/SGS-Thomson and Toshiba. Toshiba was also a second-source maker of the CMOS 68HC000. Encrypted variants of the 68000, the Hitachi FD1089 and FD1094, store decryption keys for opcodes and opcode data in battery-backed memory and were used in certain Sega arcade systems, including System 16, to prevent piracy and illegal bootleg games. The 68HC000, the first CMOS version of the 68000, was designed by Hitachi and jointly introduced in 1985. Motorola's version was called the MC68HC000, while Hitachi's was the HD68HC000. The 68HC000 was offered at speeds of 8–20 MHz. Except for using CMOS circuitry, it behaved identically to the HMOS MC68000, but the change to CMOS greatly reduced its power consumption: the original HMOS MC68000 consumed around 1.35 watts at an ambient temperature of 25 °C, regardless of clock speed, while the MC68HC000 consumed only 0.13 watts at 8 MHz and 0.38 watts at 20 MHz.
Apple selected the 68HC000 for use in the Macintosh Portable. Motorola replaced the MC68008 with the MC68HC001 in 1990. This chip resembled the 68HC000 in most respects, but its data bus could operate in either 16-bit or 8-bit mode, depending on the value of an input pin at reset. Thus, like the 68008, it could be used in systems with cheaper 8-bit memories. The later evolution of the 68000 focused on more modern embedded control applications and on-chip peripherals. The 68EC000 chip and SCM68000 core expanded the address bus to 32 bits, removed the M6800 peripheral bus and excluded the MOVE from SR instruction from user-mode programs. In 1996, Motorola updated the standalone core with fully static circuitry, drawing only 2 µW in low-power mode.
The 68HC12 is a microcontroller family from Freescale Semiconductor. Introduced in the mid-1990s, the architecture is an enhancement of the Freescale 68HC11. Programs written for the HC11 are usually compatible with the HC12, which has a few extra instructions. The first 68HC12 derivatives had a maximum bus speed of 8 MHz and flash memory sizes up to 128 KB. Like the 68HC11, the 68HC12 has two 8-bit accumulators A and B, two 16-bit registers X and Y, a 16-bit program counter, a 16-bit stack pointer and an 8-bit condition code register. Unlike the 68HC11, the processor has 16-bit internal data paths. The 68HC12 adds to and replaces a small number of 68HC11 instructions with new forms that are closer to those of the 6809 processor. Moreover, it changes the instruction encodings to be far more dense and adds many 6809-like indexing features, some with more flexibility. The net result is that code sizes are about 30% smaller. Beginning in 2000, the family was extended with the introduction of the MC9S12 derivatives, which have bus speeds of up to 25 MHz and flash sizes up to 512 KB.
The MC9S12NE64 was introduced by Freescale in September 2004, claiming to be the "industry's first single-chip fast-Ethernet Flash microcontroller." It features a 25 MHz HCS12 CPU, 64 KB of flash EEPROM, 8 KB of RAM and a 10/100 Mbit/s Ethernet controller. The MC9S12XDP512, introduced in 2004, has a bus speed of 40 MHz and a peripheral co-processor known as the XGATE, which allows some tasks to be offloaded from the CPU. The CPU of the S12X derivative features several new instructions to increase performance. Freescale announced the MC9S12XEP100 in May 2006 to further extend the S12X family to a 50 MHz bus speed and add a memory protection unit and a hardware scheme to provide emulated EEPROM. HCS12 products contain a single processor, while the HCS12X products feature the additional XGATE peripheral processor. The S12X family offers two main methods to address more than 64 KB: paged memory regions in the 64 KB local map (PPAGE for paged program memory, RPAGE for paged RAM, EPAGE for paged EEPROM/flash), and global addressing, which permits access to any address in the 8 MB address space.
GPAGE is used in conjunction with special opcodes. The XGATE co-processor is a 16-bit RISC processor operating at twice the main bus clock; it does not run a background loop. The first versions of the XGATE do not allow higher-priority interrupts to pre-empt a handled interrupt, but the "XGATEV3" featured in the 9S12XEP100 does allow this. The S12X can trigger software interrupts on the XGATE, and vice versa. A semaphore system is implemented to allow the S12X and XGATE cores to synchronize access to peripherals. The XGATE code is copied to RAM at device startup and executed from RAM for a speed benefit. The XGATE has a partial 64 KB address space with no paging; the registers share addresses, but the flash and RAM appear at different addresses between the cores.
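The PPAGE-style banked addressing described above can be sketched numerically. On HC12/S12 parts, a 16 KB window in the 64 KB local map (commonly at 0x8000–0xBFFF) is backed by whichever flash page the PPAGE register selects; the window location and page size below are assumptions typical of the family, and details vary by derivative.

```python
# Sketch of HC12/S12-style banked (PPAGE) addressing. Simplified:
# window location, page size and register behavior vary by derivative.

PAGE_WINDOW_START = 0x8000
PAGE_SIZE = 0x4000            # 16 KB pages

def linear_flash_address(ppage: int, local_addr: int) -> int:
    """Map a (PPAGE, 16-bit local address) pair to a linear flash address."""
    if not (PAGE_WINDOW_START <= local_addr < PAGE_WINDOW_START + PAGE_SIZE):
        raise ValueError("address is outside the paging window")
    offset = local_addr - PAGE_WINDOW_START
    return (ppage << 14) | offset     # page number * 16 KB + offset

# Page 0x3E, 0x0100 bytes into the window:
assert linear_flash_address(0x3E, 0x8100) == 0x3E * PAGE_SIZE + 0x0100
```

Global (GPAGE) addressing sidesteps this window entirely by letting special opcodes carry a wider address, which is why it can reach the whole 8 MB space without paging arithmetic.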
A barcode is a visual, machine-readable representation of data. Traditional barcodes systematically represent data by varying the widths and spacings of parallel lines, and may be referred to as linear or one-dimensional. Two-dimensional variants were later developed, using rectangles, dots and other geometric patterns; these are called matrix codes or 2D barcodes, although they do not use bars as such. Initially, barcodes were scanned only by special optical scanners called barcode readers. Later, application software became available for devices that could read images, such as smartphones with cameras. The barcode was invented by Norman Joseph Woodland and Bernard Silver and patented in the US in 1952. The invention was based on Morse code, extended to thin and thick bars. However, it took over twenty years before this invention became commercially successful. An early use of one type of barcode in an industrial context was sponsored by the Association of American Railroads in the late 1960s. Developed by General Telephone and Electronics and called KarTrak ACI, this scheme involved placing colored stripes in various combinations on steel plates which were affixed to the sides of railroad rolling stock.
Two plates were used per car, one on each side, with the arrangement of the colored stripes encoding information such as ownership, type of equipment and identification number. The plates were read by a trackside scanner, located, for instance, at the entrance to a classification yard, while the car was moving past. The project was abandoned after about ten years because the system proved unreliable after long-term use. Barcodes became commercially successful when they were used to automate supermarket checkout systems, a task for which they have become universal. Their use has spread to many other tasks that are generically referred to as automatic identification and data capture (AIDC). The first scanning of the now-ubiquitous Universal Product Code barcode was on a pack of Wrigley Company chewing gum in June 1974. QR codes, a specific type of 2D barcode, have since become popular. Other systems have made inroads in the AIDC market, but the simplicity and low cost of barcodes limited the role of these other systems until technologies such as radio-frequency identification became available after 2000.
In 1948 Bernard Silver, a graduate student at Drexel Institute of Technology in Philadelphia, Pennsylvania, US, overheard the president of the local food chain, Food Fair, asking one of the deans to research a system to automatically read product information during checkout. Silver told his friend Norman Joseph Woodland about the request, and they started working on a variety of systems. Their first working system used ultraviolet ink, but the ink faded too easily and was expensive. Convinced that the system was workable with further development, Woodland left Drexel, moved into his father's apartment in Florida and continued working on the system. His next inspiration came from Morse code, and he formed his first barcode from sand on the beach. "I just extended the dots and dashes downwards and made narrow lines and wide lines out of them." To read them, he adapted technology from optical soundtracks in movies, using a 500-watt incandescent light bulb shining through the paper onto an RCA935 photomultiplier tube on the far side.
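Woodland's "extend the dots and dashes downwards" idea can be sketched in a few lines: each Morse dot becomes a narrow bar and each dash a wide bar. The symbol table, bar widths and gaps below are purely illustrative and do not correspond to any real barcode symbology.

```python
# Toy rendering of Woodland's idea: Morse dots and dashes stretched
# into narrow and wide bars. Illustrative only, not a real symbology.

MORSE = {"a": ".-", "b": "-...", "c": "-.-."}   # small sample alphabet

def to_bars(text: str) -> str:
    bars = []
    for ch in text.lower():
        for symbol in MORSE[ch]:
            bars.append("|" if symbol == "." else "|||")   # narrow vs wide bar
        bars.append(" ")                                   # inter-character gap
    return " ".join(bars).rstrip()

print(to_bars("cab"))
```

Practical linear barcodes encode data in both the bar widths and the spacings between them, as the article notes, rather than in bar width alone.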
He decided that the system would work better if it were printed as a circle instead of a line, allowing it to be scanned in any direction. On 20 October 1949, Woodland and Silver filed a patent application for "Classifying Apparatus and Method", in which they described both the linear and bull's eye printing patterns, as well as the mechanical and electronic systems needed to read the code. The patent was issued on 7 October 1952 as US Patent 2,612,994. In 1951, Woodland moved to IBM and continually tried to interest IBM in developing the system. The company commissioned a report on the idea, which concluded that it was both feasible and interesting, but that processing the resulting information would require equipment that was some time off in the future. IBM offered to buy the patent, but the offer was not accepted. Philco purchased the patent in 1962 and sold it to RCA sometime later. During his time as an undergraduate, David Collins worked at the Pennsylvania Railroad and became aware of the need to automatically identify railroad cars. After receiving his master's degree from MIT in 1959, he started work at GTE Sylvania and began addressing the problem.
He developed a system called KarTrak using blue and red reflective stripes attached to the side of the cars, encoding a six-digit company identifier and a four-digit car number. Light reflected off the stripes was fed into one of two photomultipliers, filtered to detect red or blue light. The Boston and Maine Railroad tested the KarTrak system on their gravel cars in 1961. The tests continued until 1967, when the Association of American Railroads selected it as a standard, Automatic Car Identification, across the entire North American fleet. The installations began on 10 October 1967. However, the economic downturn and rash of bankruptcies in the industry in the early 1970s slowed the rollout, and it was not until 1974 that 95% of the fleet was labeled. To add to its woes, the system was found to be fooled by dirt in certain applications, which affected accuracy. The AAR abandoned the system in the late 1970s, and it was not until the mid-1980s that they introduced a similar system, this time based on radio tags. The railway project had failed, but a toll bridge in New Jersey requested a similar system.
The Motorola MC68012 processor is a 16/32-bit microprocessor from the early 1980s. It is an 84-pin PGA version of the Motorola MC68010. The memory space was extended to 2 GB and an RMC pin was added to help the design of multiprocessor systems with virtual memory. All other features of the MC68010 were preserved. The expansion of the memory space caused an issue for any programs that used the high byte of an address to store data, a programming trick that worked on processors that have only a 24-bit address bus. A similar problem affected the 68020.
A microprocessor is a computer processor that incorporates the functions of a central processing unit on a single integrated circuit, or at most a few integrated circuits. The microprocessor is a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output. Microprocessors contain both combinational logic and sequential digital logic. Microprocessors operate on numbers and symbols represented in the binary number system. The integration of a whole CPU onto a single or a few integrated circuits greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by highly automated processes, resulting in a low unit price. Single-chip processors also increase reliability because there are many fewer electrical connections that could fail. As microprocessor designs improve, the cost of manufacturing a chip generally stays the same, according to Rock's law. Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits.
Microprocessors combined this into one or a few large-scale ICs. Continued increases in microprocessor capacity have since rendered other forms of computers completely obsolete, with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers. The complexity of an integrated circuit is bounded by physical limitations: the number of transistors that can be put onto one chip, the number of package terminations that can connect the processor to other parts of the system, the number of interconnections it is possible to make on the chip, and the heat that the chip can dissipate. Advancing technology makes more powerful chips feasible to manufacture. A minimal hypothetical microprocessor might include only an arithmetic logic unit (ALU) and a control logic section. The ALU performs addition and operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation.
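The minimal ALU-plus-status-register arrangement just described can be sketched as follows. The 8-bit width and the particular flag set (zero and carry) are assumptions chosen for the example, not properties of any specific processor.

```python
# Minimal sketch of an ALU with a status register: each operation
# produces a result plus flags describing that result. The 8-bit
# width and the Z/C flag set are assumptions for illustration.

WIDTH = 8
MASK = (1 << WIDTH) - 1

def alu(op: str, a: int, b: int):
    if op == "ADD":
        raw = a + b
    elif op == "AND":
        raw = a & b
    elif op == "OR":
        raw = a | b
    else:
        raise ValueError(op)
    result = raw & MASK
    flags = {
        "Z": result == 0,      # zero flag: result was all zeros
        "C": raw > MASK,       # carry flag: ADD overflowed the word width
    }
    return result, flags

result, flags = alu("ADD", 0xF0, 0x20)   # 0x110 wraps to 0x10 with carry set
assert result == 0x10 and flags["C"] and not flags["Z"]
```

A real control unit would inspect these flags to implement conditional branches, which is how the status register ties the ALU back into program flow.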
The control logic retrieves instruction codes from memory and initiates the sequence of operations required for the ALU to carry out the instruction. A single operation code might affect many individual data paths and other elements of the processor. As integrated circuit technology advanced, it was feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger, and additional features were added to the processor architecture. Floating-point arithmetic, for example, was often not available on 8-bit microprocessors but had to be carried out in software. Integration of the floating-point unit, first as a separate integrated circuit and then as part of the same microprocessor chip, sped up floating-point calculations. Occasionally, physical limitations of integrated circuits made such practices as a bit-slice approach necessary. Instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, for example, 32-bit words using integrated circuits with a capacity for only four bits each.
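The bit-slice idea in the last sentences can be made concrete with a toy adder: a 32-bit addition built from eight 4-bit "slices", each passing its carry to the next. This is a software sketch of the wiring pattern, not a model of any particular bit-slice chip family.

```python
# Sketch of the bit-slice approach: a 32-bit adder assembled from
# 4-bit ALU slices, with the carry propagated from slice to slice.

SLICE_BITS = 4
SLICE_MASK = (1 << SLICE_BITS) - 1

def slice_add(a4: int, b4: int, carry_in: int):
    # One 4-bit ALU slice: returns a 4-bit sum and a carry out.
    raw = a4 + b4 + carry_in
    return raw & SLICE_MASK, raw >> SLICE_BITS

def add32(a: int, b: int) -> int:
    result, carry = 0, 0
    for i in range(0, 32, SLICE_BITS):        # eight slices, low to high
        s, carry = slice_add((a >> i) & SLICE_MASK,
                             (b >> i) & SLICE_MASK, carry)
        result |= s << i
    return result

assert add32(0x89ABCDEF, 0x12345678) == (0x89ABCDEF + 0x12345678) & 0xFFFFFFFF
```

The carry hand-off between slices is exactly the "extra logic" the paragraph mentions; in hardware it was the main interconnect cost of the approach.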
The ability to put large numbers of transistors on one chip makes it feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory and increases the processing speed of the system for many applications. Processor clock frequency has increased more rapidly than external memory speed, so cache memory is necessary if the processor is not to be delayed by slower external memory. A microprocessor is a general-purpose entity. Several specialized processing devices have followed: a digital signal processor is specialized for signal processing; graphics processing units are processors designed for realtime rendering of images; other specialized units exist for video processing and machine vision. Microcontrollers integrate a microprocessor with peripheral devices in embedded systems. Systems on chip integrate one or more microprocessor or microcontroller cores. Microprocessors can be selected for differing applications based on their word size, which is a measure of their complexity.
Longer word sizes allow each clock cycle of a processor to carry out more computation, but correspond to physically larger integrated circuit dies with higher standby and operating power consumption. 4-, 8- or 12-bit processors are widely integrated into microcontrollers operating embedded systems. Where a system is expected to handle larger volumes of data or require a more flexible user interface, 16-, 32- or 64-bit processors are used. An 8- or 16-bit processor may be selected over a 32-bit processor for system-on-a-chip or microcontroller applications that require low-power electronics, or are part of a mixed-signal integrated circuit with noise-sensitive on-chip analog electronics such as high-resolution analog-to-digital converters, or both. Running 32-bit arithmetic on an 8-bit chip could end up using more power, as the chip must execute software with multiple instructions. Thousands of items that were traditionally not computer-related include microprocessors.