Attachment Unit Interface
The Attachment Unit Interface (AUI) is a physical and logical interface defined in the original IEEE 802.3 standard for 10BASE5 Ethernet. The physical interface consists of a 15-pin connection that provides a path between an Ethernet node's Physical Signaling and the Medium Attachment Unit (MAU), sometimes known as a transceiver. An AUI cable may be up to 50 metres long, although often the cable is omitted altogether and the MAU and MAC are directly attached to one another. AUI connectors became rare beginning in the early 1990s, when computers and hubs began to incorporate the MAU as the 10BASE-T standard became more common and use of 10BASE5 and 10BASE2 declined, although the electrical AUI connection was still present inside the equipment. With the introduction of Fast Ethernet, the AUI became obsolete and was replaced by the Media Independent Interface (MII). Gigabit Ethernet and 10 Gigabit Ethernet have the corresponding GMII and XGMII interfaces. A modified form using a smaller connector, called the AAUI, was introduced on Apple Macintosh computers in 1991; its use was discontinued in 1998.
An AUI connector is a DA-15. It has a sliding clip in place of the thumbscrews found on a typical D-connector to hold two connectors together; this clip permits the MAU and MAC to be directly attached to one another even when their size and shape would preclude the use of thumbscrews. The clip is, however, often found to be awkward or unreliable.

See also: Media Independent Interface, Gigabit Media Independent Interface, 10 Gigabit Media Independent Interface, XAUI, Apple Attachment Unit Interface.

This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.
Hard disk drive
A hard disk drive (HDD), hard disk, hard drive, or fixed disk is an electromechanical data storage device that uses magnetic storage to store and retrieve digital information using one or more rigid rotating disks (platters) coated with magnetic material. The platters are paired with magnetic heads, arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order and not only sequentially. HDDs are a type of non-volatile storage, retaining stored data even when powered off. Introduced by IBM in 1956, HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDDs, though after extensive industry consolidation most units are manufactured by Seagate and Western Digital. HDDs dominate the volume of storage produced for servers.
Though production is growing, sales revenues and unit shipments are declining, because solid-state drives (SSDs) have higher data-transfer rates, higher areal storage density, better reliability, and much lower latency and access times. The revenues for SSDs, most of which use NAND flash memory, exceed those for HDDs. Though SSDs have a nearly 10 times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size and durability are important. The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte drive has a capacity of 1,000 gigabytes. Some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. There is confusion regarding storage capacity, since capacities are stated in decimal gigabytes by HDD manufacturers, whereas some operating systems report capacities in binary gibibytes, which results in a smaller number than advertised.
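The decimal-versus-binary discrepancy described above is straightforward arithmetic. A minimal sketch (the function name is invented here for illustration):

```python
# Why a "1 TB" drive is often reported as roughly 931 "GB":
# manufacturers count in decimal (SI) units, while some operating
# systems count in binary (IEC) units but label them with SI names.

def advertised_to_binary(terabytes: float) -> float:
    """Convert a decimal-terabyte capacity to binary gibibytes."""
    bytes_total = terabytes * 1000**4   # 1 TB = 10^12 bytes
    return bytes_total / 1024**3        # 1 GiB = 2^30 bytes

print(round(advertised_to_binary(1), 1))   # → 931.3
```

The same arithmetic explains why the gap widens with larger drives: at each prefix step the decimal unit falls another ~2.4% short of its binary counterpart.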
Performance is specified by the time required to move the heads to a track or cylinder (average seek time), plus the time it takes for the desired sector to move under the head (average latency, a function of the rotational speed), plus the speed at which the data is transmitted (data rate). The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, for laptops. HDDs are connected to systems by standard interface cables such as SATA, USB or SAS cables.

The first production IBM hard disk drive, the 350 disk storage, shipped in 1957 as a component of the IBM 305 RAMAC system. It was the size of two medium-sized refrigerators and stored five million six-bit characters on a stack of 50 disks. In 1962, the IBM 350 was superseded by the IBM 1301 disk storage unit, which consisted of 50 platters, each about 1/8-inch thick and 24 inches in diameter. While the IBM 350 used only two read/write heads, the 1301 used an array of heads, one per platter, all moving as a single unit. Cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches above the platter surface.
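The performance model described earlier (seek time plus rotational latency plus transfer time) can be sketched with illustrative figures; none of the numbers below describe any specific drive:

```python
# Back-of-the-envelope HDD access-time model: average rotational
# latency is the time for half a platter rotation, and transfer time
# is the request size divided by the sustained data rate.

def access_time_ms(seek_ms: float, rpm: float,
                   kib: float, mb_per_s: float) -> float:
    latency_ms = 0.5 * 60_000 / rpm                    # half a rotation
    transfer_ms = kib * 1024 / (mb_per_s * 1e6) * 1000
    return seek_ms + latency_ms + transfer_ms

# e.g. 9 ms seek, 7200 RPM, a 4 KiB read at 150 MB/s:
print(round(access_time_ms(9.0, 7200, 4, 150), 2))
```

The sketch shows why mechanical components dominate small random reads: the 4 KiB transfer itself takes under 0.03 ms, while seek and rotation together account for over 13 ms.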
Motion of the head array depended upon a binary adder system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes. Access time was about a quarter of a second. In 1962, IBM introduced the model 1311 disk drive, which was about the size of a washing machine and stored two million characters on a removable disk pack. Users could interchange disk packs as needed, much like reels of magnetic tape. Models of removable pack drives, from IBM and others, became the norm in most computer installations and reached capacities of 300 megabytes by the early 1980s. Non-removable HDDs were called "fixed disk" drives. Some high-performance HDDs were manufactured with one head per track so that no time was lost physically moving the heads to a track. Known as fixed-head or head-per-track disk drives, they were very expensive and are no longer in production. In 1973, IBM introduced a new type of HDD code-named "Winchester".
Its primary distinguishing feature was that the disk heads were not withdrawn from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to "land" on a special area of the disk surface upon spin-down, "taking off" again when the disk was powered on; this greatly reduced the cost of the head actuator mechanism, but precluded removing just the disks from the drive as was done with the disk packs of the day. Instead, the first models of "Winchester technology" drives featured a removable disk module, which included both the disk pack and the head assembly, leaving the actuator motor in the drive upon removal. Later "Winchester" drives abandoned the removable media concept and returned to non-removable platters. Like the first removable pack drive, the first "Winchester" drives used platters 14 inches in diameter. A few years later, designers were exploring the possibility that physically smaller platters might offer advantages. Drives with non-removable eight-inch platters appeared, and then drives that used a 5 1⁄4-inch form factor.
The latter were intended for the then-fledgling personal computer market.
Floppy disk
A floppy disk, also known as a floppy, diskette, or simply disk, is a type of disk storage composed of a disk of thin and flexible magnetic storage medium, sealed in a rectangular plastic enclosure lined with fabric that removes dust particles. Floppy disks are read and written by a floppy disk drive (FDD). Floppy disks, initially as 8-inch media and later in 5 1⁄4-inch and 3 1⁄2-inch sizes, were a ubiquitous form of data storage and exchange from the mid-1970s into the first years of the 21st century. By 2006, computers were rarely manufactured with installed floppy disk drives; these formats are handled mostly by older equipment. The prevalence of floppy disks in late-twentieth-century culture was such that many electronic and software programs still use the floppy disk as a save icon. While floppy disk drives still have some limited uses with legacy industrial computer equipment, they have been superseded by data storage methods with much greater capacity, such as USB flash drives, flash storage cards, portable external hard disk drives, optical discs, cloud storage and storage available through computer networks.
The first commercial floppy disks, developed in the late 1960s, were 8 inches in diameter. These disks and associated drives were produced and improved upon by IBM and other companies such as Memorex, Shugart Associates and Burroughs Corporation. The term "floppy disk" appeared in print as early as 1970, and although IBM announced its first media as the "Type 1 Diskette" in 1973, the industry continued to use the terms "floppy disk" or "floppy". In 1976, Shugart Associates introduced the 5 1⁄4-inch FDD. By 1978 there were more than 10 manufacturers producing such FDDs. There were competing floppy disk formats, with hard- and soft-sectored versions and encoding schemes such as FM, MFM, M2FM and GCR. The 5 1⁄4-inch format displaced the 8-inch one for most applications, and the hard-sectored disk format disappeared. The most common capacity of the 5 1⁄4-inch format in DOS-based PCs was 360 KB, from the double-sided double-density (DSDD) format using MFM encoding. In 1984, IBM introduced with its PC-AT model the 1.2 MB dual-sided 5 1⁄4-inch floppy disk, but it never became very popular.
IBM started using the 720 KB double-density 3 1⁄2-inch microfloppy disk on its Convertible laptop computer in 1986 and the 1.44 MB high-density version with the PS/2 line in 1987. These disk drives could be added to older PC models. In 1988, IBM introduced a drive for 2.88 MB "DSED" diskettes in its top-of-the-line PS/2 models, but this was a commercial failure. Throughout the early 1980s, limitations of the 5 1⁄4-inch format became clear. Although designed to be more practical than the 8-inch format, it was itself too large. A number of solutions were developed, with drives at 2-, 2 1⁄2-, 3-, 3 1⁄4-, 3 1⁄2- and 4-inches offered by various companies. They all shared a number of advantages over the old format, including a rigid case with a sliding metal shutter over the head slot, which helped protect the delicate magnetic medium from dust and damage, and a sliding write-protection tab, far more convenient than the adhesive tabs used with earlier disks. The large market share of the well-established 5 1⁄4-inch format made it difficult for these diverse, mutually incompatible new formats to gain significant market share.
A variant on the Sony design, introduced in 1982 by a large number of manufacturers, was rapidly adopted. The term floppy disk persisted even though these later floppy disks have a rigid case around an internal flexible disk. By the end of the 1980s, 5 1⁄4-inch disks had been superseded by 3 1⁄2-inch disks. During this transition, PCs often came equipped with drives of both sizes. By the mid-1990s, 5 1⁄4-inch drives had virtually disappeared, as the 3 1⁄2-inch disk became the predominant floppy disk. The advantages of the 3 1⁄2-inch disk were its higher capacity, its smaller size, and its rigid case, which provided better protection from dirt and other environmental risks. If a person touches the exposed disk surface of a 5 1⁄4-inch disk through the drive hole, fingerprints may foul the disk, and foul the disk drive head if the disk is subsequently loaded into a drive; it is also easily possible to damage a disk of this type by folding or creasing it, rendering it at least partly unreadable. However, due to its simpler construction, the unit price of a 5 1⁄4-inch disk was lower throughout its history, in the range of a third to a half that of a 3 1⁄2-inch disk.
Floppy disks became commonplace during the 1980s and 1990s in their use with personal computers to distribute software, transfer data and create backups. Before hard disks became affordable to the general population, floppy disks were often used to store a computer's operating system. Most home computers from that period had an elementary OS and BASIC stored in ROM, with the option of loading a more advanced operating system from a floppy disk. By the early 1990s, increasing software size meant that large packages like Windows or Adobe Photoshop required a dozen disks or more. In 1996, there were an estimated five billion standard floppy disks in use. Distribution of larger packages was gradually replaced by CD-ROMs, DVDs and online distribution.
Random-access memory
Random-access memory (RAM) is a form of computer data storage that stores data and machine code currently being used. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of data inside the memory. In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older magnetic tapes and drum memory, the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement. RAM contains multiplexing and demultiplexing circuitry to connect the data lines to the addressed storage for reading or writing the entry. Often more than one bit of storage is accessed by the same address, and RAM devices have multiple data lines and are said to be "8-bit" or "16-bit", etc. devices. In today's technology, random-access memory takes the form of integrated circuits. RAM is normally associated with volatile types of memory, where stored information is lost if power is removed, although non-volatile RAM has also been developed.
Other types of non-volatile memories exist that allow random access for read operations, but either do not allow write operations or have other kinds of limitations on them. These include most types of ROM and a type of flash memory called NOR-Flash. Integrated-circuit RAM chips came into the market in the early 1970s, with the first commercially available DRAM chip, the Intel 1103, introduced in October 1970. Early computers used relays, mechanical counters or delay lines for main memory functions. Ultrasonic delay lines were serial devices that could only reproduce data in the order in which it was written. Drum memory could be expanded at relatively low cost, but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of vacuum tube triodes, and later out of discrete transistors, were used for smaller and faster memories such as registers; such registers were relatively large and too costly to use for large amounts of data. The first practical form of random-access memory was the Williams tube, starting in 1947.
It stored data as electrically charged spots on the face of a cathode-ray tube (CRT). Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller and more power-efficient than using individual vacuum-tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored program was implemented in the Manchester Baby computer, which first ran a program on 21 June 1948. In fact, rather than the Williams tube memory being designed for the Baby, the Baby was a testbed to demonstrate the reliability of the memory. Magnetic-core memory was developed up until the mid-1970s and became a widespread form of random-access memory. By changing the sense of each ring's magnetization, data could be stored, with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible.
Magnetic core memory was the standard form of memory system until displaced by solid-state memory in integrated circuits, starting in the early 1970s. Dynamic random-access memory (DRAM) allowed replacement of a 4- or 6-transistor latch circuit by a single transistor for each memory bit, greatly increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor and had to be periodically refreshed every few milliseconds before the charge could leak away. The Toshiba Toscal BC-1411 electronic calculator, introduced in 1965, used a form of DRAM built from discrete components. DRAM was developed by Robert H. Dennard in 1968. Prior to the development of integrated read-only memory circuits, permanent random-access memory was often constructed using diode matrices driven by address decoders, or specially wound core rope memory planes. The two main forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In SRAM, a bit of data is stored using the state of a six-transistor memory cell.
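The refresh requirement described above can be illustrated with a toy model; the leak rate, timings and threshold below are invented for illustration and are not real DRAM parameters:

```python
# Toy model of DRAM refresh: a cell's stored charge leaks away over
# time, so the controller must rewrite (refresh) the cell before the
# charge falls below the threshold separating a stored 1 from a 0.

LEAK_PER_MS = 0.10   # fraction of remaining charge lost each millisecond
THRESHOLD = 0.5      # below this, a stored 1 would read back as a 0

def charge_after(ms, refresh_every_ms=None):
    """Remaining charge of a cell storing a 1 after `ms` milliseconds."""
    charge = 1.0
    for t in range(int(ms)):
        charge *= 1 - LEAK_PER_MS
        if refresh_every_ms and (t + 1) % refresh_every_ms == 0:
            charge = 1.0          # refresh restores full charge
    return charge

print(charge_after(16) < THRESHOLD)     # unrefreshed cell loses its bit
print(charge_after(16, 4) >= THRESHOLD) # periodic refresh retains it
```

The same trade-off is why DRAM controllers dedicate a fraction of every interval to refresh cycles: density is bought at the price of continual maintenance.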
This form of RAM is more expensive to produce, but is generally faster and requires less dynamic power than DRAM. In modern computers, SRAM is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a DRAM cell. The capacitor holds a high or low charge, and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers. Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory (ROM) stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writeable variants of ROM share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment; these persistent forms of semiconductor ROM include USB flash drives, memory cards for cameras and portable devices, and solid-state drives.
ECC memory includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error-correcting codes.
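The idea behind such error-correcting codes can be sketched with a classic Hamming(7,4) code, which stores 4 data bits with 3 parity bits so that any single flipped bit can be located and corrected. Real ECC modules use wider codes (typically SECDED over 64-bit words), but the principle is the same:

```python
# Hamming(7,4): parity bits at positions 1, 2 and 4 each cover the
# positions whose 1-based index has the corresponding binary bit set.

def encode(d):                      # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]         # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]         # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]         # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):                     # c: 7-bit codeword, possibly corrupted
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
         | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1
         | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2)
    if s:                           # syndrome = 1-based index of the bad bit
        c[s - 1] ^= 1
    return [c[2], c[4], c[5], c[6]] # recovered data bits

word = [1, 0, 1, 1]
stored = encode(word)
stored[4] ^= 1                      # simulate a single-bit memory fault
print(correct(stored) == word)      # the fault is located and corrected
```

Because the three syndrome bits directly spell out the index of the faulty position, correction costs only a handful of XOR gates per word, which is why ECC can run transparently on every memory access.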
Robert Metcalfe
Robert Melancton Metcalfe is an American engineer and entrepreneur who helped pioneer the Internet starting in 1970, co-invented Ethernet, co-founded 3Com and formulated Metcalfe's law. Since January 2011, he has been Professor of Innovation and Entrepreneurship at The University of Texas at Austin, where he is also the Murchison Fellow of Free Enterprise. Metcalfe has received various awards, including the IEEE Medal of Honor and the National Medal of Technology and Innovation, for his work developing Ethernet technology. In addition to his accomplishments, Metcalfe is known for incorrectly predicting the demise of the Internet, wireless networks and open-source software during the 1990s. Robert Metcalfe was born in 1946 in New York; his father was a gyroscope test technician, and his mother was a homemaker who later became the secretary at Bay Shore High School. In 1964, Metcalfe graduated from Bay Shore High School and joined the MIT Class of 1968. He graduated from MIT in 1969 with two S.B. degrees, one in electrical engineering and the other in industrial management from the MIT Sloan School of Management.
He went to Harvard for graduate school, earning his M.S. in applied mathematics in 1970 and his PhD in computer science in 1973. While pursuing a doctorate in computer science, Metcalfe took a job with MIT's Project MAC after Harvard refused to let him be responsible for connecting the school to the brand-new ARPAnet. At MAC, Metcalfe was responsible for building some of the hardware that would link MIT's minicomputers with the ARPAnet. Metcalfe was so enamored of the ARPAnet that he made it the topic of his doctoral dissertation; the first version was not accepted. His inspiration for a new dissertation came while working at Xerox PARC, where he read a paper about the ALOHA network at the University of Hawaii. He identified and fixed some of the bugs in the AlohaNet model and made his analysis part of a revised thesis, which earned him his Harvard PhD in 1973. Metcalfe was working at PARC in 1973 when he and David Boggs invented Ethernet, a standard for connecting computers over short distances. Metcalfe identifies the day Ethernet was born as May 22, 1973, the day he circulated a memo titled "Alto Ethernet" which contained a rough schematic of how it would work.
"That is the first time Ethernet appears as a word, as does the idea of using coax as ether, where the participating stations, like in AlohaNet or ARPAnet, would inject their packets of data, they'd travel around at megabits per second, there would be collisions, retransmissions, back-off," Metcalfe explained. Boggs identifies another date as the birth of Ethernet: November 11, 1973, the first day the system actually functioned. In 1979, Metcalfe departed PARC and co-founded 3Com, a manufacturer of computer networking equipment. In 1980, he received the ACM Grace Hopper Award for his contributions to the development of local networks, specifically Ethernet. In 1990, the board of directors chose Eric Benhamou to succeed Bill Krause as CEO of the networking company Metcalfe had founded in his Palo Alto apartment in 1979. Metcalfe left 3Com and began a 10-year stint as a publisher and pundit, writing an Internet column for InfoWorld. He is now a general partner at Polaris Venture Partners. In 1997, he cofounded Pop! Tech, an executive technology conference.
In November 2010, Metcalfe was selected to lead innovation initiatives at The University of Texas at Austin's Cockrell School of Engineering, and he began his appointment in January 2011. Metcalfe was a keynote speaker at the 2016 Congress of Technology Leaders. Metcalfe was awarded the IEEE Medal of Honor in 1996 for "exemplary and sustained leadership in the development and commercialization of Ethernet." He received the 2003 Marconi Award "for inventing the Ethernet and promulgating his Law of network utility based on the square of the nodes." Metcalfe received the National Medal of Technology from President Bush in a White House ceremony on March 14, 2005, "for leadership in the invention and commercialization of Ethernet", having been selected for the honor in 2003. In May 2007, along with 17 others, he was inducted into the National Inventors Hall of Fame in Akron, Ohio, for his work with Ethernet technology. In October 2008, Metcalfe received the Fellow Award from the Computer History Museum "for fundamental contributions to the invention and commercialization of Ethernet."
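The "Law of network utility based on the square of the nodes" cited above is simple combinatorics: with n nodes there are n(n-1)/2 possible pairwise connections, which grows roughly as n². A minimal sketch:

```python
# Metcalfe's law: a network's potential value grows with the number
# of distinct pairwise connections its nodes can form, n*(n-1)/2.

def potential_links(n: int) -> int:
    """Number of distinct node pairs in a network of n nodes."""
    return n * (n - 1) // 2

for n in (2, 10, 100):
    print(n, potential_links(n))    # 2→1, 10→45, 100→4950
```

Doubling the nodes roughly quadruples the link count, which is the intuition behind network effects in communications systems.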
Outside of his technical achievements, Metcalfe is best known for his 1995 prediction that the Internet would suffer a "catastrophic collapse" the following year. During his keynote speech at the sixth International World Wide Web Conference in 1997, he took a printed copy of the column that predicted the collapse, put it in a blender with some liquid, and consumed the pulpy mass; this was after he tried to eat his words printed on a large cake, but the audience would not accept that form of "eating his words." During an event about predictions at the eighth International World Wide Web Conference in 1999, a participant asked what the bet was. Metcalfe is also known for his harsh criticism of open-source software, and of Linux in particular, predicting that the latter would be obliterated after Microsoft released Windows 2000. He called the Open Source Movement's ideology "utopian balderdash" that "reminds me of communism," described Linux as "organic software grown in utopia by spiritualists," and quipped: "When they bring organic fruit to market, you pay extra for small apples with open sores – the Open Sores Movement."
Application-specific integrated circuit
An application-specific integrated circuit (ASIC) is an integrated circuit customized for a particular use, rather than intended for general-purpose use. For example, a chip designed to run in a digital voice recorder or a high-efficiency bitcoin miner is an ASIC. Application-specific standard products (ASSPs) are intermediate between ASICs and industry-standard integrated circuits like the 7400 series or the 4000 series. As feature sizes have shrunk and design tools have improved over the years, the maximum complexity possible in an ASIC has grown from 5,000 logic gates to over 100 million. Modern ASICs often include entire microprocessors, memory blocks including ROM, RAM, EEPROM and flash memory, and other large building blocks; such an ASIC is often termed a system-on-chip (SoC). Designers of digital ASICs often use a hardware description language, such as Verilog or VHDL, to describe the functionality of ASICs. Field-programmable gate arrays (FPGAs) are the modern-day technology for building a breadboard or prototype from standard parts. For smaller designs or lower production volumes, FPGAs may be more cost-effective than an ASIC design, even in production.
The non-recurring engineering (NRE) cost of an ASIC can run into the millions of dollars. Therefore, device manufacturers typically prefer FPGAs for prototyping and for devices with low production volume, and ASICs for very large production volumes where NRE costs can be amortized across many devices.

The initial ASICs used gate array technology. An early successful commercial application was the gate array circuitry found in the low-end 8-bit ZX81 and ZX Spectrum personal computers, introduced in 1981 and 1982. These were used by Sinclair Research as a low-cost I/O solution aimed at handling the computer's graphics. Customization occurred by varying a metal interconnect mask. Gate arrays had complexities of up to a few thousand gates. Later versions became more generalized, with different base dies customized by both metal and polysilicon layers. Some base dies also include random-access memory elements. In the mid-1980s, a designer would choose an ASIC manufacturer and implement their design using the design tools available from the manufacturer.
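The FPGA-versus-ASIC trade-off described at the start of this section comes down to break-even arithmetic: the ASIC's one-time NRE cost must be amortized by its lower per-unit cost. A sketch with entirely made-up figures:

```python
# Break-even model for choosing between an FPGA and an ASIC: an ASIC
# carries a large one-time NRE cost but a lower per-chip cost, so it
# only wins above some production volume. All figures are invented.

ASIC_NRE = 2_000_000    # one-time non-recurring engineering cost, dollars
ASIC_UNIT = 5           # per-chip cost at volume, dollars
FPGA_UNIT = 45          # per-chip cost, no NRE, dollars

def cheaper(volume: int) -> str:
    """Which technology has the lower total cost at this volume?"""
    asic_total = ASIC_NRE + ASIC_UNIT * volume
    fpga_total = FPGA_UNIT * volume
    return "ASIC" if asic_total < fpga_total else "FPGA"

print(cheaper(10_000))      # low volume: FPGA
print(cheaper(100_000))     # high volume: ASIC
```

With these figures the break-even point is NRE / (FPGA_UNIT − ASIC_UNIT) = 50,000 units; real decisions also weigh time-to-market, power and performance.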
While third-party design tools were available, there was not an effective link from the third-party design tools to the layout and actual semiconductor-process performance characteristics of the various ASIC manufacturers. Most designers used factory-specific tools to complete the implementation of their designs. A solution to this problem, which also yielded a much higher-density device, was the implementation of standard cells. Every ASIC manufacturer could create functional blocks with known electrical characteristics, such as propagation delay and inductance, that could also be represented in third-party tools. Standard-cell design is the utilization of these functional blocks to achieve very high gate density and good electrical performance. Standard-cell design is intermediate between gate-array (semi-custom) design and full-custom design in terms of its non-recurring engineering and recurring component costs, as well as performance and speed of development. By the late 1990s, logic synthesis tools became available.
Such tools could compile HDL descriptions into a gate-level netlist. Standard-cell integrated circuits are designed in the following conceptual stages, referred to as the electronics design flow, although these stages overlap significantly in practice.

Requirements engineering: A team of design engineers starts with a non-formal understanding of the required functions for a new ASIC, derived from requirements analysis.

Register-transfer level (RTL) design: The design team constructs a description of the ASIC to achieve these goals using a hardware description language; this process is similar to writing a computer program in a high-level language.

Functional verification: Suitability for purpose is verified by functional verification; this may include such techniques as logic simulation through test benches, formal verification, emulation, or creating and evaluating an equivalent pure software model, as in Simics. Each verification technique has advantages and disadvantages, and most often several methods are used together for ASIC verification.
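The "equivalent pure software model" approach to functional verification can be sketched as follows; the bit-level adder standing in for the RTL design, and the names used, are hypothetical:

```python
# Functional verification by software model: a bit-level "RTL-like"
# description of a 4-bit ripple-carry adder (the design under test)
# is checked exhaustively against a trusted reference model.

def dut_add4(a: int, b: int) -> int:
    """Design under test: 4-bit ripple-carry adder with carry-out."""
    carry, result = 0, 0
    for i in range(4):
        abit = (a >> i) & 1
        bbit = (b >> i) & 1
        result |= (abit ^ bbit ^ carry) << i        # sum bit
        carry = (abit & bbit) | (carry & (abit ^ bbit))
    return result | (carry << 4)                    # carry-out as bit 4

def reference_add4(a: int, b: int) -> int:
    """Trusted reference model: plain integer addition, 5-bit result."""
    return (a + b) & 0x1F

# Exhaustive test bench: every input combination for a 4-bit adder.
mismatches = [(a, b) for a in range(16) for b in range(16)
              if dut_add4(a, b) != reference_add4(a, b)]
print(len(mismatches))      # 0 mismatches: behavior matches the reference
```

Exhaustive comparison is feasible only for tiny designs; real verification mixes directed and random test benches, coverage metrics and formal methods, as the paragraph above notes.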
Unlike most FPGAs, ASICs cannot be reprogrammed once fabricated, and therefore ASIC designs that are not fully correct are much more costly, increasing the need for full test coverage.

Logic synthesis: Logic synthesis transforms the RTL design into a large collection of lower-level constructs called standard cells; these constructs are taken from a standard-cell library consisting of pre-characterized collections of logic gates performing specific functions. The standard cells are typically specific to the planned manufacturer of the ASIC. The resulting collection of standard cells and the needed electrical connections between them is called a gate-level netlist.

Placement: The gate-level netlist is next processed by a placement tool, which places the standard cells onto a region of an integrated circuit die representing the final ASIC. The placement tool attempts to find an optimized placement of the standard cells, subject to a variety of specified constraints.

Routing: An electronics routing tool takes the physical placement of the standard cells and uses the netlist to create the electrical connections between them.
Since the search space is large, this process will produce a "sufficient" rather than "globally optimal" solution. The output is a file which can be used to create a set of photomasks enabling a semiconductor fabrication facility to produce the physical ICs.