Puppy Linux is a lightweight Linux distribution that focuses on ease of use and a minimal memory footprint. The entire system can be run from RAM, with current versions taking up about 210 MB, allowing the boot medium to be removed after the operating system has started. Applications such as AbiWord, Gnumeric and MPlayer are included, along with a choice of lightweight web browsers and a utility for downloading other packages; the distribution was developed by Barry Kauler and other members of the community until Kauler retired in 2013. The tool Woof can build a Puppy Linux distribution from the binary packages of other Linux distributions. Barry Kauler started Puppy Linux in response to a trend of other distributions tightening their system requirements over time. His own distribution, with an emphasis on speed, efficiency and a small footprint, started from the "Boot disk HOWTO" and added components file by file until Puppy Linux was complete. Puppy Linux was initially based on Vector Linux until it became an independent distribution.
Puppy 0 is the initial release of Puppy Linux. It has no unionfs, only extremely minimal persistence support, and no package manager or ability to install applications. The Puppy 1 series runs comfortably on dated hardware, such as a Pentium computer with at least 32 MB of RAM. For newer systems the USB keydrive version may be a better fit, and it is possible to run Puppy Linux alongside Windows 9x/Windows Me. If the BIOS does not support booting from a USB drive, it is possible to boot from the CD and keep user state on a USB keydrive. Puppy 2 uses the Mozilla-based SeaMonkey as its Internet suite. Puppy 3 features Slackware 12 compatibility; this is accomplished by the inclusion of all the dependencies needed for the installation of Slackware packages. However, Puppy Linux is not a Slackware-based distribution. Puppy 4 is built from scratch using the T2 SDE and no longer features native Slackware 12 compatibility, in order to reduce the size and include newer package versions than those found in Puppy 3. To compensate, an optional "compatibility collection" of packages was created that restores some of the lost compatibility.
Puppy 4.2 features changes to the user interface and backend, upgraded packages and character support, new in-house software and optimizations, while still keeping the ISO image size under 100 MB. Puppy 5 is based on a project called Woof, designed to assemble a Puppy Linux distribution from the packages of other Linux distributions. Woof includes binaries and software derived from the Ubuntu, Slackware, T2 SDE or Arch repositories. Puppy 5 came with a stripped-down version of the Midori browser for reading help files and a choice of web browsers to be installed, including Chromium, the SeaMonkey Internet Suite and Opera. Puppy 6 (Tahrpup) is built from Ubuntu 14.04 Trusty Tahr packages, has binary compatibility with Ubuntu 14.04 and access to the Ubuntu package repositories. Tahrpup is built with the woof-CE build system, forked from Barry Kauler's Woof after he announced his retirement from Puppy development; it is built from the latest testing branch, incorporates all the latest woof-CE features and is released in PAE and noPAE ISOs, with the option to switch kernels.
Puppy 7 is built from Ubuntu 16.04 Xenial Xerus packages, has binary compatibility with Ubuntu 16.04 and access to the Ubuntu package repositories. Like Tahrpup, it is built with the woof-CE build system forked from Barry Kauler's Woof, uses the latest testing branch, incorporates all the latest woof-CE features and is released in PAE and noPAE ISOs, with the option to switch kernels. It has a new UI, a kernel update for greater hardware compatibility, a redesigned Puppy Package Manager, bugfixes and the inclusion of base packages in the woof structure. Puppy 8 (BionicPup) is built from Ubuntu 18.04.2 Bionic Beaver packages, has binary compatibility with Ubuntu 18.04.2 and access to the Ubuntu package repositories. BionicPup is built with the woof-CE build system, forked from Barry Kauler's Woof, from the latest testing branch, and incorporates all the latest woof-CE features. Puppy Linux is a complete operating system bundled with a collection of applications suited to general-use tasks. It can be used as a rescue disk, as a demonstration system that leaves the previous installation unaltered, as an operating system for a machine with a blank or missing hard drive, or as a way to run modern software on legacy computers.
Puppy's compact size allows it to boot from a wide range of media. It can function as a live USB on flash drives or other USB media, from a CD, an internal hard disk drive, an SD card, a Zip drive or LS-120/240 SuperDisk, over PXE, or from a floppy boot disk that chainloads the data from other storage media. It has been ported to ARM and can run on a single-board computer such as the Raspberry Pi. Puppy Linux features built-in tools which can be used to create bootable USB drives, create new Puppy CDs, or remaster a new live CD with different packages, and it uses a write-caching system intended to extend the life of live USB flash drives. Puppy Linux includes the ability to use a normal persistent updating environment on a write-once multisession CD/DVD that does not require a rewritable disc. While other distributions offer live CD versions of their operating systems, none offer a similar feature. Puppy's bootloader does not connect to the network automatically; this ensures
USB flash drive
A USB flash drive, also known as a thumb drive, pen drive, gig stick, flash stick, jump drive, disk key, disk on key, flash-drive, memory stick, USB key, USB stick or USB memory, is a data storage device that includes flash memory with an integrated USB interface. It is removable and much smaller than an optical disc. Most weigh less than 1 oz. Since first appearing on the market in late 2000, as with other computer memory devices, storage capacities have risen while prices have dropped; as of March 2016, flash drives with anywhere from 8 to 256 GB were commonly sold, while 512 GB and 1 TB units were less frequent. As of 2018, 2 TB flash drives were the largest available in terms of storage capacity. Some allow up to 100,000 write/erase cycles, depending on the exact type of memory chip used, and are thought to last between 10 and 100 years under normal circumstances. USB flash drives are used for storage, data back-up and transfer of computer files. Compared with floppy disks or CDs, they are smaller, have more capacity and are more durable due to a lack of moving parts.
Additionally, they are immune to electromagnetic interference and are unharmed by surface scratches. Until about 2005, most desktop and laptop computers were supplied with floppy disk drives in addition to USB ports, but floppy disk drives became obsolete after the widespread adoption of USB ports and the much larger capacity of USB drives compared to the 1.44 MB 3.5-inch floppy disk. USB flash drives use the USB mass storage device class standard, supported natively by modern operating systems such as Windows, macOS and other Unix-like systems, as well as many BIOS boot ROMs. USB drives with USB 2.0 support can store more data and transfer it faster than much larger optical disc drives such as CD-RW or DVD-RW drives, and can be read by many other systems such as the Xbox One, PlayStation 4, DVD players, automobile entertainment systems and a number of handheld devices such as smartphones and tablet computers, though the electronically similar SD card is better suited for those devices. A flash drive consists of a small printed circuit board carrying the circuit elements and a USB connector, insulated electrically and protected inside a plastic, metal or rubberized case, which can be carried in a pocket or on a key chain, for example.
The USB connector may be protected by a removable cap or by retracting into the body of the drive, although it is not likely to be damaged if unprotected. Most flash drives use a standard type-A USB connection allowing connection to a port on a personal computer, but drives for other interfaces also exist. USB flash drives draw power from the computer via the USB connection; some devices combine the functionality of a portable media player with USB flash storage. M-Systems, an Israeli company, was granted a US patent on November 14, 2000, titled "Architecture for a Universal Serial Bus-Based Flash Disk", crediting the invention to Amir Ban, Dov Moran and Oron Ogdan, all M-Systems employees at the time; the patent application had been filed by M-Systems in April 1999. In 1999, IBM filed an invention disclosure by one of its employees. Flash drives were sold by Trek 2000 International, a company in Singapore, which began selling them in early 2000. IBM became the first to sell USB flash drives in the United States in 2000; the initial storage capacity of a flash drive was 8 MB.
Another version of the flash drive, described as a pen drive, was developed; Pua Khein-Seng of Malaysia has been credited with this invention. Patent disputes have arisen over the years, with competing companies including Singaporean company Trek Technology and Chinese company Netac Technology attempting to enforce their patents. Trek has lost patent battles in some countries. Netac Technology has brought lawsuits against PNY Technologies, aigo and Taiwan's Acer and Tai Guen Enterprise Co. Flash drives are measured by the rate at which they transfer data. Transfer rates may be given in megabytes per second, megabits per second, or in optical drive multipliers such as "180X". File transfer rates vary among devices. Second-generation flash drives have been claimed to read at up to 30 MB/s and write at about half that rate, roughly 20 times faster than the theoretical maximum of the previous standard, USB 1.1, which is limited to 12 Mbit/s (1.5 MB/s) before overhead. The effective transfer rate of a device is also affected by the data access pattern.
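As a rough guide to how these units relate, the short sketch below converts megabits per second and CD-style "X" multipliers into megabytes per second; it assumes the usual 1X CD-ROM figure of roughly 150 KB/s and 8 bits per byte, and the function names are illustrative rather than taken from any specification.

```python
# Illustrative unit conversions for drive transfer rates (assumed figures,
# not taken from any specific device's data sheet).

CD_1X_KBPS = 150  # approximate kilobytes per second for a 1X CD-ROM drive

def mbit_s_to_mb_s(mbit_s: float) -> float:
    """Convert megabits per second to megabytes per second (8 bits per byte)."""
    return mbit_s / 8

def cd_multiplier_to_mb_s(multiplier: float) -> float:
    """Convert a CD-style speed rating such as '180X' to MB/s."""
    return multiplier * CD_1X_KBPS / 1000

if __name__ == "__main__":
    # USB 1.1 full speed: 12 Mbit/s -> 1.5 MB/s theoretical ceiling.
    print(f"USB 1.1: {mbit_s_to_mb_s(12):.1f} MB/s")
    # USB 2.0 high speed: 480 Mbit/s -> 60 MB/s theoretical ceiling.
    print(f"USB 2.0: {mbit_s_to_mb_s(480):.1f} MB/s")
    # A drive advertised with an optical-style '180X' rating.
    print(f"180X rating: {cd_multiplier_to_mb_s(180):.0f} MB/s")
```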
By 2002, USB flash drives had USB 2.0 connectivity, which has a transfer rate upper bound of 480 Mbit/s. That same year, Intel sparked widespread use of second-generation USB by including USB 2.0 ports in its laptops. Third-generation USB flash drives were announced in late 2008 and became available in 2010. Like USB 2.0 before it, USB 3.0 improved data transfer rates compared to its predecessor: the USB 3.0 interface specified transfer rates of up to 5 Gbit/s, compared to USB 2.0's 480 Mbit/s. By 2010 the maximum available storage capacity for the devices had reached upwards of 128 GB. USB 3.0 was slow to appear in laptops; as of 2010, the majority of laptop models still offered only USB 2.0 ports. In January 2013, tech company Kingston released a flash drive with 1 TB of storage; the first USB 3.1 type-C flash drives, with read/write speeds of around 530 MB/s, were announced in March 2015. As of July 2016, flash drives within the 8 to 256 GB
RAID ("redundant array of inexpensive disks", later "redundant array of independent disks") is a data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both. The concept stood in contrast to the previous approach of highly reliable mainframe disk drives, referred to as "single large expensive disk". Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance; the different schemes, or data distribution layouts, are named by the word "RAID" followed by a number, for example RAID 0 or RAID 1. Each scheme, or RAID level, provides a different balance among the key goals: reliability, availability, performance and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as well as against failures of whole physical drives. The term "RAID" was invented by David Patterson, Garth A. Gibson and Randy Katz at the University of California, Berkeley in 1987.
In their June 1988 paper "A Case for Redundant Arrays of Inexpensive Disks (RAID)", presented at the SIGMOD conference, they argued that the top-performing mainframe disk drives of the time could be beaten on performance by an array of the inexpensive drives that had been developed for the growing personal computer market. Although failures would rise in proportion to the number of drives, by configuring for redundancy, the reliability of an array could far exceed that of any large single drive. Although not yet using that terminology, the technologies of the five levels of RAID named in the June 1988 paper were used in various products prior to the paper's publication, including the following: mirroring was well established in the 1970s, for example in Tandem NonStop Systems. In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was subsequently named RAID 4. Around 1983, DEC began. In 1986, Clark et al. at IBM filed a patent disclosing what was subsequently named RAID 5. Around 1988, the Thinking Machines' DataVault used error correction codes in an array of disk drives.
A similar approach was used in the early 1960s on the IBM 353. Industry manufacturers later redefined the RAID acronym to stand for "redundant array of independent disks". Many RAID levels employ an error protection scheme called "parity", a widely used method in information technology to provide fault tolerance in a given set of data. Most use simple XOR, but RAID 6 uses two separate parities based respectively on addition and multiplication in a particular Galois field or on Reed–Solomon error correction. RAID can also provide data security with solid-state drives without the expense of an all-SSD system. For example, a fast SSD can be mirrored with a mechanical drive. For this configuration to provide a significant speed advantage, an appropriate controller is needed that uses the fast SSD for all read operations. Adaptec calls this "hybrid RAID". A number of standard schemes have evolved; these are called levels. Originally there were five RAID levels, but many variations have evolved, notably several nested levels and many non-standard levels.
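To make the XOR parity idea concrete, here is a minimal sketch (assuming toy 4-byte blocks and a three-data-disk stripe, not any real RAID implementation) showing how a parity block is computed and how a single lost block can be rebuilt from the survivors.

```python
# Minimal XOR-parity sketch (illustrative only, not a real RAID implementation).
# For a stripe D0, D1, ..., Dn-1 the parity block is P = D0 ^ D1 ^ ... ^ Dn-1,
# so any single missing block equals the XOR of all remaining blocks and P.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks in one stripe (toy 4-byte blocks).
d0, d1, d2 = b"\x01\x02\x03\x04", b"\xff\x00\xff\x00", b"\x10\x20\x30\x40"
parity = xor_blocks(d0, d1, d2)

# Simulate losing d1 (its drive failed) and rebuild it from the survivors.
rebuilt_d1 = xor_blocks(d0, d2, parity)
assert rebuilt_d1 == d1
print("reconstructed block:", rebuilt_d1.hex())
```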
RAID levels and their associated data formats are standardized by the Storage Networking Industry Association in the Common RAID Disk Drive Format standard. RAID 0 consists of striping, with no mirroring or parity. Compared to a spanned volume, the capacity of a RAID 0 volume is the same, but because striping distributes the contents of each file among all disks in the set, the failure of any disk causes all files, and thus the entire RAID 0 volume, to be lost; a broken spanned volume at least preserves the files on the unfailing disks. The benefit of RAID 0 is that the throughput of read and write operations to any file is multiplied by the number of disks because, unlike spanned volumes, reads and writes are done concurrently; the cost is complete vulnerability to drive failures. RAID 1 consists of data mirroring, without striping. Data is written identically to two or more drives, so any read request can be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first, improving performance.
Sustained read throughput, if the controller or software is optimized for it, approaches the sum of throughputs of every drive in the set, just as for RAID 0, though the actual read throughput of most RAID 1 implementations is slower than that of the fastest drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits the write performance; the array continues to operate as long as at least one drive is functioning. RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity is stored on at least one parity drive; this level is of historical significance only. RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is stored on a dedicated parity drive. Although implementations exist, RAID 3 is not commonly used in practice.
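The striping layout behind RAID 0 can be pictured as a simple round-robin mapping of logical blocks onto member disks; the sketch below is an illustrative model with invented function names, not code from any actual RAID driver.

```python
# Simplified RAID 0 layout model: logical blocks are assigned round-robin
# across the member disks, so sequential reads and writes hit all disks
# concurrently. Purely illustrative; real drivers also deal with stripe
# sizes, caching and error handling.

def raid0_locate(logical_block: int, num_disks: int) -> tuple[int, int]:
    """Return (disk_index, block_on_disk) for a logical block number."""
    return logical_block % num_disks, logical_block // num_disks

if __name__ == "__main__":
    for lb in range(8):
        disk, offset = raid0_locate(lb, num_disks=4)
        print(f"logical block {lb} -> disk {disk}, block {offset}")
```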
The Linux kernel is a free and open-source, Unix-like operating system kernel. The Linux family of operating systems is based on this kernel and deployed on both traditional computer systems such as personal computers and servers, in the form of Linux distributions, and on various embedded devices such as routers, wireless access points, PBXes, set-top boxes, FTA receivers, smart TVs, PVRs and NAS appliances. While the adoption of the Linux kernel in desktop computer operating systems is low, Linux-based operating systems dominate nearly every other segment of computing, from mobile devices to mainframes; as of November 2017, all of the world's 500 most powerful supercomputers run Linux. The Android operating system for tablet computers, smartphones and smartwatches also uses the Linux kernel. The Linux kernel was conceived and created in 1991 by Linus Torvalds for his personal computer and with no cross-platform intentions, but has since expanded to support a huge array of computer architectures, many more than other operating systems or kernels.
Linux attracted developers and users who adopted it as the kernel for other free software projects, notably the GNU Operating System, which had been created as a free, non-proprietary operating system based on UNIX as a by-product of the fallout of the Unix wars. The Linux kernel API, the application programming interface through which user programs interact with the kernel, is meant to be stable and not to break userspace programs; as part of the kernel's functionality, device drivers control the hardware. However, the interface between the kernel and loadable kernel modules, unlike in many other kernels and operating systems, is not meant to be stable by design. The Linux kernel, developed by contributors worldwide, is a prominent example of free and open-source software, and day-to-day development discussions take place on the Linux kernel mailing list. The Linux kernel is released under the GNU General Public License version 2, with some firmware images released under various non-free licenses. In April 1991, Linus Torvalds, at the time a 21-year-old computer science student at the University of Helsinki, started working on some simple ideas for an operating system.
He started with a task switcher in Intel 80386 assembly language and a terminal driver. On 25 August 1991, Torvalds posted the following to comp.os.minix, a newsgroup on Usenet: I'm doing a operating system for 386 AT clones. This has been brewing since April, is starting to get ready. I'd like any feedback on things people like/dislike in minix. I've ported bash and gcc, things seem to work; this implies that I'll get something practical within a few months Yes - it's free of any minix code, it has a multi-threaded fs. It is NOT portable, it never will support anything other than AT-harddisks, as that's all I have:-(. It's in C, but most people wouldn't call what I write C, it uses every conceivable feature of the 386 I could find, as it was a project to teach me about the 386. As mentioned, it uses a MMU, for both paging and segmentation. It's the segmentation; some of my "C"-files are as much assembler as C. Unlike minix, I happen to LIKE interrupts, so interrupts are handled without trying to hide the reason behind them. After that, many people contributed code to the project.
Early on, the MINIX community contributed code and ideas to the Linux kernel. At the time, the GNU Project had created many of the components required for a free operating system, but its own kernel, GNU Hurd, was incomplete and unavailable; the Berkeley Software Distribution had not yet freed itself from legal encumbrances. Despite the limited functionality of the early versions, Linux gained developers and users. In September 1991, Torvalds released version 0.01 of the Linux kernel on the FTP server of the Finnish University and Research Network. It had 10,239 lines of code. On 5 October 1991, version 0.02 of the Linux kernel was released. Torvalds assigned version 0 to the kernel to indicate that it was for testing and not intended for productive use. In December 1991, Linux kernel 0.11 was released. This version was the first to be self-hosted as Linux kernel 0.11 could be compiled by a computer running the same kernel version. When Torvalds released version 0.12 in February 1992, he adopted the GNU General Public License version 2 over his previous self-drafted license, which had not permitted commercial redistribution.
On 19 January 1992, the first post to the new newsgroup alt.os.linux was submitted. On 31 March 1992, the newsgroup was renamed comp.os.linux. The fact that Linux is a monolithic kernel rather than a microkernel was the topic of a debate between Andrew S. Tanenbaum, the creator of MINIX, and Torvalds; this discussion is known as the Tanenbaum–Torvalds debate and started in 1992 on the Usenet discussion group comp.os.minix as a general debate about Linux and kernel architecture. Tanenbaum argued that microkernels were superior to monolithic kernels and that therefore Linux was obsolete. Unlike traditional monolithic kernels, device drivers in Linux are configured as loadable kernel modules and can be loaded or unloaded while the system is running.
In computing, memory refers to the computer hardware integrated circuits that store information for immediate use in a computer. Computer memory operates at high speed, for example random-access memory, in contrast to storage, which provides slower access but offers higher capacity. If needed, contents of the computer memory can be transferred to secondary storage. An archaic synonym for memory is store. The term "memory", meaning "primary storage" or "main memory", is associated with addressable semiconductor memory, i.e. integrated circuits consisting of silicon-based transistors, used for example as primary storage but also for other purposes in computers and other digital electronic devices. There are two main kinds of semiconductor memory, volatile and non-volatile. Examples of non-volatile memory are ROM, PROM, EPROM and EEPROM memory. Examples of volatile memory are primary storage, which is typically dynamic random-access memory (DRAM), and fast CPU cache memory, which is typically static random-access memory (SRAM); SRAM is fast but energy-consuming, offering lower memory areal density than DRAM.
Most semiconductor memory is organized into memory cells or bistable flip-flops, each storing one bit. Flash memory organization includes both one bit per cell and multiple bits per cell. The memory cells are grouped into words of fixed word length, for example 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. This implies that processor registers are not normally considered memory, since they only store one word and do not include an addressing mechanism. Typical secondary storage devices are hard disk drives and solid-state drives. In the early 1940s, memory technology permitted a capacity of only a few bytes. The first electronic programmable digital computer, the ENIAC, using thousands of octal-base radio vacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits held in its vacuum tube accumulators. The next significant advance in computer memory came with acoustic delay line memory, developed by J. Presper Eckert in the early 1940s.
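As a quick worked example of the 2^N relationship, the snippet below (with illustrative address widths) shows how the number of addressable words grows with the width of the binary address.

```python
# Addressable words for an N-bit binary address: 2**N (illustrative widths only).
for n_bits in (8, 16, 32):
    print(f"{n_bits}-bit address -> {2**n_bits:,} addressable words")
# 8-bit  ->           256 words
# 16-bit ->        65,536 words
# 32-bit -> 4,294,967,296 words
```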
Through the construction of a glass tube filled with mercury and plugged at each end with a quartz crystal, delay lines could store bits of information in the form of sound waves propagating through the mercury, with the quartz crystals acting as transducers to read and write bits. Delay line memory was limited to a capacity of up to a few hundred thousand bits in order to remain efficient. Two alternatives to the delay line, the Williams tube and the Selectron tube, originated in 1946, both using electron beams in glass tubes as means of storage. Using cathode ray tubes, Fred Williams invented the Williams tube, which was the first random-access computer memory; the Williams tube proved less expensive than the Selectron tube, but frustratingly sensitive to environmental disturbances. Efforts began in the late 1940s to find non-volatile memory. Jay Forrester, Jan A. Rajchman and An Wang developed magnetic-core memory, which allowed for recall of memory after power loss. Magnetic-core memory would become the dominant form of memory until the development of transistor-based memory in the late 1960s.
Developments in technology and economies of scale have made possible so-called Very Large Memory computers. The term "memory" when used with reference to computers most often refers to random-access memory. Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). SRAM retains its contents as long as the power is connected and is simple to interface with, but uses six transistors per bit. Dynamic RAM is more complicated to interface with and control, needing regular refresh cycles to prevent losing its contents, but uses only one transistor and one capacitor per bit, allowing it to reach much higher densities and much lower per-bit costs. SRAM is not worthwhile for desktop system memory, where DRAM dominates, but is used for cache memories. SRAM is commonplace in small embedded systems. Forthcoming volatile memory technologies that aim to replace or compete with SRAM and DRAM include Z-RAM and A-RAM. Non-volatile memory is computer memory that can retain the stored information even when not powered.
Examples of non-volatile memory include read-only memory, flash memory, most types of magnetic computer storage devices, optical discs, and early computer storage methods such as paper tape and punched cards. Forthcoming non-volatile memory technologies include FeRAM, CBRAM, PRAM, STT-RAM, SONOS, RRAM, racetrack memory, NRAM, 3D XPoint and millipede memory. A third category of memory is "semi-volatile"; the term is used to describe memory that has some limited non-volatile duration after power is removed, but whose data is eventually lost. A typical goal when using a semi-volatile memory is to provide the high performance and durability associated with volatile memories while providing some benefits of true non-volatile memory. For example, some non-volatile memory types can wear out, where a "worn" cell has increased volatility but otherwise continues to work. Data locations which are written
In computing and optical disc recording technologies, an optical disc is a flat circular disc which encodes binary data in the form of pits and lands on a special material on one of its flat surfaces. The encoding material sits atop a thicker substrate which makes up the bulk of the disc and forms a dust defocusing layer. The encoding pattern follows a continuous, spiral path covering the entire disc surface and extending from the innermost track to the outermost track. The data is stored on the disc with a laser or stamping machine, and can be accessed when the data path is illuminated with a laser diode in an optical disc drive which spins the disc at speeds of about 200 to 4,000 RPM or more, depending on the drive type, the disc format and the distance of the read head from the center of the disc. Most optical discs exhibit a characteristic iridescence as a result of the diffraction grating formed by their grooves; this side of the disc contains the actual data and is coated with a transparent material, usually lacquer.
The reverse side of an optical disc has a printed label, sometimes made of paper but often printed or stamped onto the disc itself. Unlike the 3½-inch floppy disk, most optical discs do not have an integrated protective casing and are therefore susceptible to data transfer problems due to scratches and other environmental damage. Optical discs are between 7.6 and 30 cm in diameter, with 12 cm being the most common size. A typical disc is about 1.2 mm thick. An optical disc is designed to support one of three recording types: read-only, recordable, or re-recordable. Write-once optical discs have an organic dye recording layer between the substrate and the reflective layer. Rewritable discs contain an alloy recording layer composed of a phase-change material, most often AgInSbTe, an alloy of silver, indium, antimony and tellurium. Optical discs are most often used for storing music, video, or data and programs for personal computers; the Optical Storage Technology Association promotes standardized optical storage formats.
Although optical discs are more durable than earlier audio-visual and data storage formats, they are susceptible to environmental and daily-use damage. Libraries and archives enact optical media preservation procedures to ensure continued usability in the computer's optical disc drive or corresponding disc player. For computer data backup and physical data transfer, optical discs such as CDs and DVDs are being replaced with faster, smaller solid-state devices such as the USB flash drive; this trend is expected to continue as USB flash drives continue to increase in capacity and drop in price. Additionally, music purchased or shared over the Internet has reduced the number of audio CDs sold annually. The first recorded historical use of an optical disc was in 1884, when Alexander Graham Bell, Chichester Bell and Charles Sumner Tainter recorded sound on a glass disc using a beam of light. An early optical disc system existed in 1935, named Lichttonorgel. An early analog optical disc used for video recording was invented by David Paul Gregg in 1958 and patented in the US in 1961 and 1969.
This form of optical disc was an early form of the DVD. It is of special interest that U.S. Patent 4,893,297, filed in 1989 and issued in 1990, generated royalty income for Pioneer Corporation's DVA until 2007, by then encompassing the CD, DVD and Blu-ray systems. In the early 1960s, the Music Corporation of America bought Gregg's patents and his company, Gauss Electrophysics. American inventor James T. Russell has been credited with inventing the first system to record a digital signal on an optical transparent foil, lit from behind by a high-power halogen lamp. Russell's patent application was first filed in 1966 and he was granted a patent in 1970. Following litigation, Sony and Philips licensed Russell's patents in the 1980s. Both Gregg's and Russell's discs are floppy media read in transparent mode, which imposes serious drawbacks. In the Netherlands in 1969, Philips Research physicist Pieter Kramer invented an optical videodisc in reflective mode with a protective layer read by a focused laser beam (U.S. Patent 5,068,846, filed 1972, issued 1991).
Kramer's physical format is used in all optical discs. In 1975, Philips and MCA began to work together, and in 1978, commercially much too late, they presented their long-awaited Laserdisc in Atlanta. MCA delivered the discs and Philips the players. However, the presentation was a commercial failure and the cooperation ended. In Japan and the U.S., Pioneer succeeded with the videodisc until the advent of the DVD. In 1979, Philips and Sony, in consortium, developed the audio compact disc. In 1979, Exxon STAR Systems in Pasadena, CA built a computer-controlled WORM drive that utilized thin-film coatings of tellurium and selenium on a 12-inch diameter glass disk; the recording system utilized blue light to record and red light at 632.8 nm to read. STAR Systems was bought by Storage Technology Corporation in 1981 and moved to Boulder, CO. Development of the WORM technology was continued using 14-inch diameter aluminum substrates. Beta testing of the disk drives labeled the Laser
Parallel ATA, originally AT Attachment, is an interface standard for the connection of storage devices such as hard disk drives, floppy disk drives and optical disc drives in computers. The standard is maintained by the X3/INCITS committee. It uses the underlying AT Attachment (ATA) and AT Attachment Packet Interface (ATAPI) standards. The Parallel ATA standard is the result of a long history of incremental technical development, which began with the original AT Attachment interface, developed for use in early PC AT equipment. The ATA interface itself evolved in several stages from Western Digital's original Integrated Drive Electronics (IDE) interface. As a result, many near-synonyms for ATA/ATAPI and its previous incarnations are still in common informal use, in particular Extended IDE and Ultra ATA. After the introduction of Serial ATA in 2003, the original ATA was renamed to Parallel ATA, or PATA for short. Parallel ATA cables have a maximum allowable length of 18 in (457 mm); because of this limit, the technology normally appears as an internal computer storage interface.
For many years, ATA provided the least expensive interface for this application. It has been replaced by SATA in newer systems. The standard was originally conceived as the "AT Bus Attachment", officially called "AT Attachment" and abbreviated "ATA" because its primary feature was a direct connection to the 16-bit ISA bus introduced with the IBM PC/AT. The original ATA specifications published by the standards committees use the name "AT Attachment". The "AT" in the IBM PC/AT referred to "Advanced Technology", so ATA has also been referred to as "Advanced Technology Attachment". When the newer Serial ATA was introduced in 2003, the original ATA was renamed to Parallel ATA, or PATA for short. The first version of what is now called the ATA/ATAPI interface was developed by Western Digital under the name Integrated Drive Electronics. Together with Control Data Corporation and Compaq Computer, they developed the connector, the signaling protocols and so on, with the goal of remaining software compatible with the existing ST-506 hard drive interface.
The first such drives appeared in Compaq PCs in 1986. The term Integrated Drive Electronics refers not just to the connector and interface definition, but to the fact that the drive controller is integrated into the drive, as opposed to a separate controller on or connected to the motherboard. The interface cards used to connect a parallel ATA drive to, for example, a PCI slot are not drive controllers: they are bridges between the host bus and the ATA interface. Since the original ATA interface is essentially just a 16-bit ISA bus in disguise, the bridge was especially simple where the ATA connector was located on an ISA interface card. The integrated controller presented the drive to the host computer as an array of 512-byte blocks with a relatively simple command interface. This relieved the mainboard and interface cards in the host computer of the chores of stepping the disk head arm, moving the head arm in and out, and so on, as had to be done with earlier ST-506 and ESDI hard drives. All of these low-level details of the mechanical operation of the drive were now handled by the controller on the drive itself.
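The idea of a drive presenting itself as a flat array of 512-byte blocks behind a simple command interface can be sketched as a toy model; the class and method names below are invented for illustration and do not correspond to actual ATA commands, which address sectors via CHS or, later, LBA.

```python
# Toy model of a drive that presents itself as an array of 512-byte blocks.
# Names are illustrative; real ATA drives expose this through READ/WRITE
# commands addressed by sector, not Python method calls.

SECTOR_SIZE = 512

class BlockDevice:
    def __init__(self, num_sectors: int):
        self._data = bytearray(num_sectors * SECTOR_SIZE)
        self.num_sectors = num_sectors

    def read_sector(self, lba: int) -> bytes:
        """Return the 512-byte sector at logical block address `lba`."""
        start = lba * SECTOR_SIZE
        return bytes(self._data[start:start + SECTOR_SIZE])

    def write_sector(self, lba: int, payload: bytes) -> None:
        """Write exactly one 512-byte sector; head positioning and other
        mechanical details are the drive's problem, not the host's."""
        assert len(payload) == SECTOR_SIZE
        start = lba * SECTOR_SIZE
        self._data[start:start + SECTOR_SIZE] = payload

if __name__ == "__main__":
    disk = BlockDevice(num_sectors=1024)          # ~512 KiB toy disk
    disk.write_sector(3, bytes(range(256)) * 2)   # 512 bytes of test data
    assert disk.read_sector(3)[:4] == b"\x00\x01\x02\x03"
```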
This eliminated the need to design a single controller that could handle many different types of drives, since the controller could be unique to the drive. The host need only ask for a particular sector, or block, to be read or written, and either accept the data from the drive or send the data to it. The interface used by these drives was standardized in 1994 as ANSI standard X3.221-1994, AT Attachment Interface for Disk Drives. After later versions of the standard were developed, this became known as "ATA-1". A short-lived, seldom-used implementation of ATA was created for the IBM XT and similar machines that used the 8-bit version of the ISA bus; it has been referred to as "XT-IDE", "XTA" or "XT Attachment". When PC motherboard makers started to include onboard ATA interfaces in place of the earlier ISA plug-in cards, there was only one ATA connector on the board, which could support up to two hard drives. At the time, in combination with the floppy drive, this was sufficient for most users. When the CD-ROM was developed, many computers would have been unable to accept these drives if they had been ATA devices, due to already having two hard drives installed.
Adding the CD-ROM drive would have required removal of one of the hard drives. SCSI was available as a CD-ROM expansion option at the time, but devices with SCSI were more expensive than ATA devices due to the need for a smart interface capable of bus arbitration; SCSI typically added US$100–300 to the cost of a storage device, in addition to the cost of a SCSI host adapter. The less expensive solution was the addition of a dedicated CD-ROM interface, included as an expansion option on a sound card; PC motherboards of the era did not come with support for more than simple beeps from internal speakers, and sound cards included a game port (joystick/gamepad port) along with interfaces to control a CD-ROM drive and transmit CD audio to the system. This second drive interface was not well defined. It was first introduced with interfaces specific to certain CD-ROM drives such as Mitsumi, Sony or Panasonic drives, and it was common to find early sound cards with two or three separate connectors, each designed to match a certain brand of CD-ROM drive.
This evolved into the standard ATA interface for ease of cross-compatibility, though the sound card ATA interface still usual