USB (Universal Serial Bus) is an industry standard that establishes specifications for cables, connectors and protocols for connection, communication and power supply between personal computers and their peripheral devices. Released in 1996, the USB standard is maintained by the USB Implementers Forum (USB-IF). There have been three generations of USB specifications: USB 1.x, USB 2.0 and USB 3.x. USB was designed to standardize the connection of peripherals such as keyboards, pointing devices, digital still and video cameras, portable media players, disk drives and network adapters to personal computers, both to communicate with them and to supply electric power. It has largely replaced interfaces such as serial ports and parallel ports and has become commonplace on a wide range of devices. USB connectors have also been replacing other connector types in battery chargers for portable devices. The Universal Serial Bus was developed to simplify and improve the interface between personal computers and peripheral devices, compared with previously existing standard or ad hoc proprietary interfaces.
From the computer user's perspective, the USB interface improved ease of use in several ways. The USB interface is self-configuring, so the user need not adjust settings on the device or interface for speed or data format, nor configure interrupts, input/output addresses, or direct memory access channels. USB connectors are standardized at the host, so any peripheral can use any available receptacle. USB takes full advantage of the additional processing power that can be economically put into peripheral devices so that they can manage themselves. The USB interface is "hot-pluggable", meaning devices can be exchanged without rebooting the host computer. Small devices can be powered directly from the USB interface, displacing extra power supply cables. Because use of the USB logos is only permitted after compliance testing, the user can have confidence that a USB device will work as expected without extensive interaction with settings and configuration. Installation of a device relying on the USB standard requires minimal operator action.
When a device is plugged into a port on a running personal computer system, it is either automatically configured using existing device drivers, or the system prompts the user to locate a driver, which is then installed and configured automatically. For hardware manufacturers and software developers, the USB standard eliminates the requirement to develop proprietary interfaces for new peripherals. The wide range of transfer speeds available from a USB interface suits devices ranging from keyboards and mice up to streaming video interfaces. A USB interface can be designed to provide the best available latency for time-critical functions, or can be set up to do background transfers of bulk data with little impact on system resources. The USB interface is generalized, with no signal lines dedicated to only one function of one device. USB cables are limited in length, as the standard was meant to connect peripherals on the same table-top, not between rooms or between buildings. However, a USB port can be connected to a gateway that accesses distant devices.
USB has a "master-slave" protocol for addressing peripheral devices; some extension of this limitation is possible through USB On-The-Go. A host cannot "broadcast" signals to all peripherals at once; each must be addressed individually. Some high-speed peripheral devices require sustained speeds not available in the USB standard. While converters exist between certain "legacy" interfaces and USB, they may not provide a full implementation of the legacy hardware. For a product developer, use of USB requires implementation of a complex protocol and implies an "intelligent" controller in the peripheral device. Developers of USB devices intended for public sale must obtain a USB ID, which requires a fee paid to the USB Implementers Forum. Developers of products that use the USB specification must sign an agreement with the Implementers Forum, and use of the USB logos on a product requires annual fees and membership in the organization. A group of seven companies began the development of USB in 1994: Compaq, DEC, IBM, Intel, Microsoft, NEC and Nortel.
The goal was to make it fundamentally easier to connect external devices to PCs by replacing the multitude of connectors at the back of PCs, addressing the usability issues of existing interfaces, simplifying software configuration of all devices connected to USB, and permitting greater data rates for external devices. Ajay Bhatt and his team worked on the standard at Intel. The original USB 1.0 specification, introduced in January 1996, defined data transfer rates of 1.5 Mbit/s Low Speed and 12 Mbit/s Full Speed. Microsoft Windows 95 OSR 2.1 provided OEM support for the devices. The first widely used version of USB was 1.1, released in September 1998. The 12 Mbit/s data rate was intended for higher-speed devices such as disk drives, the lower 1.5 Mbit/s rate for low data
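The signaling rates above can be turned into a rough back-of-envelope transfer-time estimate. The sketch below is illustrative only: it ignores USB protocol overhead (framing, handshaking, bus sharing), so real throughput is noticeably lower than the raw bit rate.

```python
# Raw signaling rates of USB 1.x, in bits per second.
LOW_SPEED_BITS = 1_500_000      # 1.5 Mbit/s Low Speed
FULL_SPEED_BITS = 12_000_000    # 12 Mbit/s Full Speed

def seconds_to_transfer(num_bytes: int, bits_per_second: int) -> float:
    """Idealized transfer time, ignoring all protocol overhead."""
    return num_bytes * 8 / bits_per_second

# Example: a 1,474,560-byte (1.44 MB) floppy image over Full Speed
# takes just under one second of raw signaling time.
t = seconds_to_transfer(1_474_560, FULL_SPEED_BITS)
```

At Low Speed the same transfer would take eight times as long, which is why the 1.5 Mbit/s rate was reserved for low-data-rate devices.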
Computer data storage
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers; the central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away. The fast volatile technologies are referred to as "memory", while slower persistent technologies are referred to as "storage". In the von Neumann architecture, the CPU consists of two main parts: the control unit and the arithmetic logic unit. The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would merely be able to perform fixed operations and output the result; it would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors and other specialized devices.
Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions. Most modern computers are von Neumann machines. A modern digital computer represents data using the binary numeral system. Text, pictures and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes with one byte per character. Data are encoded by assigning a bit pattern to each character, digit, or multimedia object.
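The five-megabyte figure for Shakespeare can be checked with simple arithmetic. The sketch below assumes roughly 4000 characters per printed page (an assumed figure for illustration, not from the source) and one byte per character.

```python
# Back-of-envelope estimate of storage for the complete works of
# Shakespeare, at one byte per character (e.g. an ASCII-style encoding).
PAGES = 1250
CHARS_PER_PAGE = 4000   # assumed average; real pages vary
BYTES_PER_CHAR = 1

total_bytes = PAGES * CHARS_PER_PAGE * BYTES_PER_CHAR
total_megabytes = total_bytes / 1_000_000   # decimal megabytes
```

With these assumptions the total comes out to exactly five million bytes, matching the "about five megabytes" cited in the text.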
Many standards exist for encoding. By adding bits to each encoded unit, redundancy allows the computer to both detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur with low probability, due to random bit-value flipping, due to "physical bit fatigue" (the loss of a physical bit's ability in storage to maintain a distinguishable value), or due to errors in inter- or intra-computer communication. A random bit flip is typically corrected upon detection. A bit, or a group of malfunctioning physical bits, is automatically fenced out (taken out of use by the device) and replaced with another functioning equivalent group in the device, where the corrected bit values are restored. The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection; a detected error is then retried. Data compression methods allow, in many cases, a string of bits to be represented by a shorter bit string and the original string to be reconstructed when needed. This uses less storage for many types of data at the cost of more computation.
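The detection side of this can be sketched with a checksum. The example below uses Python's `zlib.crc32` as a stand-in; real storage devices and links use various CRC polynomials and stronger ECC codes, so this is purely illustrative.

```python
import zlib

data = bytearray(b"hello, storage world")
checksum = zlib.crc32(data)   # computed when the data is first stored

# Simulate "physical bit fatigue": flip a single bit in the stored copy.
data[3] ^= 0b00000100

# On read-back the recomputed CRC no longer matches, so the error is
# detected. (A CRC alone only detects errors; correcting them needs
# redundant codes such as ECC, as described above.)
assert zlib.crc32(data) != checksum
```

Flipping the same bit back restores the original data, and the CRC matches again, which is exactly the property a read-back verification relies on.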
An analysis of the trade-off between storage cost savings and the costs of the related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary and off-line storage is guided by cost per bit. In contemporary usage, "memory" is semiconductor read-write random-access storage, typically DRAM or other forms of fast but temporary storage. "Storage" consists of storage devices and their media not directly accessible by the CPU (hard disk drives, optical disc drives and other devices slower than RAM but non-volatile). Historically, memory has been called core memory, main memory, real storage or internal memory, while non-volatile storage devices have been referred to as secondary storage, external memory or auxiliary/peripheral storage.
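The compression trade-off described above can be demonstrated concretely. The sketch below uses Python's `zlib` as an example compressor: repetitive data shrinks dramatically, the round trip is lossless, and the cost is the CPU time spent compressing and decompressing.

```python
import zlib

# Highly repetitive data compresses well; compression trades CPU time
# for storage space, and decompression reconstructs the original exactly.
original = b"ABABABAB" * 1000            # 8000 bytes of repetitive data
compressed = zlib.compress(original)

assert len(compressed) < len(original)           # fewer bytes to store
assert zlib.decompress(compressed) == original   # lossless round trip
```

Whether the saved bytes justify the extra computation, and the added latency before the data is usable, is precisely the analysis the text describes.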
Primary storage, often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required; any data actively operated on is also stored there in a uniform manner. Early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were largely replaced by magnetic core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive; this led to modern random-access memory (RAM).
OS/2 is a series of computer operating systems created by Microsoft and IBM under the leadership of IBM software designer Ed Iacobucci. As a result of a feud between the two companies over how to position OS/2 relative to Microsoft's new Windows 3.1 operating environment, the two companies severed the relationship in 1992 and OS/2 development fell exclusively to IBM. The name stands for "Operating System/2", because it was introduced as part of the same generation-change release as IBM's "Personal System/2" (PS/2) line of second-generation personal computers. The first version of OS/2 was released in December 1987 and newer versions were released until December 2001. OS/2 was intended as a protected-mode successor of PC DOS; notably, basic system calls were modeled after MS-DOS calls. Because of this heritage, OS/2 shares similarities with Unix and Windows NT. IBM discontinued its support for OS/2 on 31 December 2006. Since then, it has been updated and marketed under the name eComStation. In 2015 it was announced that a new OEM distribution of OS/2 would be released, to be called ArcaOS.
ArcaOS is available for purchase. The development of OS/2 began when IBM and Microsoft signed the "Joint Development Agreement" in August 1985. It was code-named "CP/DOS", and it took two years for the first product to be delivered. OS/2 1.0 was released in December 1987. The original release was text-mode only; a GUI was introduced with OS/2 1.1 about a year later. OS/2 featured an API for controlling the video display and handling keyboard and mouse events, so that programmers writing for protected mode did not need to call the BIOS or access hardware directly. Other development tools included a subset of the video and keyboard APIs as linkable libraries, so that "family mode" programs were able to run under MS-DOS, and, in the OS/2 Extended Edition v1.0, a database engine called Database Manager or DBM. A task switcher named Program Selector was available through the Ctrl-Esc hotkey combination, allowing the user to select among multitasked text-mode sessions. Communications and database-oriented extensions were delivered in 1988 as part of OS/2 1.0 Extended Edition: SNA, X.25/APPC/LU 6.2, LAN Manager, Query Manager and SQL.
The promised graphical user interface, Presentation Manager, was introduced with OS/2 1.1 in October 1988. It had a similar user interface to Windows 2.1, released in May of that year. The Extended Edition of 1.1, sold only through IBM sales channels, introduced distributed database support to IBM database systems and SNA communications support to IBM mainframe networks. In 1989, version 1.2 introduced installable filesystems and, notably, the HPFS filesystem. HPFS provided a number of improvements over the older FAT file system, including long filenames and a form of alternate data streams called Extended Attributes. In addition, extended attributes were also added to the FAT file system. The Extended Edition of 1.2 introduced Ethernet support. OS/2- and Windows-related books of the late 1980s acknowledged the existence of both systems and promoted OS/2 as the system of the future. The collaboration between IBM and Microsoft unravelled in 1990, between the releases of Windows 3.0 and OS/2 1.3. During this time, Windows 3.0 became a tremendous success, selling millions of copies in its first year.
Much of its success came from Windows being bundled with many new computers, whereas OS/2 was available only as an additional stand-alone software package. In addition, OS/2 lacked device drivers for many common devices such as printers, particularly for non-IBM hardware. Windows, on the other hand, supported a much larger variety of hardware. The increasing popularity of Windows prompted Microsoft to shift its development focus from cooperating on OS/2 with IBM to building its own business based on Windows. Several technical and practical reasons contributed to this breakup. The two companies had significant differences in vision: Microsoft favored the open hardware system approach that had contributed to its success on the PC. Microsoft programmers became frustrated with IBM's bureaucracy and its use of lines of code to measure programmer productivity; IBM developers complained about the terseness and lack of comments in Microsoft's code, while Microsoft developers complained that IBM's code was bloated. The two products also had significant differences in API.
OS/2 was announced when Windows 2.0 was near completion and the Windows API already defined. However, IBM requested that this API be changed for OS/2, so issues surrounding application compatibility appeared immediately. OS/2's designers hoped for source code conversion tools, allowing complete migration of Windows application source code to OS/2 at some point, but OS/2 1.x did not gain enough momentum to allow vendors to avoid developing for both OS/2 and Windows in parallel. OS/2 1.x targets the protected mode of the 80286 processor, which DOS fundamentally does not. IBM insisted on supporting the 80286, with its 16-bit segmented memory mode, because of commitments made to customers who had purchased many 80286-based PS/2s as a result of IBM's promises surrounding OS/2. Until release 2.0 in April 1992, OS/2 ran in 16-bit protected mode and therefore could not take advantage of the 32-bit flat memory model of the 80386.
IBM PC DOS
IBM PC DOS is a discontinued operating system for the IBM Personal Computer, sold by IBM from the early 1980s into the 2000s. Before version 6.1, PC DOS was an IBM-branded version of MS-DOS; from version 6.1 on, PC DOS became IBM's independent product. The IBM task force assembled to develop the PC decided that critical components of the machine, including the operating system, would come from outside vendors. This radical break from the company's tradition of in-house development was one of the key decisions that made the IBM PC an industry standard. At that time the private company Microsoft, founded five years earlier by Bill Gates, was selected to supply the operating system. IBM wanted Microsoft to retain ownership of whatever software it developed, and wanted nothing to do with helping Microsoft, other than making suggestions from afar. According to task force member Jack Sams: The reasons were internal. We had a terrible problem being sued by people claiming we had stolen their stuff; it could be horribly expensive for us to have our programmers look at code that belonged to someone else because they would come back and say we stole it and made all this money.
We had lost a series of suits on this, so we didn't want to have someone else's product worked on by IBM people. We went to Microsoft on that proposition. IBM first contacted Microsoft to look the company over in July 1980. Negotiations continued over the months that followed, and the paperwork was signed in early November. Although IBM expected that most customers would use PC DOS, the IBM PC also supported CP/M-86 (which became available six months after PC DOS) and the UCSD p-System operating systems. IBM's expectation proved correct: one survey found that 96.3% of PCs were ordered with the $40 PC DOS, compared with 3.4% with the $240 CP/M-86. Microsoft first licensed, then purchased, 86-DOS from Seattle Computer Products (SCP); it was modified for the IBM PC by Microsoft employee Bob O'Rear, with assistance from SCP employee Tim Paterson. O'Rear got 86-DOS to run on the prototype PC in February 1981. 86-DOS had to be converted from 8-inch to 5.25-inch floppy disks and integrated with the BIOS, which Microsoft was helping IBM to write.
IBM had more people writing requirements for the computer than Microsoft had writing code, and O'Rear felt overwhelmed by the number of people he had to deal with at the ESD facility in Boca Raton, Florida. The first public mention of the operating system was in July 1981, when Byte discussed rumors of a forthcoming personal computer with "a CP/M-like DOS... to be called, simply, 'IBM Personal Computer DOS'". 86-DOS was rebranded IBM PC DOS 1.0 for its August 1981 release with the IBM PC. The initial version of DOS was largely based on CP/M-80 1.x, and most of its architecture, function calls and file-naming conventions were copied directly from the older OS. The most significant difference was that it introduced a different file system, FAT12. Unlike in later DOS versions, the DATE and TIME commands were separate executables rather than part of COMMAND.COM. Single-sided 160-kilobyte 5.25" floppies were the only disk format supported. In late 1981, Paterson, now at Microsoft, began writing PC DOS 1.10. It debuted in May 1982 along with the Revision B IBM PC.
Support for the new double-sided drives was added, a number of bugs were fixed, and error messages and prompts were made less cryptic. The DEBUG utility was now able to load files greater than 64 KB in size. Next, a group of Microsoft programmers began work on PC DOS 2.0. Extensively rewritten, DOS 2.0 added subdirectories and hard disk support for the new IBM XT, which debuted in March 1983. A new 9-sector format bumped the capacity of floppy disks to 360 KB. The Unix-inspired kernel featured file handles in place of the CP/M-derived file control blocks, and loadable device drivers could now be used to add hardware beyond that which the IBM PC BIOS supported. BASIC and most of the utilities provided with DOS were upgraded as well. A major undertaking that took 10 months of work, DOS 2.0 was more than twice as big as DOS 1.x, occupying around 28 KB of RAM compared with the 12 KB of its predecessor. It would form the basis for all Microsoft consumer-oriented OSes until 2001, when Windows XP was released. In October 1983, DOS 2.1 debuted.
It added support for half-height floppy drives and the new IBM PCjr. In 1983, Compaq released the Compaq Portable, the first 100% IBM PC compatible, and licensed its own OEM version of DOS 1.10 from Microsoft. Other PC compatibles followed suit, most of which included hardware-specific DOS features, although some were generic. In August 1984, IBM introduced its next-generation machine, the IBM PC/AT, and along with it DOS 3.00. Despite jumping a whole version number, it again proved little more than an incremental upgrade, adding nothing more substantial than support for the AT's new 1.2-megabyte floppy disks. Planned networking capabilities in DOS 3.00 were judged too buggy to be usable, and Microsoft disabled them prior to the OS's release. In any case, IBM's original plan for the AT had been to equip it with a proper next-generation OS that would use its extended features, but this never materialized. PC DOS 3.1 fixed the bugs in DOS 3.00 and supported IBM's Network Adapter card on the IBM PC Network. PC DOS 3.2 added support for 3½-inch double-density 720 KB floppy disk drives, supporting the IBM PC Convertible, IBM's first co
A README file contains information about other files in a directory or archive of computer software. A form of documentation, it is usually a simple plain text file called READ.ME, README.TXT, README.md, README.1ST, or simply README. The file's name is generally written in uppercase letters. On Unix-like systems in particular, this makes it easily noticed, both because lowercase filenames are more usual and because the ls command traditionally sorts and displays files in ASCIIbetical ordering, so that uppercase filenames appear first. The contents typically include one or more of the following: configuration instructions, installation instructions, operating instructions, a file manifest, copyright and licensing information, contact information for the distributor or programmer, known bugs, troubleshooting notes, credits and acknowledgments, a changelog, and a news section. It is unclear when the convention began, but there are examples dating back to the mid-1970s. In particular, there is a long history of free software and open-source software including a README file.
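The "uppercase filenames appear first" effect is a direct consequence of ASCII code points: uppercase letters (65 through 90) sort before lowercase ones (97 through 122). Python's default string sort compares code points the same way a traditional byte-wise ls does, so it can demonstrate the behavior; the filenames below are made up for illustration.

```python
# ASCIIbetical ordering: 'M' (77) and 'R' (82) sort before any
# lowercase letter (97+), so README floats toward the top of a listing.
files = ["main.c", "readme.txt", "Makefile", "README", "notes.md"]
listing = sorted(files)
print(listing)
# Uppercase names come first; lowercase names follow in their own order.
```

This is why an all-uppercase README is the first thing a user sees in a traditional directory listing.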
Since the advent of the web as a de facto standard platform for software distribution, many software packages have moved some of the above ancillary files and pieces of information to a website or wiki, sometimes including the README itself, or sometimes leaving behind only a brief README file without all of the information required by a new user of the software. The popular proprietary Git hosting service GitHub encourages a README file: if one is included in the main directory of a repository, it is automatically presented on the repository's main web page. While traditional plain text is supported, various other file extensions and formats are supported as well, and conversion to HTML takes account of the file's extension; in particular, a "README.md" file is treated as a GitHub Flavored Markdown file. The expression "readme file" is also sometimes used generically for files with a similar purpose. For example, the source code distributions of many free software packages, especially those following the Gnits Standards or those produced with GNU Autotools, include a standard set of readme files. Other files commonly distributed with software include a FAQ and a TODO file listing possible future changes.
International Business Machines Corporation (IBM) is an American multinational information technology company headquartered in Armonk, New York, with operations in over 170 countries. The company began in 1911, founded in Endicott, New York, as the Computing-Tabulating-Recording Company, and was renamed "International Business Machines" in 1924. IBM produces and sells computer hardware and software, and provides hosting and consulting services in areas ranging from mainframe computers to nanotechnology. IBM is also a major research organization, holding the record for most U.S. patents generated by a business for 26 consecutive years. Inventions by IBM include the automated teller machine, the floppy disk, the hard disk drive, the magnetic stripe card, the relational database, the SQL programming language, the UPC barcode and dynamic random-access memory. The IBM mainframe, exemplified by the System/360, was the dominant computing platform during the 1960s and 1970s. IBM has continually shifted business operations by focusing on higher-value, more profitable markets.
This includes spinning off printer manufacturer Lexmark in 1991, selling its personal computer and x86-based server businesses to Lenovo, and acquiring companies such as PwC Consulting, SPSS, The Weather Company and Red Hat. In 2014, IBM announced that it would go "fabless", continuing to design semiconductors but offloading manufacturing to GlobalFoundries. Nicknamed Big Blue, IBM is one of 30 companies included in the Dow Jones Industrial Average and one of the world's largest employers, with over 380,000 employees, known as "IBMers". At least 70% of IBMers are based outside the United States, and the country with the largest number of IBMers is India. IBM employees have been awarded five Nobel Prizes, six Turing Awards, ten National Medals of Technology and five National Medals of Science. In the 1880s, technologies emerged that would form the core of International Business Machines. Julius E. Pitrap patented the computing scale in 1885. On June 16, 1911, four companies were amalgamated in New York State by Charles Ranlett Flint, forming a fifth company, the Computing-Tabulating-Recording Company, based in Endicott, New York.
The five companies had offices and plants in Endicott and Binghamton, New York, and in several other cities. They manufactured machinery for sale and lease, ranging from commercial scales, industrial time recorders and cheese slicers to tabulators and punched cards. Thomas J. Watson, Sr., fired from the National Cash Register Company by John Henry Patterson, called on Flint and, in 1914, was offered a position at CTR. Watson joined CTR as General Manager and, 11 months later, was made President when court cases relating to his time at NCR were resolved. Having learned Patterson's pioneering business practices, Watson proceeded to put the stamp of NCR onto CTR's companies: he implemented sales conventions, "generous sales incentives, a focus on customer service, an insistence on well-groomed, dark-suited salesmen and had an evangelical fervor for instilling company pride and loyalty in every worker". His favorite slogan, "THINK", became a mantra for each company's employees. During Watson's first four years, revenues reached $9 million and the company's operations expanded to Europe, South America and Australia.
Watson never liked the clumsy hyphenated name "Computing-Tabulating-Recording Company", and on February 14, 1924 he chose to replace it with the more expansive title "International Business Machines". By 1933, most of the subsidiaries had been merged into one company, IBM. In 1937, IBM's tabulating equipment enabled organizations to process unprecedented amounts of data. Its clients included the U.S. Government, during its first effort to maintain the employment records of 26 million people pursuant to the Social Security Act, and Hitler's Third Reich, which tracked persecuted groups through the German subsidiary Dehomag. In 1949, Thomas Watson, Sr. created IBM World Trade Corporation, a subsidiary of IBM focused on foreign operations. In 1952, he stepped down after 40 years at the company helm, and his son, Thomas Watson, Jr., was named president. In 1956, the company demonstrated the first practical example of artificial intelligence when Arthur L. Samuel of IBM's Poughkeepsie, New York, laboratory programmed an IBM 704 not merely to play checkers but to "learn" from its own experience.
In 1957, the FORTRAN scientific programming language was developed. In 1961, IBM developed the SABRE reservation system for American Airlines and introduced the successful Selectric typewriter. In 1963, IBM employees and computers helped NASA track the orbital flights of the Mercury astronauts. A year later, it moved its corporate headquarters from New York City to Armonk, New York. The latter half of the 1960s saw IBM continue its support of space exploration, participating in the 1965 Gemini flights, the 1966 Saturn flights and the 1969 lunar mission. On April 7, 1964, IBM announced the first computer system family, the IBM System/360. It spanned the complete range of commercial and scientific applications from large to small, allowing companies for the first time to upgrade to models with greater computing capability without having to rewrite their applications. It was followed by the IBM System/370 in 1970. Together the
A floppy-disk controller (FDC) is a special-purpose chip and associated disk controller circuitry that directs and controls reading from and writing to a computer's floppy disk drive. This article describes concepts common to FDCs based on the NEC µPD765 and Intel 8272A or 82072A and their descendants, as used in the IBM PC and compatibles of the 1980s and 1990s; the concepts may or may not be applicable to, or illustrative of, other controllers or architectures. A single floppy-disk controller board can support up to four floppy disk drives. The controller is linked to the system bus of the computer and appears as a set of I/O ports to the CPU; it is also connected to a channel of the DMA controller. On the x86 PC the floppy controller uses IRQ 6; on other systems other interrupt schemes may be used. The floppy disk controller typically performs data transmission in direct memory access (DMA) mode. In a typical arrangement, the floppy disk controller communicates with the CPU via an Industry Standard Architecture (ISA) bus or similar bus, and communicates with the floppy disk drive via a 34-pin ribbon cable.
An alternative arrangement, more usual in recent designs, has the FDC included in a super I/O chip, which communicates via a Low Pin Count (LPC) bus. Most floppy disk controller functions are performed by the integrated circuit, but some are performed by external hardware circuits. Functions performed by the FDC IC include: translating data bits into FM, MFM, M²FM, or GCR format in order to record them; interpreting and executing commands such as seek and write; error detection, with checksum generation and verification such as CRC; and synchronizing data with a phase-locked loop. Functions performed by the external hardware include: selection of the floppy disk drive; switching on the floppy drive motor; the reset signal for the floppy controller IC; enabling and disabling the interrupt and DMA signals of the floppy disk controller; data separation logic; write pre-compensation logic; and line drivers and line receivers for the interface signals. The FDC has three I/O ports: the data port, the main status register, and the digital control port. The first two reside inside the FDC IC, while the control port is in the external hardware.
On the PC, these three ports conventionally appear at I/O addresses 0x3F5 (data port), 0x3F4 (main status register) and 0x3F2 (digital control port) for the primary controller. The data port is used by the software for three different purposes. First, while issuing a command to the FDC IC, command and command-parameter bytes are written to the FDC IC through this port; the FDC IC stores the command in its internal registers. Second, after a command is executed, the FDC IC stores a set of status parameters in its internal registers; these are read by the CPU through this port, with the different status bytes presented by the FDC IC in a specific sequence. Third, in the programmed-I/O and interrupt modes of data transfer, the data port is used for transferring data between the FDC IC and the CPU via IN or OUT instructions. The main status register is used by the software to read overall status information about the FDC IC and the drives: before initiating a floppy disk operation, the software reads this port to confirm the readiness of the FDC and the disk drives, and to verify the status of the initiated command. The digital control port is used by the software to control certain FDD and FDC IC functions.
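The Main Status Register packs several independent flags into a single byte. The sketch below decodes those flags in Python; the bit layout follows the published µPD765/8272A documentation, while the function name and the sample value are made up for illustration (real code would read the byte from the status port with an IN instruction).

```python
# uPD765/8272A Main Status Register (MSR) bit layout:
#   bit 7  RQM  - request for master: data port ready for a transfer
#   bit 6  DIO  - data direction: 1 = FDC -> CPU, 0 = CPU -> FDC
#   bit 5  NDMA - executing a command in non-DMA mode
#   bit 4  CB   - controller busy: a command is in progress
#   bits 3-0    - drives 3..0 busy with a seek/recalibrate

def decode_msr(msr: int) -> dict:
    """Decode an MSR byte (hypothetical helper) into named flags."""
    return {
        "ready_for_transfer": bool(msr & 0x80),
        "direction_fdc_to_cpu": bool(msr & 0x40),
        "non_dma_mode": bool(msr & 0x20),
        "command_busy": bool(msr & 0x10),
        "seeking_drives": [d for d in range(4) if msr & (1 << d)],
    }

# Made-up sample: RQM set, direction FDC->CPU, command busy, drive 0
# seeking -> 0x80 | 0x40 | 0x10 | 0x01 = 0xD1
flags = decode_msr(0xD1)
```

Polling RQM (and checking DIO) before every data-port access is exactly the "confirm the readiness condition" step the text describes.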
The bit assignments of this port cover drive selection, controller reset, DMA/interrupt enable and the drive motor enables. The controller connects to the drive using a flat 34-conductor ribbon cable with connectors for the host, the 3.5" drive and the 5.25" drive; this type of cable is called a universal cable. In the IBM PC family and compatibles, a twist in the cable is used to distinguish disk drives by the socket to which they are connected: all drives are installed with the same drive-select address set, and the twist in the cable interchanges the drive-select lines at the socket. The drive at the furthest end of the cable would additionally have a terminating resistor installed to maintain signal quality. Further descriptions of the interface signals are contained in the specifications of the controllers or drives. Many mutually incompatible floppy disk formats are possible. Sides: SS (single-sided) or DS (double-sided). Density: SD (single density), DD (double density), QD (quad density), HD (high density), ED (extra-high density) or TD (triple density). Primarily in Japan, there are 3.5" high-density floppy drives that support three modes of disk formats instead of the normal two: 1440 KB, 1.2 MB and 720 KB.
The high-density mode for 3.5" floppy drives in Japan supported a capacity of 1.2 MB instead of the 1440 KB capacity used elsewhere. While the more common 1440 KB format spun at 300 rpm, the 1.2 MB format spun at 360 rpm, thereby resembling the 1.2 MB format with 15 sectors per track found on 5.25" high-density floppy drives. Japanese floppy drives therefore incorporated support for both high-density formats, hence the name "3-mode". Some BIOSes have a configuration setting to enable this mode for floppy drives that support it.
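The capacities above follow directly from the disk geometry: sides times tracks times sectors per track times bytes per sector. The sketch below checks both 3.5" high-density variants using their standard published geometries (80 tracks per side, 512-byte sectors).

```python
# Raw formatted capacity of a floppy format, in bytes.
def floppy_capacity(sides: int, tracks: int, sectors: int,
                    bytes_per_sector: int = 512) -> int:
    return sides * tracks * sectors * bytes_per_sector

# Standard 3.5" HD format, 18 sectors/track at 300 rpm:
# 2 x 80 x 18 x 512 = 1,474,560 bytes, marketed as "1440 KB".
std = floppy_capacity(2, 80, 18)

# Japanese 3-mode HD format, 15 sectors/track at 360 rpm:
# 2 x 80 x 15 x 512 = 1,228,800 bytes, marketed as "1.2 MB".
jp = floppy_capacity(2, 80, 15)
```

The 15-sectors-per-track geometry is the same one used by 5.25" high-density drives, which is why the text describes the Japanese format as resembling them.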