Flash memory is an electronic non-volatile computer storage medium that can be electrically erased and reprogrammed. Toshiba developed flash memory from EEPROM in the early 1980s and introduced it to the market in 1984. The two main types of flash memory are named after the NAND and NOR logic gates; the individual flash memory cells exhibit internal characteristics similar to those of the corresponding gates. While EPROMs had to be completely erased before being rewritten, NAND-type flash memory may be written and read in blocks (or pages) which are much smaller than the entire device. NOR-type flash allows a single machine word to be written – to an erased location – or read independently. The NAND type is found in memory cards, USB flash drives, solid-state drives and similar products, for general storage and transfer of data. NAND or NOR flash memory is also often used to store configuration data in numerous digital products, a task earlier made possible by EEPROM or battery-powered static RAM. One key disadvantage of flash memory is that it can only endure a relatively small number of write cycles in a specific block.
Example applications of both types of flash memory include personal computers, PDAs, digital audio players, digital cameras, mobile phones, video games, scientific instrumentation, industrial robotics and medical electronics. In addition to being non-volatile, flash memory offers fast read access times, although not as fast as static RAM or ROM. Its mechanical shock resistance helps explain its popularity over hard disks in portable devices, as do its high durability and its ability to withstand high pressure and immersion in water. Although flash memory is technically a type of EEPROM, the term "EEPROM" is generally used to refer to non-flash EEPROM, which is erasable in small blocks, typically bytes. Because erase cycles are slow, the large block sizes used in flash memory erasing give it a significant speed advantage over non-flash EEPROM when writing large amounts of data. As of 2013, flash memory cost much less than byte-programmable EEPROM and had become the dominant memory type wherever a system required a significant amount of non-volatile solid-state storage.
Flash memory was invented by Fujio Masuoka while working for Toshiba circa 1980. According to Toshiba, the name "flash" was suggested by Masuoka's colleague, Shōji Ariizumi, because the erasure process of the memory contents reminded him of the flash of a camera. Masuoka and colleagues presented the invention at the IEEE 1987 International Electron Devices Meeting held in San Francisco. Intel Corporation introduced the first commercial NOR-type flash chip in 1988. NOR-based flash has long erase and write times, but provides full address and data buses, allowing random access to any memory location; this makes it a suitable replacement for older read-only memory chips, which were used to store program code that rarely needs to be updated, such as a computer's BIOS or the firmware of set-top boxes. Its endurance may be from as little as 100 erase cycles for an on-chip flash memory, to a more typical 10,000 or 100,000 erase cycles, up to 1,000,000 erase cycles. NOR-based flash was the basis of early flash-based removable media.
NAND flash has reduced erase and write times, and requires less chip area per cell, thus allowing greater storage density and lower cost per bit than NOR flash. However, the I/O interface of NAND flash does not provide a random-access external address bus. Rather, data must be read on a block-wise basis, with typical block sizes of hundreds to thousands of bits; this makes NAND flash unsuitable as a drop-in replacement for program ROM, since most microprocessors and microcontrollers require byte-level random access. In this regard, NAND flash is similar to other secondary data storage devices, such as hard disks and optical media, and is thus well suited for use in mass-storage devices, such as memory cards. The first NAND-based removable media format was SmartMedia in 1995; many others have followed, including MultiMediaCard, Secure Digital, Memory Stick and xD-Picture Card. A new generation of memory card formats, including RS-MMC, miniSD and microSD, feature small form factors. For example, the microSD card has an area of just over 1.5 cm2, with a thickness of less than 1 mm.
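The block-wise access pattern described above can be sketched as a toy model in Python. The 2 KiB page size and the flat list of pages here are illustrative assumptions, not any real chip's geometry or interface:

```python
PAGE_SIZE = 2048  # bytes per page; an assumed, NAND-like geometry


def nand_read_byte(pages, addr):
    """Fetch one byte from a NAND-style array.

    Even for a single byte, the whole containing page must be
    streamed out first: there is no random-access address bus,
    which is why NAND cannot substitute for program ROM.
    """
    page_index, offset = divmod(addr, PAGE_SIZE)
    page = pages[page_index]  # the entire page is transferred
    return page[offset]


# Two pages of deterministic sample data.
pages = [bytes(i % 256 for i in range(PAGE_SIZE)) for _ in range(2)]
print(nand_read_byte(pages, 2053))  # byte 5 of page 1 -> 5
```

A NOR device, by contrast, would return the byte at `addr` directly over its address and data buses without touching the rest of the page.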
As of August 2017, microSD cards with capacity up to 400 GB are available. Flash memory stores information in an array of memory cells made from floating-gate transistors. In single-level cell (SLC) devices, each cell stores only one bit of information. Multi-level cell (MLC) devices, including triple-level cell (TLC) devices, can store more than one bit per cell. The floating gate may be conductive (typically polysilicon) or non-conductive (as in SONOS flash memory). In flash memory, each memory cell resembles a standard metal-oxide-semiconductor field-effect transistor (MOSFET) except that the transistor has two gates instead of one. The cells can be seen as an electrical switch in which current flows between two terminals and is controlled by a floating gate (FG) and a control gate (CG). The CG is similar to the gate in other MOS transistors, but below it is the FG, insulated all around by an oxide layer; the FG is interposed between the CG and the MOSFET channel. Because the FG is electrically isolated by its insulating layer, electrons placed on it are trapped. When the FG is charged with electrons, this charge screens the electric field from the CG, increasing the threshold voltage of the cell, so that a higher voltage must be applied to the CG to make the channel conduct.
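The read mechanism implied above can be modelled very simply: the trapped FG charge raises the cell's threshold voltage, so applying an intermediate read voltage to the CG and checking whether the channel conducts distinguishes a programmed cell from an erased one. The voltage figures below are illustrative assumptions, not real device parameters:

```python
V_THRESHOLD_ERASED = 1.0      # illustrative threshold voltages (volts)
V_THRESHOLD_PROGRAMMED = 5.0  # trapped FG charge raises the threshold
V_READ = 3.0                  # read voltage chosen between the two


def read_cell(fg_charged):
    """Return the stored bit of an SLC cell: 1 when erased, 0 when programmed."""
    threshold = V_THRESHOLD_PROGRAMMED if fg_charged else V_THRESHOLD_ERASED
    conducts = V_READ > threshold  # channel conducts only above the threshold
    return 1 if conducts else 0


print(read_cell(False))  # erased cell -> 1
print(read_cell(True))   # programmed cell -> 0
```

Multi-level cells extend the same idea by distinguishing several threshold-voltage ranges rather than two.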
Secure Digital, abbreviated as SD, is a non-volatile memory card format developed by the SD Card Association (SDA) for use in portable devices. The standard was introduced in August 1999 by joint efforts between SanDisk, Panasonic (Matsushita Electric) and Toshiba as an improvement over MultiMediaCards, and has become the industry standard. The three companies formed SD-3C, LLC, a company that licenses and enforces intellectual property rights associated with SD memory cards and SD host and ancillary products. The companies also formed the SD Association, a non-profit organization, in January 2000 to promote and create SD Card standards. SDA today has about 1,000 member companies. The SDA uses several trademarked logos owned and licensed by SD-3C to enforce compliance with its specifications and assure users of compatibility. In 1999, the three companies agreed to develop and market the Secure Digital Memory Card. The card was derived from the MultiMediaCard and provided digital rights management based on the Secure Digital Music Initiative standard and, for the time, a high memory density.
It was designed to compete with the Memory Stick, a DRM product that Sony had released the year before. Developers predicted that DRM would induce wide use by music suppliers concerned about piracy. The trademarked SD logo was originally developed for the Super Density Disc, the unsuccessful Toshiba entry in the DVD format war; for this reason the D within the logo resembles an optical disc. At the 2000 Consumer Electronics Show, the three companies announced the creation of the SD Association to promote SD cards. The SD Association, headquartered in San Ramon, United States, started with about 30 companies and today consists of about 1,000 product manufacturers that make interoperable memory cards and devices. Early samples of the SD Card became available in the first quarter of 2000, with production quantities of 32 and 64 MB cards available three months later. SanDisk announced and demonstrated the miniSD form factor at CeBIT in March 2003, and the SDA adopted the miniSD card that year as a small form factor extension to the SD card standard.
While the new cards were designed for mobile phones, they are usually packaged with a miniSD adapter that provides compatibility with a standard SD memory card slot. In September 2006, SanDisk announced the 4 GB miniSDHC. Like the SD and SDHC, the miniSDHC card has the same form factor as the older miniSD card, but the HC card requires HC support built into the host device. Devices that support miniSDHC work with both miniSD and miniSDHC, but devices without specific support for miniSDHC work only with the older miniSD card. Since 2008, miniSD cards have no longer been produced. The microSD removable miniaturized Secure Digital flash memory cards were originally named T-Flash or TF, abbreviations of TransFlash. TransFlash and microSD cards are functionally identical, allowing either to operate in devices made for the other. SanDisk had conceived microSD when its chief technology officer and the chief technology officer of Motorola concluded that current memory cards were too large for mobile phones. The card was originally called T-Flash, but just before product launch, T-Mobile sent a cease-and-desist letter to SanDisk claiming that T-Mobile owned the trademark on "T-", and the name was changed to TransFlash.
At CTIA Wireless 2005, the SDA announced the small microSD form factor along with SDHC (Secure Digital High Capacity) formatting in excess of 2 GB, with a minimum sustained read and write speed of 17.6 Mbit/s. SanDisk induced the SDA to administer the microSD standard, and the SDA approved the final microSD specification on July 13, 2005. Initially, microSD cards were available in capacities of 32, 64 and 128 MB. The Motorola E398 was the first mobile phone to contain a TransFlash card. A few years later, their competitors began using microSD cards. The SDHC format, announced in January 2006, brought improvements such as 32 GB storage capacity and mandatory support for FAT32 filesystems. In April 2006, the SDA released a detailed specification for the non-security related parts of the SD memory card standard and for the Secure Digital Input Output (SDIO) cards and the standard SD host controller. In January 2009, the SDA announced the SDXC family, which supports cards up to 2 TB and speeds up to 300 MB/s; it features mandatory support for the exFAT filesystem.
SDXC was announced at the Consumer Electronics Show 2009. At the same show, SanDisk and Sony announced a comparable Memory Stick XC variant with the same 2 TB maximum as SDXC, and Panasonic announced plans to produce 64 GB SDXC cards. On March 6, 2009, Pretec introduced the first SDXC card, a 32 GB card with a read/write speed of 400 Mbit/s, but only early in 2010 did compatible host devices come onto the market, including Sony's Handycam HDR-CX55V camcorder, Canon's EOS 550D digital SLR camera, a USB card reader from Panasonic, and an integrated SDXC card reader from JMicron. The earliest laptops to integrate SDXC card readers relied on a USB 2.0 bus, which does not have the bandwidth to support SDXC at full speed. In early 2010, commercial SDXC cards appeared from Toshiba and SanDisk. In early 2011, Centon Electronics, Inc. and Lexar began shipping SDXC cards rated at Speed Class 10, and Pretec offered cards from 8 GB to 128 GB rated at Speed Class 16. In September 2011, SanDisk released a 64 GB microSDXC card; Kingmax released a comparable product in 2011.
In April 2012, Panasonic introduced the MicroP2 card format for professional video applications. The cards are essentially full-size SDHC or SDXC UHS-II cards, rated at UHS Speed Class U1. An adapter allows MicroP2 cards to work in existing P2 card equipment.
In computing, a file system or filesystem controls how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is isolated and identified. Taking its name from the way paper-based information systems are named, each group of data is called a "file"; the structure and logic rules used to manage the groups of information and their names is called a "file system". There are many different kinds of file systems; each one has a different structure and logic, and different properties of speed, flexibility, security, size and more. Some file systems have been designed to be used for specific applications. For example, the ISO 9660 file system is designed for optical discs. File systems can be used on numerous different types of storage devices that use different kinds of media; as of 2019, hard disk drives have been key storage devices and are projected to remain so for the foreseeable future.
Other kinds of media that are used include SSDs, magnetic tapes and optical discs. In some cases, such as with tmpfs, the computer's main memory is used to create a temporary file system for short-term use. Some file systems are used on local data storage devices; others provide file access via a network protocol. Some file systems are "virtual", meaning that the supplied "files" are computed on request or are a mapping into a different file system used as a backing store. The file system manages access to both the content of files and the metadata about those files. It is responsible for arranging storage space. Before the advent of computers, the term file system was used to describe a method of storing and retrieving paper documents. By 1961 the term was being applied to computerized filing alongside the original meaning, and by 1964 it was in general use. A file system consists of three layers; sometimes the layers are explicitly separated, sometimes the functions are combined. The logical file system is responsible for interaction with the user application. It provides the application program interface for file operations — OPEN, CLOSE, READ, etc. — and passes the requested operation to the layer below it for processing.
The logical file system "manages open file table entries and per-process file descriptors." This layer provides "file access, directory operations and protection." The second, optional layer is the virtual file system. "This interface allows support for multiple concurrent instances of physical file systems, each of which is called a file system implementation." The third layer is the physical file system. This layer is concerned with the physical operation of the storage device; it processes physical blocks being read or written. It handles buffering and memory management and is responsible for the physical placement of blocks in specific locations on the storage medium. The physical file system interacts with the device drivers or with the channel to drive the storage device. Note: this only applies to file systems used in storage devices. File systems allocate space in a granular manner, usually multiple physical units on the device. The file system is responsible for organizing files and directories, and keeping track of which areas of the media belong to which file and which are not being used.
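The OPEN/READ/CLOSE interface provided by the logical file system is exactly what low-level `os` calls expose in Python, for example. This is a minimal sketch using a scratch file; nothing here depends on which physical file system backs the storage:

```python
import os
import tempfile

# Create a scratch file and write to it through the file system's API.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello file system")  # WRITE via a per-process file descriptor
os.close(fd)                        # CLOSE releases the open file table entry

fd = os.open(path, os.O_RDONLY)     # OPEN
data = os.read(fd, 1024)            # READ passes down to the layers below
os.close(fd)
os.remove(path)                     # directory operation: unlink the name

print(data)  # b'hello file system'
```

Each of these calls is resolved by the logical layer, routed through the virtual file system to the correct implementation, and finally turned into physical block reads and writes by the lowest layer.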
For example, in Apple DOS of the early 1980s, 256-byte sectors on a 140-kilobyte floppy disk used a track/sector map. Granular allocation results in unused space when a file is not an exact multiple of the allocation unit, sometimes referred to as slack space. For a 512-byte allocation, the average unused space is 256 bytes. For 64 KB clusters, the average unused space is 32 KB. The size of the allocation unit is chosen when the file system is created. Choosing the allocation size based on the average size of the files expected to be in the file system can minimize the amount of unusable space; frequently the default allocation provides reasonable usage. Choosing an allocation size that is too small results in excessive overhead if the file system will contain mostly very large files. File system fragmentation occurs when unused space or single files are not contiguous. As a file system is used, files are created, modified and deleted. When a file is created, the file system allocates space for the data; some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows.
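The slack-space arithmetic above can be checked directly. In this sketch, `slack` returns the unused tail of the last allocation unit; averaged over uniformly distributed file sizes, that tail is half a cluster, matching the 256-byte and 32 KB figures:

```python
def slack(file_size, cluster_size):
    """Bytes wasted in the final, partially filled allocation unit."""
    remainder = file_size % cluster_size
    return 0 if remainder == 0 else cluster_size - remainder


# A 1000-byte file on 512-byte clusters occupies two clusters (1024 bytes):
print(slack(1000, 512))             # 24 bytes of slack
# Average slack is half a cluster:
print(512 // 2, (64 * 1024) // 2)   # 256 bytes and 32 KB respectively
```

This is why large clusters waste space on file systems full of small files, while tiny clusters bloat the bookkeeping for very large files.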
As files are deleted, the space they were allocated is considered available for use by other files. This creates alternating used and unused areas of various sizes; this is free space fragmentation. When a file is created and there is not an area of contiguous space available for its initial allocation, the space must be assigned in fragments. When a file is modified such that it becomes larger, it may exceed the space initially allocated to it; another allocation must be assigned elsewhere and the file becomes fragmented. A filename is used to identify a storage location in the file system. Most file systems have restrictions on the length of filenames. In some file systems, filenames are not case sensitive. Most modern file systems allow filenames to contain a wide range of characters from the Unicode character set. However, they may have restrictions on the use of certain special characters within filenames.
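The free-space fragmentation described above can be made concrete with a toy first-fit allocator. All the sizes and the `(start, length)` free-list representation are assumptions for illustration, not any real file system's on-disk format: once deletions leave non-contiguous holes, a new file larger than any single hole must be stored in fragments.

```python
def first_fit(free_extents, size):
    """Allocate `size` units from a list of (start, length) holes,
    splitting the allocation across holes when no single hole is
    large enough. Mutates the free list in place."""
    allocation, remaining = [], size
    for i, (start, length) in enumerate(free_extents):
        take = min(length, remaining)
        allocation.append((start, take))
        free_extents[i] = (start + take, length - take)
        remaining -= take
        if remaining == 0:
            break
    free_extents[:] = [(s, l) for s, l in free_extents if l > 0]
    return allocation


# Deletions have left two 4-unit holes; a 6-unit file becomes two fragments.
holes = [(10, 4), (30, 4)]
print(first_fit(holes, 6))  # [(10, 4), (30, 2)] -> a fragmented file
```

Defragmentation tools exist precisely to undo this effect, rewriting files into contiguous runs and coalescing the remaining free space.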
The Linux kernel is a free and open-source, Unix-like operating system kernel. The Linux family of operating systems is based on this kernel and deployed both on traditional computer systems such as personal computers and servers, in the form of Linux distributions, and on various embedded devices such as routers, wireless access points, PBXes, set-top boxes, FTA receivers, smart TVs, PVRs and NAS appliances. While the adoption of the Linux kernel in desktop computer operating systems is low, Linux-based operating systems dominate nearly every other segment of computing, from mobile devices to mainframes; as of November 2017, all of the world's 500 most powerful supercomputers run Linux. The Android operating system for tablet computers, smartphones and smartwatches also uses the Linux kernel. The Linux kernel was conceived and created in 1991 by Linus Torvalds for his personal computer and with no cross-platform intentions, but has since expanded to support a huge array of computer architectures, many more than other operating systems or kernels.
Linux rapidly attracted developers and users who adopted it as the kernel for other free software projects, notably the GNU Operating System, which had been created as a free, non-proprietary operating system based on UNIX as a by-product of the fallout of the Unix wars. The Linux kernel API, the application programming interface through which user programs interact with the kernel, is meant to be stable and to not break userspace programs. As part of the kernel's functionality, device drivers control the hardware; however, the interface between the kernel and loadable kernel modules, unlike in many other kernels and operating systems, is not meant to be stable by design. The Linux kernel, developed by contributors worldwide, is a prominent example of free and open source software, and day-to-day development discussions take place on the Linux kernel mailing list. The Linux kernel is released under the GNU General Public License version 2, with some firmware images released under various non-free licenses. In April 1991, Linus Torvalds, at the time a 21-year-old computer science student at the University of Helsinki, started working on some simple ideas for an operating system.
He started with a task switcher and a terminal driver. On 25 August 1991, Torvalds posted the following to comp.os.minix, a newsgroup on Usenet: I'm doing a (free) operating system for 386 AT clones. This has been brewing since April, and is starting to get ready. I'd like any feedback on things people like/dislike in minix. I've ported bash and gcc, and things seem to work; this implies that I'll get something practical within a few months. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT portable, and it never will support anything other than AT-harddisks, as that's all I have :-(. It's in C, but most people wouldn't call what I write C; it uses every conceivable feature of the 386 I could find, as it was also a project to teach me about the 386. As mentioned, it uses a MMU, for both paging and segmentation. It's the segmentation that makes it non-portable; some of my "C"-files are as much assembler as C. Unlike minix, I happen to LIKE interrupts, so interrupts are handled without trying to hide the reason behind them. After that, many people contributed code to the project.
Early on, the MINIX community contributed code and ideas to the Linux kernel. At the time, the GNU Project had created many of the components required for a free operating system, but its own kernel, GNU Hurd, was incomplete and unavailable; the Berkeley Software Distribution had not yet freed itself from legal encumbrances. Despite the limited functionality of the early versions, Linux gained developers and users. In September 1991, Torvalds released version 0.01 of the Linux kernel on the FTP server of the Finnish University and Research Network. It had 10,239 lines of code. On 5 October 1991, version 0.02 of the Linux kernel was released. Torvalds assigned version 0 to the kernel to indicate that it was for testing and not intended for productive use. In December 1991, Linux kernel 0.11 was released. This version was the first to be self-hosted as Linux kernel 0.11 could be compiled by a computer running the same kernel version. When Torvalds released version 0.12 in February 1992, he adopted the GNU General Public License version 2 over his previous self-drafted license, which had not permitted commercial redistribution.
On 19 January 1992, the first post to the new newsgroup alt.os.linux was submitted, and on 31 March 1992 the newsgroup was renamed comp.os.linux. The fact that Linux is a monolithic kernel rather than a microkernel was the topic of a debate between Andrew S. Tanenbaum, the creator of MINIX, and Torvalds. This discussion is known as the Tanenbaum–Torvalds debate and started in 1992 on the Usenet discussion group comp.os.minix as a general debate about Linux and kernel architecture. Tanenbaum argued that microkernels were superior to monolithic kernels and that therefore Linux was obsolete. Unlike traditional monolithic kernels, device drivers in Linux are configured as loadable kernel modules and can be loaded or unloaded while the system is running.
Linux is a family of free and open-source software operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. Linux is typically packaged in a Linux distribution. Distributions include the Linux kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name GNU/Linux to emphasize the importance of GNU software, causing some controversy. Popular Linux distributions include Debian and Ubuntu; commercial distributions include SUSE Linux Enterprise Server. Desktop Linux distributions include a windowing system such as X11 or Wayland, and a desktop environment such as GNOME or KDE Plasma. Distributions intended for servers may omit graphics altogether, or include a solution stack such as LAMP. Because Linux is freely redistributable, anyone may create a distribution for any purpose. Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system.
Linux is the leading operating system on servers and other big iron systems such as mainframe computers, and the only OS used on TOP500 supercomputers. It is used by around 2.3 percent of desktop computers. The Chromebook, which runs the Linux kernel-based Chrome OS, dominates the US K–12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US. Linux also runs on embedded systems, i.e. devices whose operating system is built into the firmware and is tailored to the system; this includes routers, automation controls, digital video recorders, video game consoles and smartwatches. Many smartphones and tablet computers run Android and other Linux derivatives. Because of the dominance of Android on smartphones, Linux has the largest installed base of all general-purpose operating systems. Linux is one of the most prominent examples of open-source software collaboration; the source code may be used and distributed—commercially or non-commercially—by anyone under the terms of its respective licenses, such as the GNU General Public License.
The Unix operating system was conceived and implemented in 1969, at AT&T's Bell Laboratories in the United States, by Ken Thompson, Dennis Ritchie, Douglas McIlroy and Joe Ossanna. First released in 1971, Unix was written in assembly language, as was common practice at the time. In a key pioneering approach in 1973, it was rewritten in the C programming language by Dennis Ritchie; the availability of a high-level language implementation of Unix made its porting to different computer platforms easier. Due to an earlier antitrust case forbidding it from entering the computer business, AT&T was required to license the operating system's source code to anyone who asked; as a result, Unix grew and became adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs; freed of the legal obligation requiring free licensing, Bell Labs began selling Unix as a proprietary product. The GNU Project, started in 1983 by Richard Stallman, had the goal of creating a "complete Unix-compatible software system" composed of free software. Work began in 1984. In 1985, Stallman started the Free Software Foundation and wrote the GNU General Public License in 1989.
By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers and the kernel, called GNU Hurd, were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he would not have decided to write his own. Although not released until 1992, due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux. Torvalds has also stated that if 386BSD had been available at the time, he would not have created Linux. MINIX was created by Andrew S. Tanenbaum, a computer science professor, and released in 1987 as a minimal Unix-like operating system targeted at students and others who wanted to learn operating system principles. Although the complete source code of MINIX was available, the licensing terms prevented it from being free software until the licensing changed in April 2000. In 1991, while attending the University of Helsinki, Torvalds became curious about operating systems.
Frustrated by the licensing of MINIX, which at the time limited it to educational use only, he began to work on his own operating system kernel, which became the Linux kernel. Torvalds began the development of the Linux kernel on MINIX, and applications written for MINIX were also used on Linux. Later, Linux matured and further Linux kernel development took place on Linux systems. GNU applications replaced all MINIX components, because it was advantageous to use the freely available code from the GNU Project with the fledgling operating system. Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL. Developers worked to integrate GNU components with the Linux kernel, making a fully functional and free operating system. Linus Torvalds had wanted to call his invention "Freax", a portmanteau of "free", "freak" and "x" (as an allusion to Unix).
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Android's share was up to 70% in 2017; according to third quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a per-year decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to be running concurrently; this is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like ones, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner; 16-bit versions of Microsoft Windows used cooperative multi-tasking.
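Cooperative multitasking, where each task voluntarily yields control, can be sketched with Python generators. This is a toy round-robin scheduler for illustration, not how any real operating system implements scheduling:

```python
def task(name, steps):
    """A cooperative task: it does one unit of work, then yields."""
    for i in range(steps):
        print(f"{name} step {i}")
        yield  # voluntarily hand control back to the scheduler


def run(tasks):
    """Round-robin scheduler. A task that never yields would starve
    all the others -- exactly the weakness of cooperative multitasking
    that preemptive time slicing was designed to eliminate."""
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)          # resume the task until its next yield
            tasks.append(current)  # requeue it behind the others
        except StopIteration:
            pass                   # task finished; drop it


run([task("A", 2), task("B", 2)])
# interleaved output: A step 0, B step 0, A step 1, B step 1
```

A preemptive kernel replaces the voluntary `yield` with a hardware timer interrupt, so the scheduler regains control whether or not the running task cooperates.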
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and made to communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources, and they are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space, machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries, interrupts and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards.
USB flash drive
A USB flash drive, also known as a thumb drive, pen drive, gig stick, flash stick, jump drive, disk key, disk on key, flash-drive, memory stick, USB key, USB stick or USB memory, is a data storage device that includes flash memory with an integrated USB interface. It is removable and much smaller than an optical disc; most weigh less than 1 oz. Since first appearing on the market in late 2000, as with all other computer memory devices, storage capacities have risen while prices have dropped. As of March 2016, flash drives with anywhere from 8 to 256 GB were commonly sold, while 512 GB and 1 TB units were less frequent. As of 2018, 2 TB flash drives were the largest available in terms of storage capacity. Some allow up to 100,000 write/erase cycles, depending on the exact type of memory chip used, and are thought to last between 10 and 100 years under normal circumstances. USB flash drives are used for storage, data back-up and transfer of computer files. Compared with floppy disks or CDs, they are smaller, have more capacity, and are more durable due to a lack of moving parts.
Additionally, they are immune to electromagnetic interference and are unharmed by surface scratches. Until about 2005, most desktop and laptop computers were supplied with floppy disk drives in addition to USB ports, but floppy disk drives became obsolete after the widespread adoption of USB ports and the larger capacity of USB drives compared to the 1.44 MB 3.5-inch floppy disk. USB flash drives use the USB mass storage device class standard, supported natively by modern operating systems such as Windows, macOS and other Unix-like systems, as well as many BIOS boot ROMs. USB drives with USB 2.0 support can store more data and transfer it faster than much larger optical disc drives like CD-RW or DVD-RW drives, and can be read by many other systems such as the Xbox One, PlayStation 4, DVD players and automobile entertainment systems, and in a number of handheld devices such as smartphones and tablet computers, though the electronically similar SD card is better suited for those devices. A flash drive consists of a small printed circuit board carrying the circuit elements and a USB connector, insulated electrically and protected inside a plastic, metal, or rubberized case, which can be carried in a pocket or on a key chain, for example.
The USB connector may be protected by a removable cap or by retracting into the body of the drive, although it is not likely to be damaged if left unprotected. Most flash drives use a standard type-A USB connection allowing connection with a port on a personal computer, but drives for other interfaces also exist. USB flash drives draw power from the computer via the USB connection; some devices combine the functionality of a portable media player with USB flash storage. M-Systems, an Israeli company, was granted a US patent on November 14, 2000, titled "Architecture for a Universal Serial Bus-Based Flash Disk", crediting the invention to Amir Ban, Dov Moran and Oron Ogdan, all M-Systems employees at the time; the patent application had been filed by M-Systems in April 1999. In 1999, IBM filed an invention disclosure by one of its employees. Flash drives were sold by Trek 2000 International, a company in Singapore, which began selling them in early 2000. IBM became the first to sell USB flash drives in the United States in 2000; the initial storage capacity of a flash drive was 8 MB.
Another version of the flash drive, described as a pen drive, was developed; Pua Khein-Seng from Malaysia has been credited with this invention. Patent disputes have arisen over the years, with competing companies, including Singaporean company Trek Technology and Chinese company Netac Technology, attempting to enforce their patents. Trek has lost battles in other countries. Netac Technology has brought lawsuits against PNY Technologies, aigo, and Taiwan's Acer and Tai Guen Enterprise Co. Flash drives are measured by the rate at which they transfer data. Transfer rates may be given in megabytes per second, megabits per second, or in optical-drive multipliers such as "180X". File transfer rates vary among devices. Second-generation flash drives have claimed to read at up to 30 MB/s and write at about half that rate, roughly 20 times faster than the theoretical transfer rate achievable over the previous standard, USB 1.1, which is limited to 12 Mbit/s including overhead.
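These units are easy to confuse, and the arithmetic behind the comparison can be checked directly. The sketch below assumes the conventional 150 KB/s base rate of a "1X" CD drive for optical multipliers, and uses 1 MB = 1000 KB for simplicity:

```python
CD_1X_KB_S = 150  # a "1X" CD drive reads 150 KB/s; the base for "180X" ratings

def optical_x_to_mb_s(x):
    """Convert an optical-drive multiplier (e.g. 180 for '180X') to MB/s."""
    return x * CD_1X_KB_S / 1000  # 1 MB = 1000 KB for simplicity

def mbit_to_mb_s(mbit):
    """Convert megabits per second to megabytes per second (8 bits per byte)."""
    return mbit / 8

print(optical_x_to_mb_s(180))  # a "180X" rating in MB/s
print(mbit_to_mb_s(12))        # USB 1.1 theoretical maximum in MB/s
print(mbit_to_mb_s(480))       # USB 2.0 theoretical maximum in MB/s
```

This reproduces the claim above: USB 1.1's 12 Mbit/s is 1.5 MB/s, so a 30 MB/s read rate is indeed about 20 times the theoretical USB 1.1 maximum, and a "180X" rating corresponds to 27 MB/s.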
By 2002, USB flash drives had USB 2.0 connectivity, which has an upper bound of 480 Mbit/s on the transfer rate. That same year, Intel sparked widespread use of second-generation USB by including it in its laptops. Third-generation USB flash drives were announced in late 2008 and became available in 2010. Like USB 2.0 before it, USB 3.0 improved data transfer rates compared to its predecessor: the USB 3.0 interface specified transfer rates up to 5 Gbit/s, compared to USB 2.0's 480 Mbit/s. By 2010 the maximum available storage capacity for the devices had reached upwards of 128 GB. USB 3.0 was slow to appear in laptops; as of 2010, the majority of laptop models still used USB 2.0. In January 2013, tech company Kingston released a flash drive with 1 TB of storage; the first USB 3.1 type-C flash drives, with read/write speeds of around 530 MB/s, were announced in March 2015. As of July 2016, flash drives within the 8 to 256 GB