Computer hardware includes the physical, tangible parts or components of a computer, such as the cabinet, central processing unit, keyboard, computer data storage, graphics card, sound card and motherboard. By contrast, software is instructions that can be run by hardware. Hardware is so termed because it is rigid with respect to changes or modifications. Intermediate between software and hardware is "firmware": software coupled to the particular hardware of a computer system and thus the most difficult to change, but also among the most stable with respect to consistency of interface. The progression from levels of "hardness" to "softness" in computer systems parallels a progression of layers of abstraction in computing. Hardware is directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware components. The template for all modern computers is the Von Neumann architecture, detailed in a 1945 paper by Hungarian mathematician John von Neumann.
This describes a design architecture for an electronic digital computer with subdivisions of a processing unit consisting of an arithmetic logic unit and processor registers, a control unit containing an instruction register and program counter, a memory to store both data and instructions, external mass storage, and input and output mechanisms. The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus; this is referred to as the Von Neumann bottleneck and limits the performance of the system. The personal computer, known as the PC, is one of the most common types of computer due to its versatility and low price. Laptops are very similar, although they may use lower-power or reduced-size components, and thus offer lower performance. The computer case encloses most of the components of the system. It provides mechanical support and protection for internal elements such as the motherboard, disk drives and power supply, and controls and directs the flow of cooling air over internal components.
The case is part of the system that controls electromagnetic interference radiated by the computer and protects internal parts from electrostatic discharge. Large tower cases provide extra internal space for multiple disk drives or other peripherals and stand on the floor, while desktop cases provide less expansion room. All-in-one style designs include a video display built into the same case. Portable and laptop computers require cases. A current development in laptop computers is a detachable keyboard, which allows the system to be configured as a touch-screen tablet. Hobbyists may decorate the cases with colored lights, paint, or other features, in an activity called case modding. A power supply unit converts alternating current electric power to low-voltage DC power for the internal components of the computer. Laptops are capable of running from a built-in battery for a period of hours. The motherboard is the main component of a computer. It is a board with integrated circuitry that connects the other parts of the computer, including the CPU, the RAM and the disk drives, as well as any peripherals connected via the ports or the expansion slots.
Components directly attached to or part of the motherboard include the following. The CPU, which performs most of the calculations that enable a computer to function, is sometimes referred to as the brain of the computer. It is cooled by a heatsink and fan, or by a water-cooling system. Most newer CPUs include an on-die graphics processing unit. The clock speed of the CPU governs how fast it executes instructions and is measured in GHz. Many modern computers have the option to overclock the CPU, which enhances performance at the expense of greater thermal output and thus a need for improved cooling. The chipset, which includes the north bridge, mediates communication between the CPU and the other components of the system, including main memory. Random-access memory stores the code and data that are being accessed by the CPU. For example, when a web browser is opened on the computer it takes up memory. RAM comes on DIMMs in sizes such as 2 GB, 4 GB and 8 GB, but can be much larger. Read-only memory stores the BIOS that runs when the computer is powered on or otherwise begins execution, a process known as bootstrapping, or "booting" or "booting up".
The BIOS includes power management firmware. Newer motherboards use the Unified Extensible Firmware Interface instead of BIOS. Buses connect the CPU to various internal components and to expansion cards for graphics and sound. The CMOS battery, which powers the memory for date and time in the BIOS chip, is a watch battery. The video card processes computer graphics; more powerful graphics cards are better suited to handle strenuous tasks, such as playing intensive video games. An expansion card in computing is a printed circuit board that can be inserted into an expansion slot of a computer motherboard or
A conveyor belt is the carrying medium of a belt conveyor system. A belt conveyor system is one of many types of conveyor systems. A belt conveyor system consists of two or more pulleys, with an endless loop of carrying medium—the conveyor belt—that rotates about them. One or both of the pulleys are powered, moving the material on the belt forward; the powered pulley is called the drive pulley. There are two main industrial classes of belt conveyors. Conveyors are durable and reliable components used in automated distribution and warehousing, as well as in manufacturing and production facilities. In combination with computer-controlled pallet handling equipment, this allows for more efficient retail and manufacturing distribution. It is considered a labor-saving system that allows large volumes to move through a process, allowing companies to ship or receive higher volumes with smaller storage space and less labor expense. Rubber conveyor belts are used to convey items with irregular bottom surfaces, small items that would fall in between rollers, or bags of product that would sag between rollers.
Belt conveyors are fairly similar in construction, consisting of a metal frame with rollers at either end of a flat metal bed. The belt is looped around each of the rollers, and when one of the rollers is powered the belting slides across the solid metal frame bed, moving the product. In heavy-use applications, the beds which the belting is pulled over are replaced with rollers; the rollers allow weight to be conveyed as they reduce the amount of friction generated from the heavier loading on the belting. Belt conveyors can now be manufactured with curved sections, which use tapered rollers and curved belting to convey products around a corner; these conveyor systems are used in postal sorting offices and airport baggage handling systems. A sandwich belt conveyor uses two conveyor belts, face-to-face, to contain the item being carried, making steep incline and vertical-lift runs achievable. Belt conveyors are the most commonly used powered conveyors because they are the most versatile and the least expensive.
Product is conveyed directly on the belt, so both regular and irregular shaped objects, large or small, light or heavy, can be transported successfully. These conveyors should use only the highest quality premium belting products, which reduces belt stretch and results in less maintenance for tension adjustments. Belt conveyors can be used to transport product in a straight line or through changes in elevation or direction. In certain applications they can be used for static accumulation of cartons. Primitive conveyor belts have been used since the 19th century. In 1892, Thomas Robins began a series of inventions which led to the development of a conveyor belt used for carrying coal and other products. In 1901, Sandvik started the production of steel conveyor belts. In 1905, Richard Sutcliffe invented the first conveyor belts for use in coal mines, which revolutionized the mining industry. In 1913, Henry Ford introduced conveyor-belt assembly lines at Ford Motor Company's Highland Park, Michigan factory. In 1972, the French company REI created in New Caledonia the longest straight-belt conveyor in the world, at a length of 13.8 km.
Hyacynthe Marcel Bocchetti was the concept designer. In 1957, the B. F. Goodrich Company patented a Möbius strip conveyor belt, which it went on to produce as the "Turnover Conveyor Belt System". Incorporating a half-twist, it had the advantage over conventional belts of a longer life because it could expose all of its surface area to wear and tear; such Möbius strip belts are no longer manufactured because untwisted modern belts can be made more durable by constructing them from several layers of different materials. In 1970, Intralox, a Louisiana-based company, registered the first patent for all-plastic, modular belting. The belt consists of one or more layers of material. It is common for belts to have three layers: a top cover, a carcass and a bottom cover. The purpose of the carcass is to provide linear shape. The carcass is a woven or metal fabric having a warp and weft; the warp refers to longitudinal cords whose characteristics of resistance and elasticity define the running properties of the belt. The weft represents the whole set of transverse cables, giving the belt resistance against cuts and impacts and, at the same time, high flexibility.
The most common carcass materials are steel, nylon and aramid. The covers are various rubber or plastic compounds specified by the use of the belt. Steel conveyor belts are used where high strength is required. For example, the highest strength class conveyor belt installed is made of steel cords; this conveyor belt has a strength class of 10,000 N/mm and operates at the Chuquicamata mine in Chile. Polyester and cotton are popular in low strength classes. Aramid is used in the range 630–3500 N/mm; the advantages of using aramid are enhanced lifetimes and improved productivity. As an example, a 2250 N/mm, 3400 m long underground belt installed at Baodian Coal Mine, part of Yanzhou Coal Mining Company, was reported to provide energy savings of more than 15%. Today there are different types of conveyor belts that have been created for conveying different kinds of
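To make the strength-class figures above concrete, the class number is the belt's minimum breaking strength per millimetre of belt width, so multiplying it by the belt width gives the total breaking strength. The small sketch below works one example in Python with an assumed width; the figures are illustrative and do not describe any of the installations mentioned above.

    # Strength class = minimum breaking strength per mm of belt width (N/mm).
    # The width below is an assumed example value, not that of a real installation.
    strength_class_n_per_mm = 3500   # top of the aramid range quoted above
    belt_width_mm = 1200             # assumed belt width

    breaking_strength_n = strength_class_n_per_mm * belt_width_mm
    print(f"Total breaking strength: {breaking_strength_n / 1000:.0f} kN")  # 4200 kN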
In computer science, inter-process communication (IPC) refers to the mechanisms an operating system provides to allow processes to manage shared data. Applications that use IPC are often categorized as clients and servers, where the client requests data and the server responds to client requests. Many applications are both clients and servers, as seen in distributed computing. Methods for doing IPC are divided into categories which vary based on software requirements, such as performance and modularity, and system circumstances, such as network bandwidth and latency. IPC is important to the design process for microkernels and nanokernels. Microkernels reduce the number of functionalities provided by the kernel; those functionalities are then obtained by communicating with servers via IPC, drastically increasing the amount of IPC compared to a regular monolithic kernel. Depending on the solution, an IPC mechanism may provide synchronization or leave it up to processes and threads to communicate amongst themselves.
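As a minimal sketch of message-passing IPC in the client/server style described above, the following uses Python's multiprocessing pipes purely for illustration; the echo protocol and names are invented.

    # A toy client/server exchange over an operating-system pipe.
    # The child "server" process answers requests sent by the parent "client".
    from multiprocessing import Process, Pipe

    def server(conn):
        while True:
            request = conn.recv()          # block until the client sends a request
            if request == "quit":
                break
            conn.send(f"echo: {request}")  # respond to the client
        conn.close()

    if __name__ == "__main__":
        client_end, server_end = Pipe()
        proc = Process(target=server, args=(server_end,))
        proc.start()
        client_end.send("hello")
        print(client_end.recv())           # -> echo: hello
        client_end.send("quit")
        proc.join()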
While synchronization conveys some information, it is not an information-passing communication mechanism per se. Examples of synchronization primitives are semaphores, spinlocks, barriers and mutual exclusion locks (mutexes). Examples of IPC mechanisms based on remote procedure calls and messaging include Java's Remote Method Invocation (RMI), ONC RPC, XML-RPC, SOAP, JSON-RPC, message buses and .NET Remoting. Other messaging and information systems, as well as platform- or programming language-specific APIs, build on these IPC mechanisms without implementing IPC themselves. Related concepts include computer network programming, Communicating Sequential Processes, the Data Distribution Service and protected procedure calls.
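By contrast, a synchronization primitive such as a semaphore coordinates processes without carrying a payload, which is the distinction drawn above. A minimal Python illustration follows; the worker names and timings are assumed.

    # A semaphore limits concurrent access to a shared resource but passes no data,
    # illustrating synchronization as distinct from information-passing IPC.
    from multiprocessing import Process, Semaphore
    import time

    slots = Semaphore(2)               # at most two processes in the critical section

    def worker(i, slots):
        with slots:                    # acquire on entry, release on exit
            print(f"worker {i} using the shared resource")
            time.sleep(0.1)

    if __name__ == "__main__":
        procs = [Process(target=worker, args=(i, slots)) for i in range(5)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()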
In computing, a file system or filesystem controls how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is isolated and identified. Taking its name from the way paper-based information systems are named, each group of data is called a "file"; the structure and logic rules used to manage the groups of information and their names is called a "file system". There are many different kinds of file systems; each one has a different structure and logic, and different properties of speed, security and more. Some file systems have been designed to be used for specific applications. For example, the ISO 9660 file system is designed for optical discs. File systems can be used on numerous different types of storage devices that use different kinds of media. As of 2019, hard disk drives have been key storage devices and are projected to remain so for the foreseeable future.
Other kinds of media that are used include SSDs, magnetic tapes and optical discs. In some cases, such as with tmpfs, the computer's main memory is used to create a temporary file system for short-term use. Some file systems are used on local data storage devices. Some file systems are "virtual", meaning that the supplied "files" are computed on request or are a mapping into a different file system used as a backing store. The file system manages access to both the content of files and the metadata about those files. It is responsible for arranging storage space. Before the advent of computers, the term file system was used to describe a method of storing and retrieving paper documents. By 1961 the term was being applied to computerized filing alongside the original meaning. By 1964 it was in general use. A file system consists of two or three layers. Sometimes the layers are explicitly separated, and sometimes the functions are combined. The logical file system is responsible for interaction with the user application. It provides the application program interface for file operations (OPEN, CLOSE, READ, etc.) and passes the requested operation to the layer below it for processing.
The logical file system "manages open file table entries and per-process file descriptors." This layer provides "file access, directory operations and protection." The second, optional, layer is the virtual file system. "This interface allows support for multiple concurrent instances of physical file systems, each of which is called a file system implementation." The third layer is the physical file system. This layer is concerned with the physical operation of the storage device; it processes physical blocks being read or written. It handles buffering and memory management and is responsible for the physical placement of blocks in specific locations on the storage medium. The physical file system interacts with the device drivers or with the channel to drive the storage device. Note: this only applies to file systems used in storage devices. File systems allocate space in a granular manner, usually multiple physical units on the device. The file system is responsible for organizing files and directories, and keeping track of which areas of the media belong to which file and which are not being used.
For example, in Apple DOS of the early 1980s, 256-byte sectors on a 140-kilobyte floppy disk used a track/sector map. This results in unused space when a file is not an exact multiple of the allocation unit, sometimes referred to as slack space. For a 512-byte allocation, the average unused space is 256 bytes. For 64 KB clusters, the average unused space is 32 KB. The size of the allocation unit is chosen when the file system is created. Choosing the allocation size based on the average size of the files expected to be in the file system can minimize the amount of unusable space; the default allocation may provide reasonable usage. Choosing an allocation size that is too small results in excessive overhead if the file system will contain very large files. File system fragmentation occurs as a file system is used: files are created, modified and deleted. When a file is created, the file system allocates space for the data; some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows.
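The slack-space figures quoted above follow directly from the allocation size: a file wastes, on average, half of its final allocation unit. A short illustration follows; the 10,000-byte file size is an assumed example.

    # Slack space: the unused tail of a file's last allocation unit.
    # On average this is half an allocation unit per file.
    def slack(file_size, cluster_size):
        remainder = file_size % cluster_size
        return 0 if remainder == 0 else cluster_size - remainder

    for cluster in (512, 64 * 1024):
        print(f"{cluster}-byte clusters: average slack {cluster // 2} bytes")

    # Per-file example with an assumed 10,000-byte file:
    print(slack(10_000, 512))        # 240 bytes wasted
    print(slack(10_000, 64 * 1024))  # 55,536 bytes wasted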
As files are deleted, the space they were allocated is considered available for use by other files. This creates alternating used and unused areas of various sizes; this is free space fragmentation. When a file is created and there is not an area of contiguous space available for its initial allocation, the space must be assigned in fragments. When a file is modified such that it becomes larger, it may exceed the space allocated to it; another allocation must be assigned elsewhere and the file becomes fragmented. A filename is used to identify a storage location in the file system. Most file systems have restrictions on the length of filenames. In some file systems, filenames are not case sensitive. Most modern file systems allow filenames to contain a wide range of characters from the Unicode character set. However, they may have restrictions on the use of certain special characters.
Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie and others. Initially intended for use inside the Bell System, AT&T licensed Unix to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including the University of California, Microsoft, IBM and Sun Microsystems. In the early 1990s, AT&T sold its rights in Unix to Novell, which then sold its Unix business to the Santa Cruz Operation in 1995. The UNIX trademark passed to The Open Group, a neutral industry consortium, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification. As of 2014, the Unix version with the largest installed base is Apple's macOS. Unix systems are characterized by a modular design, sometimes called the "Unix philosophy"; this concept entails that the operating system provides a set of simple tools, each of which performs a limited, well-defined function, with a unified filesystem as the main means of communication, and a shell scripting and command language to combine the tools to perform complex workflows.
Unix distinguishes itself from its predecessors as the first portable operating system: the entire operating system is written in the C programming language, which allows Unix to reach numerous platforms. Unix was meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers. The system grew larger as the operating system started spreading in academic circles and as users added their own tools to the system and shared them with colleagues. At first, Unix was not designed to be portable or multi-tasking. Later, Unix gained portability, multi-tasking and multi-user capabilities in a time-sharing configuration. Unix systems are characterized by various concepts, among them the use of plain text for storing data, a hierarchical file system, and the treatment of devices as files; these concepts are collectively known as the "Unix philosophy". Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as "the idea that the power of a system comes more from the relationships among programs than from the programs themselves".
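That emphasis on combining programs is easiest to see in a pipeline, where small single-purpose tools are chained through their text streams. The sketch below wires sort and uniq together from Python; it assumes a Unix-like system where those utilities are on the PATH.

    # Combining small tools through a pipe, in the spirit of the Unix philosophy:
    # 'sort | uniq -c' counts duplicate lines without either program knowing
    # anything about the other. Assumes a Unix-like system with these utilities.
    import subprocess

    data = "apple\nbanana\napple\ncherry\napple\n"

    sort = subprocess.Popen(["sort"], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)
    uniq = subprocess.Popen(["uniq", "-c"], stdin=sort.stdout,
                            stdout=subprocess.PIPE, text=True)

    sort.stdin.write(data)
    sort.stdin.close()      # signal end of input to sort
    sort.stdout.close()     # let uniq see EOF when sort exits
    print(uniq.communicate()[0])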
In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output, the Unix file model worked quite well, as I/O was linear. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues and semaphores, and network sockets were added to support communication with other hosts. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes. The Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers. Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system.
Under Unix, the operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low-level" tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the division between user space and kernel space, although in microkernel implementations, like MINIX or Redox, functions such as network protocols may run in user space. The origins of Unix date back to the mid-1960s, when the Massachusetts Institute of Technology, Bell Labs and General Electric were developing Multics, a time-sharing operating system for the GE-645 mainframe computer. Multics featured several innovations, but presented severe problems. Frustrated by the size and complexity of Multics, but not by its goals, individual researchers at Bell Labs started withdrawing from the project.
The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy and Joe Ossanna, who decided to reimplement their experiences in a new project of smaller scale. This new operating system was initially without organizational backing and without a name; it was a single-tasking system. In 1970, the group coined the name Unics for Uniplexed Information and Computing Service, as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that "no one can remember" the origin of the final spelling Unix. Dennis Ritchie, Doug McIlroy and Peter G. Neumann also credit Kernighan. The operating system was originally written in assembly language, but in 1973, Version 4 Unix was rewritten in C. Version 4 Unix, however, still had much PDP-11-dependent code and was not suitable for porting; the first port to another platform was made five years f
Streaming media is multimedia that is received by and presented to an end-user while being delivered by a provider. The verb "to stream" refers to the process of obtaining media in this manner. A client end-user can use their media player to start playing digital video or digital audio content before the entire file has been transmitted. Distinguishing delivery method from the media distributed applies to telecommunications networks, as most of the delivery systems are either inherently streaming or inherently non-streaming. For example, in the 1930s, elevator music was among the earliest popular music available as streaming media. The term "streaming media" can apply to media other than video and audio, such as live closed captioning, ticker tape and real-time text, which are all considered "streaming text". Live streaming is the delivery of Internet content in real time, much as live television broadcasts content over the airwaves via a television signal. Live internet streaming requires a form of source media, an encoder to digitize the content, a media publisher, and a content delivery network to distribute and deliver the content.
Live streaming does not need to be recorded at the origination point, although it often is. There are challenges with streaming content on the Internet. If the user does not have enough bandwidth in their Internet connection, they may experience stops, lags, or slow buffering of the content, and some users may not be able to stream certain content due to not having compatible computer or software systems. Some popular streaming services include the video sharing website YouTube; Mixer, which live streams the playing of video games; Netflix and Amazon Video, which stream movies and TV shows; and Spotify, Apple Music and TIDAL, which stream music. In the early 1920s, George O. Squier was granted patents for a system for the transmission and distribution of signals over electrical lines, which was the technical basis for what later became Muzak, a technology streaming continuous music to commercial customers without the use of radio. Attempts to display media on computers date back to the earliest days of computing in the mid-20th century.
However, little progress was made for several decades due to the high cost and limited capabilities of computer hardware. From the late 1980s through the 1990s, consumer-grade personal computers became powerful enough to display various media. The primary technical issues related to streaming were having enough CPU power and bus bandwidth to support the required data rates and creating low-latency interrupt paths in the operating system to prevent buffer underrun, enabling skip-free streaming of the content. However, computer networks were still limited in the mid-1990s, and audio and video media were delivered over non-streaming channels, such as by downloading a digital file from a remote server and saving it to a local drive on the end user's computer or storing it as a digital file and playing it back from CD-ROMs. In 1991, the first commercial Ethernet switch was introduced, which enabled more powerful computer networks, leading to the first streaming video solutions used by schools and corporations, such as expanding Bloomberg Television worldwide.
In the mid-1990s the World Wide Web was established, but streaming audio would not be practical until years later. During the late 1990s and early 2000s, users had increased access to computer networks, especially the Internet. During the early 2000s, users had access to increased network bandwidth in the "last mile"; these technological improvements facilitated the streaming of audio and video content to computer users in their homes and workplaces. There was an increasing use of standard protocols and formats, such as TCP/IP, HTTP and HTML, as the Internet became commercialized, which led to an infusion of investment into the sector. The band Severe Tire Damage was the first group to perform live on the Internet. On June 24, 1993, the band was playing a gig at Xerox PARC while elsewhere in the building, scientists were discussing new technology for broadcasting on the Internet using multicasting. As proof of PARC's technology, the band's performance was broadcast and could be seen live in Australia and elsewhere.
In a March 2017 interview, band member Russ Haines stated that the band had used "half of the total bandwidth of the internet" to stream the performance, a 152-by-76 pixel video, updated eight to twelve times per second, with audio quality that was "at best, a bad telephone connection". Microsoft Research developed a Microsoft TV application, compiled under MS Windows Studio Suite and tested in conjunction with the Connectix QuickCam. RealNetworks was a pioneer in the streaming media market when it broadcast a baseball game between the New York Yankees and the Seattle Mariners over the Internet in 1995. The first symphonic concert on the Internet took place at the Paramount Theater in Seattle, Washington on November 10, 1995. The concert was a collaboration between the Seattle Symphony and various guest musicians such as Slash, Matt Cameron and Barrett Martin. When Word Magazine launched in 1995, it featured the first-ever streaming soundtracks on the Internet.
Metadata Encoding and Transmission Standard
The Metadata Encoding and Transmission Standard (METS) is a metadata standard for encoding descriptive, administrative and structural metadata regarding objects within a digital library, expressed using the XML schema language of the World Wide Web Consortium. The standard is maintained as part of the MARC standards of the Library of Congress, and is being developed as an initiative of the Digital Library Federation. METS is an XML schema designed for the purposes of creating XML document instances that express the hierarchical structure of digital library objects, recording the names and locations of the files that comprise those objects, and recording associated metadata. METS can, therefore, be used as a tool for modeling real-world objects, such as particular document types. Depending on its use, a METS document could be used in the role of Submission Information Package, Archival Information Package, or Dissemination Information Package within the Open Archival Information System Reference Model. Maintaining a library of digital objects requires maintaining metadata about those objects.
The metadata necessary for successful management and use of digital objects is both more extensive than and different from the metadata used for managing collections of printed works and other physical materials. METS is intended to promote both the preservation of, and interoperability between, digital libraries. Where a traditional library may record descriptive metadata regarding a book in its collection, the book will not dissolve into a series of unconnected pages if the library fails to record structural metadata regarding the book's organization, nor will scholars be unable to evaluate the book's worth if the library fails to note, for example, that the book was produced using a Ryobi offset press. The same cannot be said for a digital library. Without structural metadata, the page image or text files comprising the digital work are of little use, and without technical metadata regarding the digitization process, scholars may be unsure of how accurate a reflection of the original the digital version provides.
However, in a digital library it is possible to create an eBook-like PDF or TIFF file which can be viewed as a single physical book and reflects the integrity of the original. The open flexibility of METS means that there is no prescribed vocabulary, which allows many different types of institutions, with many different document types, to utilize METS. This customizability makes METS functional internally, but creates limitations for interoperability. Interoperability becomes difficult when the exporting and importing institutions have used different vocabularies. As a workaround for this problem, the creation of institutional profiles has become popular. These profiles document the implementation of METS specific to an institution, helping to map content so that exchanged METS documents are more usable across institutions. As early as 1996, the University of California, Berkeley began working toward the development of a system that combined encoding for an outline of a digital object's structure with metadata for that object.
In 1998 this work was expanded upon by the Making of America II project. An important objective of this project was the creation of a standard for digital objects that would include defined metadata for the descriptive and structural aspects of a digital object. A structural and descriptive metadata encoding system using an XML Document Type Definition (DTD) was the result of these efforts. The MoAII DTD was limited in that it did not provide flexibility in which metadata terms could be used for the elements in the descriptive and structural metadata portions of the object. In 2001, a new version of the DTD was developed that used separate namespaces rather than the vocabulary of the previous DTD; this revision was the foundation for the current METS schema, which was named in April of that year. METS has the following characteristics: it is an open standard, developed by the library community, relatively simple, extensible and modular. A METS document contains the following sections. METS Header metsHdr: Contains metadata describing the METS document itself, such as its creator.
Descriptive Metadata dmdSec: May contain internally embedded metadata or point to metadata external to the METS document. Multiple instances of both internal and external descriptive metadata may be included. Administrative Metadata amdSec: Provides information regarding how the files were created and stored, intellectual property rights, metadata regarding the original source object from which the digital library object derives, and information regarding the provenance of the files comprising the digital library object. As with descriptive metadata, administrative metadata may be internally encoded or external to the METS document. File Section fileSec: Lists all files containing content which comprise the electronic versions of the digital object. File elements may be grouped within fileGrp elements to subdivide files by object version. Although this section is not required, it is included in most METS documents as it adds a level of functionality to the structure of the document. Structural Map structMap: Outlines a hierarchical structure for the digital library object, and links the elements of that structure to associated content files and metadata.
The Structural Map is the only section required for all METS documents. Structural Links structLink: Allows METS creators to record the existence of hyperlinks between nodes in the Structural Map; this is of particular value when using METS to archive websites. Behavior behaviorSec: Used to associate executable behaviors with content in the METS object; each behavior has a mechanism element identifying a module of executable code.
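A skeletal METS document exercising several of the sections described above could be generated as in the sketch below; this is illustrative only, the identifiers and file names are invented, and the location attributes are simplified (real METS records use xlink:href and much richer metadata).

    # Builds a minimal METS skeleton: header, descriptive metadata pointer,
    # file section and the required structural map. All values are invented.
    import xml.etree.ElementTree as ET

    METS_NS = "http://www.loc.gov/METS/"
    ET.register_namespace("mets", METS_NS)

    def q(tag):                          # qualify a tag name with the METS namespace
        return f"{{{METS_NS}}}{tag}"

    mets = ET.Element(q("mets"), OBJID="example-object-001")
    ET.SubElement(mets, q("metsHdr"), CREATEDATE="2019-01-01T00:00:00")

    dmd = ET.SubElement(mets, q("dmdSec"), ID="DMD1")
    ET.SubElement(dmd, q("mdRef"), LOCTYPE="URL", MDTYPE="MODS",
                  href="metadata/mods.xml")        # simplified; METS uses xlink:href

    file_sec = ET.SubElement(mets, q("fileSec"))
    grp = ET.SubElement(file_sec, q("fileGrp"), USE="master")
    f = ET.SubElement(grp, q("file"), ID="FILE1")
    ET.SubElement(f, q("FLocat"), LOCTYPE="URL", href="images/page1.tif")

    struct_map = ET.SubElement(mets, q("structMap"), TYPE="physical")
    page = ET.SubElement(struct_map, q("div"), TYPE="page", LABEL="Page 1")
    ET.SubElement(page, q("fptr"), FILEID="FILE1")

    print(ET.tostring(mets, encoding="unicode"))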