In computing, a serial port is a serial communication interface through which information transfers in or out one bit at a time. Throughout most of the history of personal computers, data was transferred through serial ports to devices such as modems and various peripherals. While interfaces such as Ethernet, FireWire, and USB all send data as a serial stream, the term "serial port" identifies hardware more or less compliant with the RS-232 standard, intended to interface with a modem or a similar communication device. Modern computers without serial ports may require USB-to-serial converters for compatibility with RS-232 serial devices. Serial ports are still used in applications such as industrial automation systems, scientific instruments, point-of-sale systems, and some industrial and consumer products. Server computers may use a serial port as a control console for diagnostics, and network equipment uses serial consoles for configuration. Serial ports persist in these areas because they are simple and their console functions are standardized and widespread.
A serial port requires little supporting software from the host system. Some computers, such as the IBM PC, use an integrated circuit called a UART; this IC converts characters to and from asynchronous serial form, implementing the timing and framing of data in hardware. Low-cost systems, such as some early home computers, would instead use the CPU to send the data through an output pin, a technique known as bit banging. Before large-scale-integration UART chips became common, a minicomputer would have a serial port made of multiple small-scale integrated circuits implementing shift registers, logic gates, and the other logic for a serial port. Early home computers often had proprietary serial ports with pinouts and voltage levels incompatible with RS-232. Inter-operation with RS-232 devices may be impossible, as the serial port cannot withstand the voltage levels produced and may have other differences that "lock in" the user to products of a particular manufacturer. Low-cost processors now allow higher-speed, but more complex, serial communication standards such as USB and FireWire to replace RS-232.
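The framing that a UART implements in hardware, and that bit banging reproduces in software, can be sketched in Python. This is an illustrative model of common 8N1 framing (8 data bits, no parity, 1 stop bit), not driver code:

```python
def frame_8n1(byte):
    """Frame one byte for asynchronous 8N1 serial transmission.

    Returns the bit sequence a UART (or a bit-banging CPU loop)
    would shift out: start bit (0), 8 data bits LSB-first, stop bit (1).
    """
    bits = [0]                                    # start bit
    bits += [(byte >> i) & 1 for i in range(8)]   # data bits, LSB first
    bits.append(1)                                # stop bit
    return bits

def unframe_8n1(bits):
    """Recover the byte from a 10-bit 8N1 frame (no error checking)."""
    assert bits[0] == 0 and bits[9] == 1, "bad start/stop bit"
    return sum(b << i for i, b in enumerate(bits[1:9]))

# Framing the ASCII character 'A' (0x41) and recovering it:
frame = frame_8n1(0x41)
assert unframe_8n1(frame) == 0x41
```

The idle line is held at logic 1, so the 0 start bit marks the beginning of each character and lets the receiver resynchronize its sampling clock on every frame.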
These higher-speed standards make it possible to connect devices that would not have operated feasibly over slower serial connections, such as mass storage and video devices. Many personal computer motherboards still have at least one serial port, even if it is accessible only through a pin header. Small-form-factor systems and laptops may omit RS-232 connector ports to conserve space, but the electronics are often still there. RS-232 has been standard for so long that the circuits needed to control a serial port became cheap and exist on a single chip, sometimes with circuitry for a parallel port. The individual signals on a serial port are unidirectional, and when connecting two devices the outputs of one device must be connected to the inputs of the other. Devices are divided into two categories: data terminal equipment (DTE) and data circuit-terminating equipment (DCE). For a given line, an output on a DTE device is an input on a DCE device, and vice versa, so a DCE device can be connected to a DTE device with a straight-wired cable. Conventionally, computers and terminals are DTE, while modems and peripherals are DCE.
If it is necessary to connect two DTE devices, a cross-over null modem, in the form of either an adapter or a cable, must be used. Serial port connectors are gendered, allowing a connector to mate only with a connector of the opposite gender. With D-subminiature connectors, male connectors have protruding pins and female connectors have corresponding round sockets. Either type of connector can be mounted on a panel. Connectors mounted on DTE are supposed to be male and those mounted on DCE female, although this convention is far from universal. While the RS-232 standard specified a 25-pin D-type connector, many designers of personal computers chose to implement only a subset of the full standard: they traded off compatibility with the standard against the use of less costly and more compact connectors. The desire to supply serial interface cards with two ports required that IBM reduce the size of the connector to fit onto a single card back panel; a DE-9 connector fits onto a card alongside a second DB-25 connector.
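One common null-modem wiring for DE-9 connectors can be sketched as a pin mapping. The exact crossover varies between cables (handshake lines are sometimes looped back locally instead), so treat this as one representative scheme:

```python
# DE-9 RS-232 pin functions, as seen from the DTE side.
DE9 = {1: "DCD", 2: "RxD", 3: "TxD", 4: "DTR", 5: "GND",
       6: "DSR", 7: "RTS", 8: "CTS", 9: "RI"}

# A common full-handshake null-modem crossover: each output on one
# end lands on the corresponding input on the other; ground runs
# straight through.
NULL_MODEM = {3: 2, 2: 3,   # TxD <-> RxD
              7: 8, 8: 7,   # RTS <-> CTS
              4: 6, 6: 4,   # DTR <-> DSR
              5: 5}         # signal ground, straight through

for near, far in sorted(NULL_MODEM.items()):
    print(f"pin {near} ({DE9[near]}) -> pin {far} ({DE9[far]})")
```

Because the mapping is its own inverse, the same cable works regardless of which end plugs into which device.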
Starting around the time of the introduction of the IBM PC AT, serial ports were built with a 9-pin connector to save cost and space. However, the presence of a 9-pin D-subminiature connector is not sufficient to indicate the connection is in fact a serial port, since this connector is also used for video and other purposes. Some miniaturized electronics, such as graphing calculators and hand-held amateur and two-way radio equipment, have serial ports using a phone connector, usually the smaller 2.5 or 3.5 mm sizes, and use the most basic 3-wire interface. Many models of Macintosh favor the related RS-422 standard, using mini-DIN connectors, except in the earliest models; the Macintosh included a standard set of two ports for connection to a printer and a modem, but some PowerBook laptops had only one combined port to save space. Since most devices do not use all of the 20 signals that are defined by the standard, smaller connectors are common. For example, the 9-pin DE-9 connector, used by most IBM-compatible PCs since the IBM PC AT, has been standardized as TIA-574.
More modular connectors have been used. Most comm
Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others. Initially intended for use inside the Bell System, Unix was licensed by AT&T to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including the University of California, Microsoft, IBM, and Sun Microsystems. In the early 1990s, AT&T sold its rights in Unix to Novell, which sold its Unix business to the Santa Cruz Operation in 1995; the UNIX trademark passed to The Open Group, a neutral industry consortium, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification. As of 2014, the Unix version with the largest installed base is Apple's macOS. Unix systems are characterized by a modular design, sometimes called the "Unix philosophy": the operating system provides a set of simple tools that each perform a limited, well-defined function, with a unified filesystem as the main means of communication and a shell scripting and command language to combine the tools into complex workflows.
Unix distinguishes itself from its predecessors as the first portable operating system: the entire operating system is written in the C programming language, thus allowing Unix to reach numerous platforms. Unix was meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers; the system grew larger as the operating system spread in academic circles and users added their own tools to the system and shared them with colleagues. At first, Unix was not designed to be multi-tasking; it later gained portability, multi-tasking, and multi-user capabilities in a time-sharing configuration. Unix systems are characterized by various concepts: the use of plain text for storing data, a hierarchical file system, treating devices and certain types of inter-process communication as files, and the use of many small programs that can be strung together through a command-line interpreter using pipes. These concepts are collectively known as the "Unix philosophy". Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as "the idea that the power of a system comes more from the relationships among programs than from the programs themselves".
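The idea of small single-purpose tools combined into workflows can be illustrated with a Python sketch that mimics a shell pipeline such as `grep | sort | uniq` using generators. The tool names and sample data here are invented stand-ins for the real Unix filters:

```python
# Each "tool" does one small job on a stream of lines, in the spirit
# of Unix filters connected by pipes.
def cat(lines):
    yield from lines

def grep(pattern, lines):
    # Pass through only lines containing the pattern.
    return (l for l in lines if pattern in l)

def sort_(lines):
    return iter(sorted(lines))

def uniq(lines):
    # Collapse adjacent duplicates, like uniq(1) on sorted input.
    prev = object()
    for l in lines:
        if l != prev:
            yield l
        prev = l

# Roughly: cat data | grep log | sort | uniq
data = ["syslog", "dmesg", "syslog", "login", "kernel"]
result = list(uniq(sort_(grep("log", cat(data)))))
print(result)  # ['login', 'syslog']
```

Each stage knows nothing about the others; the power comes from composing them, which is exactly Kernighan and Pike's point.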
In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output, the Unix file model worked quite well, as I/O was linear. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores, and network sockets were added to support communication with other hosts. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes; the Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers. Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system.
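One of the IPC mechanisms mentioned above, the Unix domain socket, can be shown with a minimal Python sketch using the standard socket module (this relies on AF_UNIX support, so it assumes a Unix-like host):

```python
import socket

# A Unix domain socket pair: a bidirectional IPC channel between two
# endpoints on the same host, one of the mechanisms added in the 1980s.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

parent.sendall(b"ping")      # one end writes...
msg = child.recv(4)          # ...the other reads: b"ping"

child.sendall(b"pong")       # and the channel works both ways
reply = parent.recv(4)       # b"pong"

parent.close()
child.close()
```

In a real program the two endpoints would typically live in different processes (e.g. across a fork), which is what makes this an inter-process rather than in-process channel.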
Under Unix, the operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low-level" tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the division between user space and kernel space, although in microkernel implementations, like MINIX or Redox, functions such as network protocols may run in user space. The origins of Unix date back to the mid-1960s, when the Massachusetts Institute of Technology, Bell Labs, and General Electric were developing Multics, a time-sharing operating system for the GE-645 mainframe computer. Multics featured several innovations but presented severe problems. Frustrated by the size and complexity of Multics, but not by its goals, individual researchers at Bell Labs started withdrawing from the project.
The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, who decided to reimplement their experiences in a new project of smaller scale. This new operating system was initially without organizational backing and without a name; it was a single-tasking system. In 1970, the group coined the name Unics, for Uniplexed Information and Computing Service, as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that "no one can remember" the origin of the final spelling Unix; Dennis Ritchie, Doug McIlroy, and Peter G. Neumann credit Kernighan. The operating system was originally written in assembly language, but in 1973, Version 4 Unix was rewritten in C. Version 4 Unix, however, still had much PDP-11-dependent code and was not suitable for porting; the first port to another platform was made five years later.
A disposable is a product designed for a single use, after which it is recycled or disposed of as solid waste. The term implies cheapness and short-term convenience rather than medium- to long-term durability; it is sometimes used for products that may last several months, to distinguish them from similar products that last indefinitely. The word "disposables" is not to be confused with "consumables", used in the mechanical world: in welding, for example, welding rods, nozzles, etc. are considered "consumables" because they last only a certain amount of time before needing to be replaced. "Disposable" is an adjective meaning something not reusable but disposed of after use. Many people now use the term as a noun or substantive, i.e. "a disposable", but in reality this is still an adjective, as the noun is implied. Disposable income is the amount of money left over from one's salary or pay for spending, saving or whatever, after all living costs have been taken out. Disposable products are most often made from paper, cotton, or polystyrene foam.
Products made from composite materials such as laminations are difficult to recycle and are more likely to be disposed of at the end of their use. Examples include:
Aluminum foil and aluminum pans
Disposable dishware and drinkware
Plastic cutlery
Disposable table cloths
Inexpensive tupperware products (some are reusable)
Cupcake wrappers and coffee filters (compostable)
Most packages are intended for a single use; the waste hierarchy calls for minimization of materials. Many package forms and materials are suited to recycling, although the actual recycling percentages are low in many regions. Reuse and repurposing of packaging is increasing, but most containers are still recycled, incinerated, or landfilled. There are many container forms, such as boxes and jars; materials include paper, metals, and composites. In 2002, Taiwan began taking action to reduce the use of disposable tableware at institutions and businesses and to reduce the use of plastic bags. Yearly, the nation of 17.7 million people was producing 59,000 tons of disposable tableware waste and 105,000 tons of waste plastic bags, and increasing measures have been taken in the years since to reduce the amount of waste.
In 2013, Taiwan's Environmental Protection Administration banned outright the use of disposable tableware in the nation's 968 schools, government agencies and hospitals. The ban is expected to eliminate 2,600 metric tons of waste yearly. In Germany and Switzerland, laws banning the use of disposable food and drink containers at large-scale events have been enacted. Such a ban has been in place in Munich since 1991, applying to all city facilities and events of all sizes. For small events of a few hundred people, the city has arranged for a corporation to offer rental of crockery and dishwasher equipment. In part through this regulation, Munich reduced the waste generated by Oktoberfest, which attracts millions of visitors, from 11,000 metric tons in 1990 to 550 tons in 1999. China produces about 57 billion pairs of single-use chopsticks yearly. About 45 percent are made from trees (about 3.8 million of them), cottonwood and spruce, the remainder being made from bamboo.
Japan uses about 24 billion pairs of these disposables per year; globally, about 80 billion pairs are thrown away per year by about 1.4 billion people. Reusable chopsticks in restaurants have a lifespan of about 130 meals. In Japan, with disposable pairs costing about 2 cents and reusable ones about $1.17, a reusable pair pays for itself well before the $2.60 that 130 disposable pairs would cost. Campaigns in several countries to reduce this waste are beginning to have some effect. Medical and surgical device manufacturers worldwide produce a multitude of items intended for one use only; the primary reason is infection control. Manufacturers of any type of medical device are obliged to abide by numerous standards and regulations. ISO 15223 (Medical Devices) and EN 980 specify that single-use instruments or devices be labelled as such on their packaging with a universally recognized symbol denoting "do not re-use", "single use", or "use only once"; this symbol is the numeral 2 within a circle with a 45° line through it. Examples of single-use items include:
Hypodermic needles
Toilet paper
Disposable towels and paper towels
Condoms and other contraception products
Disposable enemas and similar products
Cotton swabs and pads
Medical and cleaning gloves
Medical dust respirators
Baby and adult diapers, training pants
Shaving razors, safety razors, waxing kits and other hair control products
Toothbrushes, dental floss and other oral care products
Hospital aprons
Disposable postpartum panties
Contact lenses
Non-rechargeable batteries (considered hazardous waste and should only be disposed of as such)
Disposable ink cartridges
Disposable cameras
PlastiCuffs
Garbage bags
Vacuum cleaner bags, air filters and other filters for vacuum cleaners
Paper currency (withdrawn from circulation when worn)
Ballpoint pens and other writing implements
Movie sets and theater sets
Gift wrapping paper
Labels and their associated release liners (single use and disposed of after use)
Dust respirators
Aluminum foil
Drinking straws
Disposable tableware

See also: Consumable, Extended producer responsibility, Planned obsolescence, Waste management, Société Bic
A thin client is a lightweight computer, optimized for establishing a remote connection with a server-based computing environment. The server does most of the work, which can include launching software programs, crunching numbers, and storing data; this contrasts with a conventional personal computer, which performs these tasks locally. Thin clients occur as components of a broader computing infrastructure, where many clients share their computations with a server or server farm. The server-side infrastructure uses cloud computing software such as application virtualization, hosted shared desktop or desktop virtualization. This combination forms what is known as a cloud-based system, where desktop resources are centralized at one or more data centers. The benefits of centralization are hardware resource optimization, reduced software maintenance, and improved security. Example of hardware resource optimization: cabling, busing and I/O can be minimized while idle memory and processing power can be applied to the user sessions that most need them.
Example of reduced software maintenance: software patching and operating system migrations can be applied and activated for all users in one instance to accelerate roll-out and improve administrative efficiency. Example of improved security: software assets are centralized, easily fire-walled, and protected, and sensitive data is uncompromised in cases of desktop theft. Thin client hardware generally supports a keyboard, a monitor, jacks for sound peripherals, and open ports for USB devices; some thin clients include legacy serial or parallel ports to support older devices such as receipt printers, scales or time clocks. Thin client software typically consists of a graphical user interface, cloud access agents, a local web browser, terminal emulators, and a basic set of local utilities. In a cloud-based architecture, the server takes on the processing load of several client sessions, acting as a host for each endpoint device; the client software is narrowly purposed and lightweight. One of the combined benefits of using cloud architecture with thin client desktops is that critical IT assets are centralized for better utilization of resources.
Unused memory, bussing lanes, processor cores within an individual user session, for example, can be leveraged for other active user sessions. The simplicity of thin client hardware and software results in a low total cost of ownership, but some of these initial savings can be offset by the need for a more robust cloud infrastructure required on the server side. An alternative to traditional server deployment which spreads out infrastructure costs over time is a cloud-based subscription model known as desktop as a service, which allows IT organizations to outsource the cloud infrastructure to a third party. Thin client computing is known to simplify the desktop endpoints by reducing the client-side software footprint. With a lightweight, read-only operating system, client-side setup and administration is reduced. Cloud access is the primary role of a thin client which eliminates the need for a large suite of local user applications, data storage, utilities; this architecture shifts most of the software execution burden from the endpoint to the data center.
User assets are centralized for greater visibility. Data recovery and desktop repurposing tasks are centralized for faster service and greater scalability. While the server must be robust enough to handle several client sessions at once, thin client hardware requirements are minimal compared to that of a traditional PC desktop. Most thin clients have low energy processors, flash storage, no moving parts; this reduces the cost and power consumption, making them affordable to own and easy to replace or deploy. Since thin clients consist of fewer hardware components than a traditional desktop PC, they can operate in more hostile environments, and because they don't store critical data locally, risk of theft is minimized because there is little or no user data to be compromised. Modern thin clients have come a long way to meet the demands of today's graphical computing needs. New generations of low energy chipset and CPU combinations improve processing power and graphical capabilities. To minimize latency of high resolution video sent across the network, some host software stacks leverage multimedia redirection techniques to offload video rendering to the desktop device.
Video codecs are embedded on the thin client to support these various multimedia formats. Other host software stacks make use of the User Datagram Protocol to accelerate the fast-changing pixel updates required by modern video content; thin clients support local software agents capable of accepting and decoding UDP. Some of the more graphically intense use cases remain a challenge for thin clients; these might include applications like photo editors, 3D drawing programs, and animation tools. This can be addressed at the host server using dedicated GPU cards, allocation of vGPUs, workstation cards, or hardware acceleration cards. These solutions allow IT administrators to provide power-user performance where it is needed, to a generic endpoint device such as a thin client. To achieve such simplicity, thin clients some
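A minimal sketch of the UDP transport such remoting stacks rely on, using Python's standard socket module; the "pixel update" payload format here is invented purely for illustration:

```python
import socket

# UDP loopback sketch: a datagram "pixel update" is sent with no
# connection handshake and no retransmission, which is the property
# remoting protocols exploit for fast-changing screen regions.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = recv_sock.getsockname()[1]

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"tile:12,40:rgb...", ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(1024)     # one whole datagram per call
send_sock.close()
recv_sock.close()
```

Because stale pixel data is worthless once a newer frame exists, dropping a late datagram is cheaper than the head-of-line blocking a reliable stream like TCP would impose.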
X Window System
The X Window System is a windowing system for bitmap displays, common on Unix-like operating systems. X provides the basic framework for a GUI environment: drawing and moving windows on the display device and interacting with a mouse and keyboard. X does not mandate the user interface; this is handled by individual programs, and as such the visual styling of X-based environments varies greatly. X originated at the Massachusetts Institute of Technology in 1984; the X protocol has been at version 11 since September 1987. The X.Org Foundation leads the X project, with the current reference implementation, X.Org Server, available as free and open-source software under the MIT License and similar permissive licenses. X is an architecture-independent system for remote graphical user interfaces and input device capabilities; each person using a networked terminal has the ability to interact with the display with any type of user input device. In its standard distribution it is a complete, albeit simple, display and interface solution which delivers a standard toolkit and protocol stack for building graphical user interfaces on most Unix-like operating systems and OpenVMS, and it has been ported to many other contemporary general-purpose operating systems.
X provides the basic framework, or primitives, for building such GUI environments: drawing and moving windows on the display and interacting with a mouse, keyboard or touchscreen. X does not mandate the user interface. Programs may use X's graphical abilities with no user interface; as such, the visual styling of X-based environments varies greatly. Unlike most earlier display protocols, X was designed to be used over network connections rather than on an integral or attached display device. X features network transparency, which means an X program running on a computer somewhere on a network can display its user interface on an X server running on some other computer on the network; the X server is the provider of graphics resources and keyboard/mouse events to X clients, meaning that the X server is running on the computer in front of a human user, while the X client applications run anywhere on the network and communicate with the user's computer to request the rendering of graphics content and receive events from input devices including keyboards and mice.
The fact that the term "server" is applied to the software in front of the user is often surprising to users accustomed to their programs being clients to services on remote computers. Here, rather than a remote database being the resource for a local app, the user's graphic display and input devices become resources made available by the local X server to both local and remotely hosted X client programs that need to share the user's graphics and input devices to communicate with the user. X's network protocol is based on X command primitives; this approach allows both 2D and 3D operations by an X client application, which might be running on a different computer, to still be accelerated on the X server's display. For example, in classic OpenGL, display lists containing large numbers of objects could be constructed and stored in the X server by a remote X client program, then each rendered by sending a single glCallList across the network. X provides no native support for audio. X uses a client–server model: an X server communicates with various client programs.
The server accepts requests for graphical output and sends back user input. The server may function as: an application displaying to a window of another display system; a system program controlling the video output of a PC; or a dedicated piece of hardware. This client–server terminology, the user's terminal being the server and the applications being the clients, often confuses new X users, because the terms appear reversed. But X takes the perspective of the application, rather than that of the end-user: X provides display and I/O services to applications, so it is a server. The communication protocol between server and client operates network-transparently: the client and server may run on the same machine or on different ones, with different architectures and operating systems. A client and server can even communicate securely over the Internet by tunneling the connection over an encrypted network session. An X client itself may emulate an X server by providing display services to other clients; this is known as "X nesting". Open-source clients such as Xnest and Xephyr support such X nesting.
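The addressing behind this model can be illustrated with a small Python helper. X identifies displays with a `host:display.screen` string, and an X server listening on TCP conventionally uses port 6000 plus the display number; the parser below is a simplified sketch that ignores transport prefixes and other DISPLAY variants:

```python
def parse_display(display):
    """Parse an X DISPLAY string like "host:display.screen".

    Returns (host or None for a local display, display number,
    conventional TCP port = 6000 + display number).
    """
    host, _, rest = display.partition(":")
    num = rest.split(".")[0]            # drop the optional screen part
    d = int(num) if num else 0
    return host or None, d, 6000 + d

# A remote display on the first X server of a (hypothetical) host:
parse_display("remote.example.com:0")   # ("remote.example.com", 0, 6000)

# A typical forwarded display as set up by ssh X forwarding:
parse_display(":10.0")                  # (None, 10, 6010)
```

This is why an ssh-forwarded session simply sets DISPLAY to something like `:10`: the client application resolves it to a local endpoint that ssh tunnels back to the user's real X server.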
To use an X client application on a remote machine, the user may do the following: on the local machine, open a terminal window; use ssh with the X forwarding argument to connect to the remote machine; and request local display/input service. The remote X client application will then make a connection to the user's local X server, providing display and input to the user. Alternatively, the local machine may run a small program that connects to the remote machine and starts the client application. Practical examples of remote clients include: administering a remote machine graphically; using a client application to join large numbers of other terminal users in collaborative workgroups; and running a computationally intensive simulation on a remote machine and displaying the results on
Microsoft Windows is a group of several graphical operating system families, all of which are developed and sold by Microsoft. Each family caters to a certain sector of the computing industry. Active Windows families include Windows NT and Windows Embedded; defunct Windows families include Windows Mobile and Windows Phone. Microsoft introduced an operating environment named Windows on November 20, 1985, as a graphical operating system shell for MS-DOS, in response to the growing interest in graphical user interfaces. Microsoft Windows came to dominate the world's personal computer market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. Apple came to see Windows as an unfair encroachment on their innovation in GUI development as implemented on products such as the Lisa and Macintosh. On PCs, Windows is still the most popular operating system. However, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android, because of the massive growth in sales of Android smartphones.
In 2014, the number of Windows devices sold was less than 25% that of Android devices sold. This comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. Still, numbers for server use of Windows show about one-third market share, similar to that for end-user use. As of October 2018, the most recent version of Windows for PCs, tablets and embedded devices is Windows 10, and the most recent version for server computers is Windows Server 2019. A specialized version of Windows also runs on the Xbox One video game console. Microsoft, the developer of Windows, has registered several trademarks, each of which denotes a family of Windows operating systems that target a specific sector of the computing industry. As of 2014, the following Windows families are being developed: Windows NT: started as a family of operating systems with Windows NT 3.1, an operating system for server computers and workstations. It now consists of three operating system subfamilies that are released at the same time and share the same kernel: Windows: the operating system for mainstream personal computers and smartphones.
The latest version is Windows 10. The main competitor of this family is macOS by Apple for personal computers and Android for mobile devices. Windows Server: the operating system for server computers; the latest version is Windows Server 2019. Unlike its client sibling, it has adopted a strong naming scheme; the main competitor of this family is Linux. Windows PE: a lightweight version of its Windows sibling, meant to operate as a live operating system, used for installing Windows on bare-metal computers and for recovery or troubleshooting purposes; the latest version is Windows PE 10. Windows IoT: initially, Microsoft developed Windows CE as a general-purpose operating system for every device too resource-limited to be called a full-fledged computer. Windows CE was later renamed Windows Embedded Compact and folded under the Windows Embedded trademark, which also consists of Windows Embedded Industry, Windows Embedded Professional, Windows Embedded Standard, Windows Embedded Handheld and Windows Embedded Automotive.
The following Windows families are no longer being developed: Windows 9x: an operating system that targeted the consumer market. It was discontinued because of suboptimal performance; Microsoft now caters to the consumer market with Windows NT. Windows Mobile: the predecessor to Windows Phone, it was a mobile phone operating system; the first version was called Pocket PC 2000, and the last version is Windows Mobile 6.5. Windows Phone: an operating system sold only to manufacturers of smartphones; the first version was Windows Phone 7, followed by Windows Phone 8 and the last version, Windows Phone 8.1. It was succeeded by Windows 10 Mobile. The term Windows collectively describes any or all of several generations of Microsoft operating system products. The history of Windows dates back to 1981, when Microsoft started work on a program called "Interface Manager". It was announced in November 1983 under the name "Windows", but Windows 1.0 was not released until November 1985.
Windows 1.0 achieved little popularity. It is not a complete operating system; rather, its shell is a program known as the MS-DOS Executive. Components included Calculator, Cardfile, Clipboard viewer, Control Panel, Paint, Reversi and Write. Windows 1.0 does not allow overlapping windows; instead, all windows are tiled, and only modal dialog boxes may appear over other windows. Microsoft sold, as an add-on, Windows development libraries with the C development environment, which included numerous windows samples. Windows 2.0 was released in December 1987 and was more popular than its predecessor. It features several improvements to the user interface and memory management. Windows 2.03 changed the OS from tiled windows to overlapping windows; this change led to Apple Computer filing a suit against Microsoft alleging infringement of Apple's copyrights. Windows 2.0
A computer is a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs; these programs enable computers to perform a wide range of tasks. A "complete" computer, including the hardware, the operating system and the peripheral equipment required and used for "full" operation, can be referred to as a computer system; this term may also be used for a group of computers that are connected and work together, in particular a computer network or computer cluster. Computers are used as control systems for a wide variety of industrial and consumer devices; this includes simple special-purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design, and general-purpose devices like personal computers and mobile devices such as smartphones. The Internet is run on computers, and it connects hundreds of millions of other computers and their users.
Early computers were conceived only as calculating devices. Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century; the first digital electronic calculating machines were developed during World War II. The speed and versatility of computers have been increasing ever since. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices, output devices, and input/output devices that perform both functions. Peripheral devices allow information to be retrieved from an external source and they enable the results of operations to be saved and retrieved.
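The division of labor described above, a processing element that performs arithmetic and a control unit that changes the order of operations in response to stored information, can be illustrated with a toy fetch-decode-execute loop. This is a minimal sketch, not a model of any historical machine; the instruction names (`ADD`, `JNZ`, `HALT`) are hypothetical, chosen only for illustration.

```python
def run(program):
    """Execute a list of (opcode, argument) pairs on a one-register machine."""
    acc = 0  # accumulator: the state the "processing element" operates on
    pc = 0   # program counter: the state of the "sequencing and control unit"
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":    # arithmetic operation: add argument to accumulator
            acc += arg
            pc += 1
        elif op == "JNZ":  # control: jump to arg if accumulator is non-zero,
            pc = arg if acc != 0 else pc + 1  # changing the order of operations
        elif op == "HALT":
            break
    return acc

# The conditional jump skips the third instruction, so 100 is never added.
result = run([("ADD", 2), ("JNZ", 3), ("ADD", 100), ("HALT", 0)])
print(result)  # 2
```

The point of the sketch is that the program itself is stored data: the control unit inspects it (and the accumulator) to decide what to execute next, which is what distinguishes a programmable computer from a fixed calculating device.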
According to the Oxford English Dictionary, the first known use of the word "computer" was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: "I haue read the truest computer of Times, the best Arithmetician that euer breathed, he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. During the latter part of this period women were hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations; the Online Etymology Dictionary gives the first attested use of "computer" in the 1640s, meaning "one who calculates". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' is from 1897."
The Online Etymology Dictionary indicates that the "modern use" of the term, to mean "programmable digital electronic computer", dates from 1945. Devices have been used to aid computation for thousands of years, beginning with one-to-one correspondence with fingers; the earliest counting device was a form of tally stick. Record-keeping aids throughout the Fertile Crescent included calculi, which represented counts of items such as livestock or grains, sealed in hollow unbaked clay containers; the use of counting rods is one example. The abacus was used for arithmetic tasks; the Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money; the Antikythera mechanism is believed to be the earliest mechanical analog "computer", according to Derek J. de Solla Price.
It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use; the planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd century BC and is attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia, in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD.
The sector, a calculating instrument used for solving problems in proportion, trigonometry and division, for various functions, such as squares and cube roots, was developed in