Printed circuit board
A printed circuit board (PCB) mechanically supports and electrically connects electronic components or electrical components using conductive tracks, pads and other features etched from one or more sheet layers of copper laminated onto and/or between sheet layers of a non-conductive substrate. Components are soldered onto the PCB to both electrically connect and mechanically fasten them to it. Printed circuit boards are used in all but the simplest electronic products; they are also used in some electrical products, such as passive switch boxes. Alternatives to PCBs include wire wrap and point-to-point construction, both once popular but now rarely used. PCBs require additional design effort to lay out the circuit, but manufacturing and assembly can be automated. Specialized CAD software is available to do much of the work of layout. Mass-producing circuits with PCBs is cheaper and faster than with other wiring methods, as components are mounted and wired in one operation. Large numbers of PCBs can be fabricated at the same time, and the layout only has to be done once.
PCBs can be made manually in small quantities, with reduced benefits. PCBs can be single-sided, double-sided, or multi-layer. Multi-layer PCBs allow for much higher component density, because circuit traces on the inner layers would otherwise take up surface space between components; the rise in popularity of multilayer PCBs with more than two, and especially with more than four, copper planes was concurrent with the adoption of surface mount technology. However, multilayer PCBs make repair and field modification of circuits much more difficult and often impractical. The world market for bare PCBs exceeded $60.2 billion in 2014, and in 2018 the Global Single Sided Printed Circuit Board Market Analysis Report estimated that the PCB market would reach $79 billion by 2024. Before the development of printed circuit boards, electrical and electronic circuits were wired point-to-point on a chassis, a sheet metal frame or pan, sometimes with a wooden bottom. Components were attached to the chassis, usually by insulators when the connecting point on the chassis was metal, and their leads were then connected directly or with jumper wires by soldering, or sometimes using crimp connectors, wire connector lugs on screw terminals, or other methods.
Circuits were large, bulky and fragile, and production was labor-intensive, so the products were expensive. Development of the methods used in modern printed circuit boards started early in the 20th century. In 1903, a German inventor, Albert Hanson, described flat foil conductors laminated to an insulating board, in multiple layers. Thomas Edison experimented with chemical methods of plating conductors onto linen paper in 1904. Arthur Berry in 1913 patented a print-and-etch method in the UK, and in the United States Max Schoop obtained a patent to flame-spray metal onto a board through a patterned mask. Charles Ducas in 1927 patented a method of electroplating circuit patterns. The Austrian engineer Paul Eisler invented the printed circuit as part of a radio set while working in the UK around 1936. In 1941 a multi-layer printed circuit was used in German magnetic influence naval mines. Around 1943 the USA began to use the technology on a large scale to make proximity fuzes for use in World War II. After the war, in 1948, the USA released the invention for commercial use.
Printed circuits did not become commonplace in consumer electronics until the mid-1950s, after the Auto-Sembly process was developed by the United States Army. At around the same time in the UK, work along similar lines was carried out by Geoffrey Dummer at the RRDE. Even as circuit boards became available, the point-to-point chassis construction method remained in common use in industry into at least the late 1960s. Printed circuit boards were introduced to reduce the size and cost of parts of the circuitry. In 1960, a small consumer radio receiver might be built with all its circuitry on one circuit board, but a TV set would contain one or more circuit boards. Predating the printed circuit invention, and similar in spirit, was John Sargrove's 1936–1947 Electronic Circuit Making Equipment (ECME), which sprayed metal onto a Bakelite plastic board; the ECME could produce three radio boards per minute. During World War II, the development of the anti-aircraft proximity fuze required an electronic circuit that could withstand being fired from a gun and could be produced in quantity.
The Centralab Division of Globe Union submitted a proposal which met the requirements: a ceramic plate would be screenprinted with metallic paint for conductors and carbon material for resistors, with ceramic disc capacitors and subminiature vacuum tubes soldered in place. The technique proved viable, and the resulting patent on the process, which had been classified by the U.S. Army, was assigned to Globe Union. It was not until 1984 that the Institute of Electrical and Electronics Engineers awarded Harry W. Rubinstein the Cledo Brunetti Award for early key contributions to the development of printed components and conductors on a common insulating substrate. Rubinstein was also honored in 1984 by his alma mater, the University of Wisconsin-Madison, for his innovations in the technology of printed electronic circuits and the fabrication of capacitors. This invention represents a step in the development of integrated circuit technology, as not only wiring but also passive components were fabricated on the ceramic substrate.
Every electronic component had
A photomask is an opaque plate with holes or transparencies that allow light to shine through in a defined pattern. They are used in photolithography. Lithographic photomasks are transparent fused silica blanks covered with a pattern defined with a chrome metal-absorbing film. Photomasks are used at wavelengths of 365 nm, 248 nm, and 193 nm. Photomasks have also been developed for other forms of radiation such as 157 nm, 13.5 nm (EUV), X-rays and ions. A set of photomasks, each defining a pattern layer in integrated circuit fabrication, is fed into a photolithography stepper or scanner and individually selected for exposure. In double patterning techniques, a photomask would correspond to a subset of the layer pattern. In photolithography for the mass production of integrated circuit devices, the more correct term is photoreticle or simply reticle. In the case of a photomask, there is a one-to-one correspondence between the mask pattern and the wafer pattern; this was the standard for the 1:1 mask aligners that were succeeded by steppers and scanners with reduction optics.
As used in steppers and scanners, the reticle contains only one layer of the chip. The pattern is shrunk by four or five times onto the wafer surface. To achieve complete wafer coverage, the wafer is "stepped" from position to position under the optical column until full exposure is achieved. Features 150 nm or below in size require phase-shifting to enhance the image quality to acceptable values; this can be achieved in many ways. The two most common methods are to use an attenuated phase-shifting background film on the mask to increase the contrast of small intensity peaks, or to etch the exposed quartz so that the edge between the etched and unetched areas can be used to image nearly zero intensity. In the second case, unwanted edges would need to be trimmed out with another exposure. The former method is attenuated phase-shifting and is considered a weak enhancement, requiring special illumination for the most benefit, while the latter method is known as alternating-aperture phase-shifting and is the most popular strong enhancement technique.
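To make the reduction and stepping arithmetic concrete, here is a minimal sketch; the 300 mm wafer and 26 mm × 33 mm exposure field are illustrative assumptions (typical of modern scanners), not values from this article.

```python
import math

# Illustrative step-and-repeat arithmetic. Assumed values (not from the
# article): a 300 mm wafer and a 26 mm x 33 mm exposure field.
REDUCTION = 4                       # mask pattern drawn 4x the wafer pattern
WAFER_DIAMETER_MM = 300.0
FIELD_W_MM, FIELD_H_MM = 26.0, 33.0

wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2    # ~70,686 mm^2
min_fields = math.ceil(wafer_area / (FIELD_W_MM * FIELD_H_MM))

print(min_fields)       # ~83: a lower bound, since partial edge fields add shots
print(25 * REDUCTION)   # a 25 nm wafer feature is drawn as 100 nm on the mask
```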
As leading-edge semiconductor features shrink, photomask features that are 4× larger must shrink as well. This could pose challenges since the absorber film will need to become thinner, and hence less opaque. A study by IMEC, using state-of-the-art photolithography tools, found that thinner absorbers degrade image contrast and therefore contribute to line-edge roughness. One possibility is to eliminate absorbers altogether and use "chromeless" masks, relying on phase-shifting alone for imaging. The emergence of immersion lithography also has a strong impact on photomask requirements: the commonly used attenuated phase-shifting mask is more sensitive to the higher incidence angles applied in "hyper-NA" lithography, due to the longer optical path through the patterned film. Leading-edge photomasks carry images of the final chip patterns magnified by four times; this magnification factor has been a key benefit in reducing pattern sensitivity to imaging errors. However, as features continue to shrink, two trends come into play: the first is that the mask error factor begins to exceed one, i.e. the dimension error on the wafer may be more than 1/4 the dimension error on the mask, and the second is that the mask feature is becoming smaller while the dimension tolerance is approaching a few nanometers.
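The first trend can be stated compactly. As a sketch in standard notation (the symbols here are ours), with reduction ratio M = 4, the mask error enhancement factor (MEEF) relates a critical-dimension error on the mask to the resulting error on the wafer:

$$\Delta CD_{\text{wafer}} \;=\; \mathrm{MEEF} \times \frac{\Delta CD_{\text{mask}}}{M}$$

With MEEF = 1 this reduces to the ideal 4:1 scaling assumed in the example that follows; once MEEF exceeds one, a given mask error produces a disproportionately large wafer error.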
For example, a 25 nm wafer pattern should correspond to a 100 nm mask pattern, but the wafer tolerance could be 1.25 nm, which translates into 5 nm on the photomask. The variation of electron beam scattering in directly writing the photomask pattern can well exceed this. The term "pellicle" is used to mean "film", "thin film", or "membrane". Beginning in the 1960s, a thin film stretched on a metal frame, known as a "pellicle", was used as a beam splitter for optical instruments. It has been used in a number of instruments to split a beam of light without causing an optical path shift, due to its small film thickness. In 1978, Shea et al. at IBM patented a process to use the "pellicle" as a dust cover to protect a photomask or reticle. In the context of this entry, "pellicle" means "thin film dust cover to protect a photomask". Particle contamination can be a significant problem in semiconductor manufacturing. A photomask is protected from particles by a pellicle – a thin transparent film stretched over a frame that is glued over one side of the photomask.
The pellicle is far enough away from the mask patterns so that moderate-to-small sized particles that land on the pellicle will be too far out of focus to print. Although they are designed to keep particles away, pellicles become a part of the imaging system and their optical properties need to be taken into account. Pellicle materials are nitrocellulose and are made for various transmission wavelengths. The SPIE annual conference on photomask technology reports the SEMATECH Mask Industry Assessment, which includes current industry analysis and the results of its annual photomask manufacturers survey. The following companies are listed in order of their global market share: Dai Nippon Printing, Toppan Photomasks, Photronics Inc., Hoya Corporation, Taiwan Mask Corporation and Compugraphics. Major chipmakers such as Intel, Globalfoundries, IBM, NEC, TSMC, UMC and Micron Technology have their own large maskmaking facilities or joint ventures with the abovementioned companies. The worldwide photomask market was estimated as $3.2 billion in 2012 and $3.1 billion in 2013.
Half of the mark
Photographic processing or development is the chemical means by which photographic film or paper is treated after photographic exposure to produce a negative or positive image. Photographic processing transforms the latent image into a visible image, makes this permanent and renders it insensitive to light. All processes based upon the gelatin-silver process are similar, regardless of the film or paper's manufacturer. Exceptional variations include instant films such as those made by Polaroid and thermally developed films. Kodachrome required Kodak's proprietary K-14 process; Kodachrome film production ceased in 2009, and K-14 processing is no longer available as of December 30, 2010. Ilfochrome materials use the dye destruction process. All photographic processing uses a series of chemical baths. Processing, especially the development stages, requires close control of temperature and time. The film may first be soaked in water to swell the gelatin layer, facilitating the action of the subsequent chemical treatments.
The developer converts the latent image to macroscopic particles of metallic silver. A stop bath,† a dilute solution of acetic acid or citric acid, halts the action of the developer; a rinse with clean water may be substituted. The fixer makes the image light-resistant by dissolving the remaining silver halide. A common fixer is hypo (ammonium thiosulfate). Washing in clean water removes any remaining fixer; residual fixer can corrode the silver image, leading to discolouration and fading. The washing time can be reduced and the fixer more completely removed if a hypo clearing agent is used after the fixer. Film may be rinsed in a dilute solution of a non-ionic wetting agent to assist uniform drying, which eliminates drying marks caused by hard water. Film is dried in a dust-free environment and placed into protective sleeves. Once the film is processed, it is referred to as a negative; the negative may now be printed. Many different techniques can be used during the enlargement process; two examples of enlargement techniques are dodging and burning.
Alternatively, the negative may be scanned for digital printing or web viewing after adjustment and/or manipulation. † In modern automatic processing machines, the stop bath is replaced by mechanical squeegee or pinching rollers. These treatments remove much of the carried-over alkaline developer, and the acid, when used, neutralizes the alkalinity to reduce contamination of the fixing bath with the developer. Reversal processing, which yields a positive image, has three additional stages: following the stop bath, the film is bleached to remove the developed negative image; the film, which still contains a latent positive image formed from unexposed and undeveloped silver halide salts, is fogged, either chemically or by exposure to light; the remaining silver halide salts are developed in the second developer, converting them into a positive image. The film is then fixed, washed and cut. Chromogenic materials use dye couplers to form colour images. Modern colour negative film is developed with the C-41 process and colour negative print materials with the RA-4 process.
These processes are similar, with differences in the first chemical developer. The C-41 and RA-4 processes consist of the following steps:
1. The colour developer develops the silver negative image; byproducts activate the dye couplers to form the colour dyes in each emulsion layer.
2. A rehalogenising bleach converts the developed silver image into silver halides.
3. A fixer removes the silver salts.
4. The film is washed, stabilised and cut.
In the RA-4 process, the bleach and fix are combined; this is optional and reduces the number of processing steps. Transparency films, except Kodachrome, are developed using the E-6 process, which has the following stages:
1. A black and white developer develops the silver in each image layer.
2. Development is stopped with a stop bath.
3. The film is fogged in the reversal step.
4. The fogged silver halides are developed, and oxidized developing agents couple with the dye couplers in each layer.
5. The film is bleached, fixed and dried as described above.
In some old processes, the film emulsion was hardened during the process before the bleach.
Such a hardening bath used aldehydes, such as formaldehyde and glutaraldehyde. In modern processing, these hardening steps are unnecessary because the film emulsion is sufficiently hardened to withstand the processing chemicals. Black and white emulsions, both negative and positive, may be further processed. The image silver may be reacted with elements such as selenium or sulphur to increase image permanence and for aesthetic reasons; this process is known as toning. In selenium toning, the image silver is changed to silver selenide; these compounds are more resistant to atmospheric oxidising agents than silver. If colour negative film is processed in conventional black and white developer, then fixed and bleached with a bath containing hydrochloric acid and potassium dichromate solution, the resultant film, once exposed to light, can be redeveloped in colour developer to produce an unusual pastel colour effect. Before processing, the film must be removed from the camera and from its cassette, spool or holder in a light-proof room or container.
In amateur processing, the film is removed from the camera and wound onto a reel in complete darkness (usually inside a darkroom with the safelight turned off or a lightproof bag with
The micrometre or micrometer, commonly known by the previous name micron, is an SI derived unit of length equal to 1×10⁻⁶ metre; that is, one millionth of a metre, or one thousandth of a millimetre (0.001 mm). The micrometre is a common unit of measurement for wavelengths of infrared radiation as well as sizes of biological cells and bacteria, and for grading wool by the diameter of the fibres. The width of a single human hair ranges from 10 to 200 μm. The longest human chromosome, chromosome 1, is approximately 10 μm in length.

Between 1 μm and 10 μm:
1–10 μm – length of a typical bacterium
10 μm – size of fungal hyphae
5 μm – length of a typical human spermatozoon's head
3–8 μm – width of a strand of spider web silk
about 10 μm – size of a fog, mist, or cloud water droplet

Between 10 μm and 100 μm:
about 10–12 μm – thickness of plastic wrap
10 to 55 μm – width of wool fibre
17 to 181 μm – diameter of human hair
70 to 180 μm – thickness of paper

The term micron and the symbol μ were accepted for use in isolation to denote the micrometre in 1879, but officially revoked by the International System of Units (SI) in 1967. This became necessary because the older usage was incompatible with the official adoption of the unit prefix micro-, denoted μ, during the creation of the SI in 1960.
In the SI, the systematic name micrometre became the official name of the unit, and μm became the official unit symbol. In practice, "micron" remains a widely used term in preference to "micrometre" in many English-speaking countries, both in academic science and in applied science and industry. Additionally, in American English, the use of "micron" helps differentiate the unit from the micrometer, a measuring device, because the unit's name in mainstream American spelling is a homograph of the device's name. In spoken English, they may be distinguished by pronunciation, as the name of the measuring device is invariably stressed on the second syllable, whereas the systematic pronunciation of the unit name, in accordance with the convention for pronouncing SI units in English, places the stress on the first syllable. The plural of micron is "microns", though "micra" was used before 1950. The official symbol for the SI prefix micro- is a Greek lowercase mu (μ). In Unicode, there is a micro sign with the code point U+00B5, distinct from the code point U+03BC of the Greek letter lowercase mu.
According to the Unicode Consortium, the Greek letter character is preferred, but implementations must recognize the micro sign as well. Most fonts use the same glyph for the two characters.
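A minimal Python sketch (standard library only) of the distinction between the two code points and their compatibility equivalence:

```python
import unicodedata

micro = "\u00b5"   # MICRO SIGN
mu = "\u03bc"      # GREEK SMALL LETTER MU

print(micro == mu)                                 # False: distinct code points
print(unicodedata.name(micro), "|", unicodedata.name(mu))
print(unicodedata.normalize("NFKC", micro) == mu)  # True: NFKC normalization
                                                   # maps the micro sign to
                                                   # the Greek letter
```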
Photolithography, also called optical lithography or UV lithography, is a process used in microfabrication to pattern parts of a thin film or the bulk of a substrate. It uses light to transfer a geometric pattern from a photomask to a photosensitive chemical photoresist on the substrate. A series of chemical treatments then either etches the exposure pattern into the material or enables deposition of a new material in the desired pattern upon the material underneath the photoresist. In complex integrated circuits, a CMOS wafer may go through the photolithographic cycle as many as 50 times. Photolithography shares some fundamental principles with photography in that the pattern in the photoresist etching is created by exposing it to light, either directly or with a projected image using a photomask. This procedure is comparable to a high precision version of the method used to make printed circuit boards. Subsequent stages in the process have more in common with etching than with lithographic printing. This method can create small patterns, down to a few tens of nanometers in size.
It provides precise control of the shape and size of the objects it creates and can create patterns over an entire surface cost-effectively. Its main disadvantages are that it requires a flat substrate to start with, it is not effective at creating shapes that are not flat, and it can require extremely clean operating conditions. Photolithography is the standard method of printed circuit board and microprocessor fabrication. The root words photo, litho and graphy all have Greek origins, with the meanings 'light', 'stone' and 'writing' respectively. As suggested by the name compounded from them, photolithography is a printing method in which light plays an essential role. In the 1820s, Nicéphore Niépce invented a photographic process that used Bitumen of Judea, a natural asphalt, as the first photoresist. A thin coating of the bitumen on a sheet of metal, glass or stone became less soluble where it was exposed to light. The light-sensitivity of bitumen was poor and long exposures were required, but despite the introduction of more sensitive alternatives, its low cost and superb resistance to strong acids prolonged its commercial life into the early 20th century.
In 1940, Oskar Süß created a positive photoresist by using diazonaphthoquinone, which worked in the opposite manner: the coating was initially insoluble and was rendered soluble where it was exposed to light. In 1954, Louis Plambeck Jr. developed the Dycryl polymeric letterpress plate, which made the platemaking process faster. In 1952, the U.S. military assigned Jay W. Lathrop and James R. Nall at the National Bureau of Standards the task of finding a way to reduce the size of electronic circuits in order to better fit the necessary circuitry in the limited space available inside a proximity fuze. Inspired by the application of photoresist, a photosensitive liquid used to mark the boundaries of rivet holes in metal aircraft wings, Nall determined that a similar process could be used to protect the germanium in transistors and pattern the surface with light. During development, Lathrop and Nall were successful in creating a 2D miniaturized hybrid integrated circuit with transistors using this technique.
In 1958, during the IRE Professional Group on Electron Devices conference in Washington, D.C., they presented the first paper to describe the fabrication of transistors using photographic techniques and adopted the term "photolithography" to describe the process, marking the first published use of the term to describe semiconductor device patterning. Although photolithography of electronic components concerns etching metal duplicates, rather than etching stone to produce a "master" as in conventional lithographic printing, Lathrop and Nall chose the term "photolithography" over "photoetching" because the former sounded "high tech". A year after the conference, Lathrop and Nall's patent on photolithography was formally approved on June 9, 1959. Photolithography would contribute to the development of the first semiconductor ICs as well as the first microchips. A single iteration of photolithography combines several steps in sequence. Modern cleanrooms use robotic wafer track systems to coordinate the process.
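As an orientation aid, the wafer-preparation sequence described in the next paragraph can be summarized as a simple checklist; in this minimal sketch the temperature and chemical names come from the text, and everything else is illustrative rather than a process recipe.

```python
# Schematic checklist of the wafer-preparation steps described below.
# The 150 C / 10 min bake and the chemical names come from the text.
PREP_STEPS = [
    ("wet clean", "remove contamination, e.g. RCA clean "
                  "with hydrogen peroxide based solutions"),
    ("dehydration bake", "150 C for ten minutes to drive off surface moisture"),
    ("adhesion prime", "apply HMDS to form a water-repellent layer "
                       "so the photoresist adheres"),
]

for step, detail in PREP_STEPS:
    print(f"{step}: {detail}")
```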
The procedure described here omits some advanced treatments, such as thinning agents or edge-bead removal. If organic or inorganic contaminations are present on the wafer surface, they are removed by wet chemical treatment, e.g. the RCA clean procedure based on solutions containing hydrogen peroxide. Other solutions made with trichloroethylene, acetone or methanol can also be used to clean. The wafer is then heated to a temperature sufficient to drive off any moisture that may be present on the wafer surface; 150 °C for ten minutes is sufficient. Wafers that have been in storage must be chemically cleaned to remove contamination. A liquid or gaseous "adhesion promoter", such as hexamethyldisilazane (HMDS), is applied to promote adhesion of the photoresist to the wafer. The surface layer of silicon dioxide on the wafer reacts with HMDS to form tri-methylated silicon-dioxide, a water repellent layer not unlike the layer of wax on a car's paint. This water repellent layer prevents the aqueous developer from penetrating between the photoresist layer and the wafer's
Photographic film is a strip or sheet of transparent plastic film base coated on one side with a gelatin emulsion containing microscopically small light-sensitive silver halide crystals. The sizes and other characteristics of the crystals determine the sensitivity and resolution of the film. The emulsion will darken if left exposed to light, but the process is too slow and incomplete to be of any practical use. Instead, a short exposure to the image formed by a camera lens is used to produce only a slight chemical change, proportional to the amount of light absorbed by each crystal. This creates an invisible latent image in the emulsion, which can be chemically developed into a visible photograph. In addition to visible light, all films are sensitive to ultraviolet light, X-rays and high-energy particles. Unmodified silver halide crystals are sensitive only to the blue part of the visible spectrum, producing unnatural-looking renditions of some colored subjects. This problem was resolved with the discovery that certain dyes, called sensitizing dyes, when adsorbed onto the silver halide crystals, made them respond to other colors as well.
First orthochromatic and then panchromatic films were developed. Panchromatic film renders all colors in shades of gray matching their subjective brightness. By similar techniques, special-purpose films can be made sensitive to the infrared region of the spectrum. In black-and-white photographic film, there is one layer of silver halide crystals; when the exposed silver halide grains are developed, the silver halide crystals are converted to metallic silver, which blocks light and appears as the black part of the film negative. Color film has at least three sensitive layers, incorporating different combinations of sensitizing dyes. The blue-sensitive layer is on top, followed by a yellow filter layer to stop any remaining blue light from affecting the layers below. Next comes a green-and-blue sensitive layer and then a red-and-blue sensitive layer, which record the green and red images respectively. During development, the exposed silver halide crystals are converted to metallic silver, just as with black-and-white film.
But in a color film, the by-products of the development reaction combine with chemicals known as color couplers that are included either in the film itself or in the developer solution to form colored dyes. Because the by-products are created in direct proportion to the amount of exposure and development, the dye clouds formed are also in proportion to the exposure and development. Following development, the silver is converted back to silver halide crystals in the bleach step; it is then removed from the film during the process of fixing the image on the film with a solution of ammonium thiosulfate or sodium thiosulfate. Fixing leaves behind only the formed color dyes, which combine to make up the colored visible image. Color films, like Kodacolor II, have as many as 12 emulsion layers, with upwards of 20 different chemicals in each layer. The earliest practical photographic process was the daguerreotype, in which the light-sensitive chemicals were formed on the surface of a silver-plated copper sheet. The calotype process produced paper negatives.
Beginning in the 1850s, thin glass plates coated with photographic emulsion became the standard material for use in the camera. Although fragile and heavy, the glass used for photographic plates was of better optical quality than early transparent plastics and was, at first, less expensive. Glass plates continued to be used long after the introduction of film, and were used for astrophotography and electron micrography until the early 2000s, when they were supplanted by digital recording methods. Ilford continues to manufacture glass plates for special scientific applications. The first flexible photographic roll film was sold by George Eastman in 1885, but this original "film" was a coating on a paper base. As part of the processing, the image-bearing layer was stripped from the paper and attached to a sheet of hardened clear gelatin. The first transparent plastic roll film followed in 1889. It was made from flammable nitrocellulose, now called "nitrate film". Although cellulose acetate or "safety film" had been introduced by Kodak in 1908, at first it found only a few special applications as an alternative to the hazardous nitrate film, which had the advantages of being tougher, more transparent and cheaper.
The changeover was completed for X-ray films in 1933, but although safety film was always used for 16 mm and 8 mm home movies, nitrate film remained standard for theatrical 35 mm films until it was discontinued in 1951. Hurter and Driffield began pioneering work on the light sensitivity of photographic emulsions in 1876; their work enabled the first quantitative measure of film speed to be devised. They developed H&D curves, which are specific to each emulsion (film or paper). These curves plot photographic density (the base-10 logarithm of the reciprocal of transmittance) against the logarithm of the exposure, allowing the sensitivity, or speed, of the emulsion to be determined and correct exposure to be achieved. Early photographic plates and films were usefully sensitive only to blue and ultraviolet light; as a result, the relative tonal values in a scene registered as they would appear if viewed through a piece of deep blue glass. Blue skies with interesting cloud formations photographed as a white blank. Any detail visible in masses of green foliage was due to the colorless surface gloss. Bright yellows and reds appeared nearly black.
Most skin tones came out unnaturally dark, and uneven or freckled complexions were exaggerated. Photographers sometimes compensated by adding in skies from
A computer is a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs; these programs enable computers to perform an extremely wide range of tasks. A "complete" computer, including the hardware, the operating system and the peripheral equipment required and used for "full" operation, can be referred to as a computer system. This term may as well be used for a group of computers that are connected and work together, in particular a computer network or computer cluster. Computers are used as control systems for a wide variety of industrial and consumer devices; this includes simple special purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design, and general purpose devices like personal computers and mobile devices such as smartphones. The Internet is run on computers and it connects hundreds of millions of other computers and their users.
Early computers were only conceived as calculating devices. Since ancient times, simple manual devices like the abacus have aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The speed and versatility of computers have been increasing ever since then. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices, output devices, and input/output devices that perform both functions. Peripheral devices allow information to be retrieved from an external source and they enable the result of operations to be saved and retrieved.
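To illustrate how a sequencing and control unit can change the order of operations in response to stored information, here is a minimal toy sketch of a stored-program machine; the two-instruction set is invented purely for illustration.

```python
# Toy stored-program machine. The instruction names are invented for
# illustration; real CPUs implement the same idea in hardware.
def run(program, acc=0):
    pc = 0  # program counter: the sequencing element
    while pc < len(program):
        op, arg = program[pc]
        if op == "ADD":              # arithmetic operation
            acc += arg
        elif op == "JUMP_IF_NEG":    # control flow depends on stored state
            if acc < 0:
                pc = arg
                continue
        pc += 1
    return acc

# Repeatedly add 1 while the accumulator is negative: -3 -> 0.
print(run([("ADD", 1), ("JUMP_IF_NEG", 0)], acc=-3))  # prints 0
```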
According to the Oxford English Dictionary, the first known use of the word "computer" was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: "I haue read the truest computer of Times, the best Arithmetician that euer breathed, he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. During the latter part of this period, women were hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations. The Online Etymology Dictionary gives the first attested use of "computer" in the 1640s, meaning "one who calculates". The Online Etymology Dictionary states that the use of the term to mean "calculating machine" is from 1897.
The Online Etymology Dictionary indicates that the "modern use" of the term, to mean "programmable digital electronic computer", dates from 1945 under this name. Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. Record keeping aids throughout the Fertile Crescent included calculi, which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was used for arithmetic tasks; the Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest mechanical analog "computer", according to Derek J. de Solla Price.
It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD.
The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in