Application-specific integrated circuit
An application-specific integrated circuit (ASIC) is an integrated circuit customized for a particular use, rather than intended for general-purpose use. For example, a chip designed to run in a digital voice recorder or a high-efficiency bitcoin miner is an ASIC. Application-specific standard products are intermediate between ASICs and industry-standard integrated circuits like the 7400 series or the 4000 series. As feature sizes have shrunk and design tools improved over the years, the maximum complexity possible in an ASIC has grown from 5,000 logic gates to over 100 million. Modern ASICs often include entire microprocessors, memory blocks including ROM, RAM, EEPROM and flash memory, and other large building blocks; such an ASIC is termed a system on a chip (SoC). Designers of digital ASICs use a hardware description language, such as Verilog or VHDL, to describe the functionality of ASICs. Field-programmable gate arrays (FPGAs) are the modern-day technology for building a breadboard or prototype from standard parts. For smaller designs or lower production volumes, FPGAs may be more cost-effective than an ASIC design in production.
The non-recurring engineering (NRE) cost of an ASIC can run into the millions of dollars. Therefore, device manufacturers prefer FPGAs for prototyping and for devices with low production volume, and ASICs for large production volumes where NRE costs can be amortized across many devices. The initial ASICs used gate-array technology. An early successful commercial application was the gate array circuitry found in the low-end 8-bit ZX81 and ZX Spectrum personal computers, introduced in 1981 and 1982; these were used by Sinclair Research as a low-cost I/O solution aimed at handling the computer's graphics. Customization occurred by varying a metal interconnect mask. Gate arrays had complexities of up to a few thousand gates. Later versions became more generalized, with different base dies customized by both metal and polysilicon layers; some base dies also include random-access memory elements. In the mid-1980s, a designer would choose an ASIC manufacturer and implement their design using the design tools available from the manufacturer.
While third-party design tools were available, there was not an effective link from the third-party design tools to the layout and actual semiconductor process performance characteristics of the various ASIC manufacturers. Most designers used factory-specific tools to complete the implementation of their designs. A solution to this problem, which also yielded a much higher-density device, was the implementation of standard cells: every ASIC manufacturer could create functional blocks with known electrical characteristics, such as propagation delay and inductance, that could be represented in third-party tools. Standard-cell design is the utilization of these functional blocks to achieve high gate density and good electrical performance. Standard-cell design is intermediate between gate-array (semi-custom) design and full-custom design in terms of its non-recurring engineering and recurring component costs, as well as performance and speed of development. By the late 1990s, logic synthesis tools became available.
Such tools could compile HDL descriptions into a gate-level netlist. Standard-cell integrated circuits are designed in the following conceptual stages, referred to as the electronics design flow, although these stages overlap in practice. Requirements engineering: a team of design engineers starts with a non-formal understanding of the required functions for a new ASIC, derived from requirements analysis. Register-transfer level (RTL) design: the design team constructs a description of the ASIC to achieve these goals using a hardware description language; this process is similar to writing a computer program in a high-level language. Functional verification: suitability for purpose is verified by functional verification; this may include techniques such as logic simulation through test benches, formal verification, emulation, or creating and evaluating an equivalent pure software model, as in Simics. Each verification technique has advantages and disadvantages, and most often several methods are used together for ASIC verification.
Unlike most FPGAs, ASICs cannot be reprogrammed once fabricated, and therefore ASIC designs that are not correct are much more costly, increasing the need for full test coverage. Logic synthesis: logic synthesis transforms the RTL design into a large collection of lower-level constructs called standard cells; these constructs are taken from a standard-cell library consisting of pre-characterized collections of logic gates performing specific functions. The standard cells are specific to the planned manufacturer of the ASIC; the resulting collection of standard cells and the needed electrical connections between them is called a gate-level netlist. Placement: the gate-level netlist is next processed by a placement tool, which places the standard cells onto a region of an integrated circuit die representing the final ASIC; the placement tool attempts to find an optimized placement of the standard cells, subject to a variety of specified constraints. Routing: an electronics routing tool takes the physical placement of the standard cells and uses the netlist to create the electrical connections between them.
Since the search space is large, this process will produce a "sufficient" rather than "globally optimal" solution. The output is a file which can be used to create a set of photomasks enabling a semiconductor fabrication facility to manufacture the physical integrated circuits.
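As a rough illustration of how an RTL-style reference model, a gate-level netlist of standard cells, and functional verification relate to one another, the following Python sketch checks a hand-built netlist for a 1-bit full adder against a behavioural model. The cell names and netlist format are invented for illustration and do not correspond to any particular vendor's library or design flow.

```python
# Illustrative sketch only: a behavioural ("RTL-like") reference model for a 1-bit
# full adder and a hand-built gate-level netlist of generic standard cells, checked
# against each other the way a functional-verification test bench would.
from itertools import product

def rtl_full_adder(a, b, cin):
    """Behavioural reference model: returns (sum, carry-out)."""
    total = a + b + cin
    return total & 1, (total >> 1) & 1

# Hypothetical standard cells with known logic functions.
CELLS = {
    "XOR2": lambda x, y: x ^ y,
    "AND2": lambda x, y: x & y,
    "OR2":  lambda x, y: x | y,
}

# A gate-level netlist: (output net, cell name, input nets), in topological order.
NETLIST = [
    ("n1",   "XOR2", ("a", "b")),
    ("sum",  "XOR2", ("n1", "cin")),
    ("n2",   "AND2", ("a", "b")),
    ("n3",   "AND2", ("n1", "cin")),
    ("cout", "OR2",  ("n2", "n3")),
]

def simulate(netlist, inputs):
    """Evaluate the netlist for one input vector and return all net values."""
    nets = dict(inputs)
    for out, cell, ins in netlist:
        nets[out] = CELLS[cell](*(nets[i] for i in ins))
    return nets

# Exhaustive check of the netlist against the reference model.
for a, b, cin in product((0, 1), repeat=3):
    nets = simulate(NETLIST, {"a": a, "b": b, "cin": cin})
    assert (nets["sum"], nets["cout"]) == rtl_full_adder(a, b, cin)
print("netlist matches the reference model on all input vectors")
```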
In contract theory, signalling is the idea that one party credibly conveys some information about itself to another party. For example, in Michael Spence's job-market signalling model, employees send a signal about their ability level to the employer by acquiring education credentials; the informational value of the credential comes from the fact that the employer believes the credential is positively correlated with greater ability and is difficult for low-ability employees to obtain. Thus the credential enables the employer to reliably distinguish low-ability workers from high-ability workers. Although signalling theory was developed by Michael Spence based on observed knowledge gaps between organisations and prospective employees, its intuitive nature led it to be adapted to many other domains, such as human resource management and financial markets. Signalling took root in the idea of asymmetric information, which says that in some economic transactions, inequalities in access to information upset the normal market for the exchange of goods and services.
In his seminal 1973 article, Michael Spence proposed that two parties could get around the problem of asymmetric information by having one party send a signal that would reveal some piece of relevant information to the other party. That party would interpret the signal and adjust her purchasing behaviour accordingly—usually by offering a higher price than if she had not received the signal. There are, of course, many problems that these parties would run into. How much time, energy, or money should the sender spend on sending the signal? How can the receiver trust the signal to be an honest declaration of information? Assuming there is a signalling equilibrium under which the sender signals and the receiver trusts that information, under what circumstances will that equilibrium break down? In the job market, potential employees seek to sell their services to employers for some wage, or price. Employers are willing to pay higher wages to employ better workers. While the individual may know his or her own level of ability, the hiring firm is not able to observe such an intangible trait—thus there is an asymmetry of information between the two parties.
Education credentials can be used as a signal to the firm, indicating a certain level of ability that the individual may possess. This is beneficial to both parties as long as the signal indicates a desirable attribute; a signal such as a criminal record may not be so desirable. Michael Spence considers hiring as a type of investment under uncertainty, analogous to buying a lottery ticket, and refers to the attributes of an applicant which are observable to the employer as indices. Of these, attributes which the applicant can manipulate are termed signals. Applicant age is thus an index, but is not a signal, since it does not change at the discretion of the applicant. The employer is supposed to have conditional probability assessments of productive capacity, based on previous experience of the market, for each combination of indices and signals. The employer updates those assessments upon observing each employee's characteristics; the paper is concerned with a risk-neutral employer. The offered wage is the expected marginal product.
Signals may be acquired by sustaining signalling costs. If everyone invests in the signal in the same way, the signal cannot be used to discriminate between types, so a critical assumption is made: the costs of signalling are negatively correlated with productivity. The situation as described is a feedback loop: the employer updates his beliefs upon new market information and updates the wage schedule, applicants react by signalling, and recruitment takes place. Michael Spence studies the signalling equilibrium; he began his 1973 model with a hypothetical example: suppose that there are two types of employees, good and bad, and that employers are willing to pay a higher wage to the good type than the bad type. Spence assumes that for employers, there is no real way to tell in advance which employees will be of the good or bad type. Bad employees are not upset about this, because they get a free ride from the hard work of the good employees, but good employees know that they deserve to be paid more for their higher productivity, so they desire to invest in the signal, in this case some amount of education.
But he does make one key assumption: good-type employees pay less for one unit of education than bad-type employees. The cost he refers to is not the cost of tuition and living expenses, sometimes called out-of-pocket expenses, as one could make the argument that higher-ability persons tend to enroll in "better" institutions. Rather, the cost Spence is referring to is the opportunity cost; this is a combination of 'costs', monetary and otherwise, including psychological costs, effort, and so on. Of key importance to the value of the signal is the differing cost structure between "good" and "bad" workers: the cost of obtaining identical credentials is lower for the "good" employee than it is for the "bad" employee. The differing cost structure need not preclude "bad" workers from obtaining the credential. All that is necessary for the signal to have value is that the group with the signal is positively correlated with the otherwise unobservable group of "good" workers. In general, the degree to which a signal is thought to be correlated to unknown or unobservable attributes is directly related to its value.
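The incentive logic of Spence's two-type example can be made concrete with a small numerical sketch. The productivity and cost figures below are illustrative assumptions, not values from the text: the employer pays the high wage to anyone presenting at least y* units of education, and a separating equilibrium requires that the good type finds the credential worth acquiring while the bad type does not.

```python
# A minimal sketch of Spence's two-type job-market signalling example.
# All numbers are illustrative assumptions.
W_HIGH, W_LOW = 2.0, 1.0          # competitive wages equal to each type's productivity

def cost_good(y):                  # cost of y units of education for the good type
    return 0.5 * y

def cost_bad(y):                   # the same credential is costlier for the bad type
    return 1.0 * y

def separating_equilibrium(y_star):
    """Employer pays W_HIGH to applicants with education >= y_star, W_LOW otherwise.
    The signal separates the types if the good type prefers to acquire y_star while
    the bad type prefers not to (the two incentive-compatibility constraints)."""
    good_signals = W_HIGH - cost_good(y_star) >= W_LOW
    bad_abstains = W_LOW >= W_HIGH - cost_bad(y_star)
    return good_signals and bad_abstains

for y in (0.5, 1.0, 1.5, 2.0, 2.5):
    print(y, separating_equilibrium(y))   # True only for 1.0 <= y <= 2.0
```

With these numbers, both conditions hold for any education threshold between 1 and 2 units, illustrating the general point that the signal only has value because it is cheaper for the more productive type to obtain.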
Fast Fourier transform
A fast Fourier transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse. Fourier analysis converts a signal from its original domain to a representation in the frequency domain and vice versa; the DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors; as a result, it manages to reduce the complexity of computing the DFT from O(N²), which arises if one simply applies the definition of the DFT, to O(N log N), where N is the data size. The difference in speed can be enormous for long data sets, where N may be in the thousands or millions. In the presence of round-off error, many FFT algorithms are also much more accurate than evaluating the DFT definition directly. There are many different FFT algorithms based on a wide range of published theories, from simple complex-number arithmetic to group theory and number theory.
Fast Fourier transforms are widely used for applications in engineering and mathematics. The basic ideas were popularized in 1965, but some algorithms had been derived as early as 1805. In 1994, Gilbert Strang described the FFT as "the most important numerical algorithm of our lifetime", and it was included in Top 10 Algorithms of 20th Century by the IEEE journal Computing in Science & Engineering. The best-known FFT algorithms depend upon the factorization of N, but there are FFTs with O(N log N) complexity for all N, even for prime N. Many FFT algorithms depend only on the fact that $e^{-2\pi i/N}$ is a primitive N-th root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms. Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can be adapted for it. The development of fast algorithms for the DFT can be traced to Gauss's unpublished work in 1805, when he needed it to interpolate the orbits of the asteroids Pallas and Juno from sample observations.
His method was similar to the one published in 1965 by Cooley and Tukey, who are generally credited with the invention of the modern generic FFT algorithm. While Gauss's work predated Fourier's results in 1822, he did not analyze the computation time and used other methods to achieve his goal. Between 1805 and 1965, some versions of the FFT were published by other authors. Frank Yates in 1932 published his version, called the interaction algorithm, which provided efficient computation of Hadamard and Walsh transforms. Yates' algorithm is still used in the field of statistical analysis of experiments. In 1942, G. C. Danielson and Cornelius Lanczos published their version to compute the DFT for x-ray crystallography, a field where calculation of Fourier transforms presented a formidable bottleneck. While many methods in the past had focused on reducing the constant factor for O(N²) computation by taking advantage of "symmetries", Danielson and Lanczos realized that one could use the "periodicity" and apply a "doubling trick" to get O(N log N) runtime.
James Cooley and John Tukey published a more general version of the FFT in 1965, applicable when N is composite and not necessarily a power of 2. Tukey came up with the idea during a meeting of President Kennedy's Science Advisory Committee, where a discussion topic involved detecting nuclear tests by the Soviet Union by setting up sensors to surround the country from outside. To analyze the output of these sensors, a fast Fourier transform algorithm would be needed. In discussion with Tukey, Richard Garwin recognized the general applicability of the algorithm not just to national security problems, but also to a wide range of problems, including one of immediate interest to him: determining the periodicities of the spin orientations in a 3-D crystal of Helium-3. Garwin gave Tukey's idea to Cooley for implementation. Cooley and Tukey published the paper in a relatively short time of six months. As Tukey did not work at IBM, the patentability of the idea was doubted and the algorithm went into the public domain, which, through the computing revolution of the next decade, made the FFT one of the indispensable algorithms in digital signal processing.
Let $x_0, \ldots, x_{N-1}$ be complex numbers. The DFT is defined by the formula

$$X_k = \sum_{n=0}^{N-1} x_n e^{-i 2\pi k n / N} = \sum_{n=0}^{N-1} x_n w^{-kn}, \qquad k = 0, \ldots, N-1,$$

where $w = e^{i 2\pi / N}$.
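As a minimal sketch of how this definition relates to a fast algorithm, the following Python fragment computes the transform both directly from the formula above, at O(N²) cost, and with a recursive radix-2 Cooley-Tukey split. The toy FFT assumes N is a power of two; that restriction is an assumption of this sketch, not of FFT algorithms in general.

```python
# Naive O(N^2) DFT straight from the definition, and a recursive radix-2
# Cooley-Tukey FFT (requires len(x) to be a power of two in this toy version).
import cmath

def dft_naive(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft_radix2(x):
    N = len(x)
    if N == 1:
        return list(x)
    even = fft_radix2(x[0::2])    # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])     # DFT of odd-indexed samples
    twiddled = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return [even[k] + twiddled[k] for k in range(N // 2)] + \
           [even[k] - twiddled[k] for k in range(N // 2)]

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(dft_naive(x), fft_radix2(x)))
```

The recursive version exploits the periodicity of the exponential factors: a length-N DFT is rebuilt from two length-N/2 DFTs, which is what drives the cost down from O(N²) to O(N log N).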
Passivity is a property of engineering systems, used in a variety of engineering disciplines, but most commonly found in analog electronics and control systems. A passive component, depending on field, may be either a component that consumes but does not produce energy, or a component that is incapable of power gain. A component that is not passive is called an active component. An electronic circuit consisting of passive components is called a passive circuit and has the same properties as a passive component. Used out of context and without a qualifier, the term passive is ambiguous. Analog designers use this term to refer to incrementally passive components and systems, while control systems engineers will use it to refer to thermodynamically passive ones. Systems for which the small-signal model is not passive are sometimes called locally active. Systems that can generate power about a time-variant unperturbed state are called parametrically active. In control systems and circuit network theory, a passive component or circuit is one that consumes energy but does not produce energy.
Under this methodology, voltage sources and current sources are considered active, while resistors, inductors, tunnel diodes and other dissipative and energy-neutral components are considered passive. Circuit designers will sometimes refer to this class of components as dissipative, or thermodynamically passive. While many books give definitions for passivity, many of these contain subtle errors in how initial conditions are treated, and the definitions do not generalize to all types of nonlinear time-varying systems with memory. Below is a correct, formal definition, taken from Wyatt et al., which also explains the problems with many other definitions. Given an n-port R with a state representation S and initial state x, define the available energy $E_A$ as

$$E_A = \sup_{x \to,\; T \ge 0} \int_0^T -\langle v(t), i(t) \rangle \, dt$$

where the notation $\sup_{x \to,\; T \ge 0}$ indicates that the supremum is taken over all $T \ge 0$ and all admissible pairs with the fixed initial state x. A system is considered passive if $E_A$ is finite for all initial states x. Otherwise, the system is considered active.
Roughly speaking, the inner product $\langle v, i \rangle$ is the instantaneous power, and $E_A$ is the upper bound on the integral of the instantaneous power. This upper bound is the energy available in the system for the particular initial condition x. If, for all possible initial states of the system, the energy available is finite, the system is called passive. In circuit design, passive components refer to ones that are not capable of power gain. Under this definition, passive components include capacitors, resistors, transformers, voltage sources and current sources; they exclude devices like transistors, vacuum tubes, tunnel diodes and glow tubes. Formally, for a memoryless two-terminal element, this means that the current–voltage characteristic is monotonically increasing. For this reason, control systems and circuit network theorists refer to these devices as locally passive, incrementally passive, monotone increasing, or monotonic. It is not clear how this definition would be formalized for multiport devices with memory; as a practical matter, circuit designers use this term informally, so it may not be necessary to formalize it.
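To make the available-energy definition concrete, here is a rough numerical sketch, not taken from the source, that approximates the integral of $-\langle v, i\rangle$ for two memoryless one-port elements driven by one arbitrary admissible current waveform: an ordinary resistor, for which the extractable energy stays bounded, and a hypothetical negative resistance, for which it grows without bound as T increases.

```python
# Illustrative sketch of the available-energy idea for memoryless one-ports.
# The drive waveform and element values are arbitrary assumptions.
import math

def extracted_energy(v_of_i, i_of_t, T, steps=10_000):
    """Approximate the integral from 0 to T of -v(t)*i(t) dt for a memoryless
    element whose voltage is a function of its current, driven by i(t)."""
    dt = T / steps
    return sum(-v_of_i(i_of_t(k * dt)) * i_of_t(k * dt) * dt for k in range(steps))

drive = lambda t: math.sin(t)          # some admissible current waveform
resistor = lambda cur: 2.0 * cur       # v = R*i with R = 2 ohms (dissipates energy)
neg_res = lambda cur: -2.0 * cur       # hypothetical negative resistance (supplies energy)

for T in (10.0, 100.0, 1000.0):
    print(T, extracted_energy(resistor, drive, T), extracted_energy(neg_res, drive, T))
# For the resistor the extracted energy is never positive, consistent with a finite
# (here zero) available energy, so it is passive; for the negative resistance the
# extracted energy grows roughly in proportion to T, so E_A is unbounded and the
# element is active under the definition above.
```

Note that this sketch only evaluates one drive waveform; the formal definition takes the supremum over all admissible pairs, but for a positive resistor every admissible waveform dissipates, so the supremum is still zero.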
This term is used colloquially in a number of other contexts. A passive USB to PS/2 adapter consists of wires, resistors and similar passive components; an active USB to PS/2 adapter consists of logic to translate signals. A passive mixer consists of just resistors, whereas an active mixer includes components capable of gain. In audio work one can find both passive and active converters between balanced and unbalanced lines. A passive bal/unbal converter is just a transformer along with, of course, the requisite connectors, while an active one consists of a differential drive or an instrumentation amplifier. In some informal settings, passivity may refer to the simplicity of the device, although this definition is now universally considered incorrect. Here, devices like diodes would be considered active, and only simple devices like capacitors and resistors are considered passive. In some cases, the term "linear element" may be a more appropriate term than "passive device". In other cases, "solid state device" may be a more appropriate term than "active device".
Passivity, in most cases, can be used to demonstrate that passive circuits will be stable under specific criteria. Note that this only works if only one of the above definitions of passivity is used; if components from the two are mixed, the systems may be unstable under any criteria. In addition, passive circuits will not necessarily be stable under all stability criteria.
A transducer is a device that converts energy from one form to another. Usually, a transducer converts a signal in one form of energy to a signal in another. Transducers are employed at the boundaries of automation and control systems, where electrical signals are converted to and from other physical quantities; the process of converting one form of energy to another is known as transduction. Transducers that convert physical quantities into mechanical ones are called mechanical transducers. Examples are a thermocouple that changes temperature differences into a small voltage, or a linear variable differential transformer used to measure displacement. Transducers can be categorized by which direction information passes through them. A sensor is a transducer that receives and responds to a signal or stimulus from a physical system; it produces a signal which represents information about the system and which is used by some type of telemetry, information, or control system. An actuator is a device responsible for moving or controlling a mechanism or system.
It is controlled by a signal from a control system or a manual control. It is operated by a source of energy, which can be mechanical force, electrical current, hydraulic fluid pressure, or pneumatic pressure, and it converts that energy into motion. An actuator is the mechanism by which a control system acts upon an environment; the control system can be software-based, a human, or any other input. Bidirectional transducers convert physical phenomena to electrical signals and also convert electrical signals into physical phenomena. An example of an inherently bidirectional transducer is an antenna, which can convert radio waves into an electrical signal to be processed by a radio receiver, or translate an electrical signal from a transmitter into radio waves. Another example is voice coils, which are used in loudspeakers to translate an electrical audio signal into sound and in dynamic microphones to translate sound waves into an audio signal. Active sensors require an external power source to operate, called an excitation signal; the signal is modulated by the sensor to produce an output signal.
For example, a thermistor does not generate any electrical signal, but by passing an electric current through it, its resistance can be measured by detecting variations in the current or voltage across the thermistor. Passive sensors, in contrast, generate an electric current in response to an external stimulus which serves as the output signal without the need of an additional energy source; examples are a photodiode, a piezoelectric sensor, or a thermocouple. Some specifications that are used to rate transducers include the following. Dynamic range: this is the ratio between the largest-amplitude signal and the smallest-amplitude signal the transducer can translate; transducers with a larger dynamic range are more "sensitive" and precise (a short worked example in decibels follows the list of transducer types below). Repeatability: this is the ability of the transducer to produce an identical output when stimulated by the same input. Noise: all transducers add some random noise to their output. In electrical transducers this may be electrical noise due to thermal motion of charges in circuits.
Noise corrupts small signals more than large ones. Hysteresis: this is a property in which the output of the transducer depends not only on its current input but also on its past input. For example, an actuator which uses a gear train may have some backlash, which means that if the direction of motion of the actuator reverses, there will be a dead zone before the output of the actuator reverses, caused by play between the gear teeth. Common types of transducer include the following.
Electromagnetic: antennae (convert propagating electromagnetic waves to and from conducted electrical signals), magnetic cartridges (convert relative physical motion to and from electrical signals), tape heads and disk read-and-write heads (convert magnetic fields on a magnetic medium to and from electrical signals), Hall effect sensors (convert a magnetic field level into an electrical signal).
Electrochemical: pH probes, electro-galvanic oxygen sensors, hydrogen sensors.
Electromechanical: accelerometers, air flow sensors, electroactive polymers, rotary and linear motors, galvanometers, linear variable differential transformers and rotary variable differential transformers, load cells (convert force to a mV/V electrical signal using strain gauges), microelectromechanical systems, potentiometers, pressure sensors, string potentiometers, tactile sensors, vibration-powered generators, vibrating structure gyroscopes.
Electroacoustic: loudspeakers and earphones (convert electrical signals into sound), microphones (convert sound into an electrical signal), pickups (convert motion of metal strings into an electrical signal), tactile transducers (convert an electrical signal into vibration), piezoelectric crystals (convert deformations of solid-state crystals to and from electrical signals), geophones (convert ground movement into voltage), gramophone pickups, hydrophones (convert changes in water pressure into an electrical signal), sonar transponders, ultrasonic transceivers and transmitters.
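As a small illustration of the dynamic range specification mentioned above, expressed in decibels, the amplitude values below are arbitrary assumptions: a transducer whose largest usable amplitude is 100,000 times its smallest corresponds to a dynamic range of 100 dB.

```python
# Dynamic range of a transducer expressed in decibels; the figures are assumed.
import math

largest = 5.0       # largest signal amplitude the transducer can translate (e.g. volts)
smallest = 50e-6    # smallest distinguishable amplitude

dynamic_range_db = 20 * math.log10(largest / smallest)
print(f"dynamic range = {dynamic_range_db:.1f} dB")   # prints 100.0 dB
```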
In communication systems, signal processing, and electrical engineering, a signal is a function that "conveys information about the behavior or attributes of some phenomenon". In its most common usage, in electronics and telecommunication, this is a time-varying voltage, current or electromagnetic wave used to carry information. A signal may also be defined as an "observable change in a quantifiable entity". In the physical world, any quantity exhibiting variation in time or variation in space is a signal that might provide information on the status of a physical system, or convey a message between observers, among other possibilities. The IEEE Transactions on Signal Processing states that the term "signal" includes audio, speech, communication, sonar, radar and musical signals. In an effort to redefine a signal, anything that is only a function of space, such as an image, is excluded from the category of signals; it is also stated that a signal may or may not contain any information. In nature, signals can take the form of any action by one organism able to be perceived by other organisms, ranging from the release of chemicals by plants to alert nearby plants of the same type of a predator, to sounds or motions made by animals to alert other animals of the presence of danger or of food.
Signaling occurs in organisms all the way down to the cellular level, with cell signaling. Signaling theory, in evolutionary biology, proposes that a substantial driver for evolution is the ability of animals to communicate with each other by developing ways of signaling. In human engineering, signals are typically provided by a sensor, and often the original form of a signal is converted to another form of energy using a transducer. For example, a microphone converts an acoustic signal to a voltage waveform, and a speaker does the reverse. The formal study of the information content of signals is the field of information theory. The information in a signal is usually accompanied by noise; the term noise means an undesirable random disturbance, but is extended to include unwanted signals conflicting with the desired signal. The prevention of noise is covered in part under the heading of signal integrity. The separation of desired signals from a background is the field of signal recovery, one branch of which is estimation theory, a probabilistic approach to suppressing random disturbances.
Engineering disciplines such as electrical engineering have led the way in the design and implementation of systems involving transmission and manipulation of information. In the latter half of the 20th century, electrical engineering itself separated into several disciplines, specialising in the design and analysis of systems that manipulate physical signals. Definitions specific to sub-fields are common. For example, in information theory, a signal is a codified message, that is, the sequence of states in a communication channel that encodes a message. In the context of signal processing, signals are analog and digital representations of analog physical quantities. In terms of their spatial distributions, signals may be categorized as point source signals and distributed source signals. In a communication system, a transmitter encodes a message to create a signal, carried to a receiver by the communications channel. For example, the words "Mary had a little lamb" might be the message spoken into a telephone.
The telephone transmitter converts the sounds into an electrical signal. The signal is transmitted to the receiving telephone by wires. In telephone networks, signaling, for example common-channel signaling, refers to the phone number and other digital control information rather than the actual voice signal. Signals can be categorized in various ways. The most common distinction is between discrete and continuous spaces that the functions are defined over, for example discrete and continuous time domains. Discrete-time signals are often referred to as time series in other fields. Continuous-time signals are often referred to as continuous signals. A second important distinction is between discrete-valued and continuous-valued signals. In digital signal processing, a digital signal may be defined as a sequence of discrete values associated with an underlying continuous-valued physical process. In digital electronics, digital signals are the continuous-time waveform signals in a digital system, representing a bit-stream. Another important property of a signal is its information content.
Two main types of signals encountered in practice are analog and digital. A digital signal results from approximating an analog signal by its values at particular time instants; digital signals are additionally quantized. An analog signal is any continuous signal for which the time-varying feature of the signal is a representation of some other time-varying quantity, i.e. analogous to another time-varying signal. For example, in an analog audio signal, the instantaneous voltage of the signal varies continuously with the pressure of the sound waves. An analog signal differs from a digital signal, in which the continuous quantity is a representation of a sequence of discrete values which can only take on one of a finite number of values. The term analog signal usually refers to electrical signals. An analog signal uses some property of the medium to convey the signal's information.
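The sampling-and-quantization idea described above can be sketched in a few lines of Python; the sampling rate, number of levels, and test tone below are arbitrary assumptions chosen only for illustration.

```python
# A continuous-time "analog" signal approximated by its values at particular time
# instants (sampling) and then restricted to a finite set of levels (quantization).
import math

def analog(t):
    return math.sin(2 * math.pi * 3 * t)    # a 3 Hz tone, continuous in time and value

fs = 32          # sampling rate in samples per second (assumed)
levels = 8       # 3-bit quantizer (assumed)

samples = [analog(n / fs) for n in range(fs)]             # discrete time, continuous value
step = 2.0 / (levels - 1)                                 # signal range is [-1, 1]
digital = [round((s + 1.0) / step) * step - 1.0 for s in samples]   # discrete value

for n, (s, d) in enumerate(zip(samples, digital)):
    print(f"n={n:2d}  sample={s:+.3f}  quantized={d:+.3f}")
```

The sampled list is discrete in time but still continuous in value; rounding it onto a small, finite set of levels is what makes the result a digital signal.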
A computer is a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs; these programs enable computers to perform a wide range of tasks. A "complete" computer including the hardware, the operating system, and the peripheral equipment required and used for "full" operation can be referred to as a computer system; this term may as well be used for a group of computers that are connected and work together, in particular a computer network or computer cluster. Computers are used as control systems for a wide variety of industrial and consumer devices; this includes simple special-purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design, and general-purpose devices like personal computers and mobile devices such as smartphones. The Internet is run on computers and it connects hundreds of millions of other computers and their users.
Early computers were only conceived as calculating devices. Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century; the first digital electronic calculating machines were developed during World War II. The speed and versatility of computers have been increasing ever since then. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices, output devices, and input/output devices that perform both functions. Peripheral devices allow information to be retrieved from an external source and they enable the result of operations to be saved and retrieved.
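As a toy illustration of the stored-program idea sketched above, a processing element working through operations held in memory while a control step can change the order of execution, the following Python fragment defines an invented, minimal instruction set; it is purely illustrative and does not model any real processor.

```python
# A toy stored-program machine: one memory holds both instructions and data,
# an accumulator does the arithmetic, and a conditional jump lets the control
# unit change the order of operations. The instruction set is invented.
def run(memory):
    acc, pc = 0, 0                       # accumulator and program counter
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JNZ":                # jump if the accumulator is non-zero
            pc = arg if acc != 0 else pc
        elif op == "HALT":
            return memory

# Program: sum the integers 5 down to 1 into memory cell 11.
memory = {
    0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 11),
    3: ("LOAD", 10), 4: ("ADD", 12), 5: ("STORE", 10),
    6: ("JNZ", 0),   7: ("HALT", None),
    10: 5,    # loop counter
    11: 0,    # running total
    12: -1,   # decrement
}
print(run(memory)[11])   # prints 15
```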
According to the Oxford English Dictionary, the first known use of the word "computer" was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: "I haue read the truest computer of Times, the best Arithmetician that euer breathed, he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. During the latter part of this period women were hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations; the Online Etymology Dictionary gives the first attested use of "computer" in the 1640s, meaning "one who calculates". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' is from 1897."
The Online Etymology Dictionary indicates that the "modern use" of the term, to mean "programmable digital electronic computer", dates from 1945 under this name. Devices have been used to aid computation for thousands of years, using one-to-one correspondence with fingers; the earliest counting device was a form of tally stick. Record keeping aids throughout the Fertile Crescent included calculi which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers; the use of counting rods is one example. The abacus was used for arithmetic tasks; the Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest mechanical analog "computer", according to Derek J. de Solla Price.
It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use; the planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD.
The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.