A finite-state machine (FSM), also called a finite-state automaton, finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time; the FSM can change from one state to another in response to some external inputs. An FSM is defined by a list of its states, its initial state, and the conditions for each transition. Finite-state machines are of two types: deterministic finite-state machines and non-deterministic finite-state machines. A deterministic finite-state machine can be constructed equivalent to any non-deterministic one. The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. Simple examples are vending machines, which dispense products when the proper combination of coins is deposited; elevators, whose sequence of stops is determined by the floors requested by riders; traffic lights, which change sequence when cars are waiting; and combination locks, which require the input of combination numbers in the proper order.
The finite-state machine has less computational power than some other models of computation such as the Turing machine. This distinction means there are computational tasks that a Turing machine can do but an FSM cannot; this is because an FSM's memory is limited by the number of states it has. FSMs are studied in the more general field of automata theory. An example of a simple mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry and preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again. Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. There are two possible inputs that affect its state: putting a coin in the slot (coin) and pushing the arm (push).
In the locked state, pushing on the arm has no effect. Putting a coin in – that is, giving the machine a coin input – shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect. However, a customer pushing through the arms, giving a push input, shifts the state back to Locked. The turnstile state machine can be represented by a state transition table, showing for each possible state the transitions between them and the outputs resulting from each input:

Current state   Input   Next state   Output
Locked          coin    Unlocked     Unlocks the turnstile so that the customer can push through
Locked          push    Locked       (none)
Unlocked        coin    Unlocked     (none)
Unlocked        push    Locked       When the customer has pushed through, locks the turnstile

The turnstile state machine can also be represented by a directed graph called a state diagram. Each state is represented by a node. Edges show the transitions from one state to another; each arrow is labeled with the input that triggers the transition. An input that doesn't cause a change of state is represented by a circular arrow returning to the original state. The arrow into the Locked node from the black dot indicates that Locked is the initial state. A state is a description of the status of a system that is waiting to execute a transition. A transition is a set of actions to be executed when a condition is fulfilled or when an event is received.
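To make the table concrete, here is a minimal Python sketch of the turnstile machine; the names TRANSITIONS and run_turnstile are illustrative choices, not part of any standard FSM library.

```python
# A minimal sketch of the turnstile state machine described above.
# The transitions mirror the state transition table.

TRANSITIONS = {
    ("Locked", "coin"):   "Unlocked",  # depositing a coin unlocks the arms
    ("Locked", "push"):   "Locked",    # pushing a locked arm has no effect
    ("Unlocked", "coin"): "Unlocked",  # extra coins have no effect
    ("Unlocked", "push"): "Locked",    # a customer pushing through re-locks
}

def run_turnstile(inputs, state="Locked"):
    """Feed a sequence of 'coin'/'push' inputs; return the final state."""
    for event in inputs:
        state = TRANSITIONS[(state, event)]
    return state

# A coin unlocks the turnstile; a push re-locks it.
assert run_turnstile(["coin", "push"]) == "Locked"
assert run_turnstile(["push", "coin"]) == "Unlocked"
```

The dictionary lookup makes the defining property of an FSM explicit: the next state depends only on the current state and the input.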
For example, when using an audio system to listen to the radio, receiving a "next" stimulus results in moving to the next station. When the system is in the "CD" state, the "next" stimulus results in moving to the next track. Identical stimuli thus trigger different actions depending on the current state. In some finite-state machine representations, it is also possible to associate actions with a state: an entry action, performed when entering the state, and an exit action, performed when exiting the state. Several state transition table types are used; in the most common representation, the combination of current state and input shows the next state. The complete action information is not directly described in the table and can only be added using footnotes. An FSM definition including the full action information is possible using state tables. The Unified Modeling Language (UML) has a notation for describing state machines. UML state machines overcome the limitations of traditional finite-state machines while retaining their main benefits.
UML state machines introduce the new concepts of hierarchically nested states and orthogonal regions, while extending the notion of actions. UML state machines have the characteristics of both Mealy machines and Moore machines: they support actions that depend on both the state of the system and the triggering event, as in Mealy machines, as well as entry and exit actions, which are associated with states rather than transitions, as in Moore machines. The Specification and Description Language (SDL) is a standard from ITU that includes graphical symbols to describe actions in the transition: send an event, receive an event, start a timer, cancel a timer, start another concurrent state machine, and make a decision. SDL embeds basic data types called "Abstract Data Types", an action language, and an execution semantics in order to make the finite-state machine executable. There are a large number of other variants for representing an FSM. In addition to their use in modeling reactive systems, finite-state machines are significant in many other areas, including electrical engineering, linguistics, computer science, philosophy, biology, mathematics, and logic.
A microprocessor is a computer processor that incorporates the functions of a central processing unit on a single integrated circuit, or at most a few integrated circuits. The microprocessor is a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output. Microprocessors contain both combinational logic and sequential digital logic. Microprocessors operate on numbers and symbols represented in the binary number system. The integration of a whole CPU onto a single chip or a few chips greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by automated processes, resulting in a low unit price. Single-chip processors increase reliability because there are many fewer electrical connections that could fail. As microprocessor designs improve, the cost of manufacturing a chip (with smaller components built on a semiconductor chip the same size) generally stays the same, according to Rock's law. Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits.
Microprocessors combined this into one or a few large-scale ICs. Continued increases in microprocessor capacity have since rendered other forms of computers almost completely obsolete, with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers. The complexity of an integrated circuit is bounded by physical limitations on the number of transistors that can be put onto one chip, the number of package terminations that can connect the processor to other parts of the system, the number of interconnections it is possible to make on the chip, and the heat that the chip can dissipate. Advancing technology makes more complex and powerful chips feasible to manufacture. A minimal hypothetical microprocessor might include only an arithmetic logic unit (ALU) and a control logic section. The ALU performs addition and operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation.
The control logic retrieves instruction codes from memory and initiates the sequence of operations required for the ALU to carry out the instruction. A single operation code might affect many individual data paths, registers, and other elements of the processor. As integrated circuit technology advanced, it was feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger, and additional features were added to the processor architecture. Floating-point arithmetic, for example, was often not available on 8-bit microprocessors, but had to be carried out in software. Integration of the floating-point unit, first as a separate integrated circuit and later as part of the same microprocessor chip, sped up floating-point calculations. Occasionally, physical limitations of integrated circuits made such practices as a bit-slice approach necessary. Instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, for example, 32-bit words using integrated circuits with a capacity for only four bits each.
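As a rough illustration of the bit-slice idea, the following Python sketch builds an 8-bit adder from two 4-bit slices, with extra logic passing the carry between slices; the function names and slice width are illustrative assumptions, not a model of any particular bit-slice chip family.

```python
# Toy bit-slice arithmetic: an 8-bit addition carried out by two
# 4-bit "slices", with the carry propagated from low to high slice.

def slice_add(a4, b4, carry_in):
    """One 4-bit ALU slice: add two nibbles plus an incoming carry."""
    raw = a4 + b4 + carry_in
    return raw & 0xF, raw >> 4          # (4-bit result, carry out)

def add8(a, b):
    """Add two 8-bit values using two 4-bit slices in sequence."""
    lo, carry = slice_add(a & 0xF, b & 0xF, 0)        # low nibble slice
    hi, overflow = slice_add(a >> 4, b >> 4, carry)   # high nibble slice
    return (hi << 4) | lo, overflow                   # recombine; carry flag

# Example: 200 + 100 = 300 overflows 8 bits, leaving 44 with carry set,
# just as a status-register carry flag would record.
assert add8(200, 100) == (44, 1)
```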
The ability to put large numbers of transistors on one chip makes it feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory and increases the processing speed of the system for many applications. Processor clock frequency has increased more rapidly than external memory speed, so cache memory is necessary if the processor is not to be delayed by slower external memory. A microprocessor is a general-purpose entity, and several specialized processing devices have followed from it: a digital signal processor (DSP) is specialized for signal processing; graphics processing units (GPUs) are processors designed for real-time rendering of images; and other specialized units exist for video processing and machine vision. Microcontrollers integrate a microprocessor with peripheral devices in embedded systems. Systems on chip (SoCs) integrate one or more microprocessor or microcontroller cores. Microprocessors can be selected for differing applications based on their word size, which is a measure of their complexity.
Longer word sizes allow each clock cycle of a processor to carry out more computation, but correspond to physically larger integrated circuit dies with higher standby and operating power consumption. 4-, 8- or 12-bit processors are widely integrated into microcontrollers operating embedded systems. Where a system is expected to handle larger volumes of data or require a more flexible user interface, 16-, 32- or 64-bit processors are used. An 8- or 16-bit processor may be selected over a 32-bit processor for system-on-a-chip or microcontroller applications that require extremely low-power electronics, or are part of a mixed-signal integrated circuit with noise-sensitive on-chip analog electronics such as high-resolution analog-to-digital converters, or both. Running 32-bit arithmetic on an 8-bit chip could end up using more power, as the chip must execute software with multiple instructions. Thousands of items that were traditionally not computer-related now include microprocessors, from household appliances and cars to tools and toys.
Computational complexity theory
Computational complexity theory focuses on classifying computational problems according to their inherent difficulty, and on relating these classes to each other. A computational problem is a task solved by a computer; a computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e. the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication, the number of gates in a circuit, and the number of processors. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do; the P versus NP problem, one of the seven Millennium Prize Problems, belongs to the field of computational complexity.
Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself.
In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing; the instance is a number and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, the solution is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
When considering computational problems, a problem instance is a string over an alphabet. The alphabet is usually taken to be the binary alphabet {0, 1}, and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Though some proofs of complexity-theoretic theorems assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding; this can be achieved by ensuring that different representations can be transformed into each other efficiently. Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no.
The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is the following: the input is an arbitrary graph, and the problem consists in deciding whether the given graph is connected. The formal language associated with this decision problem is the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings. A function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem—that is, the output isn't just yes or no. Notable examples include the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not the case, since function problems can be recast as decision problems.
For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
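A minimal Python sketch of this recasting, with illustrative function names of our choosing:

```python
# Recasting the multiplication function problem as a decision problem:
# instead of computing a * b, decide membership of a triple (a, b, c)
# in the set {(a, b, c) : a * b = c}.

def multiply(a: int, b: int) -> int:
    """The function problem: a single output for every input."""
    return a * b

def is_product_triple(a: int, b: int, c: int) -> bool:
    """The decision problem: a yes/no answer for the relation a * b = c."""
    return a * b == c

assert is_product_triple(6, 7, 42)       # yes-instance of the language
assert not is_product_triple(6, 7, 41)   # no-instance
```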
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, automated reasoning, and other tasks. As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input, the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. The concept of algorithm has existed for centuries. Greek mathematicians used algorithms in the sieve of Eratosthenes for finding prime numbers, and in the Euclidean algorithm for finding the greatest common divisor of two numbers. The word algorithm itself is derived from the name of the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized Algoritmi.
A partial formalization of what would become the modern concept of algorithm began with attempts to solve the Entscheidungsproblem posed by David Hilbert in 1928. Later formalizations were framed as attempts to define "effective calculability" or "effective method"; those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. The word 'algorithm' has its roots in Latinizing the name of Muhammad ibn Musa al-Khwarizmi in a first step to algorismus. Al-Khwārizmī was a Persian mathematician, astronomer and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic-language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name.
Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through another of his books, the Algebra. In late medieval Latin, algorismus, English 'algorism', the corruption of his name, meant the "decimal number system". In the 15th century, under the influence of the Greek word ἀριθμός 'number', the Latin word was altered to algorithmus, and the corresponding English term 'algorithm' is first attested in the 17th century. In English, algorism was first used in about 1230 and then by Chaucer in 1391; English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu, which begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris. This translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals.
An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. A prototypical example of an algorithm is the Euclidean algorithm to determine the greatest common divisor of two integers. Boolos & Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation: No human being can write fast enough, or long enough, or small enough to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human capable of carrying out only elementary operations on symbols.
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large, thus an algorithm can be an algebraic equation such as y = m + n – two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of: Precise instructions for a fast, efficient, "good" process that specifies the "moves" of "the computer" to find and process arbitrary input integers/symbols m and n, symbols + and =... and "effectively" produce, in a "reasonable" time, output-integer y at a specified place and in a specified format
Computer engineering is a branch of engineering that integrates several fields of computer science and electronic engineering required to develop computer hardware and software. Computer engineers have training in electronic engineering, software design, and hardware-software integration, instead of only software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microcontrollers, personal computers, and supercomputers to circuit design. This field of engineering focuses not only on how computer systems themselves work but also on how they integrate into the larger picture. Usual tasks involving computer engineers include writing software and firmware for embedded microcontrollers, designing VLSI chips, designing analog sensors, designing mixed-signal circuit boards, and designing operating systems. Computer engineers are also suited for robotics research, which relies on using digital systems to control and monitor electrical systems like motors and sensors.
In many institutions of higher learning, computer engineering students are allowed to choose areas of in-depth study in their junior and senior years, because the full breadth of knowledge used in the design and application of computers is beyond the scope of an undergraduate degree. Other institutions may require engineering students to complete one or two years of general engineering before declaring computer engineering as their primary focus. Computer engineering began in 1939 when John Vincent Atanasoff and Clifford Berry began developing the world's first electronic digital computer through physics and electrical engineering. John Vincent Atanasoff was once a physics and mathematics teacher at Iowa State University, and Clifford Berry was a former graduate student in electrical engineering and physics. Together, they created the Atanasoff-Berry computer, known as the ABC, which took five years to complete. While the original ABC was dismantled and discarded in the 1940s, a tribute was made to the late inventors: a replica of the ABC was built in 1997 by a team of researchers and engineers, taking four years and $350,000 to complete.
The first computer engineering degree program in the United States was established in 1971 at Case Western Reserve University in Cleveland, Ohio. As of 2015, there were 250 ABET-accredited computer engineering programs in the U.S. In Europe, accreditation of computer engineering schools is done by a variety of agencies that are part of the EQANIE network. Due to increasing job requirements for engineers who can concurrently design hardware and software and manage all forms of computer systems used in industry, some tertiary institutions around the world offer a bachelor's degree called computer engineering. Both computer engineering and electronic engineering programs include analog and digital circuit design in their curriculum. As with most engineering disciplines, having a sound knowledge of mathematics and science is necessary for computer engineers. Computer engineering is referred to as computer science and engineering at some universities. Most entry-level computer engineering jobs require at least a bachelor's degree in computer engineering.
Students must learn an array of mathematics such as calculus and trigonometry, and take some computer science classes. Sometimes a degree in electronic engineering is accepted, due to the similarity of the two fields. Because hardware engineers work with computer software systems, a strong background in computer programming is necessary. According to BLS, "a computer engineering major is similar to electrical engineering but with some computer science courses added to the curriculum". Some large firms or specialized jobs require a master's degree. It is also important for computer engineers to keep up with rapid advances in technology; therefore, many continue learning throughout their careers. This can be helpful when it comes to learning new skills or improving existing ones. For example, as the relative cost of fixing a bug increases the further along it is in the software development cycle, there can be greater cost savings attributed to developing and testing for quality code as soon as possible in the process, before release.
There are two major specialties in computer engineering: hardware and software. According to the BLS Job Outlook for computer hardware engineers, the expected ten-year growth from 2014 to 2024 for computer hardware engineering was an estimated 3%, and there was a total of 77,700 jobs that same year. This is down from 7% for the 2012 to 2022 BLS estimate, and further down from 9% in the BLS 2010 to 2020 estimate. Today, computer hardware engineering largely overlaps with electronic and computer engineering and has divided into many subcategories, the most significant of which is embedded system design. According to the U.S. Bureau of Labor Statistics, "computer applications software engineers and computer systems software engineers are projected to be among the faster than average growing occupations". The expected ten-year growth as of 2014 for computer software engineering was an estimated seventeen percent, and there was a total of 1,114,000 jobs that same year. This is down from the 2012 to 2022 BLS estimate of 22% for software developers.
It is further down from the 30% BLS estimate for 2010 to 2020. In addition, growing concerns over cybersecurity put computer software engineering well above the average rate of increase for all fields. However, some of the work will be outsourced to foreign countries. Due to this, job growth will not be as fast as during the last decade.
A Turing machine is a mathematical model of computation that defines an abstract machine which manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, given any computer algorithm, a Turing machine capable of simulating that algorithm's logic can be constructed. The machine operates on an infinite memory tape divided into discrete "cells". The machine positions its "head" over a cell and "reads" or "scans" the symbol there. Then, as per the symbol and the machine's present place in a "finite table" of user-specified instructions, the machine (1) writes a symbol in the cell, then (2) either moves the tape one cell left or right, and then (3) either proceeds to a subsequent instruction or halts the computation. The Turing machine was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine). With this model, Turing was able to answer two questions in the negative: does a machine exist that can determine whether any arbitrary machine on its tape is "circular" (e.g., freezes, or fails to continue its computational task), and does a machine exist that can determine whether any arbitrary machine on its tape ever prints a given symbol? Thus, by providing a mathematical description of a simple device capable of arbitrary computations, he was able to prove properties of computation in general—and in particular, the uncomputability of the Entscheidungsproblem.
Thus, Turing machines prove fundamental limitations on the power of mechanical computation. While they can express arbitrary computations, their minimalistic design makes them unsuitable for computation in practice: real-world computers are based on different designs that, unlike Turing machines, use random-access memory. Turing completeness is the ability of a system of instructions to simulate a Turing machine. A programming language that is Turing complete is theoretically capable of expressing all tasks accomplishable by computers. A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine capable of enumerating some arbitrary subset of valid strings of an alphabet. A Turing machine has a tape of infinite length on which it can perform read and write operations. Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program.
This is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing. The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways; this is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine. A more mathematically oriented definition with a similar "universal" nature was introduced by Alonzo Church, whose work on lambda calculus intertwined with Turing's in a formal theory of computation known as the Church–Turing thesis. The thesis states that Turing machines indeed capture the informal notion of effective methods in logic and mathematics, and provide a precise definition of an algorithm or "mechanical procedure". Studying their abstract properties yields many insights into computer science and complexity theory. In his 1948 essay, "Intelligent Machinery", Turing wrote that his machine consisted of: ...an unlimited memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed.
At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behavior of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings. The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6", and so on. In the original article, Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly. More precisely, a Turing machine consists of: a tape divided into cells, one next to the other, where each cell contains a symbol from some finite alphabet. The alphabet contains a special blank symbol and one or more other symbols.
The tape is assumed to be arbitrarily extendable to the left and to the right, i.e. the Turing machine is always supplied with as much tape as it needs for its computation. Cells that have not been written before are assumed to be filled with the blank symbol. In some models the tape has a left end marked with a special symbol, and the tape extends or is indefinitely extensible to the right.
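The description above translates almost directly into code. Below is a minimal Python sketch of a Turing machine simulator; the rule-table format and the example machine (which flips every bit and halts at the first blank) are illustrative choices, not a standard:

```python
# A minimal Turing machine simulator. Rules map
# (state, scanned symbol) -> (symbol to write, head movement, next state).

BLANK = " "

RULES = {
    ("flip", "0"):   ("1", +1, "flip"),    # flip a 0 to 1, move right
    ("flip", "1"):   ("0", +1, "flip"),    # flip a 1 to 0, move right
    ("flip", BLANK): (BLANK, 0, "halt"),   # blank cell: halt
}

def run(tape_str, state="flip"):
    tape = dict(enumerate(tape_str))  # sparse tape; unwritten cells are blank
    head = 0
    while state != "halt":
        symbol = tape.get(head, BLANK)
        write, move, state = RULES[(state, symbol)]
        tape[head] = write            # write a symbol in the scanned cell
        head += move                  # move the head one cell (or stay put)
    return "".join(tape[i] for i in sorted(tape)).strip()

assert run("0110") == "1001"
```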
A thought experiment considers some hypothesis, theory, or principle for the purpose of thinking through its consequences. Given the structure of the experiment, it may not be possible to perform it, and even if it could be performed, there need not be an intention to perform it. The common goal of a thought experiment is to explore the potential consequences of the principle in question: "A thought experiment is a device with which one performs an intentional, structured process of intellectual deliberation in order to speculate, within a specifiable problem domain, about potential consequents for a designated antecedent". Examples of thought experiments include Schrödinger's cat, illustrating quantum indeterminacy through the manipulation of a sealed environment and a tiny bit of radioactive substance, and Maxwell's demon, which attempts to demonstrate the ability of a hypothetical finite being to violate the 2nd law of thermodynamics. The ancient Greek δείκνυμι, or thought experiment, "was the most ancient pattern of mathematical proof", and existed before Euclidean mathematics, where the emphasis was on the conceptual, rather than on the experimental, part of a thought experiment.
The key experiment in the history of modern science is Galileo's demonstration that falling objects must fall at the same rate regardless of their masses. This is widely thought to have been a straightforward physical demonstration, involving climbing up the Leaning Tower of Pisa and dropping two heavy weights off it, whereas in fact it was a logical demonstration, using the 'thought experiment' technique. The 'experiment' is described by Galileo in Discorsi e dimostrazioni matematiche thus: Salviati. If we take two bodies whose natural speeds are different, it is clear that on uniting the two, the more rapid one will be retarded by the slower, and the slower will be somewhat hastened by the swifter. Do you not agree with me in this opinion? Simplicio. You are unquestionably right. Salviati. But if this is true, and if a large stone moves with a speed of, say, eight while a smaller moves with a speed of four, then when they are united, the system will move with a speed less than eight. But the two stones when tied together make a stone larger than that which before moved with a speed of eight. Hence the heavier body moves with less speed than the lighter.
Thus you see how, from your assumption that the heavier body moves more rapidly than the lighter one, I infer that the heavier body moves more slowly. Although the extract does not convey the elegance and power of the 'demonstration' well, it is clear that it is a 'thought' experiment rather than a practical one. It is strange, as Cohen says, that philosophers and scientists alike refuse to acknowledge either Galileo in particular, or the thought experiment technique in general, for its pivotal role in both science and philosophy. Instead, many philosophers prefer to consider 'thought experiments' to be merely the use of a hypothetical scenario to help understand the way things are. Thought experiments have been used in a variety of fields, including philosophy, law, physics, and mathematics. In philosophy, they have been used at least since classical antiquity, some pre-dating Socrates. In law, they were well known to Roman lawyers quoted in the Digest. In physics and other sciences, notable thought experiments date from the 19th and the 20th century, but examples can be found at least as early as Galileo.
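The logical core of Salviati's argument can be compressed into a few lines; the notation below (weights W, speeds v) is our own shorthand for the reasoning, not Galileo's:

```latex
% Galileo's reductio in modern shorthand.
% Hypothesis to be refuted: heavier bodies fall faster.
\begin{align*}
&\textbf{Assume: } W_1 > W_2 \implies v_1 > v_2,
 \qquad \text{say } v_1 = 8,\ v_2 = 4. \\
&\text{United, the slower retards the faster: } 4 < v_{1+2} < 8. \\
&\text{But the united body is heavier than either: } W_1 + W_2 > W_1, \\
&\qquad\text{so the assumption also gives } v_{1+2} > 8. \\
&\text{Contradiction} \implies \text{all bodies must fall at the same rate.}
\end{align*}
```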
Johann Witt-Hansen established that Hans Christian Ørsted was the first to use the Latin-German mixed term Gedankenexperiment, circa 1812. Ørsted was also the first to use its German equivalent, Gedankenversuch, in 1820. Much later, Ernst Mach used the term Gedankenexperiment in a different way, to denote the imaginary conduct of a real experiment that would be subsequently performed as a real physical experiment by his students. Physical and mental experimentation could then be contrasted: Mach asked his students to provide him with explanations whenever the results from their subsequent, physical experiment differed from those of their prior, imaginary experiment. The English term thought experiment was coined from Mach's Gedankenexperiment, and it first appeared in the 1897 English translation of one of Mach's papers. Prior to its emergence, the activity of posing hypothetical questions that employed subjunctive reasoning had existed for a long time; however, people had no way of speaking about it. This helps to explain the wide and diverse range of application of the term "thought experiment" once it had been introduced into English.
Thought experiments, which are well-structured, well-defined hypothetical questions that employ subjunctive reasoning – "What might happen if... " – have been used to pose questions in philosophy at least since Greek antiquity, some pre-dating Socrates. In physics and other sciences, many thought experiments date from the 19th and the 20th century, but examples can be found at least as early as Galileo. In thought experiments we gain new information by rearranging or reorganizing already known empirical data in a new way and drawing new inferences from them, or by looking at these data from a different and unusual perspective. In Galileo's thought experiment, for example, the rearrangement of empirical experience consists of the original idea of combining bodies of different weight.