The electron is a subatomic particle, symbol e− or β−, whose electric charge is negative one elementary charge. Electrons belong to the first generation of the lepton particle family and are thought to be elementary particles because they have no known components or substructure. The electron's mass is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. Because it is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe experimentally than those of other particles such as neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy. Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism and thermal conductivity, and they also participate in gravitational, electromagnetic and weak interactions.
Since an electron has charge, it has a surrounding electric field, and if that electron is moving relative to an observer, the observer will also measure a magnetic field. Electromagnetic fields produced from other sources affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications such as electronics, cathode ray tubes, electron microscopes, radiation therapy, gaseous ionization detectors and particle accelerators. Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force between the positive protons within atomic nuclei and the negative electrons outside them allows the two to combine into atoms.
Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of electrons between two or more atoms is the main cause of chemical bonding. In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms. Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897. Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron. When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons.
The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electrica to refer to those substances with a property similar to that of amber, which attract small objects after being rubbed. Both electric and electricity are derived from the Latin ēlectrum, which came from the Greek word for amber, ἤλεκτρον. In the early 1700s, Francis Hauksbee and French chemist Charles François du Fay independently discovered what they believed were two kinds of frictional electricity: one generated from rubbing glass, the other from rubbing resin. From this, du Fay theorized that electricity consists of two electrical fluids, vitreous and resinous, that are separated by friction and that neutralize each other when combined. American scientist Ebenezer Kinnersley later independently reached the same conclusion. A decade later Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess or deficit.
He gave them the modern charge nomenclature of positive and negative respectively. Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier and which was a deficit. Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges. Beginning in 1846, German physicist Wilhelm Weber theorized that electricity was composed of positively and negatively charged fluids, and that their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion, and he was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis. However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity".
Stoney initially coined the term electrolion in 1881; ten years later, he switched to electron to describe these elementary charges.
Fusion energy gain factor
The fusion energy gain factor, expressed with the symbol Q, is the ratio of fusion power produced in a nuclear fusion reactor to the power required to maintain the plasma in steady state. The condition of Q = 1, when the power being released by the fusion reactions is equal to the required heating power, is referred to as breakeven, or in some sources, scientific breakeven. The power given off by the fusion reactions may be captured within the fuel, leading to self-heating. Most fusion reactions release at least some of their energy in a form that cannot be captured within the plasma, so a system at Q = 1 will cool without external heating. With typical fuels, self-heating in fusion reactors is not expected to match the external sources until at least Q = 5. If Q increases past this point, increasing self-heating eventually removes the need for external heating. At this point the reaction becomes self-sustaining, a condition called ignition. Ignition corresponds to infinite Q and is regarded as desirable for a practical reactor design.
Over time, several related terms have entered the fusion lexicon. As a reactor does not cover its own heating losses until about Q = 5, the term engineering breakeven is sometimes used to describe a reactor that produces enough electricity to provide that heating. Above engineering breakeven a machine would produce more electricity than it uses and could sell that excess. A machine that can sell enough electricity to cover its operating costs, estimated to require at least Q = 20, is sometimes known as economic breakeven. Additionally, fusion fuels, notably tritium, are expensive, so many experiments run on various test gases like hydrogen or deuterium. A reactor running on these fuels that would reach the conditions for breakeven if tritium were introduced is said to be operating at extrapolated breakeven. As of 2017, the record for Q is held by the JET tokamak in the UK, at Q = 16 MW / 24 MW ≈ 0.67, first attained in 1997. ITER was originally designed to reach ignition, but is currently designed to reach Q = 10, producing 500 MW of fusion power from 50 MW of injected thermal power.
The highest record for extrapolated breakeven was posted by the JT-60 device, with Qext = 1.25. Q is the ratio of the power being released by the fusion reactions in a reactor, Pfus, to the constant heating power being supplied, Pheat. However, there are several definitions of breakeven. In 1955, John Lawson was the first to explore the energy balance mechanisms in detail, initially in classified works but eventually published in a now-famous 1957 paper. In this paper he considered and refined work by earlier researchers, notably Hans Thirring, Peter Thonemann, and a review article by Richard Post. Expanding on all of these, Lawson's paper made detailed predictions for the amount of power that would be lost through various mechanisms and compared that to the energy needed to sustain the reaction; this balance is today known as the Lawson criterion. In a successful fusion reactor design, the fusion reactions generate an amount of power designated Pfus; some amount of this energy, Ploss, is lost through a variety of mechanisms, mostly convection of the fuel to the walls of the reactor chamber and various forms of radiation that cannot be captured to generate power.
In order to keep the reaction going, the system has to provide heating to make up for these losses, so that Ploss = Pheat to maintain thermal equilibrium. The most basic definition of breakeven is Q = Pfus / Pheat = 1. Some works refer to this definition as scientific breakeven, but this usage is rare outside the inertial confinement fusion field, where the term is much more commonly used. Since the 1950s, most commercial fusion reactor designs have been based on a mix of deuterium and tritium as their primary fuel. As tritium is radioactive, bioactive and mobile, it represents a significant safety concern and adds to the cost of designing and operating such a reactor. In order to lower costs, many experimental machines are designed to run on test fuels of hydrogen or deuterium alone, leaving out the tritium. In this case, the term extrapolated breakeven is used to define the expected performance of the machine running on D-T fuel based on the performance when running on hydrogen or deuterium alone. The records for extrapolated breakeven are higher than the records for scientific breakeven.
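To make these definitions concrete, here is a minimal sketch (ours, not the article's) that computes Q = Pfus/Pheat and labels it against the approximate milestones quoted above; the helper names and threshold labels are illustrative choices, not a standard API:

```python
# Minimal sketch: the fusion gain factor Q = Pfus / Pheat, labelled
# against the approximate milestones quoted in the text (Q = 1 scientific
# breakeven, Q ~ 5 self-heating rivals external heating, Q ~ 20 economic
# breakeven).

def fusion_gain(p_fus_mw: float, p_heat_mw: float) -> float:
    """Q = Pfus / Pheat."""
    return p_fus_mw / p_heat_mw

def milestone(q: float) -> str:
    if q < 1:
        return "below scientific breakeven"
    if q < 5:
        return "past scientific breakeven"
    if q < 20:
        return "self-heating rivals external heating"
    return "roughly economic-breakeven territory"

q_jet = fusion_gain(16.0, 24.0)     # JET, 1997: 16 MW from 24 MW of heating
print(f"JET 1997: Q = {q_jet:.2f} ({milestone(q_jet)})")

q_iter = fusion_gain(500.0, 50.0)   # ITER design: 500 MW from 50 MW
print(f"ITER design: Q = {q_iter:.0f} ({milestone(q_iter)})")
```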
Both JET and JT-60 have reached values around 1.25 while running on D-D fuel. When running on D-T, only possible in JET, the maximum performance is about half the extrapolated value. Another related term, engineering breakeven, considers the need to extract the energy from the reactor, turn that into electrical energy, and feed part of that back into the heating system, closing the loop. In this case, the basic definition changes by adding additional terms to the Pfus side to consider the efficiencies of these processes. Most fusion reactions release energy in a variety of forms, mostly neutrons and various charged particles like alpha particles. Neutrons are electrically neutral and will travel out of any magnetic confinement fusion design, and in spite of the high densities found in inertial confinement fusion designs, they tend to escape the fuel mass in these designs as well. This means that only the charged particles from the reactions can be captured within the fuel mass and give rise to self-heating. If the fraction of the energy being released in the charged particles is fch, the power in these particles is Pch = fchPfus.
If this self-heating process is perfect, that is, all of Pch is captured in the fuel, then the power available for heating the plasma is the external heating plus this charged-particle power, Pheat + fchPfus.
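As an illustration of the charged-particle fraction, a short sketch using the standard D-T energy split (the 3.5 MeV alpha particle out of 17.6 MeV total stays in the plasma, so fch ≈ 0.2); these particle energies are standard textbook values, assumed here rather than taken from this article:

```python
# Sketch: self-heating power Pch = fch * Pfus for D-T fuel. Of the
# 17.6 MeV released per reaction (standard value, assumed here), the
# 3.5 MeV alpha particle is charged and can stay in the plasma; the
# 14.1 MeV neutron escapes.

E_ALPHA_MEV = 3.5
E_TOTAL_MEV = 17.6

f_ch = E_ALPHA_MEV / E_TOTAL_MEV          # charged-particle fraction, ~0.20
p_fus_mw = 500.0                          # example fusion power (ITER-like)
p_ch_mw = f_ch * p_fus_mw                 # power available for self-heating

print(f"f_ch = {f_ch:.2f}; Pch = {p_ch_mw:.0f} MW of {p_fus_mw:.0f} MW")
```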
In thermodynamics, the term exothermic process describes a process or reaction that releases energy from the system to its surroundings, usually in the form of heat, but also in the form of light, electricity, or sound. Its etymology stems from the Greek prefix έξω ("outside") and the Greek word θερμικός ("thermal"). The term exothermic was first coined by Marcellin Berthelot. The opposite of an exothermic process is an endothermic process, one that absorbs energy in the form of heat. The concept is frequently applied in the physical sciences to chemical reactions, where chemical bond energy is converted to thermal energy. Exothermic and endothermic describe two types of chemical reactions or systems found in nature, as follows. Simply stated, after an exothermic reaction, more energy has been released to the surroundings than was absorbed to initiate and maintain the reaction. An example would be the burning of a candle, wherein the sum of calories produced by combustion exceeds the number of calories absorbed in lighting the flame and in the flame maintaining itself.
On the other hand, in an endothermic reaction or system, energy is taken from the surroundings in the course of the reaction. An example of an endothermic reaction is a first aid cold pack, in which the reaction of two chemicals, or the dissolving of one in another, requires calories from the surroundings, and the reaction cools the pouch and surroundings by absorbing heat from them. An endothermic system is seen in the production of wood: trees absorb radiant energy from the sun and use it in endothermic reactions such as taking apart CO2 and H2O and combining the carbon and hydrogen generated to produce cellulose and other organic chemicals. These products, in the form of wood, may later be burned in a fireplace, producing CO2 and water and releasing energy in the form of heat and light to their surroundings, e.g. to a home's interior and chimney gases. Exothermic refers to a transformation in which a system releases energy to the surroundings, expressed by Q < 0. When the transformation occurs at constant pressure, one has for the enthalpy ΔH < 0, and at constant volume, one has for the internal energy ΔU < 0.
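The sign convention just stated lends itself to a one-line test; in this sketch the sample ΔH values are approximate textbook figures, our assumption rather than data from this article:

```python
# Sketch of the sign convention above: at constant pressure, dH < 0
# marks an exothermic process and dH > 0 an endothermic one.
# Sample enthalpies are approximate textbook values, assumed here.

reactions_kj_per_mol = {
    "combustion of methane": -890.0,       # releases heat
    "dissolving ammonium nitrate": +25.7,  # the cold-pack reaction
}

for name, delta_h in reactions_kj_per_mol.items():
    kind = "exothermic" if delta_h < 0 else "endothermic"
    print(f"{name}: dH = {delta_h:+.1f} kJ/mol ({kind})")
```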
In an adiabatic system, an exothermic process results in an increase in the temperature of the system. In exothermic chemical reactions, the heat released by the reaction takes the form of electromagnetic energy; the transition of electrons from one quantum energy level to another causes light to be released. This light is equivalent in energy to the stabilization energy of the chemical reaction, i.e. the bond energy. The released light can be absorbed by other molecules in solution to give rise to molecular vibrations or rotations, which gives rise to the classical understanding of heat. In contrast, when endothermic reactions occur, energy is absorbed to place an electron in a higher energy state, such that the electron can associate with another atom to form a chemical complex. Net energy is absorbed by an endothermic reaction. In an exothermic reaction, the energy needed to start the reaction is less than the energy subsequently released, so there is a net release of energy.
Conversely, in an endothermic reaction the energy needed to start the reaction exceeds the energy released; this is the physical understanding of endothermic reactions. Some examples of exothermic processes are: combustion of fuels such as wood and oil (petroleum); the thermite reaction; the reaction of alkali metals and other electropositive metals with water; condensation of rain from water vapor; mixing water with strong acids or strong bases; mixing acids and bases; dehydration of carbohydrates by sulfuric acid; the setting of cement and concrete; some polymerisation reactions such as the setting of epoxy resin; the reaction of most metals with halogens or oxygen; nuclear fusion in hydrogen bombs and in stellar cores; and nuclear fission of heavy elements. Chemical exothermic reactions are generally more spontaneous than their counterparts, endothermic reactions. In a thermochemical reaction that is exothermic, the heat may be listed among the products of the reaction. Because of a historical accident, students encounter a source of possible confusion between the terminology of physics and biology: whereas the thermodynamic terms "exothermic" and "endothermic" refer to processes that give out heat energy and processes that absorb heat energy respectively, in biology the sense is inverted.
The metabolic terms "ectothermic" and "endothermic" refer, respectively, to organisms that rely on external heat to achieve a full working temperature and to organisms that produce heat from within as a major factor in controlling their bodily temperature.
In physics, mass–energy equivalence states that anything having mass has an equivalent amount of energy and vice versa, with these fundamental quantities directly relating to one another by Albert Einstein's famous formula E = mc². This formula states that the equivalent energy E can be calculated as the mass m multiplied by the speed of light squared. Likewise, anything having energy exhibits a corresponding mass m given by its energy E divided by the speed of light squared, c². Because the speed of light is a very large number in everyday units, the formula implies that an everyday object at rest with a modest amount of mass has a very large amount of energy intrinsically. Chemical and other energy transformations may cause a system to lose some of its energy content, releasing it for example as the radiant energy of light or as thermal energy. Mass–energy equivalence arose from special relativity as a paradox described by Henri Poincaré. Einstein proposed it on 21 November 1905, in the paper "Does the inertia of a body depend upon its energy-content?", one of his Annus Mirabilis papers.
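To get a feel for the scale of the equivalence, a minimal sketch evaluating E = mc² for an everyday mass (the sample mass is arbitrary; c is the defined SI value):

```python
# Minimal sketch: rest energy E = m * c**2.

C = 299_792_458.0  # speed of light in vacuum, m/s (exact SI value)

def rest_energy(mass_kg: float) -> float:
    """Rest energy of a body of the given mass, in joules."""
    return mass_kg * C**2

print(f"E(1 kg) = {rest_energy(1.0):.3e} J")  # ~8.99e16 J, enormous in everyday terms
```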
Einstein was the first to propose that the equivalence of mass and energy is a general principle and a consequence of the symmetries of space and time. A consequence of the mass–energy equivalence is that if a body is stationary, it still has some internal or intrinsic energy, called its rest energy, corresponding to its rest mass. When the body is in motion, its total energy is greater than its rest energy, and equivalently its total mass is greater than its rest mass. This rest mass is called the intrinsic or invariant mass because it remains the same regardless of this motion, even for the extreme speeds or gravity considered in special and general relativity. The mass–energy formula also serves to convert units of mass to units of energy, no matter what system of measurement units is used. The formula was written in many different notations, and its interpretation and justification was further developed in several steps. In "Does the inertia of a body depend upon its energy content?", Einstein used V to mean the speed of light in a vacuum and L to mean the energy lost by a body in the form of radiation.
The equation E = mc2 was not written as a formula but as a sentence in German saying that "if a body gives off the energy L in the form of radiation, its mass diminishes by L/V2." A remark placed above it noted that the equation was approximated by neglecting "magnitudes of fourth and higher orders" of a series expansion. In May 1907, Einstein explained that the expression for the energy ε of a moving mass point assumes the simplest form when its expression for the state of rest is chosen to be ε0 = μV2, in agreement with the "principle of the equivalence of mass and energy". In addition, Einstein used the formula μ = E0/V2, with E0 being the energy of a system of mass points, to describe the energy and mass increase of that system when the velocity of the differently moving mass points is increased. In June 1907, Max Planck rewrote Einstein's mass–energy relationship as M = (E0 + pV0)/c2, where p is the pressure and V0 the volume, to express the relation between mass, its latent energy, and thermodynamic energy within the body.
Subsequently, in October 1907, this was rewritten as M0 = E0/c2 and given a quantum interpretation by Johannes Stark, who assumed its validity and correctness. In December 1907, Einstein expressed the equivalence in the form M = μ + E0/c2 and concluded: "A mass μ is equivalent, as regards inertia, to a quantity of energy μc2; it appears far more natural to consider every inertial mass as a store of energy." In 1909, Gilbert N. Lewis and Richard C. Tolman used two variations of the formula: m = E/c2 and m0 = E0/c2, with E being the relativistic energy, E0 the rest energy, m the relativistic mass, and m0 the rest mass. The same relations in different notation were used by Hendrik Lorentz in 1913, though he placed the energy on the left-hand side: ε = Mc2 and ε0 = mc2, with ε being the total energy of a moving material point, ε0 its rest energy, M the relativistic mass, and m the invariant mass. In 1911, Max von Laue gave a more comprehensive proof of M0 = E0/c2 from the stress–energy tensor, which was later generalized by Felix Klein.
Einstein returned to the topic once again after World War II, and this time he wrote E = mc2 in the title of an article intended as an explanation for a general reader by analogy. Mass and energy can be seen as two names for the same conserved physical quantity. Thus, the laws of conservation of energy and conservation of mass are equivalent and both hold true. Einstein elaborated in a 1946 essay that "the principle of the conservation of mass proved inadequate in the face of the special theory of relativity. It was therefore merged with the energy conservation principle—just as, about 60 years before, the principle of the conservation of mechanical energy had been combined with the principle of the conservation of heat. We might say that the principle of the conservation of energy, having swallowed up that of the conservation of heat, now proceeded to swallow that of the conservation of mass—and holds the field alone." If the conservation of mass law is interpreted as conservation of rest mass, it does not hold true in special relativity.
The rest energy of a particle can be converted into other forms of energy, so rest mass alone is not a conserved quantity in such processes.
A proton is a subatomic particle, symbol p or p+, with a positive electric charge of +1e (one elementary charge) and a mass slightly less than that of a neutron. Protons and neutrons, each with a mass of approximately one atomic mass unit, are collectively referred to as "nucleons". One or more protons are present in the nucleus of every atom. The number of protons in the nucleus is the defining property of an element and is referred to as the atomic number. Since each element has a unique number of protons, each element has its own unique atomic number. The word proton is Greek for "first", and this name was given to the hydrogen nucleus by Ernest Rutherford in 1920. In previous years, Rutherford had discovered that the hydrogen nucleus could be extracted from the nuclei of nitrogen by atomic collisions. Protons were therefore a candidate to be a fundamental particle, and hence a building block of nitrogen and all other heavier atomic nuclei. In the modern Standard Model of particle physics, protons are hadrons, and like neutrons, the other nucleon, are composed of three quarks.
Although protons were originally considered fundamental or elementary particles, they are now known to be composed of three valence quarks: two up quarks of charge +2/3e and one down quark of charge −1/3e. The rest masses of the quarks contribute only about 1% of a proton's mass, however; the remainder of a proton's mass is due to quantum chromodynamics binding energy, which includes the kinetic energy of the quarks and the energy of the gluon fields that bind the quarks together. Because protons are not fundamental particles, they possess a physical size, though not a definite one. At sufficiently low temperatures, free protons will bind to electrons. However, the character of such bound protons does not change, and they remain protons. A fast proton moving through matter will slow by interactions with electrons and nuclei until it is captured by the electron cloud of an atom. The result is a protonated atom, a chemical compound of hydrogen. In a vacuum, when free electrons are present, a sufficiently slow proton may pick up a single free electron, becoming a neutral hydrogen atom, which is chemically a free radical.
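As a consistency check on those quark charges, the three valence charges sum to the proton's total charge: q = 2(+2/3 e) + (−1/3 e) = +1e.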
Such "free hydrogen atoms" tend to react chemically with many other types of atoms at sufficiently low energies. When free hydrogen atoms react with each other, they form neutral hydrogen molecules, which are the most common molecular component of molecular clouds in interstellar space. Protons are composed of three valence quarks, making them baryons; the two up quarks and one down quark of a proton are held together by the strong force, mediated by gluons. A modern perspective has a proton composed of the valence quarks, the gluons, transitory pairs of sea quarks. Protons have a positive charge distribution which decays exponentially, with a mean square radius of about 0.8 fm. Protons and neutrons are both nucleons, which may be bound together by the nuclear force to form atomic nuclei; the nucleus of the most common isotope of the hydrogen atom is a lone proton. The nuclei of the heavy hydrogen isotopes deuterium and tritium contain one proton bound to one and two neutrons, respectively. All other types of atomic nuclei are composed of two or more protons and various numbers of neutrons.
The concept of a hydrogen-like particle as a constituent of other atoms was developed over a long period. As early as 1815, William Prout proposed that all atoms are composed of hydrogen atoms, based on a simplistic interpretation of early values of atomic weights; this hypothesis was disproved when more accurate values were measured. In 1886, Eugen Goldstein discovered canal rays and showed that they were positively charged particles produced from gases. However, since particles from different gases had different values of charge-to-mass ratio, they could not be identified with a single particle, unlike the negative electrons discovered by J. J. Thomson. Wilhelm Wien in 1898 identified the hydrogen ion as the particle with the highest charge-to-mass ratio in ionized gases. Following the discovery of the atomic nucleus by Ernest Rutherford in 1911, Antonius van den Broek proposed that the place of each element in the periodic table is equal to its nuclear charge; this was confirmed experimentally by Henry Moseley in 1913 using X-ray spectra.
In 1917, Rutherford proved that the hydrogen nucleus is present in other nuclei, a result usually described as the discovery of protons. Rutherford had earlier learned to produce hydrogen nuclei as a type of radiation yielded by the impact of alpha particles on hydrogen gas, and to recognize them by their unique penetration signature in air and their appearance in scintillation detectors. These experiments began when Rutherford noticed that, when alpha particles were shot into air, his scintillation detectors showed the signatures of typical hydrogen nuclei as a product. After experimentation Rutherford traced the reaction to the nitrogen in air, and found that when alphas were shot into pure nitrogen gas, the effect was larger. Rutherford determined that this hydrogen could have come only from the nitrogen, and therefore nitrogen must contain hydrogen nuclei. One hydrogen nucleus was being knocked off by the impact of the alpha particle, producing oxygen-17 in the process; the reaction was 14N + α → 17O + p.
(This reaction would later be observed happening directly in a cloud chamber in 1925.)
Enthalpy, a property of a thermodynamic system, is equal to the system's internal energy plus the product of its pressure and volume. In a system enclosed so as to prevent matter transfer, for processes at constant pressure, the heat absorbed or released equals the change in enthalpy. The unit of measurement for enthalpy in the International System of Units is the joule; other historical conventional units still in use include the calorie. Enthalpy comprises a system's internal energy, the energy required to create the system, plus the amount of work required to make room for it by displacing its environment and establishing its volume and pressure. Enthalpy is defined as a state function that depends only on the prevailing equilibrium state, identified by the system's internal energy, pressure and volume; it is an extensive quantity. Enthalpy is the preferred expression of system energy changes in many chemical and physical measurements at constant pressure, because it simplifies the description of energy transfer.
In a system enclosed so as to prevent matter transfer, at constant pressure, the enthalpy change equals the energy transferred from the environment through heat transfer or work other than expansion work. The total enthalpy, H, of a system cannot be measured directly; the same situation exists in classical mechanics, where only a change or difference in energy carries physical meaning. Enthalpy itself is a thermodynamic potential, so in order to measure the enthalpy of a system, we must refer to a defined reference point. The change ΔH is positive in endothermic reactions, and negative in heat-releasing exothermic processes. For processes under constant pressure, ΔH is equal to the change in the internal energy of the system plus the pressure-volume work p ΔV done by the system on its surroundings. This means that the change in enthalpy under such conditions is the heat absorbed or released by the system through a chemical reaction or by external heat transfer. Enthalpies for chemical substances at constant pressure refer to a standard state: most commonly 1 bar pressure.
Standard state does not, strictly speaking, specify a temperature, but expressions for enthalpy generally reference the standard heat of formation at 25 °C. The enthalpy of ideal gases and of incompressible solids and liquids does not depend on pressure, unlike entropy and Gibbs energy. Real materials at common temperatures and pressures closely approximate this behavior, which simplifies enthalpy calculation and use in practical designs and analyses. The word enthalpy was coined relatively late, in the early 20th century, in analogy with the 19th-century terms energy and entropy. Where energy uses the root of the Greek word ἔργον ("work") to express the idea of "work-content", and entropy uses the Greek word τροπή ("transformation") to express the idea of "transformation-content", by analogy enthalpy uses the root of the Greek word θάλπος ("warmth, heat") to express the idea of "heat-content". The term does in fact stand in for the older term "heat content", a term now deprecated as misleading, because ΔH refers to the amount of heat absorbed in a process at constant pressure only, not in the general case.
Josiah Willard Gibbs used the term "a heat function for constant pressure" for clarity. The introduction of the concept of "heat content" H is associated with Benoît Paul Émile Clapeyron and Rudolf Clausius. The term enthalpy first appeared in print in 1909. It is attributed to Heike Kamerlingh Onnes, who most likely introduced it orally the year before, at the first meeting of the Institute of Refrigeration in Paris. It gained currency only in the 1920s, notably with the Mollier Steam Tables and Diagrams, published in 1927. Until the 1920s, the symbol H was used, somewhat inconsistently, for "heat" in general. The definition of H as limited to enthalpy or "heat content at constant pressure" was formally proposed by Alfred W. Porter in 1922. The enthalpy of a thermodynamic system is defined as H = U + pV, where H is the enthalpy, U is the internal energy of the system, p is the pressure, and V is the volume of the system. Enthalpy is an extensive property; this means that, for homogeneous systems, it is convenient to introduce the specific enthalpy h = H/m, where m is the mass of the system, or the molar enthalpy Hm = H/n, where n is the number of moles.
For inhomogeneous systems the enthalpy is the sum of the enthalpies of the composing subsystems: H = ∑k Hk, where H is the total enthalpy of all the subsystems, k refers to the various subsystems, Hk is the enthalpy of each subsystem, and ∑k denotes the sum over all subsystems.
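A minimal sketch of these definitions, with illustrative numbers of our own choosing (not from the article):

```python
# Sketch: H = U + pV, the specific enthalpy h = H/m, and additivity
# over subsystems. All numeric values are illustrative assumptions.

def enthalpy(u_j: float, p_pa: float, v_m3: float) -> float:
    """H = U + pV, in joules."""
    return u_j + p_pa * v_m3

U, p, V, m = 5.0e5, 1.0e5, 0.1, 0.12   # J, Pa (~1 bar), m^3, kg
H = enthalpy(U, p, V)
print(f"H = {H:.3e} J, h = H/m = {H / m:.3e} J/kg")

# Extensivity: for an inhomogeneous system, H is the sum over subsystems.
subsystems = [(2.0e5, 1.0e5, 0.04), (3.0e5, 1.0e5, 0.06)]
H_total = sum(enthalpy(u, pk, v) for u, pk, v in subsystems)
print(f"sum of subsystem enthalpies = {H_total:.3e} J")  # matches H above
```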
In physics, the kinetic energy of an object is the energy that it possesses due to its motion. It is defined as the work needed to accelerate a body of a given mass from rest to its stated velocity. Having gained this energy during its acceleration, the body maintains this kinetic energy unless its speed changes; the same amount of work is done by the body when decelerating from its current speed to a state of rest. In classical mechanics, the kinetic energy of a non-rotating object of mass m traveling at a speed v is (1/2)mv2. In relativistic mechanics, this is a good approximation only when v is much less than the speed of light. The standard unit of kinetic energy is the joule, while the imperial unit of kinetic energy is the foot-pound. The adjective kinetic has its roots in the Greek word κίνησις (kinesis), meaning "motion". The dichotomy between kinetic energy and potential energy can be traced back to Aristotle's concepts of actuality and potentiality. The principle in classical mechanics that E ∝ mv2 was first developed by Gottfried Leibniz and Johann Bernoulli, who described kinetic energy as the living force, vis viva.
Willem 's Gravesande of the Netherlands provided experimental evidence of this relationship: by dropping weights from different heights into a block of clay, he determined that their penetration depth was proportional to the square of their impact speed. Émilie du Châtelet recognized the implications of the experiment and published an explanation. The terms kinetic energy and work in their present scientific meanings date back to the mid-19th century. Early understandings of these ideas can be attributed to Gaspard-Gustave Coriolis, who in 1829 published the paper titled Du Calcul de l'Effet des Machines outlining the mathematics of kinetic energy. William Thomson, later Lord Kelvin, is given the credit for coining the term "kinetic energy" c. 1849–51. Energy occurs in many forms, including chemical energy, thermal energy, electromagnetic radiation, gravitational energy, electric energy, elastic energy, nuclear energy, and rest energy. These can be categorized in two main classes: potential energy and kinetic energy. Kinetic energy is the movement energy of an object.
Kinetic energy can be transformed into other kinds of energy. Kinetic energy may be best understood by examples that demonstrate how it is transformed to and from other forms of energy. For example, a cyclist uses chemical energy provided by food to accelerate a bicycle to a chosen speed. On a level surface, this speed can be maintained without further work, except to overcome air resistance and friction; the chemical energy has been converted into kinetic energy, the energy of motion, but the process is not efficient and produces heat within the cyclist. The kinetic energy in the moving cyclist and the bicycle can be converted to other forms. For example, the cyclist could encounter a hill just high enough to coast up, so that the bicycle comes to a complete halt at the top; the kinetic energy has now been converted to gravitational potential energy that can be released by freewheeling down the other side of the hill. Since the bicycle lost some of its energy to friction, it never regains all of its speed without additional pedaling.
The energy is not destroyed; it has only been converted to another form by friction. Alternatively, the cyclist could connect a dynamo to one of the wheels and generate some electrical energy on the descent. The bicycle would be traveling slower at the bottom of the hill than without the generator because some of the energy has been diverted into electrical energy. Another possibility would be for the cyclist to apply the brakes, in which case the kinetic energy would be dissipated through friction as heat. Like any physical quantity that is a function of velocity, the kinetic energy of an object depends on the relationship between the object and the observer's frame of reference. Thus, the kinetic energy of an object is not invariant. Spacecraft use chemical energy to launch and gain considerable kinetic energy to reach orbital velocity. In a circular orbit, this kinetic energy remains constant because there is almost no friction in near-earth space. However, it becomes apparent at re-entry, when some of the kinetic energy is converted to heat. If the orbit is elliptical or hyperbolic, kinetic and potential energy are exchanged throughout the orbit.
Without loss or gain, the sum of the kinetic and potential energy remains constant. Kinetic energy can be passed from one object to another. In the game of billiards, the player imposes kinetic energy on the cue ball by striking it with the cue stick. If the cue ball collides with another ball, it slows down and the ball it hit accelerates as the kinetic energy is passed on to it. Collisions in billiards are effectively elastic collisions, in which kinetic energy is preserved. In inelastic collisions, kinetic energy is dissipated in various forms of energy, such as heat, sound and binding energy. Flywheels have been developed as a method of energy storage; this illustrates that kinetic energy can also be stored in rotational motion. Several mathematical descriptions of kinetic energy exist that describe it in the appropriate physical situation. For objects and processes in common human experience, the formula ½mv² given by Newtonian mechanics is suitable. However, if the speed of the object is comparable to the speed of light, relativistic effects become significant and the relativistic formula must be used.
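A minimal sketch comparing the Newtonian formula with the relativistic expression (γ − 1)mc²; the formulas are standard, and the sample mass and speeds are arbitrary choices of ours:

```python
import math

# Sketch: Newtonian kinetic energy (1/2) m v^2 vs the relativistic
# form (gamma - 1) m c^2. The two agree when v << c and diverge as
# v approaches c. Sample values are arbitrary.

C = 299_792_458.0  # speed of light, m/s

def ke_newton(m: float, v: float) -> float:
    return 0.5 * m * v**2

def ke_relativistic(m: float, v: float) -> float:
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

m = 1.0  # kg
for v in (30.0, 0.1 * C, 0.9 * C):
    print(f"v = {v:9.3e} m/s: Newtonian {ke_newton(m, v):.3e} J, "
          f"relativistic {ke_relativistic(m, v):.3e} J")
```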