The rapid neutron-capture process, or r-process, is a set of nuclear reactions that, in nuclear astrophysics, is responsible for creating approximately half of the atomic nuclei heavier than iron, including the entire abundance of the two most neutron-rich stable isotopes of each heavy element. The force between nucleons allows chemical elements heavier than iron to have six to ten stable isotopic forms sharing the same nuclear charge Z but differing in neutron number N; each isotope's natural abundance contributes to the natural abundance of the chemical element. The r-process synthesizes new nuclei of the heaviest four isotopes of each heavy element and is solely responsible for the abundances of its two heaviest isotopes, which are referred to as r-only nuclei; the most abundant of these contribute to the r-process abundance peaks near atomic weights A = 82, A = 130 and A = 196. The r-process entails a succession of rapid neutron captures by one or more heavy seed nuclei, beginning with nuclei in the abundance peak centered on 56Fe.
The captures must be rapid in the sense that the nuclei must not have time to undergo radioactive decay before another neutron arrives to be captured, a sequence halted only when the neutron-rich nuclei cannot physically retain another neutron. The r-process therefore must occur in locations with a high density of free neutrons. Early studies reasoned that 10²⁴ free neutrons per cm³ would be required, at a temperature of about one billion kelvin, in order for the waiting points, at which no more neutrons can be captured, to fall at the mass numbers of the abundance peaks for r-process nuclei; this amounts to roughly a gram of free neutrons in every cubic centimeter, an astonishing number requiring extreme locations. Traditionally this suggested material ejected from the reexpanded core of a core-collapse supernova, or the decompression of neutron-star matter thrown off by a binary neutron star merger; the relative contributions of these sources to the astrophysical abundance of r-process elements are a matter of ongoing research. A limited r-process-like series of neutron captures occurs to a minor extent in thermonuclear weapon explosions.
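The figure above is simple to verify: at 10²⁴ free neutrons per cubic centimeter, the implied mass density follows directly from the neutron rest mass. A minimal sketch (the constant is the standard neutron rest mass, rounded; it is not given in the text):

```python
# Back-of-the-envelope check of the mass density implied by the
# r-process neutron density quoted above (an illustrative sketch).

M_NEUTRON_G = 1.675e-24   # neutron rest mass in grams (standard value, rounded)
N_DENSITY = 1e24          # free neutrons per cubic centimeter

mass_density = N_DENSITY * M_NEUTRON_G  # grams per cm^3

print(f"mass density of free neutrons: {mass_density:.2f} g/cm^3")
```

The result is on the order of one gram per cubic centimeter, consistent with the statement in the text.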
These led to the discovery of the elements einsteinium and fermium in nuclear weapon fallout. The r-process contrasts with the s-process, the other predominant mechanism for the production of heavy elements: nucleosynthesis by means of slow captures of neutrons. The s-process occurs primarily within ordinary asymptotic giant branch (AGB) stars, where the neutron flux is only sufficient to cause neutron captures to recur every 10–100 years, far too slow for the r-process, which requires on the order of 100 captures per second. The s-process is secondary, meaning that it requires pre-existing heavy isotopes as seed nuclei to be converted into other heavy nuclei by a slow sequence of captures of free neutrons; r-process scenarios, by contrast, create their own seed nuclei, so the r-process can proceed in massive stars that contain no heavy seed nuclei. Taken together, the r- and s-processes account for almost the entire abundance of chemical elements heavier than iron; the historical challenge has been to locate physical settings appropriate for their time scales. The need for a physical setting providing rapid capture of neutrons was seen from the relative abundances of isotopes of heavy chemical elements given in a table of abundances by Hans Suess and Harold Urey in 1956.
Their abundance table revealed not only larger-than-average abundances of natural isotopes containing magic numbers of neutrons, but also abundance peaks about 10 amu lighter than those of the isotopes containing magic numbers of neutrons. They realized that captures of free neutrons must be part of any explanation, because of the lack of electric repulsion between nuclei and chargeless neutrons; this phenomenology suggested that the lighter subsidiary abundance peaks could result from radioactive nuclei having the magic neutron numbers but roughly ten fewer protons. Achieving this would require radioactive neutron-rich isotopes to capture another neutron faster than they can undergo beta decay, creating abundance peaks that would subsequently decay toward germanium and platinum, elements prominent near the r-process abundance peaks. According to the nuclear shell model, the radioactive nuclei that would decay into isotopes of these elements must have closed neutron shells near the neutron drip line, where more neutrons cannot be added.
The neutron-capture flow must therefore wait for beta decay at these so-called waiting points, which consequently grow in abundance, like water behind a dam in a river. For those hitherto unexplained abundance peaks, about 10 amu lighter than the s-process abundance peaks, to be created by rapid neutron capture implied that other neutron-rich nuclei would be synthesized by the same process; that process, rapid neutron capture by neutron-rich isotopes, is called the r-process. A table apportioning the heavy isotopes phenomenologically between s-process and r-process isotopes was published in 1957 in the famous B2FH review paper, which named the r-process and outlined the physics that guides it. Alastair G. W. Cameron published a smaller study of the r-process in the same year. The stationary r-process described by the B2FH paper was first demonstrated in a time-dependent calculation at Caltech by Phillip A. Seeger, William A. Fowler and Donald D. Clayton, who found that no single temporal snapshot matched the solar r-process abundances, but that a superposition of exposures of differing duration could.
A levitated dipole is a type of nuclear fusion reactor design using a superconducting torus magnetically levitated inside the reactor chamber. The name refers to the magnetic dipole that forms within the reaction chamber, similar to Earth's or Jupiter's magnetospheres; it is believed that such an apparatus could contain plasma more efficiently than other fusion reactor designs. The Levitated Dipole Experiment (LDX) was funded by the US Department of Energy's Office of Fusion Energy; the machine was run in a collaboration between MIT and Columbia University. Funding for the LDX was ended in November 2011 to concentrate resources on tokamak designs. The Earth's magnetic field is generated by the circulation of charges in the Earth's molten core. The resulting magnetic dipole field forms a shape with magnetic field lines passing through the Earth's center, reaching the surface near the poles and extending far into space above the equator. Charged particles entering the field tend to follow the lines of force, moving toward the poles.
As the particles reach the polar regions, the magnetic field lines begin to cluster together; this increasing field can cause particles below a certain energy threshold to reflect and begin travelling in the opposite direction. Such particles bounce back and forth between the poles until they collide with other particles. Particles with greater energy continue towards the Earth, impacting the atmosphere and causing the aurora. This basic concept is used in the magnetic mirror approach to fusion energy. The mirror uses a solenoid to confine the plasma in the center of a cylinder, with two stronger magnets at either end that force the magnetic lines closer together to create reflecting areas. One of the most promising of the early approaches to fusion, the mirror proved to be "leaky", with the fuel refusing to properly reflect from the ends as the density and energy were increased. Annoyingly, it was the particles with the most energy, those most likely to undergo fusion, that preferentially escaped. Research into large mirror machines ended in the 1980s as it became clear they would not reach fusion breakeven in a reasonably sized device.
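The leakiness described above can be made quantitative: a mirror machine confines only particles whose velocity at the midplane lies outside a "loss cone" whose half-angle depends on the ratio of the end field to the central field. A minimal sketch of that standard relation (the mirror ratio of 4 is an assumed example value, not a figure from the text):

```python
import math

# Illustrative sketch of the magnetic-mirror loss cone. A particle at the
# mirror midplane is reflected only if its pitch angle exceeds the
# loss-cone half-angle, given by sin^2(theta) = B_min / B_max.

def loss_cone_angle_deg(mirror_ratio: float) -> float:
    """Loss-cone half-angle in degrees for B_max / B_min = mirror_ratio."""
    return math.degrees(math.asin(math.sqrt(1.0 / mirror_ratio)))

# Example: a mirror whose end fields are 4x the central field.
angle = loss_cone_angle_deg(4.0)
print(f"loss cone half-angle: {angle:.1f} degrees")  # 30.0 degrees
```

A higher mirror ratio narrows the cone, but no realistic ratio closes it entirely, which is why the highest-energy, most fusion-relevant particles escaped.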
The levitated dipole can be thought of, in some ways, as a toroidal mirror, much more similar to the Earth's field than the linear system in a traditional mirror. In this case, the confinement area is not the linear region between the mirrors, but the toroidal region around the outside of the central magnet, similar to the area around the Earth's equator. Particles in this area that move up or down see increasing magnetic field strength and tend to move back towards the equatorial area again; this gives the system some level of natural stability. Particles with higher energy, the ones that would escape a traditional mirror, instead follow the field lines through the hollow center of the magnet, recirculating back into the equatorial area; this recirculation makes the levitated dipole unique. In other confinement experiments, small fluctuations can cause significant energy loss. By contrast, in a dipolar magnetic field, fluctuations tend to compress the plasma without energy loss; this compression effect was first noticed by Akira Hasegawa after participating in the Voyager 2 encounter with Uranus.
Adapting this concept to a fusion experiment was first proposed by Dr. Jay Kesner and Dr. Michael Mauel in the mid-to-late 1990s; the pair raised money to build the machine. They achieved first plasma on Friday, August 13, 2004 at 12:53 PM, by levitating the dipole magnet and heating the plasma with radio-frequency power. The LDX team subsequently conducted several levitation tests, including a 40-minute suspension of the superconducting coil on February 9, 2007. Shortly after, the coil was damaged in a control test in February 2007 and replaced in May 2007; the replacement coil was inferior: a water-cooled, copper-wound electromagnet. Scientific results, including the observation of an inward turbulent pinch, were reported in Nature Physics. The experiment required a special free-floating electromagnet, which created the unique "toilet-bowl" magnetic field. The magnetic field was made by two counter-wound rings of current; each ring contained a 19-strand niobium-tin Rutherford cable, and these looped around inside an Inconel casing.
The donut-shaped coil was charged using induction. Once charged, it generated a magnetic field for an 8-hour period. Overall, the ring levitated 1.6 meters above a superconducting ring and produced a 5-tesla field. The superconductor was encased inside a vessel of liquid helium, which kept the electromagnet below 10 kelvins. This design is similar to the D20 dipole experiment at Berkeley and the RT-1 experiment at the University of Tokyo. The dipole was suspended inside a mushroom-shaped vacuum chamber, about 5 meters in diameter and roughly 3 meters high. At the base of the chamber was a charging coil, which charged the dipole using induction by exposing it to a varying magnetic field. Next, the dipole was raised into the center of the chamber, which could be done using the field itself. Around the outside of the chamber were Helmholtz coils, used to produce a uniform surrounding magnetic field that would interact with the dipole field. It was within this surrounding field that the plasma formed inside the chamber.
The plasma was formed by heating a low-pressure gas with radio-frequency microwaves in a 17-kilowatt field; the machine was monitored by a suite of diagnostic instruments.
Plasma is one of the four fundamental states of matter, and was first described by chemist Irving Langmuir in the 1920s. Plasma can be artificially generated by heating a neutral gas or subjecting it to a strong electromagnetic field to the point where an ionized gaseous substance becomes electrically conductive and long-range electromagnetic fields dominate the behaviour of the matter. Plasma and ionized gases have properties and display behaviours unlike those of the other states, and the transition between them is largely a matter of nomenclature, subject to interpretation. Depending on the surrounding temperature and density, partially ionized or fully ionized forms of plasma may be produced. Neon signs and lightning are examples of partially ionized plasma; the Earth's ionosphere is a plasma, and the magnetosphere contains plasma in the Earth's surrounding space environment. The interior of the Sun is an example of fully ionized plasma, along with the solar corona and stars. Positive charges in ions are achieved by stripping away electrons orbiting the atomic nuclei, where the total number of electrons removed is related to either increasing temperature or the local density of other ionized matter.
This can be accompanied by the dissociation of molecular bonds, though this process is distinctly different from the chemical processes of ion interactions in liquids or the behaviour of shared ions in metals. The response of plasma to electromagnetic fields is used in many modern technological devices, such as plasma televisions or plasma etching. Plasma may be the most abundant form of ordinary matter in the universe, although this hypothesis is tentative given the existence and unknown properties of dark matter. Plasma is mostly associated with stars, extending to the rarefied intracluster medium and the intergalactic regions. The word plasma comes from Ancient Greek πλάσμα, meaning 'moldable substance' or 'jelly', and describes the behaviour of the ionized atomic nuclei and the electrons within the surrounding region of the plasma. Each of these nuclei is suspended in a movable sea of electrons. Plasma was first identified in a Crookes tube, and so described by Sir William Crookes in 1879; the nature of this "cathode ray" matter was subsequently identified by British physicist Sir J.
J. Thomson in 1897; the term "plasma" itself was coined by Irving Langmuir in 1928. Lewi Tonks and Harold Mott-Smith, both of whom worked with Langmuir in the 1920s, recall that Langmuir first used the word "plasma" in analogy with blood. Mott-Smith recalls, in particular, that the transport of electrons from thermionic filaments reminded Langmuir of "the way blood plasma carries red and white corpuscles and germs." Langmuir described the plasma he observed as follows: "Except near the electrodes, where there are sheaths containing few electrons, the ionized gas contains ions and electrons in about equal numbers so that the resultant space charge is small. We shall use the name plasma to describe this region containing balanced charges of ions and electrons." Plasma is a state of matter in which an ionized gaseous substance becomes electrically conductive to the point that long-range electric and magnetic fields dominate the behaviour of the matter. The plasma state can be contrasted with the other states: solid, liquid, and gas.
Plasma is an electrically neutral medium of unbound positive and negative particles. Although these particles are unbound, they are not "free" in the sense of not experiencing forces. Moving charged particles generate an electric current within a magnetic field, and any movement of a charged plasma particle affects and is affected by the fields created by the other charges. In turn this governs collective behaviour with many degrees of variation. Three factors define a plasma. The plasma approximation: this applies when the plasma parameter, Λ, representing the number of charge carriers within a sphere surrounding a given charged particle, is sufficiently high to shield the electrostatic influence of the particle outside of the sphere. Bulk interactions: the Debye screening length is short compared to the physical size of the plasma; this criterion means that interactions in the bulk of the plasma are more important than those at its edges, where boundary effects may take place. When this criterion is satisfied, the plasma is quasineutral.
Plasma frequency: the electron plasma frequency is large compared to the electron-neutral collision frequency. When this condition is valid, electrostatic interactions dominate over the processes of ordinary gas kinetics. Plasma temperature is measured in kelvin or electronvolts and is, informally, a measure of the thermal kinetic energy per particle. High temperatures are usually needed to sustain ionisation, a defining feature of a plasma; the degree of plasma ionisation is determined by the electron temperature relative to the ionization energy, in a relationship called the Saha equation. At low temperatures, ions and electrons tend to recombine into bound states—atoms—and the plasma will eventually become a gas. In most cases the electrons are close enough to thermal equilibrium that their temperature is well-defined; however, because of the large difference in mass, electrons come to thermodynamic equilibrium among themselves much faster than they come into equilibrium with the ions or neutral atoms.
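The criteria above can be checked numerically for a given plasma. The sketch below uses standard textbook formulas for the Debye length, the plasma parameter, and the electron plasma frequency; the density and temperature are assumed example values typical of a laboratory discharge, not figures from the text:

```python
import math

# Minimal sketch computing the three quantities behind the defining
# plasma criteria, for an assumed laboratory-like plasma.

EPS0 = 8.854e-12      # vacuum permittivity, F/m
E = 1.602e-19         # elementary charge, C
M_E = 9.109e-31       # electron mass, kg

n_e = 1e18            # electron density, m^-3 (assumed example)
T_e_eV = 1.0          # electron temperature, eV (assumed example)
kT = T_e_eV * E       # thermal energy in joules

# Debye screening length: scale beyond which a charge is shielded.
debye = math.sqrt(EPS0 * kT / (n_e * E**2))

# Plasma parameter: number of electrons in a Debye sphere (must be >> 1).
Lambda = (4.0 / 3.0) * math.pi * n_e * debye**3

# Electron plasma frequency: must exceed the electron-neutral collision rate.
omega_pe = math.sqrt(n_e * E**2 / (EPS0 * M_E))

print(f"Debye length     : {debye:.2e} m")
print(f"plasma parameter : {Lambda:.0f}")
print(f"plasma frequency : {omega_pe:.2e} rad/s")
```

For these values the Debye length is micrometres, far smaller than any laboratory device, and the Debye sphere holds over a thousand electrons, so both the bulk-interaction and plasma-approximation criteria are comfortably met.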
A nova or classical nova is a transient astronomical event that causes the sudden appearance of a bright "new" star that fades over several weeks or many months. Novae involve an interaction between two stars that causes the flareup, perceived as a new entity much brighter than the stars involved. The causes of the dramatic appearance of a nova vary, depending on the circumstances of the two progenitor stars. All observed novae involve close binary stars, either a pair of red dwarfs in the process of merging, or a white dwarf and another star. The main sub-classes of novae are classical novae, recurrent novae, and dwarf novae; they are all considered to be cataclysmic variable stars. Luminous red novae share the name and are also cataclysmic variables, but are a different type of event caused by a stellar merger. With similar names are the much more energetic supernovae and kilonovae. Classical nova eruptions are the most common type of nova; they are created in a close binary star system consisting of a white dwarf and either a main sequence, sub-giant, or red giant star.
When the orbital period falls in the range of several days to one day, the white dwarf is close enough to its companion star to start drawing matter from it, which accretes onto the surface of the white dwarf and creates a dense but shallow atmosphere. This atmosphere, mostly hydrogen, is thermally heated by the hot white dwarf until it reaches a critical temperature, causing the ignition of rapid runaway fusion. From the dramatic and sudden energies created, the now hydrogen-burnt atmosphere is expelled into interstellar space; its brightened envelope is seen as the visible light created by the nova event, once mistaken for a "new" star. A few novae produce short-lived nova remnants, lasting for several centuries. Recurrent nova processes are the same as those of the classical nova, except that the fusion ignition may be repetitive because the companion star can again feed the dense atmosphere of the white dwarf. Novae most often occur in the sky along the path of the Milky Way, near the observed galactic centre in Sagittarius.
They occur far more frequently than galactic supernovae, averaging about ten per year. Most are found telescopically, with only about one every twelve to eighteen months reaching naked-eye visibility. Novae reaching first or second magnitude occur only several times per century; the last bright nova was V1369 Centauri, which reached magnitude 3.3 on 14 December 2013. During the sixteenth century, astronomer Tycho Brahe observed the supernova SN 1572 in the constellation Cassiopeia and described it in his book De nova stella. In this work he argued that a nearby object should be seen to move relative to the fixed stars, and that since the nova did not, it had to be far away. Although this event was a supernova and not a nova, the terms were considered interchangeable until the 1930s. After this, novae were classified as classical novae to distinguish them from supernovae, as their causes and energies were thought to be different, based on the observational evidence. Despite the term "stella nova" meaning "new star", novae most often take place as a result of white dwarfs: remnants of old stars.
Evolution of potential novae begins with two main sequence stars in a binary system. One of the two evolves into a red giant, leaving its remnant white dwarf core in orbit with the remaining star. The second star—which may be either a main sequence star or an aging giant—begins to shed its envelope onto its white dwarf companion when it overflows its Roche lobe. As a result, the white dwarf captures matter from the companion's outer atmosphere in an accretion disk, and in turn, the accreted matter falls onto the white dwarf's atmosphere. Because the white dwarf consists of degenerate matter, the accreted hydrogen does not inflate as it heats, but its temperature increases. Runaway fusion occurs when the temperature of this atmospheric layer reaches ~20 million K, initiating nuclear burning via the CNO cycle. Hydrogen fusion may occur in a stable manner on the surface of the white dwarf for a narrow range of accretion rates, giving rise to a super soft X-ray source, but for most binary system parameters, the hydrogen burning is thermally unstable and rapidly converts a large amount of the hydrogen into other, heavier chemical elements in a runaway reaction, liberating an enormous amount of energy.
This blows the remaining gases away from the surface of the white dwarf and produces a bright outburst of light. The rise to peak brightness may be rapid or gradual; this is related to the speed class of the nova. The time taken for a nova to decay by around 2 or 3 magnitudes from maximum optical brightness is used for classification via its speed class. Fast novae take fewer than 25 days to decay by 2 magnitudes, while slow novae take more than 80 days. In spite of their violence, the amount of material ejected in novae is usually only about 1⁄10,000 of a solar mass, quite small relative to the mass of the white dwarf. Furthermore, only about five percent of the accreted mass is fused during the outburst. Nonetheless, this is enough energy to accelerate nova ejecta to velocities as high as several thousand kilometers per second—higher for fast novae than slow ones—with a concurrent rise in luminosity from a few times solar to 50,000–100,000 times solar. In 2010 scientists using NASA's Fermi Gamma-ray Space Telescope discovered that a nova can also emit gamma-rays.
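The speed-class rule above reduces to a simple threshold on t2, the time to decline 2 magnitudes from peak. A minimal sketch (the boundary values of 25 and 80 days come from the text; the "moderate" label for the in-between range is a simplification of the full observational scheme):

```python
# Illustrative sketch of the nova speed-class rule described above:
# novae are classified by t2, the days taken to fade 2 magnitudes.

def nova_speed_class(t2_days: float) -> str:
    """Classify a nova by its 2-magnitude decline time."""
    if t2_days < 25:
        return "fast"
    elif t2_days <= 80:
        return "moderate"
    else:
        return "slow"

print(nova_speed_class(10))   # fast
print(nova_speed_class(100))  # slow
```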
A white dwarf can generate multiple novae over time as additional hydrogen continues to accrete onto its surface from its companion star.
A nova remnant is made up of the material either left behind by the sudden explosive fusion eruption of a classical nova, or by the multiple ejections of a recurrent nova. Over their short lifetimes, nova shells show expansion velocities of around 1000 km/s; their faint nebulosities are illuminated by their progenitor stars via light echoes, as observed in the spherical shell of Nova Persei 1901, or by the energies remaining in expanding bubbles, as with T Pyxidis. Most novae require a close binary system with a white dwarf and a main sequence, sub-giant, or red giant star, or the merging of two red dwarfs, so all nova remnants must be associated with binaries; this theoretically means these nebula shapes might be affected by their central progenitor stars and the amount of matter ejected by the novae. The shapes of these nova nebulae are of much interest to modern astrophysicists. Compared to supernova remnants or planetary nebulae, nova remnants involve much less energy and mass, and they can be observed for only a few centuries.
Examples of novae displaying nebula shells or remnants include GK Per, RR Pic, DQ Her, FH Ser, V476 Cyg, V1974 Cyg, HR Del and V1500 Cyg. Notably, more nova remnants have been found around newly discovered novae, due to improved imaging technology such as CCDs and observations at other wavelengths.
The neutron is a subatomic particle, symbol n or n0, with no net electric charge and a mass slightly larger than that of a proton. Protons and neutrons constitute the nuclei of atoms. Since protons and neutrons behave similarly within the nucleus, and each has a mass of approximately one atomic mass unit, they are both referred to as nucleons; their properties and interactions are described by nuclear physics. The chemical and nuclear properties of the nucleus are determined by the number of protons, called the atomic number, and the number of neutrons, called the neutron number; the atomic mass number is the total number of nucleons. For example, carbon has atomic number 6, and its abundant carbon-12 isotope has 6 neutrons, whereas its rare carbon-13 isotope has 7 neutrons. Some elements occur in nature with only one stable isotope, such as fluorine. Other elements occur with many stable isotopes, such as tin with ten stable isotopes. Within the nucleus, protons and neutrons are bound together through the nuclear force. Neutrons are required for the stability of nuclei, with the exception of the single-proton hydrogen atom.
Neutrons are produced copiously in nuclear fission and fusion. They are a primary contributor to the nucleosynthesis of chemical elements within stars through fission, fusion and neutron capture processes; the neutron is essential to the production of nuclear power. In the decade after the neutron was discovered by James Chadwick in 1932, neutrons were used to induce many different types of nuclear transmutations. With the discovery of nuclear fission in 1938, it was quickly realized that, if a fission event produced neutrons, each of these neutrons might cause further fission events, and so on, in a cascade known as a nuclear chain reaction. These events and findings led to the first self-sustaining nuclear reactor and the first nuclear weapon. Free neutrons, while not directly ionizing atoms, cause ionizing radiation; as such they can be a biological hazard, depending upon dose. A small natural "neutron background" flux of free neutrons exists on Earth, caused by cosmic ray showers and by the natural radioactivity of spontaneously fissionable elements in the Earth's crust.
Dedicated neutron sources like neutron generators, research reactors and spallation sources produce free neutrons for use in irradiation and in neutron scattering experiments. An atomic nucleus is formed by a number of protons, Z, and a number of neutrons, N, bound together by the nuclear force. The atomic number defines the chemical properties of the atom, and the neutron number determines the isotope or nuclide. The terms isotope and nuclide are often used synonymously, but they refer to chemical and nuclear properties, respectively. Strictly speaking, isotopes are two or more nuclides with the same number of protons; the atomic mass number, symbol A, equals Z+N. Nuclides with the same atomic mass number are called isobars. The nucleus of the most common isotope of the hydrogen atom is a lone proton. The nuclei of the heavy hydrogen isotopes deuterium and tritium contain one proton bound to one and two neutrons, respectively. All other types of atomic nuclei are composed of two or more protons and various numbers of neutrons.
The most common nuclide of the common chemical element lead, 208Pb, has 82 protons and 126 neutrons, for example. The table of nuclides comprises all the known nuclides. Though it is not a chemical element, the neutron is included in this table. The free neutron has a mass of 1.674927471×10⁻²⁷ kg, or 1.00866491588 u. The neutron has a mean square radius of about 0.8×10⁻¹⁵ m, or 0.8 fm, and it is a spin-½ fermion. The neutron has no measurable electric charge. With its positive electric charge, the proton is directly influenced by electric fields, whereas the neutron is unaffected by them; the neutron has a magnetic moment, however. The neutron's magnetic moment has a negative value, because its orientation is opposite to the neutron's spin. A free neutron is unstable, decaying to a proton, an electron and an antineutrino with a mean lifetime of just under 15 minutes; this radioactive decay, known as beta decay, is possible because the mass of the neutron is slightly greater than that of the proton. The free proton is stable. Neutrons or protons bound in a nucleus can be stable or unstable, depending on the nuclide.
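The quoted mean lifetime translates directly into survival statistics for a population of free neutrons. A minimal sketch, assuming the commonly cited value of roughly 880 seconds ("just under 15 minutes"):

```python
import math

# Sketch of free-neutron beta decay using the approximate mean lifetime
# quoted above; the elapsed times are example values.

TAU_N = 880.0  # free-neutron mean lifetime, seconds (approximate)

def surviving_fraction(t_seconds: float) -> float:
    """Fraction of free neutrons not yet decayed after t seconds."""
    return math.exp(-t_seconds / TAU_N)

# After one mean lifetime, 1/e (about 37%) of the neutrons remain.
print(f"after 880 s : {surviving_fraction(880):.3f}")
# The half-life is tau * ln(2), roughly 610 s.
print(f"half-life   : {TAU_N * math.log(2):.0f} s")
```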
Beta decay, in which neutrons decay to protons, or vice versa, is governed by the weak force; it requires the emission or absorption of electrons and neutrinos, or their antiparticles. Protons and neutrons behave almost identically under the influence of the nuclear force within the nucleus; the concept of isospin, in which the proton and neutron are viewed as two quantum states of the same particle, is used to model the interactions of nucleons by the nuclear or weak forces. Because of the strength of the nuclear force at short distances, the binding energy of nucleons is more than seven orders of magnitude larger than the electromagnetic energy binding electrons in atoms. Nuclear reactions therefore have an energy density more than ten million times that of chemical reactions. Because of mass–energy equivalence, nuclear binding energies reduce the mass of nuclei. The ability of the nuclear force to store energy arising from the electromagnetic repulsion of nuclear components is the basis for most of the energy that makes nuclear reactors and bombs possible.
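The "more than ten million times" figure can be checked with a back-of-the-envelope comparison of per-reaction energies. The 200 MeV released per U-235 fission is a standard textbook figure; the few-eV chemical energy scale is an assumed typical value; neither number appears in the text:

```python
# Back-of-the-envelope comparison supporting the energy-density claim
# above: nuclear reactions release millions of times more energy per
# reaction than chemical ones.

FISSION_EV = 200e6   # energy released per U-235 fission, eV (standard figure)
CHEMICAL_EV = 4.0    # typical energy per chemical reaction, eV (assumed)

ratio = FISSION_EV / CHEMICAL_EV
print(f"nuclear/chemical energy ratio per reaction: {ratio:.0e}")  # 5e+07
```

The ratio of tens of millions is consistent with the seven-orders-of-magnitude difference in binding energies stated above.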
In nuclear fission, the absorption of a neutron by a heavy nuclide causes the nuclide to become unstable and break into lighter nuclides and additional neutrons.
Stellar nucleosynthesis is the theory explaining the creation of chemical elements by nuclear fusion reactions between atoms within stars. Stellar nucleosynthesis has occurred continuously since the original creation of hydrogen, helium and lithium during the Big Bang. It is a highly predictive theory that today yields excellent agreement between calculations based upon it and the observed abundances of the elements. It explains why the observed abundances of elements in the universe grow over time and why some elements and their isotopes are much more abundant than others. The theory was initially proposed by Fred Hoyle in 1946, who later refined it in 1954. Further advances were made, especially to nucleosynthesis by neutron capture of the elements heavier than iron, by Margaret Burbidge, Geoffrey Burbidge, William Alfred Fowler and Hoyle in their famous 1957 B2FH paper, which became one of the most heavily cited papers in astrophysics history. Stars evolve because of changes in their composition over their lifespans, first by burning hydrogen, then helium, and progressively burning higher elements.
However, this does not by itself alter the abundances of elements in the universe, as the elements are contained within the star. Late in its life, a low-mass star will slowly eject its atmosphere via stellar wind, forming a planetary nebula, while a higher-mass star will eject mass via a sudden catastrophic event called a supernova. The term supernova nucleosynthesis is used to describe the creation of elements during the evolution and explosion of a pre-supernova massive star. Those massive stars are the most prolific source of new isotopes from carbon to nickel. The advanced sequence of burning fuels is driven by gravitational collapse and its associated heating, resulting in the subsequent burning of carbon and silicon. However, most of the nucleosynthesis in the mass range A = 28–56 is caused by the upper layers of the star collapsing onto the core, creating a compressional shock wave that rebounds outward. The shock front briefly raises temperatures by roughly 50%, thereby causing furious burning for about a second. This final burning in massive stars, called explosive nucleosynthesis or supernova nucleosynthesis, is the final epoch of stellar nucleosynthesis.
A stimulus to the development of the theory of nucleosynthesis was the discovery of variations in the abundances of elements found in the universe. The need for a physical description was inspired by the relative abundances of isotopes of the chemical elements in the solar system. Those abundances, when plotted on a graph as a function of the atomic number of the element, have a jagged sawtooth shape that varies by factors of tens of millions. This suggested a natural process, rather than a random one. A second stimulus to understanding the processes of stellar nucleosynthesis occurred during the 20th century, when it was realized that the energy released from nuclear fusion reactions accounted for the longevity of the Sun as a source of heat and light. In 1920, Arthur Eddington, on the basis of the precise measurements of atomic masses by F. W. Aston and a preliminary suggestion by Jean Perrin, proposed that stars obtained their energy from the nuclear fusion of hydrogen to form helium, and raised the possibility that the heavier elements are produced in stars.
This was a preliminary step toward the idea of nucleosynthesis. In 1928, George Gamow derived what is now called the Gamow factor, a quantum-mechanical formula giving the probability of bringing two nuclei sufficiently close for the strong nuclear force to overcome the Coulomb barrier. The Gamow factor was used in the decade that followed by Atkinson and Houtermans, and later by Gamow himself and Edward Teller, to derive the rate at which nuclear reactions would proceed at the high temperatures believed to exist in stellar interiors. In 1939, in a paper entitled "Energy Production in Stars", Hans Bethe analyzed the different possibilities for reactions by which hydrogen is fused into helium and defined two processes. The first, the proton–proton chain reaction, is the dominant energy source in stars with masses up to about the mass of the Sun; the second, the carbon–nitrogen–oxygen cycle, which was also considered by Carl Friedrich von Weizsäcker in 1938, is more important in more massive main-sequence stars.
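The Gamow factor mentioned above can be illustrated for the simplest case, two colliding protons. The sketch below uses the standard Gamow-energy form of the barrier-penetration factor; the 1 keV collision energy is an assumed example on the order of solar-core thermal energies, not a figure from the text:

```python
import math

# Minimal sketch of the Gamow tunneling probability for two protons.
# The penetration factor is written in terms of the Gamow energy E_G,
# with probability exp(-sqrt(E_G / E)) at collision energy E.

ALPHA = 1.0 / 137.036      # fine-structure constant
MU_C2_EV = 469.4e6         # reduced-mass energy of two protons, eV (m_p c^2 / 2)

def gamow_energy_ev(z1: int, z2: int, mu_c2_ev: float) -> float:
    """Gamow energy E_G = 2 * mu c^2 * (pi * alpha * Z1 * Z2)^2."""
    return 2.0 * mu_c2_ev * (math.pi * ALPHA * z1 * z2) ** 2

def tunneling_probability(e_ev: float, e_g_ev: float) -> float:
    """Coulomb-barrier penetration factor exp(-sqrt(E_G / E))."""
    return math.exp(-math.sqrt(e_g_ev / e_ev))

e_g = gamow_energy_ev(1, 1, MU_C2_EV)   # roughly 490 keV for p + p
p = tunneling_probability(1e3, e_g)     # at an assumed 1 keV collision energy
print(f"Gamow energy: {e_g / 1e3:.0f} keV")
print(f"tunneling probability at 1 keV: {p:.1e}")
```

Even at solar-core thermal energies the tunneling probability is tiny, which is why the reaction rate is so exquisitely sensitive to temperature, the point Gamow's formula made quantitative.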
These works concerned the energy generation capable of keeping stars hot. A clear physical description of the proton–proton chain and of the CNO cycle appears in a 1968 textbook. Bethe's two papers did not address the creation of heavier nuclei, however. That theory was begun by Fred Hoyle in 1946 with his argument that a collection of very hot nuclei would assemble thermodynamically into iron. Hoyle followed that in 1954 with a large paper describing how advanced fusion stages within massive stars would synthesize the elements from carbon to iron in mass. Together these papers constitute the founding work of stellar nucleosynthesis; they provided the roadmap to how the most abundant elements on Earth had been synthesized within stars from their initial hydrogen and helium, making clear how those abundant elements increased their galactic abundances as the galaxy aged. Hoyle's theory was later expanded to other processes, beginning with the publication of the 1957 review paper by Burbidge, Burbidge, Fowler and Hoyle.
This review paper collected and refined earlier research into a heavily cited picture that gave promise of accounting for the observed relative abundances of the elements.