A nova remnant is made up of the material left behind by the sudden explosive fusion eruption of a classical nova, or by the multiple ejections of a recurrent nova. Over their short lifetimes, nova shells show expansion velocities of around 1000 km/s. Their faint nebulosities are illuminated by their progenitor stars, either via light echoes, as observed in the spherical shell of Nova Persei 1901, or by the energy remaining in expanding bubbles, as with T Pyxidis. Most novae require a close binary system containing a white dwarf and a main-sequence, sub-giant, or red giant star, or arise from the merging of two red dwarfs, so all nova remnants are associated with binaries; in theory, this means the shapes of these nebulae may be affected by their central progenitor stars and by the amount of matter ejected. The shapes of these nova nebulae are of much interest to modern astrophysicists. Compared to supernova remnants or planetary nebulae, nova remnants involve far less energy and mass, and they can be observed for only a few centuries.
Examples of novae displaying nebular shells or remnants include GK Per, RR Pic, DQ Her, FH Ser, V476 Cyg, V1974 Cyg, HR Del and V1500 Cyg. Notably, more remnants have been found around recently discovered novae, thanks to improved imaging technology such as CCDs and observations at other wavelengths.
A magnetic mirror, known as a magnetic trap in Russia and as a pyrotron in the US, is a type of magnetic confinement device used in fusion power to trap high-temperature plasma using magnetic fields. The mirror was one of the earliest major approaches to fusion power, along with the stellarator and z-pinch machines. In a magnetic mirror, a configuration of electromagnets is used to create an area with an increasing density of magnetic field lines at either end of the confinement area. Particles approaching the ends experience an increasing force that causes them to reverse direction and return to the confinement area. This mirror effect occurs only for particles within a limited range of velocities and angles of approach; those outside the limits escape, making mirrors inherently "leaky". An analysis of early fusion devices by Edward Teller pointed out that the basic mirror concept is inherently unstable. In 1960, Soviet researchers introduced a new "minimum-B" configuration to address this, which was modified by UK researchers into the "baseball coil" and by US researchers into the "yin-yang magnet" layout.
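The "leakiness" described above can be quantified with the standard loss-cone result from mirror physics (not stated explicitly in the text, so treat the formula as background): conservation of a particle's magnetic moment gives sin²θ = 1/R, where R = B_max/B_min is the mirror ratio and θ is the smallest pitch angle that is still confined. A minimal sketch:

```python
import math

def loss_cone_angle_deg(mirror_ratio):
    """Half-angle of the loss cone for a mirror with ratio R = B_max/B_min.

    Conservation of the magnetic moment gives sin^2(theta) = 1/R:
    particles whose velocity makes a smaller angle than theta with
    the field line escape out the ends.
    """
    return math.degrees(math.asin(math.sqrt(1.0 / mirror_ratio)))

# A modest mirror ratio of 4 still loses every particle moving within
# ~30 degrees of the axis, illustrating why mirrors are inherently "leaky".
print(loss_cone_angle_deg(4.0))
```

Raising the mirror ratio narrows the loss cone, but only as the square root, which is one reason the escape of fuel remained a persistent problem.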
Each of these introductions led to further increases in performance, damping out various instabilities, but required ever-larger magnet systems. The tandem mirror concept, developed in the US and Russia at about the same time, offered a way to make energy-positive machines without requiring enormous magnets and power input. By the late 1970s, many of the design problems were considered solved, and Lawrence Livermore Laboratory began the design of the Mirror Fusion Test Facility (MFTF) based on these concepts. The machine was completed in 1986, but by this time experiments on the smaller Tandem Mirror Experiment had revealed new problems. In a round of budget cuts, MFTF was mothballed and eventually scrapped. The mirror approach has since seen less development, in favor of the tokamak, but mirror research continues today in countries like Japan and Russia. A fusion reactor concept called the bumpy torus made use of a series of magnetic mirrors joined in a ring; it was investigated at the Oak Ridge National Laboratory until 1986.
The concept of magnetic-mirror plasma confinement was proposed in the mid-1950s independently by Gersh Budker at the Kurchatov Institute and by Richard F. Post at the Lawrence Livermore National Laboratory in the US. With the formation of Project Sherwood in 1951, Post began development of a small device to test the mirror configuration. It consisted of a linear Pyrex tube with magnets around the outside, arranged in two sets: one set of small magnets spaced evenly along the length of the tube, and another pair of much larger magnets at either end. In 1952 the team was able to demonstrate that plasma within the tube was confined for much longer times when the mirror magnets at the ends were turned on. At the time, Post referred to this device as the "pyrotron". In a now-famous talk on fusion in 1954, Edward Teller noted that any device with convex magnetic field lines would be unstable, a problem today known as the flute instability. The mirror has such a configuration, but continued experiments seemed to suggest that the machines were not suffering from this problem, although there were many more practical issues limiting their performance.
In Russia, the first small-scale mirror was built in 1959 at the Budker Institute of Nuclear Physics in Novosibirsk, and it immediately showed the problem Teller had warned about. To fix the problem, the magnetic field lines should ideally be concave toward the plasma. This was achieved by M. S. Ioffe, who added a series of additional current-carrying bars inside the reactor so that the resulting magnetic field took on the shape of a twisted bow-tie, known as the minimum-B configuration; his team demonstrated that this improved confinement times to the order of milliseconds. The mystery of why the US's simple mirrors were not seeing this problem was resolved at a meeting in 1961, when Lev Artsimovich inquired how the US team had concluded they had stable plasmas lasting on the order of milliseconds; this turned out to rest on the readings of a single diagnostic instrument. When Artsimovich learned they had not accounted for the measurement delay in these instruments, it became clear the US mirrors had been suffering from the instability all along.
With this discovery, "Ioffe bars" were taken up by researchers in the US, the UK, and Japan. A group at the Culham Centre for Fusion Energy noted that the arrangement could be improved by combining the original rings and the bars into a single new arrangement similar to the seam on a tennis ball; this concept was picked up in the US. These "baseball coils" had the great advantage that they left the internal volume of the reactor open, allowing easy access for diagnostic instruments. On the downside, the size of the magnet in comparison to the volume of plasma was inconvenient and required powerful magnets. Post later introduced a further improvement, the "yin-yang coils", which used two C-shaped magnets to produce the same field configuration in a smaller volume. With the major instability addressed, researchers then discovered that the leakiness of the design was far higher than expected; this was traced to a host of newly discovered "microinstabilities" that caused fuel to enter the "escape cone" of the reactor and flow out the ends of the mirror.
Suppressing these new problems filled much of the 1960s, and by the late 1960s magnetic mirror confinement was considered a viable technique for producing fusion energy. In the United States, efforts were funded under the United States Atomic Energy Commission's Project Sherwood. A machine design was first published in 1967; the concept w
Inertial confinement fusion
Inertial confinement fusion (ICF) is a type of fusion energy research that attempts to initiate nuclear fusion reactions by heating and compressing a fuel target, in the form of a pellet that most often contains a mixture of deuterium and tritium. Typical fuel pellets contain around 10 milligrams of fuel. To compress and heat the fuel, energy is delivered to the outer layer of the target using high-energy beams of laser light, electrons, or ions, although for a variety of reasons all ICF devices as of 2015 had used lasers. The heated outer layer explodes outward, producing a reaction force against the remainder of the target that accelerates it inwards, compressing the target. This process is designed to create shock waves; a sufficiently powerful set of shock waves can compress and heat the fuel at the center so much that fusion reactions occur. ICF is one of two major branches of fusion energy research, the other being magnetic confinement fusion. When it was first proposed in the early 1970s, ICF appeared to be a practical approach to power production, and the field flourished.
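To put the 10-milligram pellet in perspective, a rough upper bound on its energy yield can be computed from the D-T reaction energy of about 17.6 MeV per fusion. The numbers below (equimolar D-T, complete burn) are simplifying assumptions for illustration; real capsules burn only a fraction of their fuel:

```python
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13                 # joules per MeV
E_PER_REACTION_MEV = 17.6            # D + T -> He-4 + n
PAIR_MOLAR_MASS_G = 2.014 + 3.016    # one deuteron plus one triton, g/mol

def dt_yield_joules(fuel_mass_g):
    """Idealized energy release if an equimolar D-T mass burns completely."""
    pairs = fuel_mass_g / PAIR_MOLAR_MASS_G * AVOGADRO
    return pairs * E_PER_REACTION_MEV * MEV_TO_J

# A 10 mg pellet, perfectly burned, releases a few gigajoules --
# comparable to the chemical energy in roughly half a barrel of oil.
print(f"{dt_yield_joules(0.010):.2e} J")
```

The gigajoule-scale figure explains why ICF targets must be tiny: the output of even a perfect milligram-scale burn has to remain containable as a small explosion.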
Experiments during the 1970s and '80s demonstrated that the efficiency of these devices was much lower than expected, and that reaching ignition would not be easy. Throughout the 1980s and '90s, many experiments were conducted to understand the complex interaction of high-intensity laser light and plasma; these led to the design of newer, much larger machines that would reach ignition energies. The largest operational ICF experiment is the National Ignition Facility (NIF) in the US, designed using the decades-long experience of earlier experiments. Like those earlier experiments, however, NIF had failed to reach ignition and was, as of 2015, generating about 1⁄3 of the required energy levels. Fusion reactions combine lighter atoms, such as hydrogen, to form larger ones. The reactions take place at such high temperatures that the atoms have been ionized, their electrons stripped off by the heat; the bare nuclei are positively charged and thus repel each other due to the electrostatic force. Overcoming this repulsion costs a considerable amount of energy, known as the Coulomb barrier or fusion barrier energy.
Less energy is needed to cause lighter nuclei to fuse, as they have less charge and thus a lower barrier energy, and when they do fuse, more energy is released. As the mass of the nuclei increases, there is a point where the reaction no longer gives off net energy: the energy needed to overcome the barrier exceeds the energy released in the resulting fusion reaction. The best fuel from an energy perspective is a one-to-one mix of deuterium and tritium; this D-T mix has a low barrier because of its high ratio of neutrons to protons. The presence of neutral neutrons in the nuclei helps pull them together via the nuclear force, while the presence of positively charged protons pushes the nuclei apart via the electrostatic force. Tritium has one of the highest ratios of neutrons to protons of any stable or moderately unstable nuclide: two neutrons and one proton. Adding protons or removing neutrons increases the energy barrier. Even a D-T mix at standard conditions does not undergo fusion; in the hot, dense center of the Sun, the average proton will exist for billions of years before it fuses.
For practical fusion power systems, the reaction rate must be increased by heating the fuel to tens of millions of degrees and/or compressing it to immense pressures. The combination of temperature and pressure required for any particular fuel to fuse is known as the Lawson criterion; these conditions have been known since the 1950s. Meeting the Lawson criterion is difficult on Earth, which explains why fusion research has taken many years to reach its current state of technical prowess. In a hydrogen bomb, the fusion fuel is heated with a separate fission bomb, and a variety of mechanisms transfers the energy of the fission "primary" explosion into the fusion fuel. A primary mechanism is that the flash of x-rays given off by the primary is trapped within the engineered case of the bomb, causing the volume between the case and the bomb to fill with an x-ray "gas". These x-rays evenly illuminate the outside of the fusion section, the "secondary", heating it until it explodes outward. This outward blowoff causes the rest of the secondary to be compressed inward until it reaches the temperature and density where fusion reactions begin.
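The Lawson criterion mentioned above can be sketched numerically. A commonly quoted threshold for D-T fuel near its optimal ion temperature is n·τ_E ≳ 1.5×10²⁰ s/m³; that specific number is an assumption supplied here, not taken from the text, and the check ignores the temperature dependence of the full criterion:

```python
def meets_lawson_dt(density_m3, confinement_s, threshold=1.5e20):
    """Crude Lawson-style check: compare the product of plasma density n
    and energy confinement time tau_E against a commonly quoted D-T
    threshold of ~1.5e20 s/m^3 (an assumed figure, valid only near the
    optimal ion temperature)."""
    return density_m3 * confinement_s >= threshold

# Magnetic confinement regime: modest density held for seconds.
print(meets_lawson_dt(1e20, 3.0))    # True
# Inertial confinement regime: enormous density for a fleeting instant.
print(meets_lawson_dt(1e31, 1e-10))  # True
```

The two calls illustrate the complementary strategies the article contrasts: tokamaks and mirrors trade density for confinement time, while ICF does the opposite.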
The requirement of a fission bomb makes the method impractical for power generation. Not only would the triggers be prohibitively expensive to produce, but there is a minimum size at which such a bomb can be built, defined by the critical mass of the plutonium fuel used; it has proven difficult to build nuclear devices smaller than about 1 kiloton in yield, and the fusion secondary would add to this. Extracting power from the resulting explosions is therefore a difficult engineering problem. One of the PACER participants, John Nuckolls, began to explore what happened to the size of the primary required to start the fusion reaction as the size of the secondary was scaled down. He discovered that as the secondary reaches milligram size, the amount of energy needed to spark it falls into the megajoule range. This was far below what was needed for a bomb, where the primary was in the tera
Stellar nucleosynthesis is the theory explaining the creation of chemical elements by nuclear fusion reactions between atoms within stars. Stellar nucleosynthesis has occurred continuously since the original creation of hydrogen, helium and lithium during the Big Bang, and it is a predictive theory that today yields excellent agreement between calculations based upon it and the observed abundances of the elements. It explains why the observed abundances of elements in the universe grow over time and why some elements and their isotopes are much more abundant than others. The theory was proposed by Fred Hoyle in 1946, who refined it in 1954. Further advances were made, especially to nucleosynthesis by neutron capture of the elements heavier than iron, by Margaret Burbidge, Geoffrey Burbidge, William Alfred Fowler and Hoyle in their famous 1957 B2FH paper, which became one of the most cited papers in astrophysics history. Stars evolve because of changes in their composition over their lifespans, first by burning hydrogen into helium, then progressively burning higher elements.
However, this does not by itself alter the abundances of elements in the universe, as the elements are contained within the star. Late in its life, a low-mass star will eject its atmosphere via stellar wind, forming a planetary nebula, while a higher-mass star will eject mass via a sudden catastrophic event called a supernova. The term supernova nucleosynthesis is used to describe the creation of elements during the evolution and explosion of a pre-supernova massive star; those massive stars are the most prolific source of new isotopes from carbon to nickel. The advanced sequence of burning fuels is driven by gravitational collapse and its associated heating, resulting in the subsequent burning of carbon and silicon. However, most of the nucleosynthesis in the mass range A = 28–56 is caused by the upper layers of the star collapsing onto the core, creating a compressional shock wave rebounding outward. The shock front briefly raises temperatures by roughly 50%, causing furious burning for about a second. This final burning in massive stars, called explosive nucleosynthesis or supernova nucleosynthesis, is the final epoch of stellar nucleosynthesis.
A stimulus to the development of the theory of nucleosynthesis was the discovery of variations in the abundances of elements found in the universe. The need for a physical description was inspired by the relative abundances of isotopes of the chemical elements in the solar system; those abundances, when plotted as a function of the atomic number of the element, have a jagged sawtooth shape that varies by factors of tens of millions, suggesting a natural process rather than a random one. A second stimulus to understanding the processes of stellar nucleosynthesis occurred during the 20th century, when it was realized that the energy released from nuclear fusion reactions accounted for the longevity of the Sun as a source of heat and light. In 1920, Arthur Eddington, on the basis of the precise measurements of atomic masses by F. W. Aston and a preliminary suggestion by Jean Perrin, proposed that stars obtained their energy from nuclear fusion of hydrogen to form helium, and raised the possibility that the heavier elements are produced in stars.
This was a preliminary step toward the idea of nucleosynthesis. In 1928, George Gamow derived what is now called the Gamow factor, a quantum-mechanical formula giving the probability of bringing two nuclei sufficiently close for the strong nuclear force to overcome the Coulomb barrier. The Gamow factor was used in the decade that followed by Atkinson and Houtermans, and by Gamow himself and Edward Teller, to derive the rate at which nuclear reactions would proceed at the high temperatures believed to exist in stellar interiors. In 1939, in a paper entitled "Energy Production in Stars", Hans Bethe analyzed the different possibilities for reactions by which hydrogen is fused into helium; he identified two processes. The first, the proton–proton chain reaction, is the dominant energy source in stars with masses up to about the mass of the Sun; the second, the carbon–nitrogen–oxygen cycle, considered by Carl Friedrich von Weizsäcker in 1938, is more important in more massive main-sequence stars.
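The Gamow factor described above can be sketched with the standard low-energy tunnelling approximation, P ≈ exp(−√(E_G/E)), where the Gamow energy is E_G = 2 m_r c² (π α Z₁ Z₂)². The formula and constants below are textbook background rather than anything stated in the article:

```python
import math

ALPHA = 1.0 / 137.036        # fine-structure constant
PROTON_MC2_KEV = 938272.0    # proton rest energy, keV

def gamow_suppression(energy_kev, z1=1, z2=1,
                      reduced_mc2_kev=PROTON_MC2_KEV / 2):
    """Coulomb-barrier tunnelling suppression exp(-sqrt(E_G / E)) for two
    bare nuclei, with Gamow energy E_G = 2 * m_r c^2 * (pi * alpha * Z1 * Z2)^2.
    Defaults model two protons (reduced mass = half a proton mass)."""
    e_gamow = 2.0 * reduced_mc2_kev * (math.pi * ALPHA * z1 * z2) ** 2
    return math.exp(-math.sqrt(e_gamow / energy_kev))

# At a solar-core-like thermal energy of ~1 keV, the barrier suppresses
# proton-proton tunnelling by roughly ten orders of magnitude, which is
# why the average solar proton waits billions of years to fuse.
print(gamow_suppression(1.0))
```

The steep energy dependence of this factor is what Atkinson, Houtermans, Gamow and Teller folded into thermal averages to obtain stellar reaction rates.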
These works concerned the energy generation capable of keeping stars hot; a clear physical description of the proton–proton chain and of the CNO cycle appears in a 1968 textbook. Bethe's two papers did not address the creation of heavier nuclei, however. That theory was begun by Fred Hoyle in 1946 with his argument that a collection of very hot nuclei would assemble thermodynamically into iron. Hoyle followed that in 1954 with a large paper describing how advanced fusion stages within massive stars would synthesize the elements from carbon to iron. This was the first work on stellar nucleosynthesis; together with Hoyle's 1954 paper, it provided the roadmap to how the most abundant elements on Earth had been synthesized within stars from their initial hydrogen and helium, making clear how those abundant elements increased their galactic abundances as the galaxy aged. Hoyle's theory was expanded to other processes, beginning with the publication of the 1957 review paper by Burbidge, Burbidge, Fowler and Hoyle.
This review paper collected and refined earlier research into a widely cited picture that gave promise of accounting for the observed relative abundances of the elements.
A nuclear reactor, formerly known as an atomic pile, is a device used to initiate and control a self-sustained nuclear chain reaction. Nuclear reactors are used at nuclear power plants for electricity generation and in the propulsion of ships. Heat from nuclear fission is passed to a working fluid, which typically runs through steam turbines; these either drive a ship's propellers or turn electrical generators' shafts. Nuclear-generated steam can in principle be used for industrial process heat or for district heating. Some reactors are used to produce isotopes for medical and industrial use, or for production of weapons-grade plutonium, and some are run only for research. As of early 2019, the IAEA reported 454 nuclear power reactors and 226 nuclear research reactors in operation around the world. Just as conventional power stations generate electricity by harnessing the thermal energy released from burning fossil fuels, nuclear reactors convert the energy released by controlled nuclear fission into thermal energy for further conversion to mechanical or electrical forms. When a large fissile atomic nucleus such as uranium-235 or plutonium-239 absorbs a neutron, it may undergo nuclear fission.
The heavy nucleus splits into two or more lighter nuclei, releasing kinetic energy, gamma radiation, and free neutrons. A portion of these neutrons may be absorbed by other fissile atoms and trigger further fission events, which release more neutrons, and so on; this is known as a nuclear chain reaction. To control such a chain reaction, neutron poisons and neutron moderators can change the portion of neutrons that will go on to cause more fission, and nuclear reactors have automatic and manual systems to shut the fission reaction down if monitoring detects unsafe conditions. Commonly used moderators include regular water, solid graphite, and heavy water; some experimental types of reactor have used beryllium, and hydrocarbons have been suggested as another possibility. The reactor core generates heat in a number of ways: the kinetic energy of fission products is converted to thermal energy when these nuclei collide with nearby atoms, and the reactor absorbs some of the gamma rays produced during fission and converts their energy into heat.
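The chain reaction described above is often summarized by the effective multiplication factor k, the average number of neutrons from one fission that cause another fission. A toy generation-by-generation model (an illustration only, ignoring delayed neutrons, geometry, and leakage) shows how poisons and control rods tame the reaction by nudging k:

```python
def neutron_population(k, generations, n0=1000):
    """Idealized neutron count per generation for an effective
    multiplication factor k. A toy point-kinetics sketch: each
    generation simply multiplies the previous count by k."""
    counts = [n0]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

print(neutron_population(0.9, 3))   # k < 1: subcritical, population dies away
print(neutron_population(1.0, 3))   # k = 1: critical, steady population
print(neutron_population(1.1, 3))   # k > 1: supercritical, exponential growth
```

Control rods and neutron poisons lower k below 1 to shut the reaction down; withdrawing them raises k slightly above 1 to increase power.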
Heat is also produced by the radioactive decay of fission products and of materials that have been activated by neutron absorption; this decay heat source will remain for some time even after the reactor is shut down. A kilogram of uranium-235 converted via nuclear processes releases approximately three million times more energy than a kilogram of coal burned conventionally. A nuclear reactor coolant (usually water, but sometimes a gas, a liquid metal, or molten salt) is circulated past the reactor core to absorb the heat that it generates; the heat is carried away from the reactor and is then used to generate steam. Most reactor systems employ a cooling system that is physically separated from the water that will be boiled to produce pressurized steam for the turbines, as in the pressurized water reactor. However, in some reactors the water for the steam turbines is boiled directly by the reactor core. The rate of fission reactions within a reactor core can be adjusted by controlling the quantity of neutrons that are able to induce further fission events.
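The uranium-versus-coal figure can be checked with round numbers: roughly 200 MeV is released per U-235 fission, and a typical energy density for coal is about 24 MJ/kg. Both values are standard textbook figures assumed here for the arithmetic:

```python
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13  # joules per MeV

# ~200 MeV per U-235 fission and ~24 MJ/kg for coal are round,
# assumed figures, used only to reproduce the comparison in the text.
fissions_per_kg = 1000.0 / 235.0 * AVOGADRO      # U-235 atoms in 1 kg
uranium_j_per_kg = fissions_per_kg * 200.0 * MEV_TO_J
coal_j_per_kg = 24e6

print(uranium_j_per_kg / coal_j_per_kg)  # roughly 3 million
```

The ratio comes out near 3×10⁶, consistent with the "three million times" figure quoted above.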
Nuclear reactors employ several methods of neutron control to adjust the reactor's power output. Some of these methods arise from the physics of radioactive decay and are simply accounted for during the reactor's operation, while others are mechanisms engineered into the reactor design for a distinct purpose. The fastest method for adjusting levels of fission-inducing neutrons in a reactor is via movement of the control rods, which are made of neutron poisons and therefore absorb neutrons. When a control rod is inserted deeper into the reactor, it absorbs more neutrons than the material it displaces (often the moderator); this leaves fewer neutrons available to cause fission and reduces the reactor's power output. Conversely, extracting the control rod results in an increase in the rate of fission events and an increase in power. The physics of radioactive decay also affects neutron populations in a reactor: one such process is delayed neutron emission by a number of neutron-rich fission isotopes. These delayed neutrons account for about 0.65% of the total neutrons produced in fission, with the remainder released promptly upon fission.
The fission products which produce delayed neutrons have half-lives for their decay by neutron emission that range from milliseconds to as long as several minutes, so considerable time is required to determine exactly when a reactor reaches the critical point. Keeping the reactor in the zone of chain reactivity where delayed neutrons are necessary to achieve criticality allows mechanical devices or human operators to control the chain reaction in "real time"; the point beyond this, where delayed neutrons are no longer required to maintain criticality, is known as the prompt critical point. There is a scale for describing criticality in numerical form, in which bare criticality is known as zero dollars and the prompt critical point is one dollar; other points in the process are interpolated in cents. In some reactors, the coolant also acts as a neutron moderator. A moderator increases the power of the reactor by causin
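The dollars-and-cents scale described above can be sketched with the standard definitions: reactivity ρ = (k − 1)/k, and one dollar equals the delayed-neutron fraction β (about 0.65% for U-235, as the text notes). Treat this as a minimal illustration of the unit, not reactor engineering:

```python
BETA = 0.0065  # delayed-neutron fraction for U-235 fission (~0.65%)

def reactivity_in_dollars(k_eff):
    """Express reactivity rho = (k - 1) / k in dollars, where $0.00 is
    bare criticality and $1.00 (rho == beta) is the prompt critical point.
    Intermediate values are read in cents."""
    rho = (k_eff - 1.0) / k_eff
    return rho / BETA

print(f"${reactivity_in_dollars(1.0):.2f}")      # $0.00  (bare critical)
print(f"${reactivity_in_dollars(1.00655):.2f}")  # $1.00  (prompt critical)
```

Between zero and one dollar, the slow delayed neutrons dominate the response time, which is what gives operators and machinery time to react.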
A nova or classical nova is a transient astronomical event that causes the sudden appearance of a bright "new" star that slowly fades over several weeks or many months. Novae involve an interaction between two stars that causes the flare-up perceived as a new entity much brighter than the stars involved. The causes of the dramatic appearance of a nova vary, depending on the circumstances of the two progenitor stars. All observed novae involve close binary stars, either a pair of red dwarfs in the process of merging, or a white dwarf and another star. The main sub-classes of novae are classical novae, recurrent novae, and dwarf novae, all of which are considered cataclysmic variable stars. Luminous red novae share the name and are also cataclysmic variables, but they are a different type of event caused by a stellar merger; the much more energetic supernovae and kilonovae likewise have similar names. Classical nova eruptions are the most common type of nova; they are created in a close binary star system consisting of a white dwarf and either a main-sequence, sub-giant, or red giant star.
When the orbital period falls in the range of several days to one day, the white dwarf is close enough to its companion star to start drawing accreted matter onto its surface, creating a dense but shallow atmosphere. This atmosphere is mostly hydrogen and is thermally heated by the hot white dwarf, eventually reaching a critical temperature that causes rapid runaway ignition by fusion. The dramatic and sudden energies created expel the now hydrogen-burnt atmosphere into interstellar space; its brightened envelope is seen as the visible light of the nova event, historically mistaken for a "new" star. A few novae produce short-lived nova remnants, lasting for several centuries. Recurrent nova processes are the same as for the classical nova, except that the fusion ignition may repeat because the companion star can again feed the dense atmosphere of the white dwarf. Novae most often occur in the sky along the path of the Milky Way, near the observed galactic centre in Sagittarius.
They occur far more frequently than galactic supernovae, averaging about ten per year. Most are found telescopically, with perhaps only one every twelve to eighteen months reaching naked-eye visibility. Novae reaching first or second magnitude occur only several times per century; the last bright nova was V1369 Centauri, which reached magnitude 3.3 on 14 December 2013. During the sixteenth century, the astronomer Tycho Brahe observed the supernova SN 1572 in the constellation Cassiopeia and described it in his book De nova stella. In this work he argued that a nearby object should be seen to move relative to the fixed stars, and that the nova therefore had to be very far away. Although this event was a supernova and not a nova, the terms were considered interchangeable until the 1930s; after that, novae were classified as classical novae to distinguish them from supernovae, as their causes and energies were thought to be different, based on the observational evidence. Despite the term "stella nova" meaning "new star", novae most often take place as a result of white dwarfs: remnants of old stars.
Evolution of potential novae begins with two main-sequence stars in a binary system. One of the two evolves into a red giant, leaving its remnant white dwarf core in orbit with the remaining star. The second star, which may be either a main-sequence star or an aging giant, begins to shed its envelope onto its white dwarf companion when it overflows its Roche lobe. As a result, the white dwarf captures matter from the companion's outer atmosphere in an accretion disk, and in turn the accreted matter falls onto the white dwarf's atmosphere. Because the white dwarf consists of degenerate matter, the accreted hydrogen does not inflate as it heats, but its temperature increases. Runaway fusion occurs when the temperature of this atmospheric layer reaches about 20 million K, initiating nuclear burning via the CNO cycle. Hydrogen fusion may occur in a stable manner on the surface of the white dwarf for a narrow range of accretion rates, giving rise to a super soft X-ray source, but for most binary system parameters the hydrogen burning is thermally unstable and converts a large amount of the hydrogen into other, heavier chemical elements in a runaway reaction, liberating an enormous amount of energy.
This blows the remaining gases away from the surface of the white dwarf and produces a bright outburst of light. The rise to peak brightness may be rapid or gradual; this is related to the speed class of the nova. The time taken for a nova to decay by around 2 or 3 magnitudes from maximum optical brightness is used for classification via its speed class. Fast novae take fewer than 25 days to decay by 2 magnitudes, while slow novae take more than 80 days. In spite of their violence, the amount of material ejected in novae is only about 1⁄10,000 of a solar mass, quite small relative to the mass of the white dwarf, and only about five percent of the accreted mass is fused during the outburst. Nonetheless, this is enough energy to accelerate nova ejecta to velocities as high as several thousand kilometers per second (higher for fast novae than slow ones), with a concurrent rise in luminosity from a few times solar to 50,000–100,000 times solar. In 2010, scientists using NASA's Fermi Gamma-ray Space Telescope discovered that a nova can also emit gamma rays.
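The speed classification above reduces to a simple threshold test on t2, the time to fade 2 magnitudes from peak. The fast and slow cutoffs come from the text; the intermediate label used for the 25–80 day gap is a simplification of the finer-grained schemes astronomers actually use:

```python
def nova_speed_class(t2_days):
    """Classify a nova by t2, the days taken to fade 2 magnitudes from
    peak brightness, using the thresholds quoted in the text
    (< 25 d fast, > 80 d slow). The middle label is a simplification
    of the finer classes used in practice."""
    if t2_days < 25:
        return "fast"
    if t2_days > 80:
        return "slow"
    return "moderately fast"

print(nova_speed_class(10))   # fast
print(nova_speed_class(150))  # slow
```

Since faster novae also tend to eject material at higher velocities, t2 serves as a convenient observational proxy for the energetics of the outburst.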
A white dwarf can generate multiple novae over t
Neodymium is a chemical element with symbol Nd and atomic number 60. It is a soft, silvery metal. Neodymium was discovered in 1885 by the Austrian chemist Carl Auer von Welsbach, and it is present in significant quantities in ore minerals such as bastnäsite. Neodymium is not found naturally in metallic form or unmixed with other lanthanides, and it is usually refined for general use. Although neodymium is classed as a rare-earth element, it is a fairly common element, no rarer than cobalt, nickel, or copper, and is widely distributed in the Earth's crust. Most of the world's commercial neodymium is mined in China. Neodymium compounds were first commercially used as glass dyes in 1927, and they remain a popular additive in glasses. The color of neodymium compounds, due to the Nd3+ ion, is often a reddish-purple, but it changes with the type of lighting, owing to the interaction of the sharp light absorption bands of neodymium with ambient light enriched with the sharp visible emission bands of mercury, trivalent europium, or terbium. Some neodymium-doped glasses are used in lasers that emit infrared light with wavelengths between 1047 and 1062 nanometers.
These have been used in extremely high-power applications, such as experiments in inertial confinement fusion. Neodymium is also used with various other substrate crystals, such as yttrium aluminium garnet in the Nd:YAG laser; this laser emits infrared light at a wavelength of about 1064 nanometers and is one of the most commonly used solid-state lasers. Another important use of neodymium is as a component in the alloys used to make high-strength neodymium magnets, powerful permanent magnets. These magnets are used in products such as microphones, professional loudspeakers, in-ear headphones, high-performance hobby DC electric motors, and computer hard disks, where low magnet mass or strong magnetic fields are required. Larger neodymium magnets are used in generators. Neodymium, a rare-earth metal, was present in classical mischmetal at a concentration of about 18%. Metallic neodymium has a bright, silvery metallic luster, but as one of the more reactive lanthanide rare-earth metals, it quickly oxidizes in ordinary air.
The oxide layer that forms then peels off, exposing the metal to further oxidation; thus, a centimeter-sized sample of neodymium oxidizes within a year. Neodymium exists in two allotropic forms, with a transformation from a double hexagonal to a body-centered cubic structure taking place at about 863 °C. Neodymium tarnishes in air and burns at about 150 °C to form neodymium(III) oxide:

4 Nd + 3 O2 → 2 Nd2O3

Neodymium is a quite electropositive element; it reacts slowly with cold water, but quite quickly with hot water, to form neodymium hydroxide:

2 Nd + 6 H2O → 2 Nd(OH)3 + 3 H2

Neodymium metal reacts vigorously with all the halogens:

2 Nd + 3 F2 → 2 NdF3
2 Nd + 3 Cl2 → 2 NdCl3
2 Nd + 3 Br2 → 2 NdBr3
2 Nd + 3 I2 → 2 NdI3

Neodymium dissolves in dilute sulfuric acid to form solutions that contain the lilac Nd(III) ion; these exist as [Nd(H2O)9]3+ complexes:

2 Nd + 3 H2SO4 → 2 Nd3+ + 3 SO42− + 3 H2

Neodymium compounds include halides such as neodymium fluoride. Naturally occurring neodymium is a mixture of five stable isotopes (142Nd, 143Nd, 145Nd, 146Nd and 148Nd, with 142Nd being the most abundant) and two long-lived radioisotopes, 144Nd and 150Nd.
In all, 31 radioisotopes of neodymium had been detected as of 2010, with the most stable being the naturally occurring ones, 144Nd and 150Nd. All of the remaining radioactive isotopes have half-lives shorter than eleven days, and the majority have half-lives shorter than 70 seconds. Neodymium has 13 known meta states, the most stable being 139mNd, followed by 135mNd and 133m1Nd. The primary decay modes before the most abundant stable isotope, 142Nd, are electron capture and positron decay, and the primary mode after it is beta-minus decay; the primary decay products before 142Nd are praseodymium (Pr) isotopes, and the primary products after are promethium (Pm) isotopes. Neodymium was discovered by Baron Carl Auer von Welsbach, an Austrian chemist, in Vienna in 1885. He separated neodymium, as well as the element praseodymium, from a material known as didymium by means of fractional crystallization of the double ammonium nitrate tetrahydrates from nitric acid, while following the separation by spectroscopic analysis.
The name neodymium is derived from the Greek words neos (new) and didymos (twin). Double nitrate crystallization was the means of commercial neodymium purification until the 1950s; Lindsay Chemical Division was the first to commercialize large-scale ion-exchange purification of neodymium. Starting in the 1950s, high purity