The neutron is a subatomic particle, symbol n or n0, with no net electric charge and a mass slightly larger than that of a proton. Protons and neutrons constitute the nuclei of atoms. Since protons and neutrons behave similarly within the nucleus, and each has a mass of approximately one atomic mass unit, they are both referred to as nucleons; their properties and interactions are described by nuclear physics. The chemical and nuclear properties of the nucleus are determined by the number of protons, called the atomic number, and the number of neutrons, called the neutron number; the atomic mass number is the total number of nucleons. For example, carbon has atomic number 6, its abundant carbon-12 isotope has 6 neutrons, whereas its rare carbon-13 isotope has 7 neutrons. Some elements occur in nature with only one stable isotope, such as fluorine; other elements occur with many stable isotopes, such as tin with ten stable isotopes. Within the nucleus, protons and neutrons are bound together through the nuclear force. Neutrons are required for the stability of nuclei, with the exception of the single-proton hydrogen nucleus.
Neutrons are produced copiously in nuclear fission and fusion. They are a primary contributor to the nucleosynthesis of chemical elements within stars through fusion and neutron capture processes, and the neutron is essential to the production of nuclear power. In the decade after the neutron was discovered by James Chadwick in 1932, neutrons were used to induce many different types of nuclear transmutations. With the discovery of nuclear fission in 1938, it was realized that, if a fission event produced neutrons, each of these neutrons might cause further fission events, and so on, in a cascade known as a nuclear chain reaction. These events and findings led to the first self-sustaining nuclear reactor and the first nuclear weapon. Free neutrons, while not directly ionizing atoms, cause ionizing radiation; as such they can be a biological hazard, depending upon dose. A small natural "neutron background" flux of free neutrons exists on Earth, caused by cosmic ray showers and by the natural radioactivity of spontaneously fissionable elements in the Earth's crust.
Dedicated neutron sources like neutron generators, research reactors and spallation sources produce free neutrons for use in irradiation and in neutron scattering experiments. An atomic nucleus is formed by a number of protons, Z, and a number of neutrons, N, bound together by the nuclear force; the atomic number defines the chemical properties of the atom, and the neutron number determines the isotope or nuclide. The terms isotope and nuclide are often used synonymously, but they refer to chemical and nuclear properties, respectively. Strictly speaking, isotopes are two or more nuclides with the same number of protons but different numbers of neutrons; the atomic mass number, symbol A, equals Z+N. Nuclides with the same atomic mass number are called isobars; the nucleus of the most common isotope of the hydrogen atom is a lone proton. The nuclei of the heavy hydrogen isotopes deuterium and tritium contain one proton bound to one and two neutrons, respectively. All other types of atomic nuclei are composed of two or more protons and various numbers of neutrons.
The most common nuclide of the common chemical element lead, 208Pb, has 82 protons and 126 neutrons, for example. The table of nuclides comprises all the known nuclides. Though it is not a chemical element, the neutron is included in this table; the free neutron has a mass of 1.674927471 × 10−27 kg, or 1.00866491588 u. The neutron has a root mean square radius of about 0.8×10−15 m, or 0.8 fm, and it is a spin-½ fermion. The neutron has no measurable electric charge. With its positive electric charge, the proton is directly influenced by electric fields, whereas the neutron is unaffected by electric fields; the neutron has a magnetic moment, however. The neutron's magnetic moment has a negative value, because its orientation is opposite to the neutron's spin. A free neutron is unstable, decaying to a proton, an electron, and an antineutrino with a mean lifetime of just under 15 minutes; this radioactive decay, known as beta decay, is possible because the mass of the neutron is greater than that of the proton. The free proton is stable. Neutrons or protons bound in a nucleus can be stable or unstable, depending on the nuclide.
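The decay of free neutrons can be made concrete with a short sketch. The exponential decay law and the relation between mean lifetime and half-life are standard; the precise lifetime value below (~880 s, i.e. just under 15 minutes) is an assumed figure quoted for illustration only.

```python
import math

TAU_S = 879.4  # assumed mean lifetime of the free neutron, in seconds

def surviving_fraction(t_seconds: float) -> float:
    """Fraction of an initial free-neutron population remaining after t seconds,
    using the exponential decay law N(t)/N0 = exp(-t / tau)."""
    return math.exp(-t_seconds / TAU_S)

# The half-life follows from the mean lifetime: t_half = tau * ln(2)
half_life = TAU_S * math.log(2)

print(f"half-life ≈ {half_life:.0f} s")  # a little over 10 minutes
print(f"fraction left after 1 hour: {surviving_fraction(3600):.4f}")
```

This is why free neutrons are only observed near their sources: a population of them is mostly gone within an hour.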
Beta decay, in which neutrons decay to protons, or vice versa, is governed by the weak force; it requires the emission or absorption of electrons and neutrinos, or their antiparticles. Protons and neutrons behave nearly identically under the influence of the nuclear force within the nucleus; the concept of isospin, in which the proton and neutron are viewed as two quantum states of the same particle, is used to model the interactions of nucleons by the nuclear or weak forces. Because of the strength of the nuclear force at short distances, the binding energy of nucleons is more than seven orders of magnitude larger than the electromagnetic energy binding electrons in atoms. Nuclear reactions therefore have an energy density more than ten million times that of chemical reactions. Because of mass–energy equivalence, nuclear binding energies reduce the mass of nuclei. The ability of the nuclear force to store energy arising from the electromagnetic repulsion of nuclear components is the basis for most of the energy that makes nuclear reactors or bombs possible.
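The claim that binding energy reduces nuclear mass can be checked with a back-of-the-envelope sketch for the simplest bound nucleus, the deuteron. The particle masses below are standard reference-style values, though the exact figures should be treated as assumptions for illustration; the well-known result is a binding energy of roughly 2.2 MeV.

```python
# Mass-energy equivalence: deuteron binding energy from its mass defect.
# Masses in atomic mass units (u); values assumed for illustration.
M_PROTON = 1.007276466    # u
M_NEUTRON = 1.008664916   # u
M_DEUTERON = 2.013553212  # u
U_TO_MEV = 931.49410      # energy equivalent of 1 u, in MeV

# The bound system is lighter than its free constituents:
mass_defect = M_PROTON + M_NEUTRON - M_DEUTERON
binding_energy_mev = mass_defect * U_TO_MEV

print(f"mass defect: {mass_defect:.6f} u")
print(f"binding energy: {binding_energy_mev:.3f} MeV")  # roughly 2.22 MeV
```

For comparison, typical chemical bond energies are a few eV, about a million times smaller, which is the energy-density contrast the text describes.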
In nuclear fission, the absorption of a neutron by a heavy nuclide causes the nuclide to become unstable and break into lighter nuclides and additional neutrons.
A proton is a subatomic particle, symbol p or p+, with a positive electric charge of +1e elementary charge and a mass slightly less than that of a neutron. Protons and neutrons, each with masses of approximately one atomic mass unit, are collectively referred to as "nucleons". One or more protons are present in the nucleus of every atom; the number of protons in the nucleus is the defining property of an element and is referred to as the atomic number. Since each element has a unique number of protons, each element has its own unique atomic number. The word proton is Greek for "first"; this name was given to the hydrogen nucleus by Ernest Rutherford in 1920. In previous years, Rutherford had discovered that the hydrogen nucleus could be extracted from the nuclei of nitrogen by atomic collisions. Protons were therefore a candidate to be a fundamental particle, and hence a building block of nitrogen and all other heavier atomic nuclei. In the modern Standard Model of particle physics, protons are hadrons and, like neutrons (the other nucleon), are composed of three quarks.
Although protons were originally considered fundamental or elementary particles, they are now known to be composed of three valence quarks: two up quarks of charge +2/3e and one down quark of charge −1/3e. The rest masses of the quarks contribute only about 1% of a proton's mass, however; the remainder of a proton's mass is due to quantum chromodynamics binding energy, which includes the kinetic energy of the quarks and the energy of the gluon fields that bind the quarks together. Because protons are not fundamental particles, they possess a physical size, though not a definite one. At sufficiently low temperatures, free protons will bind to electrons. However, the character of such bound protons does not change; they remain protons. A fast proton moving through matter will slow by interactions with electrons and nuclei, until it is captured by the electron cloud of an atom; the result is a protonated atom, a chemical compound of hydrogen. In a vacuum, when free electrons are present, a sufficiently slow proton may pick up a single free electron, becoming a neutral hydrogen atom, chemically a free radical.
Such "free hydrogen atoms" tend to react chemically with many other types of atoms at sufficiently low energies. When free hydrogen atoms react with each other, they form neutral hydrogen molecules, which are the most common molecular component of molecular clouds in interstellar space. Protons are composed of three valence quarks, making them baryons; the two up quarks and one down quark of a proton are held together by the strong force, mediated by gluons. A modern perspective has a proton composed of the valence quarks, the gluons, and transitory pairs of sea quarks. Protons have a positive charge distribution which decays exponentially, with a root mean square radius of about 0.8 fm. Protons and neutrons are both nucleons, which may be bound together by the nuclear force to form atomic nuclei; the nucleus of the most common isotope of the hydrogen atom is a lone proton. The nuclei of the heavy hydrogen isotopes deuterium and tritium contain one proton bound to one and two neutrons, respectively. All other types of atomic nuclei are composed of two or more protons and various numbers of neutrons.
The concept of a hydrogen-like particle as a constituent of other atoms was developed over a long period. As early as 1815, William Prout proposed that all atoms are composed of hydrogen atoms, based on a simplistic interpretation of early values of atomic weights; the hypothesis was disproved when more accurate values were measured. In 1886, Eugen Goldstein discovered canal rays and showed that they were positively charged particles produced from gases. However, since particles from different gases had different values of charge-to-mass ratio, they could not be identified with a single particle, unlike the negative electrons discovered by J. J. Thomson. Wilhelm Wien in 1898 identified the hydrogen ion as the particle with the highest charge-to-mass ratio in ionized gases. Following the discovery of the atomic nucleus by Ernest Rutherford in 1911, Antonius van den Broek proposed that the place of each element in the periodic table is equal to its nuclear charge; this was confirmed experimentally by Henry Moseley in 1913 using X-ray spectra.
In 1917, Rutherford proved that the hydrogen nucleus is present in other nuclei, a result usually described as the discovery of protons. Rutherford had earlier learned to produce hydrogen nuclei as a type of radiation resulting from the impact of alpha particles on nitrogen gas, and to recognize them by their unique penetration signature in air and their appearance in scintillation detectors. These experiments began when Rutherford noticed that, when alpha particles were shot into air, his scintillation detectors showed the signatures of typical hydrogen nuclei as a product. After experimentation Rutherford traced the reaction to the nitrogen in air, and found that when alpha particles were fired into pure nitrogen gas, the effect was larger. Rutherford determined that this hydrogen could have come only from the nitrogen, and that nitrogen must therefore contain hydrogen nuclei. One hydrogen nucleus was being knocked off by the impact of the alpha particle, producing oxygen-17 in the process; the reaction was 14N + α → 17O + p.
(This reaction would later be observed happening directly in a cloud chamber.)
Expansion of the universe
The expansion of the universe is the increase of the distance between two distant parts of the universe with time. It is an intrinsic expansion; the universe does not require space to exist "outside" it. Technically, neither space nor objects in space move. Instead it is the metric governing the geometry of spacetime itself that changes in scale. Although light and objects within spacetime cannot travel faster than the speed of light, this limitation does not restrict the metric itself. To an observer it appears that space is expanding and all but the nearest galaxies are receding into the distance. During the inflationary epoch, about 10−32 of a second after the Big Bang, the universe suddenly expanded, and its volume increased by a factor of at least 1078, equivalent to expanding an object 1 nanometer in length to one about 10.6 light-years long. A much slower and more gradual expansion of space continued after this, until at around 9.8 billion years after the Big Bang it began to expand more quickly, and it is still doing so. The metric expansion of space is of a kind different from the expansions and explosions seen in daily life.
It seems to be a property of the universe as a whole rather than a phenomenon that applies just to one part of the universe or can be observed from "outside" it. Metric expansion is a key feature of Big Bang cosmology, is modeled mathematically with the Friedmann-Lemaître-Robertson-Walker metric, and is a generic property of the universe we inhabit. However, the model is valid only on large scales; on smaller scales, gravitational attraction binds matter together strongly enough that metric expansion cannot be observed at this time. As such, the only galaxies receding from one another as a result of metric expansion are those separated by cosmologically relevant scales, larger than the length scales associated with the gravitational collapse that is possible in the age of the universe given the matter density and average expansion rate. Physicists have postulated the existence of dark energy, appearing as a cosmological constant in the simplest gravitational models, as a way to explain the observed acceleration of the expansion.
According to the simplest extrapolation of the currently favored cosmological model, the Lambda-CDM model, this acceleration becomes more dominant into the future. In June 2016, NASA and ESA scientists reported that the universe was found to be expanding 5% to 9% faster than thought earlier, based on studies using the Hubble Space Telescope. While special relativity prohibits objects from moving faster than light with respect to a local reference frame where spacetime can be treated as flat and unchanging, it does not apply to situations where spacetime curvature or evolution in time become important. These situations are described by general relativity, which allows the separation between two distant objects to increase faster than the speed of light, although the definition of "separation" is different from that used in an inertial frame. This can be seen with distant galaxies: light emitted today from galaxies beyond the cosmological event horizon, about 5 gigaparsecs or 16 billion light-years away, will never reach us, although we can still see the light that these galaxies emitted in the past.
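The scales involved follow from Hubble's law, v = H0 · d. A minimal sketch, assuming a round value H0 = 70 km/s/Mpc (the measured value is debated), shows the distance at which the recession velocity formally reaches the speed of light; this "Hubble radius" is a different, simpler quantity than the event horizon quoted above, but of comparable size.

```python
C_KM_S = 299_792.458  # speed of light, km/s
H0 = 70.0             # Hubble constant, km/s per Mpc (assumed round value)

def recession_velocity(distance_mpc: float) -> float:
    """Hubble's law: recession velocity v = H0 * d, in km/s."""
    return H0 * distance_mpc

# Distance at which v formally equals c: the Hubble radius c / H0.
hubble_radius_mpc = C_KM_S / H0
print(f"Hubble radius ≈ {hubble_radius_mpc:.0f} Mpc "
      f"≈ {hubble_radius_mpc / 1000:.1f} Gpc")  # about 4.3 Gpc
```

Galaxies beyond this distance have recession velocities greater than c without violating relativity, since nothing is moving through space at that speed.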
Because of the high rate of expansion, it is also possible for the distance between two objects to be greater than the value obtained by multiplying the speed of light by the age of the universe. These details are a frequent source of confusion among amateurs and even professional physicists. Due to the non-intuitive nature of the subject and what has been described by some as "careless" choices of wording, certain descriptions of the metric expansion of space, and the misconceptions to which such descriptions can lead, are an ongoing subject of discussion within the education and communication of scientific concepts. In 1912, Vesto Slipher discovered that light from remote galaxies was redshifted, which was later interpreted as galaxies receding from the Earth. In 1922, Alexander Friedmann used the Einstein field equations to provide theoretical evidence that the universe is expanding. In 1927, Georges Lemaître independently reached a similar conclusion to Friedmann on a theoretical basis, and also presented the first observational evidence for a linear relationship between distance to galaxies and their recessional velocity.
Edwin Hubble observationally confirmed Lemaître's findings two years later. Assuming the cosmological principle, these findings would imply that all galaxies are moving away from each other. Based on large quantities of experimental observation and theoretical work, the scientific consensus is that space itself is expanding, and that it expanded very rapidly within the first fraction of a second after the Big Bang; this kind of expansion is known as "metric expansion". In mathematics and physics, a "metric" means a measure of distance, and the term implies that the sense of distance within the universe is itself changing. The modern explanation for the metric expansion of space was proposed by physicist Alan Guth in 1979 while investigating the problem of why no magnetic monopoles are seen today. Guth found in his investigation that if the universe contained a field in a positive-energy false vacuum state, then according to general relativity it would generate an exponential expansion of space.
Modified Newtonian dynamics
Modified Newtonian dynamics (MOND) is a theory that proposes a modification of Newton's laws to account for observed properties of galaxies. It is an alternative to the theory of dark matter in terms of explaining why galaxies do not appear to obey the currently understood laws of physics. Created in 1982 and first published in 1983 by Israeli physicist Mordehai Milgrom, the theory's original motivation was to explain why the velocities of stars in galaxies were observed to be larger than expected on the basis of Newtonian mechanics. Milgrom noted that this discrepancy could be resolved if the gravitational force experienced by a star in the outer regions of a galaxy was proportional to the square of its centripetal acceleration, or alternatively if gravitational force came to vary inversely with radius rather than with the square of the radius. In MOND, violation of Newton's laws occurs at extremely small accelerations, characteristic of galaxies yet far below anything encountered in the Solar System or on Earth. MOND is an example of a class of theories known as modified gravity, and is an alternative to the hypothesis that the dynamics of galaxies are determined by massive, invisible dark matter halos.
Since Milgrom's original proposal, MOND has predicted a variety of galactic phenomena that are difficult to understand from a dark matter perspective. However, MOND and its generalisations do not adequately account for observed properties of galaxy clusters, and no satisfactory cosmological model has been constructed from the theory. The accurate measurement of the speed of gravitational waves compared to the speed of light in 2017 ruled out many theories which used modified gravity to explain dark matter; however, both Milgrom's bi-metric formulation of MOND and nonlocal MOND are not ruled out according to the same study. Several independent observations point to the fact that the visible mass in galaxies and galaxy clusters is insufficient to account for their dynamics when analysed using Newton's laws. This discrepancy – known as the "missing mass problem" – was first identified for clusters by Swiss astronomer Fritz Zwicky in 1933, and subsequently extended to include spiral galaxies by the 1939 work of Horace Babcock on Andromeda.
These early studies were augmented and brought to the attention of the astronomical community in the 1960s and 1970s by the work of Vera Rubin at the Carnegie Institution in Washington, who mapped in detail the rotation velocities of stars in a large sample of spirals. While Newton's laws predict that stellar rotation velocities should decrease with distance from the galactic centre, Rubin and collaborators found instead that they remain nearly constant – the rotation curves are said to be "flat". This observation necessitates at least one of the following: 1) There exist in galaxies large quantities of unseen matter which boost the stars' velocities beyond what would be expected on the basis of the visible mass alone, or 2) Newton's laws do not apply to galaxies. The former leads to the dark matter hypothesis, the latter to MOND. The basic premise of MOND is that while Newton's laws have been extensively tested in high-acceleration environments, they have not been verified for objects with low acceleration, such as stars in the outer parts of galaxies.
This led Milgrom to postulate a new effective gravitational force law that relates the true acceleration of an object to the acceleration that would be predicted for it on the basis of Newtonian mechanics. This law, the keystone of MOND, is chosen to reduce to the Newtonian result at high acceleration but lead to different behaviour at low acceleration: FN = m μ(a/a0) a. Here FN is the Newtonian force, m is the object's mass, a is its acceleration, μ is an as-yet unspecified interpolating function, and a0 is a new fundamental constant which marks the transition between the Newtonian and deep-MOND regimes. Agreement with Newtonian mechanics requires μ(x) → 1 for x ≫ 1, and consistency with astronomical observations requires μ(x) → x for x ≪ 1. Beyond these limits, the interpolating function is not specified by the theory, although it is possible to weakly constrain it empirically. Two common choices are the "simple interpolating function" μ(x) = x/(1 + x), equivalently μ = 1/(1 + a0/a), and the "standard interpolating function" μ(x) = x/√(1 + x²), equivalently μ = 1/√(1 + (a0/a)²). Thus, in the deep-MOND regime (a ≪ a0): FN = m a²/a0.
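A short numerical sketch, assuming the "simple" interpolating function and the commonly quoted value a0 ≈ 1.2×10⁻¹⁰ m/s², shows how the same force law recovers Newtonian behaviour at high acceleration and the deep-MOND behaviour a ≈ √(gN · a0) at low acceleration:

```python
import math

A0 = 1.2e-10  # m/s^2 -- Milgrom's constant (commonly quoted value, assumed here)

def mond_acceleration(g_newton: float) -> float:
    """True acceleration a solving mu(a/A0) * a = g_N with the 'simple'
    interpolating function mu(x) = x / (1 + x). That equation rearranges to
    a**2 - g_N*a - g_N*A0 = 0, whose positive root is returned."""
    return 0.5 * (g_newton + math.sqrt(g_newton**2 + 4.0 * g_newton * A0))

# High-acceleration (Newtonian) regime: a is essentially g_N.
print(mond_acceleration(9.8) / 9.8)                     # ratio ~ 1

# Deep-MOND regime: a approaches sqrt(g_N * A0), the origin of flat rotation curves.
g_low = 1e-13
print(mond_acceleration(g_low), math.sqrt(g_low * A0))  # comparable magnitudes
```

Because the deep-MOND acceleration falls as 1/r rather than 1/r², the circular velocity v² = a·r tends to a constant at large radius, matching the flat rotation curves described above.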
Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles. The electromagnetic force is carried by electromagnetic fields, such as electric fields, magnetic fields, and light, and is one of the four fundamental interactions in nature; the other three fundamental interactions are the strong interaction, the weak interaction, and gravitation. At high energy the weak force and electromagnetic force are unified as a single electroweak force. Electromagnetic phenomena are defined in terms of the electromagnetic force, sometimes called the Lorentz force, which includes both electricity and magnetism as different manifestations of the same phenomenon. The electromagnetic force plays a major role in determining the internal properties of most objects encountered in daily life: ordinary matter takes its form as a result of intermolecular forces between individual atoms and molecules, which are themselves a manifestation of the electromagnetic force.
Electrons are bound by the electromagnetic force to atomic nuclei, and their orbital shapes and their influence on nearby atoms with their electrons are described by quantum mechanics. The electromagnetic force governs all chemical processes, which arise from interactions between the electrons of neighboring atoms. There are numerous mathematical descriptions of the electromagnetic field. In classical electrodynamics, electric fields are described in terms of electric potential and electric current. In Faraday's law, magnetic fields are associated with electromagnetic induction and magnetism, and Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents. The theoretical implications of electromagnetism, in particular the establishment of the speed of light based on properties of the "medium" of propagation, led to the development of special relativity by Albert Einstein in 1905. Electricity and magnetism were long considered to be two separate forces; this view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism, in which the interactions of positive and negative charges were shown to be mediated by one force.
There are four main effects resulting from these interactions, all of which have been demonstrated by experiments: Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: unlike charges attract, like ones repel. Magnetic poles attract or repel one another in a manner similar to positive and negative charges and always exist as pairs: every north pole is yoked to a south pole. An electric current inside a wire creates a corresponding circumferential magnetic field outside the wire, its direction depends on the direction of the current in the wire. A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or a magnet is moved towards or away from it. While preparing for an evening lecture on 21 April 1820, Hans Christian Ørsted made a surprising observation; as he was setting up his materials, he noticed a compass needle deflected away from magnetic north when the electric current from the battery he was using was switched on and off.
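The first of the four effects is Coulomb's law, F = k·q1·q2/r². A minimal sketch (the constants are standard SI values, quoted here as assumptions) illustrates both the sign convention and the inverse-square behaviour:

```python
K_E = 8.9875517923e9        # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602176634e-19  # elementary charge, C

def coulomb_force(q1: float, q2: float, r: float) -> float:
    """Electrostatic force between two point charges separated by r metres.
    Positive result means repulsion (like charges); negative means attraction."""
    return K_E * q1 * q2 / r**2

# Proton-electron attraction at roughly the Bohr radius (~5.29e-11 m):
f = coulomb_force(E_CHARGE, -E_CHARGE, 5.29e-11)
print(f"{f:.3e} N")  # negative, i.e. attractive

# Inverse-square behaviour: doubling the separation quarters the force.
print(coulomb_force(1e-6, 1e-6, 1.0) / coulomb_force(1e-6, 1e-6, 2.0))  # 4.0
```

The same inverse-square form appears in Newtonian gravity; the difference is that charge comes in two signs, so electric forces can either attract or repel.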
This deflection convinced him that magnetic fields radiate from all sides of a wire carrying an electric current, just as light and heat do, and that it confirmed a direct relationship between electricity and magnetism. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire; the CGS unit of magnetic field strength, the oersted, is named in honor of his contributions to the field of electromagnetism. His findings resulted in intensive research throughout the scientific community in electrodynamics, and they influenced French physicist André-Marie Ampère's development of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery represented a major step toward a unified concept of energy.
This unification, observed by Michael Faraday, extended by James Clerk Maxwell, and reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which is the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies. Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile; the factual setup of the experiment is not completely clear, so it is uncertain whether current flowed through the needle or not.
An account of the discovery was published in 1802 in an Italian newspaper, but it was overlooked by the contemporary scientific community, because Romagnosi did not belong to this community. An earlier, neglected, connec
Dark matter is a hypothetical form of matter thought to account for about 85% of the matter in the universe and about a quarter of its total energy density. The majority of dark matter is thought to be non-baryonic in nature, possibly being composed of some as-yet undiscovered subatomic particles. Its presence is implied in a variety of astrophysical observations, including gravitational effects that cannot be explained by accepted theories of gravity unless more matter is present than can be seen. For this reason, most experts think dark matter to be ubiquitous in the universe and to have had a strong influence on its structure and evolution. Dark matter is called dark because it does not appear to interact with observable electromagnetic radiation, such as light, and is thus invisible to the entire electromagnetic spectrum, making it difficult to detect using usual astronomical equipment. The primary evidence for dark matter is that calculations show that many galaxies would fly apart instead of rotating, or would not have formed or move as they do, if they did not contain a large amount of unseen matter.
Other lines of evidence include observations in gravitational lensing, from the cosmic microwave background, from astronomical observations of the observable universe's current structure, from the formation and evolution of galaxies, from mass location during galactic collisions, and from the motion of galaxies within galaxy clusters. In the standard Lambda-CDM model of cosmology, the total mass–energy of the universe contains 5% ordinary matter and energy, 27% dark matter and 68% of an unknown form of energy known as dark energy. Thus, dark matter constitutes 85% of total mass, while dark energy plus dark matter constitute 95% of total mass–energy content. Because dark matter has not yet been observed directly, if it exists, it must barely interact with ordinary baryonic matter and radiation, except through gravity. The primary candidate for dark matter is some new kind of elementary particle that has not yet been discovered, in particular, weakly interacting massive particles, or gravitationally interacting massive particles.
Many experiments to directly detect and study dark matter particles are being undertaken, but none has yet succeeded. Dark matter is classified as cold, warm, or hot according to its velocity. Current models favor a cold dark matter scenario, in which structures emerge by the gradual accumulation of particles. Although the existence of dark matter is generally accepted by the scientific community, some astrophysicists, intrigued by certain observations that do not fit the dark matter theory, argue for various modifications of the standard laws of general relativity, such as modified Newtonian dynamics, tensor–vector–scalar gravity, or entropic gravity; these models attempt to account for all observations without invoking supplemental non-baryonic matter. The hypothesis of dark matter has an elaborate history. In a talk given in 1884, Lord Kelvin estimated the number of dark bodies in the Milky Way from the observed velocity dispersion of the stars orbiting around the center of the galaxy. By using these measurements, he estimated the mass of the galaxy, which he determined is different from the mass of visible stars.
Lord Kelvin thus concluded that "many of our stars, perhaps a great majority of them, may be dark bodies". In 1906 Henri Poincaré, in "The Milky Way and Theory of Gases", used "dark matter", or "matière obscure" in French, in discussing Kelvin's work. The first to suggest the existence of dark matter using stellar velocities was Dutch astronomer Jacobus Kapteyn in 1922. Fellow Dutchman and radio astronomy pioneer Jan Oort hypothesized the existence of dark matter in 1932. Oort was studying stellar motions in the local galactic neighborhood and found that the mass in the galactic plane must be greater than what was observed, but this measurement was later determined to be erroneous. In 1933, Swiss astrophysicist Fritz Zwicky, who studied galaxy clusters while working at the California Institute of Technology, made a similar inference. Zwicky applied the virial theorem to the Coma Cluster and obtained evidence of unseen mass that he called dunkle Materie. Zwicky estimated its mass based on the motions of galaxies near its edge and compared that to an estimate based on its brightness and number of galaxies.
His estimate implied far more mass than the visible galaxies could supply: the gravitational effect of the visible galaxies was far too small for such fast orbits, so mass must be hidden from view. Based on these conclusions, Zwicky inferred that some unseen matter provided the mass and associated gravitational attraction to hold the cluster together; this was the first formal inference about the existence of dark matter. Zwicky's estimates were off by more than an order of magnitude, mainly due to an obsolete value of the Hubble constant. However, Zwicky did correctly infer that the bulk of the matter was dark. Further indications that the mass-to-light ratio was not unity came from measurements of galaxy rotation curves. In 1939, Horace W. Babcock reported the rotation curve for the Andromeda nebula, which suggested that the mass-to-luminosity ratio increases radially; he attributed it to either light absorption within the galaxy or modified dynamics in the outer portions of the spiral, and not to the missing matter that he had uncovered. Following Babcock's 1939 report of unexpectedly rapid rotation in the outskirts of the Andromeda galaxy and a mass-to-light ratio of 50, in 1940 Jan Oort discovered and wrote about the large non-visible halo of NGC 3115.
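Zwicky's virial argument can be sketched numerically. The relation M ~ σ²R/G is the order-of-magnitude form of the virial theorem; the velocity dispersion and radius below are illustrative Coma-like values assumed for this example, not Zwicky's actual figures.

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22   # metres per megaparsec
M_SUN = 1.989e30  # solar mass, kg

def virial_mass(sigma_km_s: float, radius_mpc: float) -> float:
    """Order-of-magnitude virial mass estimate M ~ sigma^2 * R / G,
    from a cluster's velocity dispersion and radius."""
    sigma_m_s = sigma_km_s * 1e3
    return sigma_m_s**2 * (radius_mpc * MPC) / G

# Illustrative Coma-like inputs (assumed): sigma ~ 1000 km/s, R ~ 3 Mpc
mass_kg = virial_mass(1000.0, 3.0)
print(f"~{mass_kg / M_SUN:.1e} solar masses")  # roughly 7e14
```

Comparing such a dynamical mass with the mass inferred from the cluster's starlight is exactly the comparison that revealed the missing-mass problem.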
Vera Rubin, Kent Ford and Ken Freeman's work in the 1960s and 1970s provided further strong evidence, also using galaxy rotation curves.
Degrees of freedom (physics and chemistry)
In physics, a degree of freedom is an independent physical parameter in the formal description of the state of a physical system. The set of all states of a system is known as the system's phase space, and the degrees of freedom of the system are the dimensions of the phase space. The location of a particle in three-dimensional space requires three position coordinates. The direction and speed at which a particle moves can be described in terms of three velocity components, each in reference to the three dimensions of space. If the time evolution of the system is deterministic, where the state at one instant uniquely determines its past and future position and velocity as a function of time, such a system has six degrees of freedom. If the motion of the particle is constrained to a lower number of dimensions, for example, if the particle must move along a wire or on a fixed surface, then the system has fewer than six degrees of freedom. On the other hand, a system with an extended object that can rotate or vibrate can have more than six degrees of freedom.
In classical mechanics, the state of a point particle at any given time is described with position and velocity coordinates in the Lagrangian formalism, or with position and momentum coordinates in the Hamiltonian formalism. In statistical mechanics, a degree of freedom is a single scalar number describing the microstate of a system; the specification of all microstates of a system is a point in the system's phase space. In the 3D ideal chain model in chemistry, two angles are necessary to describe the orientation of each monomer. It is often useful to specify quadratic degrees of freedom: these are degrees of freedom that contribute quadratically to the energy of the system. In three-dimensional space, three degrees of freedom are associated with the movement of a particle. A diatomic gas molecule has 6 degrees of freedom; this set may be decomposed in terms of translations, rotations and vibrations of the molecule. The center of mass motion of the entire molecule accounts for 3 degrees of freedom. In addition, the molecule has two rotational degrees of freedom and one vibrational mode.
The rotations occur around the two axes perpendicular to the line between the two atoms; the rotation around the atom–atom bond is not a physical rotation. This yields, for a diatomic molecule, the decomposition 3N = 6 = 3 + 2 + 1. For a general, non-linear molecule, all 3 rotational degrees of freedom are considered, resulting in the decomposition 3N = 3 + 3 + (3N − 6), which means that an N-atom molecule has 3N − 6 vibrational degrees of freedom for N > 2. In special cases, such as adsorbed large molecules, the rotational degrees of freedom can be limited to only one. As defined above, one can count degrees of freedom using the minimum number of coordinates required to specify a position. This is done as follows: for a single particle we need 2 coordinates in a 2-D plane to specify its position and 3 coordinates in 3-D space, thus its degree of freedom in a 3-D space is 3. For a body consisting of 2 particles in a 3-D space with constant distance between them, we can show its degrees of freedom to be 5.
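The decomposition above can be sketched in a short Python helper (a hypothetical function, not from the source): of the 3N total degrees of freedom, 3 are translational, 2 or 3 are rotational depending on whether the molecule is linear, and the remainder are vibrational, giving 3N − 5 for linear and 3N − 6 for non-linear molecules:

```python
def vibrational_modes(n_atoms: int, linear: bool) -> int:
    """Number of vibrational degrees of freedom of an n_atoms molecule.

    Total degrees of freedom: 3 * n_atoms.
    Translations: 3.  Rotations: 2 if the molecule is linear, else 3.
    The remainder are vibrations (3N - 5 linear, 3N - 6 non-linear).
    """
    rotational = 2 if linear else 3
    return 3 * n_atoms - 3 - rotational

print(vibrational_modes(2, linear=True))   # diatomic: 3*2 - 3 - 2 = 1
print(vibrational_modes(3, linear=False))  # a bent triatomic: 3
print(vibrational_modes(3, linear=True))   # a linear triatomic: 4
```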
Say one particle has coordinates (x1, y1, z1) and the other has coordinates (x2, y2, z2) with z2 unknown. Application of the formula for the distance between two points, d² = (x2 − x1)² + (y2 − y1)² + (z2 − z1)², results in one equation with one unknown, which we can solve for z2. Any one of x1, x2, y1, y2, z1, or z2 can be the unknown. Contrary to the classical equipartition theorem, at room temperature the vibrational motion of molecules makes negligible contributions to the heat capacity; these degrees of freedom are frozen out because the spacing between the energy eigenvalues exceeds the energy corresponding to ambient temperatures. In the following table such degrees of freedom are disregarded because of their low effect on total energy. Only the translational and rotational degrees of freedom contribute, in equal amounts, to the heat capacity ratio; this is why γ = 7/5 for diatomic gases at room temperature. However, at high temperatures, on the order of the vibrational temperature, vibrational motion cannot be neglected. Vibrational temperatures are between 10³ K and 10⁴ K.
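As a sketch under the assumptions above (the numeric values and variable names are illustrative, not from the source), the constraint equation can be solved for z2, and the heat capacity ratio of an ideal gas follows from the count f of active quadratic degrees of freedom via γ = (f + 2)/f:

```python
import math

# Solving for the one unknown coordinate z2, given the fixed distance d
# between two particles (the constraint that removes one degree of freedom).
x1, y1, z1 = 0.0, 0.0, 0.0
x2, y2 = 1.0, 2.0
d = 3.0
# d^2 = (x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2  ->  solve for z2 (two roots;
# we take the positive one).
z2 = z1 + math.sqrt(d**2 - (x2 - x1)**2 - (y2 - y1)**2)
print(z2)  # 2.0

# Heat capacity ratio from the number of active quadratic degrees of
# freedom f: gamma = (f + 2) / f for an ideal gas.
def gamma(f: int) -> float:
    return (f + 2) / f

print(gamma(3))  # monatomic (3 translational): 5/3
print(gamma(5))  # diatomic at room temperature (3 trans + 2 rot): 7/5 = 1.4
```

With the vibrational mode frozen out, the diatomic gas has f = 5 active degrees of freedom, reproducing γ = 7/5 from the text.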
The set of degrees of freedom X1, ..., XN of a system is independent if the energy associated with the set can be written in the form E = ∑ᵢ Eᵢ(Xᵢ), where each Eᵢ is a function of the sole variable Xᵢ. For example, if X1 and X2 are two degrees of freedom and E is the associated energy: if E = X1⁴ + X2⁴, the two degrees of freedom are independent. If E = X1⁴ + X1