1.
Hydrogen
–
Hydrogen is a chemical element with chemical symbol H and atomic number 1. With a standard atomic weight of approximately 1.008, hydrogen is the lightest element on the periodic table. Its monatomic form (H) is the most abundant chemical substance in the Universe; non-remnant stars are mainly composed of hydrogen in the plasma state. The most common isotope of hydrogen, termed protium, has one proton and no neutrons. The universal emergence of atomic hydrogen first occurred during the recombination epoch. At standard temperature and pressure, hydrogen is a colorless, odorless, tasteless, non-toxic, nonmetallic, highly combustible diatomic gas with the molecular formula H2. Since hydrogen readily forms covalent compounds with most nonmetallic elements, most of the hydrogen on Earth exists in molecular forms such as water or organic compounds. Hydrogen plays an important role in acid–base reactions because most such reactions involve the exchange of protons between soluble molecules. In ionic compounds, hydrogen can take the form of a negative charge (i.e., an anion), when it is known as a hydride. The hydrogen cation is written as though composed of a bare proton. Hydrogen gas was first artificially produced in the early 16th century by the reaction of acids on metals. Industrial production is mainly from the steam reforming of natural gas, and less often from more energy-intensive methods such as the electrolysis of water. Most hydrogen is used near the site of its production, the two largest uses being fossil fuel processing and ammonia production, mostly for the fertilizer market. Hydrogen is a concern in metallurgy as it can embrittle many metals, complicating the design of pipelines and storage tanks. Hydrogen gas is flammable and will burn in air over a very wide range of concentrations, between 4% and 75% by volume. The enthalpy of combustion is −286 kJ/mol: 2 H2 + O2 → 2 H2O + 572 kJ. Hydrogen gas forms explosive mixtures with air at concentrations of 4–74%, and the explosive reactions may be triggered by spark, heat, or sunlight.
The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is 500 °C. Detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous. Hydrogen flames in other conditions are blue, resembling blue natural gas flames. The destruction of the Hindenburg airship was a notorious example of hydrogen combustion, and the cause is still debated. The visible orange flames in that incident were the result of a mixture of hydrogen and oxygen combined with carbon compounds from the airship skin. H2 reacts with every oxidizing element. The ground state energy level of the electron in a hydrogen atom is −13.6 eV, which is equivalent to an ultraviolet photon of roughly 91 nm wavelength. The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom; note, however, that the atomic electron and proton are held together by the electromagnetic force, while planets and celestial objects are held by gravity. The most complicated treatments allow for the effects of special relativity.
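The relation between the −13.6 eV ground-state energy and the roughly 91 nm photon quoted above can be checked numerically. A minimal sketch in Python, using CODATA-style constants, converts the ionization energy into the equivalent photon wavelength via λ = hc/E:

```python
# Convert the hydrogen ground-state binding energy (13.6 eV) into the
# wavelength of a photon carrying that energy: lambda = h * c / E.
H_PLANCK = 6.62607015e-34   # Planck constant, J*s
C_LIGHT = 2.99792458e8      # speed of light, m/s
EV_IN_J = 1.602176634e-19   # 1 eV in joules

ground_state_eV = 13.6                    # |E_1| for hydrogen
energy_J = ground_state_eV * EV_IN_J      # convert eV -> J
wavelength_nm = H_PLANCK * C_LIGHT / energy_J * 1e9

print(f"Equivalent photon wavelength: {wavelength_nm:.1f} nm")  # ~91 nm (ultraviolet)
```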
2.
Neutron number
–
The neutron number, symbol N, is the number of neutrons in a nuclide. Atomic number plus neutron number equals mass number: Z + N = A. The difference between the neutron number and the atomic number is known as the neutron excess: D = N − Z = A − 2Z. Neutron number is rarely written explicitly in nuclide symbol notation, but it can appear as a subscript to the right of the element symbol. Nuclides that have the same neutron number but a different proton number are called isotones. This word was formed by replacing the p in isotope with n for neutron. Nuclides that have the same mass number are called isobars, and nuclides that have the same neutron excess are called isodiaphers. Chemical properties are primarily determined by proton number, which determines which chemical element the nuclide is a member of; neutron number has only a slight influence. Neutron number is primarily of interest for nuclear properties: for example, actinides with odd neutron number are usually fissile, while actinides with even neutron number are usually not. Only 57 stable nuclides have an odd neutron number, compared to 200 with an even neutron number. No odd-neutron-number isotope is the most naturally abundant isotope in its element, except for beryllium-9 (which is the only stable beryllium isotope) and nitrogen-14. Only two stable nuclides have fewer neutrons than protons: hydrogen-1 and helium-3.
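The two defining relations above, N = A − Z and D = A − 2Z, can be sketched directly in code, using carbon-14 as the worked example:

```python
# Neutron number N and neutron excess D for a nuclide with atomic number Z
# and mass number A, following N = A - Z and D = N - Z = A - 2Z.
def neutron_number(Z, A):
    return A - Z

def neutron_excess(Z, A):
    return A - 2 * Z

# Carbon-14: Z = 6, A = 14 -> N = 8 neutrons, neutron excess D = 2.
N = neutron_number(6, 14)
D = neutron_excess(6, 14)
print(N, D)  # 8 2
```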
3.
Proton number
–
The atomic number or proton number (symbol Z) of a chemical element is the number of protons found in the nucleus of an atom of that element. It is identical to the charge number of the nucleus. The atomic number uniquely identifies a chemical element. In an uncharged atom, the atomic number is also equal to the number of electrons. The atomic number, Z, should not be confused with the mass number, A; the number of neutrons, N, makes up the difference: A = Z + N. Atoms with the same atomic number Z but different neutron numbers N are called isotopes of that element. Historically, it was the atomic weights of elements that were the quantities measurable by chemists in the 19th century. Only after 1915, with the suggestion and evidence that this Z number was also the nuclear charge and a physical characteristic of atoms, did the modern meaning of atomic number take hold. Loosely speaking, the existence or construction of a periodic table of elements creates an ordering of the elements, and so they can be numbered in order. Dmitri Mendeleev claimed that he arranged his first periodic tables in order of atomic weight; however, in consideration of the elements' observed chemical properties, he changed the order slightly and placed tellurium ahead of iodine. This placement is consistent with the modern practice of ordering the elements by proton number, Z. A simple numbering based on periodic table position was never entirely satisfactory. After Rutherford's scattering work it appeared that the central charge of an atom would be approximately half its atomic weight, and this proved eventually to be roughly the case. The experimental position improved dramatically after research by Henry Moseley in 1913. Moseley measured the wavelengths of the innermost photon transitions (K and L lines) produced by the elements from aluminum to gold, used as a series of movable anodic targets inside an x-ray tube. The square root of the frequency of these photons increased from one target to the next in an arithmetic progression. This led to the conclusion that the atomic number does closely correspond to the calculated electric charge of the nucleus, i.e. the element number Z.
Among other things, Moseley demonstrated that the lanthanide series must have 15 members (no fewer and no more). After Moseley's death in 1915, the atomic numbers of all known elements from hydrogen to uranium (Z = 92) were examined by his method. Seven elements were not found and were therefore identified as still undiscovered; from 1918 to 1947, all seven of these missing elements were discovered. By this time the first four transuranium elements had also been discovered. In 1915 the reason for nuclear charge being quantized in units of Z, which was now recognized to be the same as the element number, was not understood.
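The arithmetic progression Moseley observed follows from his empirical law for the K-alpha line, which is commonly written as ν ≈ (3/4)·R·c·(Z − 1)². The exact (Z − 1)² form is an assumption here, as a standard textbook statement of the law rather than something given in the text above; with it, the constancy of successive differences of √ν can be verified directly:

```python
import math

# Sketch of Moseley's empirical K-alpha law: nu = (3/4) * R * c * (Z - 1)^2,
# so sqrt(nu) increases in an arithmetic progression as Z steps by one.
RYDBERG = 1.0973731568e7    # Rydberg constant, 1/m
C_LIGHT = 2.99792458e8      # speed of light, m/s

def k_alpha_sqrt_freq(Z):
    nu = 0.75 * RYDBERG * C_LIGHT * (Z - 1) ** 2
    return math.sqrt(nu)

# Successive differences of sqrt(nu) are constant: Moseley's straight line.
diffs = [k_alpha_sqrt_freq(Z + 1) - k_alpha_sqrt_freq(Z) for Z in range(20, 30)]
print(diffs[0], diffs[-1])  # equal, ~4.97e7 Hz^(1/2) per unit Z
```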
4.
Table of nuclides
–
A table of nuclides or chart of nuclides is a two-dimensional graph in which one axis represents the number of neutrons and the other represents the number of protons in an atomic nucleus. Each point plotted on the graph represents a nuclide of a real or hypothetical chemical element. This system of ordering nuclides can offer a greater insight into the characteristics of isotopes than the better-known periodic table. A chart or table of nuclides maps the nuclear, or radioactive, behaviour of nuclides; it contrasts with the periodic table, which only maps their chemical behavior. Nuclide charts organize isotopes along the X axis by their number of neutrons and along the Y axis by their number of protons. This representation was first published by Giorgio Fea in 1935, and expanded by Emilio Segrè in 1945 and G. T. Seaborg. In 1958, Walter Seelmann-Eggebert and Gerda Pfennig published the first edition of the Karlsruhe Nuclide Chart; its 7th edition was made available in 2006. It has become a standard tool of the nuclear community. The isotope table below shows isotopes of the elements, including all with a half-life of at least one day. They are arranged with increasing atomic numbers from left to right. Cell colour denotes the half-life of each isotope, with each colour representing a certain range of half-life. If a border is present, its colour indicates the half-life of the nuclide's nuclear isomer state; some nuclides have multiple nuclear isomers, and the table notes the longest one. Dotted borders mean the nuclide has a nuclear isomer whose half-life falls in the same colour range as its normal counterpart. In graphical browsers, each isotope also has a tool tip indicating its half-life. Isotopes are nuclides with the same number of protons but differing numbers of neutrons; that is, they have the same atomic number and are therefore the same chemical element. Isotopes neighbor each other vertically, e.g.
carbon-12, carbon-13, carbon-14, or oxygen-15, oxygen-16. Isotones are nuclides with the same number of neutrons but differing numbers of protons. Example: carbon-14, nitrogen-15, oxygen-16 in the table above. Isobars are nuclides with the same number of nucleons, i.e. the same mass number. Isobars neighbor each other diagonally from lower-left to upper-right. Example: carbon-14, nitrogen-14, oxygen-14 in the sample table above. Isodiaphers are nuclides with the same difference between the numbers of neutrons and protons (the same neutron excess).
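The four relationships above (isotope, isotone, isobar, isodiapher) can be summarized as a small classifier over nuclides given as (Z, A) pairs, checked against the examples from the text:

```python
# Classify the relationship between two nuclides given as (Z, A) pairs:
# isotopes share Z, isotones share N = A - Z, isobars share A, and
# isodiaphers share the neutron excess N - Z.
def relationship(nuclide1, nuclide2):
    (z1, a1), (z2, a2) = nuclide1, nuclide2
    n1, n2 = a1 - z1, a2 - z2
    if z1 == z2:
        return "isotopes"
    if n1 == n2:
        return "isotones"
    if a1 == a2:
        return "isobars"
    if n1 - z1 == n2 - z2:
        return "isodiaphers"
    return "unrelated"

print(relationship((6, 12), (6, 14)))   # carbon-12 vs carbon-14 -> isotopes
print(relationship((6, 14), (7, 15)))   # carbon-14 vs nitrogen-15 -> isotones
print(relationship((6, 14), (7, 14)))   # carbon-14 vs nitrogen-14 -> isobars
```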
5.
Natural abundance
–
In physics, natural abundance refers to the abundance of isotopes of a chemical element as naturally found on a planet. The relative atomic mass (a weighted average) of these isotopes is the atomic weight listed for the element in the periodic table. The abundance of an isotope varies from planet to planet, and even from place to place on the Earth. As an example, uranium has three naturally occurring isotopes: 238U, 235U, and 234U. Their respective natural mole-fraction abundances are 99.2739–99.2752%, 0.7198–0.7202%, and 0.0050–0.0059%. For example, if 100,000 uranium atoms were analyzed, one would expect to find approximately 99,274 238U atoms, approximately 720 235U atoms, and very few 234U atoms. This is because 238U is much more stable than 235U or 234U. Precisely because the different uranium isotopes have different half-lives, the isotopic composition of uranium was different when the Earth was younger. As an example, 1.7 billion years ago the natural abundance of 235U was 3.1%, compared with today's 0.7%. It is now known from study of the Sun and primitive meteorites that the solar system was initially almost homogeneous in isotopic composition. Deviations from natural abundance on Earth are hence often measured in parts per thousand, because they are less than one percent. There is also evidence for injection of short-lived isotopes from a supernova explosion that may have triggered solar nebula collapse. The single exception to this homogeneity lies with the presolar grains found in primitive meteorites. These bypassed the homogenization, and often carry the signature of the specific nucleosynthesis processes in which their elements were made. In these materials, deviations from natural abundance are sometimes measured in factors of 100. The next table gives the isotope distributions for some elements.
Some elements, like phosphorus and fluorine, exist in nature as only a single isotope.
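The claim that 235U made up about 3.1% of uranium 1.7 billion years ago can be reproduced from today's abundances by running radioactive decay backwards. The half-life values below are commonly quoted figures assumed here for illustration; going back in time multiplies each isotope's amount by 2^(t / half-life):

```python
# Back-calculate the U-235 natural abundance 1.7 billion years ago from
# today's abundances, using assumed (commonly quoted) half-lives.
HALF_LIFE_U238 = 4.468e9    # years (assumed value)
HALF_LIFE_U235 = 7.04e8     # years (assumed value)

t = 1.7e9                   # years before present
n238 = 99.27 * 2 ** (t / HALF_LIFE_U238)   # undo 238U decay
n235 = 0.72 * 2 ** (t / HALF_LIFE_U235)    # undo 235U decay

abundance_235 = 100 * n235 / (n235 + n238)
print(f"U-235 abundance 1.7 Gyr ago: {abundance_235:.1f}%")  # roughly 3%
```

The result (just under 3%, with 234U neglected) is consistent with the 3.1% figure quoted above given the rounded inputs.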
6.
Atomic mass
–
The atomic mass is the mass of an atom. Its most common unit is the unified atomic mass unit, where 1 unified atomic mass unit is defined as 1⁄12 of the mass of a single carbon-12 atom at rest. When the atomic mass is divided by the unified atomic mass unit (or dalton) to form a pure numeric ratio, the result is called the relative isotopic mass. Thus, the atomic mass of a carbon-12 atom is 12 u (or 12 daltons), but the relative isotopic mass of a carbon-12 atom is simply 12. For atoms, the protons and neutrons of the nucleus account for almost all of the mass. In contrast to atomic weights, atomic mass figures refer to an individual isotopic species, and atoms of the same species are identical. Atomic mass figures are therefore commonly reported to many more significant figures than atomic weights. Standard atomic weight is related to atomic mass by the abundance-weighted averaging of the isotopes of each element. It is usually about the same value as the atomic mass of the most abundant isotope. The atomic mass of atoms, ions, or atomic nuclei is slightly less than the sum of the masses of their constituent protons, neutrons, and electrons, due to binding energy mass loss. Relative isotopic mass is similar to atomic mass and has exactly the same numerical value as atomic mass, whenever atomic mass is expressed in unified atomic mass units. The only difference in that case is that relative isotopic mass is a pure number with no units. This loss of units results from the use of a scaling ratio with respect to a carbon-12 standard. The relative isotopic mass, then, is the mass of a given isotope when this value is scaled by the mass of carbon-12. Equivalently, the relative isotopic mass of an isotope or nuclide is the mass of the isotope relative to 1/12 of the mass of a carbon-12 atom. For example, the relative isotopic mass of a carbon-12 atom is exactly 12. For comparison, the atomic mass of a carbon-12 atom is exactly 12 daltons or 12 unified atomic mass units. Alternately, the atomic mass of a carbon-12 atom may be expressed in any other mass units: for example, it is approximately 1.9926×10−26 kg. As in the case of atomic mass, no nuclides other than carbon-12 have exactly whole-number values of relative isotopic mass.
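The abundance-weighted averaging that links standard atomic weight to atomic mass can be sketched for chlorine. The isotopic masses and abundances below are commonly quoted values, assumed here for illustration:

```python
# Standard atomic weight as an abundance-weighted average of isotopic masses,
# illustrated for chlorine (mass in u, natural mole-fraction abundance).
isotopes_cl = [
    (34.96885, 0.7576),  # chlorine-35 (assumed values)
    (36.96590, 0.2424),  # chlorine-37 (assumed values)
]

atomic_weight = sum(mass * abundance for mass, abundance in isotopes_cl)
print(f"Cl standard atomic weight: {atomic_weight:.3f} u")  # ~35.45 u
```

Note how the result (about 35.45) lies between the two isotopic masses but is not close to either whole number, illustrating why standard atomic weights differ from individual atomic masses.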
7.
Unified atomic mass unit
–
The unified atomic mass unit (u) or dalton (Da) is a standard unit of mass that quantifies mass on an atomic or molecular scale. One unified atomic mass unit is approximately the mass of one nucleon and is equivalent to 1 g/mol. The CIPM has categorised it as a non-SI unit accepted for use with the SI. The amu without the "unified" prefix is technically an obsolete unit based on oxygen, which was replaced in 1961; however, many still use the term amu but now define it in the same way as u. In this sense, most uses of the terms atomic mass unit and amu today actually refer to the unified unit. For standardization, a specific atomic nucleus had to be chosen, because the average mass of a nucleon depends on the count of the nucleons in the atomic nucleus, due to the mass defect. This is also why the mass of a proton or neutron by itself is more than 1 u. The atomic mass unit is not the unit of mass in the atomic units system; that is rather the electron rest mass. The relative atomic mass scale has traditionally been defined relative to a reference standard, originally oxygen; that evaluation was made prior to the discovery of the existence of elemental isotopes, which occurred in 1912. The divergence of the resulting values could cause errors in computations: the chemistry amu, based on the relative atomic mass of natural oxygen, was about 1.000282 times as massive as the physics amu, based on pure isotopic 16O. For these and other reasons, the standard for both physics and chemistry was changed to carbon-12 in 1961. The choice of carbon-12 was made to minimise further divergence with prior literature. The new and current unit was referred to as the unified atomic mass unit and given a new symbol, u: 1 u = mu = 1⁄12 m(12C). The dalton is another name for the unified atomic mass unit. Despite this change, modern sources often use the old term amu but define it as u; therefore, in general, amu likely does not refer to the old oxygen-standard unit. The unified atomic mass unit and the dalton are different names for the same unit of measure.
As with other names such as watt and newton, dalton is not capitalized in English. In 2003 the Consultative Committee for Units, part of the CIPM, recommended a preference for the usage of the dalton over the atomic mass unit as it is shorter. In 2005, the International Union of Pure and Applied Physics endorsed the use of the dalton as an alternative to the atomic mass unit
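The statement above that 1 u is equivalent to 1 g/mol can be checked by multiplying the unit's value in kilograms by the Avogadro constant (both constants below are CODATA-style values):

```python
# Check that the unified atomic mass unit is (very nearly) 1 g/mol:
# multiply u in kilograms by the Avogadro constant.
U_IN_KG = 1.66053906660e-27   # unified atomic mass unit / dalton, kg
AVOGADRO = 6.02214076e23      # Avogadro constant, 1/mol

grams_per_mole = U_IN_KG * AVOGADRO * 1000  # kg/mol -> g/mol
print(f"1 u * N_A = {grams_per_mole:.9f} g/mol")  # ~1 g/mol
```

Under the 2019 SI redefinition the equality is no longer exact by definition, which is why the product differs from 1 only at the level of the measurement uncertainty.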
8.
Spin (physics)
–
In quantum mechanics and particle physics, spin is an intrinsic form of angular momentum carried by elementary particles, composite particles, and atomic nuclei. Spin is one of two types of angular momentum in quantum mechanics, the other being orbital angular momentum. In some ways, spin is like a vector quantity: it has a definite magnitude and a direction. All elementary particles of a given kind have the same magnitude of spin angular momentum, which is indicated by assigning the particle a spin quantum number. The SI unit of spin is the joule-second, just as with classical angular momentum. Very often, the spin quantum number is simply called "spin", leaving its meaning as the unitless spin quantum number to be inferred from context. When combined with the spin–statistics theorem, the spin of electrons results in the Pauli exclusion principle. Wolfgang Pauli was the first to propose the concept of spin. In 1925, Ralph Kronig, George Uhlenbeck, and Samuel Goudsmit at Leiden University suggested a physical interpretation of particles spinning around their own axis. The mathematical theory was worked out in depth by Pauli in 1927, and when Paul Dirac derived his relativistic quantum mechanics in 1928, electron spin was an essential part of it. As the name suggests, spin was originally conceived as the rotation of a particle around some axis. This picture is correct insofar as spin obeys the same mathematical laws as quantized angular momenta do. On the other hand, spin has some peculiar properties that distinguish it from orbital angular momenta: although the direction of its spin can be changed, a particle cannot be made to spin faster or slower. The spin of a charged particle is associated with a magnetic dipole moment with a g-factor differing from 1; this could only occur classically if the internal charge of the particle were distributed differently from its mass. The conventional definition of the spin quantum number, s, is s = n/2, where n can be any non-negative integer.
Hence the allowed values of s are 0, 1/2, 1, 3/2, 2, etc. The value of s for an elementary particle depends only on the type of particle, and cannot be altered in any known way. The spin angular momentum, S, of any physical system is quantized. The allowed values of S are S = ħ√(s(s+1)) = (h/4π)√(n(n+2)). In contrast, orbital angular momentum can only take on integer values of s, i.e. even-numbered values of n. Those particles with half-integer spins, such as 1/2, 3/2, 5/2, are known as fermions, while those particles with integer spins, such as 0, 1, 2, are known as bosons. The two families of particles obey different rules and broadly have different roles in the world around us. A key distinction between the two families is that fermions obey the Pauli exclusion principle: there cannot be two identical fermions simultaneously having the same quantum numbers.
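The quantization rule above, S = ħ√(s(s+1)) with s = n/2, and the fermion/boson split by odd or even n, can be sketched as:

```python
import math

# Spin angular momentum magnitude |S| = hbar * sqrt(s * (s + 1)), with
# s = n / 2 for non-negative integer n; half-integer s (odd n) -> fermion.
HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def spin_magnitude(n):
    """|S| for spin quantum number s = n/2."""
    s = n / 2
    return HBAR * math.sqrt(s * (s + 1))

def is_fermion(n):
    return n % 2 == 1  # half-integer spin

# Electron: s = 1/2 (n = 1) -> |S| = (sqrt(3)/2) * hbar, a fermion.
print(spin_magnitude(1) / HBAR)        # ~0.866
print(is_fermion(1), is_fermion(2))    # True False
```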
9.
Electronvolt
–
In physics, the electronvolt (symbol eV) is a unit of energy equal to approximately 1.6×10−19 joules. By definition, it is the amount of energy gained by the charge of a single electron moving across an electric potential difference of one volt. Thus it is 1 volt (1 joule per coulomb) multiplied by the elementary charge; therefore, one electronvolt is equal to 1.6021766208×10−19 J. The electronvolt is not an SI unit, and its definition is empirical: like the elementary charge on which it is based, it is not an independent quantity but is equal to 1 J/C × √(2hα/μ0c0). It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics. It is commonly used with the metric prefixes milli-, kilo-, mega-, and giga-. In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion electronvolts; it is equivalent to the GeV. By mass–energy equivalence, the electronvolt is also a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c2, where c is the speed of light in vacuum; it is also common to express mass simply in terms of eV as a unit of mass. The mass equivalent of 1 eV/c2 is 1 eV/c2 = (1.602×10−19 C)·(1 V)/(2.998×108 m/s)2 = 1.783×10−36 kg. For example, an electron and a positron each have a mass of 0.511 MeV/c2, and the proton has a mass of 0.938 GeV/c2. In general, the masses of all hadrons are of the order of 1 GeV/c2. The unified atomic mass unit (1 gram divided by Avogadro's number) is almost the mass of a hydrogen atom, which is mostly the mass of the proton. To convert to megaelectronvolts, use the formula 1 u = 931.4941 MeV/c2 = 0.9314941 GeV/c2. In high-energy physics, the electronvolt is often used as a unit of momentum. A potential difference of 1 volt causes an electron to gain an amount of energy of 1 eV. This gives rise to the usage of eV as a unit of momentum, for the energy supplied results in acceleration of the particle. The dimensions of momentum units are LMT−1; the dimensions of energy units are L2MT−2.
Then, dividing the units of energy (such as eV) by a fundamental constant that has units of velocity results in the required units of momentum. In the field of high-energy particle physics, the fundamental velocity unit is the speed of light in vacuum c. Thus, dividing energy in eV by the speed of light, one can describe the momentum of a particle in units of eV/c. The fundamental velocity constant c is often dropped from the units of momentum by way of defining units of length such that the value of c is unity.
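The mass–energy conversions described above can be checked by deriving the electron's 0.511 MeV/c2 rest mass from its mass in kilograms, via E = mc2 and the eV-to-joule conversion:

```python
# Express the electron rest mass in MeV/c^2 starting from kilograms,
# using E = m * c^2 and the eV-to-joule conversion.
M_ELECTRON = 9.1093837015e-31   # electron rest mass, kg
C_LIGHT = 2.99792458e8          # speed of light, m/s
EV_IN_J = 1.602176634e-19       # joules per eV

rest_energy_J = M_ELECTRON * C_LIGHT ** 2
rest_energy_MeV = rest_energy_J / EV_IN_J / 1e6
print(f"Electron rest mass: {rest_energy_MeV:.3f} MeV/c^2")  # ~0.511
```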
10.
Nuclear binding energy
–
Nuclear binding energy is the energy that would be required to disassemble the nucleus of an atom into its component parts. These component parts are neutrons and protons, which are collectively called nucleons. The term nuclear binding energy may also refer to the energy balance in processes in which the nucleus splits into fragments composed of more than one nucleon. New binding energy is available when light nuclei fuse or when heavy nuclei split; this energy may be made available as nuclear energy and can be used to produce electricity, as in nuclear power plants, or in a nuclear weapon. When a large nucleus splits into pieces, excess energy is emitted as photons and as the kinetic energy of the ejected fragments. Nuclear binding energies and forces are on the order of a million times greater than the electron binding energies of light atoms like hydrogen. Nuclear binding energy is explained by the basic principles involved in nuclear physics. Energy is consumed or liberated because of differences in the nuclear binding energy between the incoming and outgoing products of a nuclear transmutation. The best-known classes of nuclear transmutations are fission and fusion. Nuclear energy may be liberated by atomic fission, when atomic nuclei are broken apart into lighter nuclei. The energy from fission is used to generate electric power in hundreds of locations worldwide. Nuclear energy is also released during atomic fusion, when light nuclei like hydrogen are combined to form heavier nuclei such as helium. The Sun and other stars use nuclear fusion to generate thermal energy which is radiated from the surface. In any exothermic nuclear process, nuclear mass might ultimately be converted to thermal energy. In order to quantify the energy released or absorbed in any nuclear transmutation, one must know the nuclear binding energies of the nuclear components involved in the transmutation.
Electrons and nuclei are kept together by electrostatic attraction, but the force of electric attraction does not hold nuclei together, because all protons carry a positive charge and repel each other. Thus, electric forces do not hold nuclei together, because they act in the opposite direction. It has been established that binding neutrons to nuclei clearly requires a non-electrical attraction. Therefore, another force, called the nuclear force, holds the nucleons of nuclei together. This force is a residuum of the strong interaction, which binds quarks into nucleons at an even smaller level of distance. The nuclear force must be stronger than the electric repulsion at short distances, but unlike gravity or electrical forces, it is effective only at very short distances.
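Quantifying a binding energy from masses, as the previous paragraphs describe, can be sketched for helium-4. The atomic masses and the 931.494 MeV/u conversion below are commonly quoted values assumed for illustration; using atomic (rather than bare nuclear) masses lets the electron masses cancel, since two hydrogen-1 atoms supply helium's two electrons:

```python
# Binding energy of helium-4 from its mass defect: the difference between
# the separated constituents and the bound atom, times 931.494 MeV per u.
M_H1 = 1.007825    # hydrogen-1 atomic mass, u (assumed value)
M_N = 1.008665     # neutron mass, u (assumed value)
M_HE4 = 4.002602   # helium-4 atomic mass, u (assumed value)
U_TO_MEV = 931.494 # energy equivalent of 1 u, MeV

mass_defect_u = 2 * M_H1 + 2 * M_N - M_HE4
binding_energy_MeV = mass_defect_u * U_TO_MEV
print(f"He-4 binding energy: {binding_energy_MeV:.1f} MeV")  # ~28 MeV
```

At roughly 7 MeV per nucleon, this is about a million times the 13.6 eV electron binding energy of hydrogen, consistent with the scale comparison in the text.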
11.
Bohr model
–
After the cubic model, the plum-pudding model, the Saturnian model, and the Rutherford model came the Rutherford–Bohr model, or just Bohr model for short. The improvement on the Rutherford model is mostly a quantum physical interpretation of it. The model's key success lay in explaining the Rydberg formula for the spectral emission lines of atomic hydrogen. While the Rydberg formula had been known experimentally, it did not gain a theoretical underpinning until the Bohr model was introduced. The Bohr model not only explained the structure of the Rydberg formula; it remains a relatively primitive model of the hydrogen atom, compared to the valence shell atom model. A related model was proposed by Arthur Erich Haas in 1910. The quantum theory of the period between Planck's discovery of the quantum and the advent of a full quantum mechanics is often referred to as the old quantum theory. In the early 20th century, experiments by Ernest Rutherford established that atoms consisted of a diffuse cloud of negatively charged electrons surrounding a small, dense, positively charged nucleus. The laws of classical mechanics predict that the electron will release electromagnetic radiation while orbiting a nucleus. Because the electron would lose energy, it would rapidly spiral inwards, collapsing into the nucleus. This atom model is disastrous, because it predicts that all atoms are unstable. Also, as the electron spirals inward, the emission would rapidly increase in frequency as the orbit got smaller and faster; this would produce a continuous smear, in frequency, of emitted electromagnetic radiation. However, late 19th century experiments with electric discharges had shown that atoms will only emit light at certain discrete frequencies. To overcome this difficulty, Niels Bohr proposed, in 1913, what is now called the Bohr model of the atom. He suggested that electrons could only have certain classical motions: Electrons in atoms orbit the nucleus. The electrons can only orbit stably, without radiating, in certain orbits at a discrete set of distances from the nucleus.
These orbits are associated with definite energies and are also called energy shells or energy levels. In these orbits, the electron's acceleration does not result in radiation and energy loss. The Bohr model of an atom was based upon Planck's quantum theory of radiation: electrons can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting radiation with a frequency determined by the energy difference of the levels. The frequency of the radiation emitted at an orbit of period T is as it would be in classical mechanics: it is the reciprocal of the orbit period. The significance of the Bohr model is that the laws of classical mechanics apply to the motion of the electron about the nucleus only when restricted by a quantum rule; the integer n in that rule is called the principal quantum number, and ħ = h/2π.
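The energy levels the model predicts, En = −13.6 eV / n2, reproduce the Rydberg formula's wavelengths. A minimal sketch, using the hc ≈ 1239.84 eV·nm shortcut, computes the n = 3 → 2 (H-alpha) line:

```python
# Bohr-model energy levels E_n = -13.6 eV / n^2 and the photon wavelength
# for a transition between two levels, illustrated with H-alpha (n = 3 -> 2).
RYDBERG_EV = 13.6      # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84     # h * c expressed in eV * nm

def energy_level(n):
    return -RYDBERG_EV / n ** 2

def transition_wavelength_nm(n_upper, n_lower):
    delta_eV = energy_level(n_upper) - energy_level(n_lower)
    return HC_EV_NM / delta_eV

print(f"H-alpha wavelength: {transition_wavelength_nm(3, 2):.1f} nm")  # ~656 nm (red)
```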
12.
Atom
–
An atom is the smallest constituent unit of ordinary matter that has the properties of a chemical element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are very small; typical sizes are around 100 picometers. Atoms are small enough that attempting to predict their behavior using classical physics, as if they were billiard balls, gives noticeably incorrect predictions due to quantum effects. Through the development of physics, atomic models have incorporated quantum principles to better explain and predict this behavior. Every atom is composed of a nucleus and one or more electrons bound to the nucleus. The nucleus is made of one or more protons and typically a similar number of neutrons. Protons and neutrons are called nucleons; more than 99.94% of an atom's mass is in the nucleus. The protons have a positive electric charge, the electrons have a negative electric charge, and the neutrons have no electric charge. If the numbers of protons and electrons are equal, the atom is electrically neutral. If an atom has more or fewer electrons than protons, then it has an overall negative or positive charge, respectively, and it is called an ion. The electrons of an atom are attracted to the protons in the atomic nucleus by the electromagnetic force. The number of protons in the nucleus defines to what chemical element the atom belongs; the number of neutrons defines the isotope of the element. The number of electrons influences the magnetic properties of an atom. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules. The ability of atoms to associate and dissociate is responsible for most of the physical changes observed in nature. The idea that matter is made up of discrete units is a very old one, appearing in many ancient cultures such as Greece. The word "atom" was coined by ancient Greek philosophers. However, these ideas were founded in philosophical and theological reasoning rather than evidence and experimentation. As a result, their views on what atoms look like and how they behave were incorrect.
They also could not convince everybody, so atomism was but one of a number of competing theories on the nature of matter. It was not until the 19th century that the idea was embraced and refined by scientists. In the early 1800s, John Dalton used the concept of atoms to explain why elements always react in ratios of small whole numbers.
13.
Chemical element
–
A chemical element or element is a species of atoms having the same number of protons in their atomic nuclei. 118 elements have been identified, of which the first 94 occur naturally on Earth, with the remaining 24 being synthetic elements. There are 80 elements that have at least one stable isotope and 38 that have exclusively radioactive isotopes. Iron is the most abundant element (by mass) making up the Earth, while oxygen is the most common element in the Earth's crust. Chemical elements constitute all of the ordinary matter of the universe. The two lightest elements, hydrogen and helium, were formed in the Big Bang and are the most common elements in the universe. The next three elements (lithium, beryllium, and boron) were formed mostly by cosmic ray spallation, and are rarer than those that follow. Formation of elements with from 6 to 26 protons occurred and continues to occur in main sequence stars via stellar nucleosynthesis; the high abundance of oxygen, silicon, and iron on Earth reflects their common production in such stars. The term "element" is used for atoms with a given number of protons as well as for a pure chemical substance consisting of a single element. A single element can form multiple substances differing in their structure, called allotropes. When different elements are chemically combined, with the atoms held together by chemical bonds, they form chemical compounds. Only a minority of elements are found uncombined as relatively pure minerals; among the more common of such native elements are copper, silver, gold, carbon, and sulfur. All but a few of the most inert elements, such as the noble gases and noble metals, are usually found on Earth in chemically combined form. While about 32 of the elements occur on Earth in native uncombined forms, most of these occur as mixtures; for example, atmospheric air is primarily a mixture of nitrogen, oxygen, and argon. The history of the discovery and use of the elements began with primitive human societies that found native elements like carbon, sulfur, copper, and gold.
Later civilizations extracted elemental copper, tin, lead, and iron from their ores by smelting, using charcoal. Alchemists and chemists subsequently identified many more; almost all of the naturally occurring elements were known by 1900. Save for unstable radioactive elements with short half-lives, all of the elements are available industrially. Almost all other elements found in nature were made by various natural methods of nucleosynthesis. On Earth, small amounts of new atoms are produced in nucleogenic reactions, or in cosmogenic processes. Of the 94 naturally occurring elements, those with atomic numbers 1 through 82 each have at least one stable isotope (except for technetium, element 43, and promethium, element 61, which have no stable isotopes). Isotopes considered stable are those for which no radioactive decay has yet been observed. Elements with atomic numbers 83 through 94 are unstable to the point that radioactive decay of all isotopes can be detected. The very heaviest elements undergo radioactive decay with half-lives so short that they are not found in nature and must be synthesized.
14.
Electric charge
–
Electric charge is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. There are two types of electric charge: positive and negative. Like charges repel and unlike charges attract; an absence of net charge is referred to as neutral. An object is negatively charged if it has an excess of electrons, and is otherwise positively charged or uncharged. The SI derived unit of electric charge is the coulomb (C), although in electrical engineering it is also common to use the ampere-hour (Ah). The symbol Q often denotes charge. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that don't require consideration of quantum effects. The electric charge is a conserved property of some subatomic particles. Electrically charged matter is influenced by, and produces, electromagnetic fields; the interaction between a moving charge and an electromagnetic field is the source of the electromagnetic force, which is one of the four fundamental forces. Charge is quantized: the elementary charge, e, is approximately 1.602×10−19 coulombs. The proton has a charge of +e, and the electron has a charge of −e. The study of charged particles, and how their interactions are mediated by photons, is called quantum electrodynamics. Charge is the property of forms of matter that exhibit electrostatic attraction or repulsion in the presence of other matter, and is a property of many subatomic particles. The charges of free-standing particles are integer multiples of the elementary charge e. Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge; Robert Millikan's oil drop experiment demonstrated this fact directly, and measured the elementary charge. By convention, the charge of an electron is −1, while that of a proton is +1. Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract.
The charge of an antiparticle equals that of the corresponding particle, but with opposite sign. Quarks have fractional charges of either −1/3 or +2/3, but free-standing quarks have never been observed. The electric charge of an object is the sum of the electric charges of the particles that make it up. An ion is an atom that has lost one or more electrons, giving it a net positive charge, or that has gained one or more electrons, giving it a net negative charge.
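The quantization rule above (every free-standing charge is an integer multiple of e) can be sketched numerically. The constant is the SI-defined value of the elementary charge; the helper name net_charge is ours, for illustration only.

```python
# Charge quantization: the net charge of any object is an integer
# multiple of the elementary charge e.
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact SI value)

def net_charge(protons: int, electrons: int) -> float:
    """Net charge in coulombs of a collection of protons and electrons."""
    return (protons - electrons) * E_CHARGE

neutral = net_charge(6, 6)            # a neutral carbon atom: 0.0 C
cation = net_charge(11, 10)           # Na+ ion: exactly +e
electrons_per_coulomb = 1 / E_CHARGE  # ~6.24e18 electrons carry 1 C
```

The last line shows why the coulomb is such a large practical unit: a single coulomb corresponds to roughly 6.24 quintillion elementary charges.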
15.
Proton
–
A proton is a subatomic particle, symbol p or p+, with a positive electric charge of +1e elementary charge and a mass slightly less than that of a neutron. Protons and neutrons, each with masses of approximately one atomic mass unit, are collectively referred to as nucleons. One or more protons are present in the nucleus of every atom; the number of protons in the nucleus is the defining property of an element, and is referred to as the atomic number. Since each element has a unique number of protons, each element has its own unique atomic number. The word proton is Greek for first, and this name was given to the hydrogen nucleus by Ernest Rutherford in 1920. In previous years, Rutherford had discovered that the hydrogen nucleus could be extracted from the nuclei of nitrogen by atomic collisions. Protons were therefore a candidate to be a fundamental particle, and hence a building block of nitrogen and all other heavier atomic nuclei. In the modern Standard Model of particle physics, protons are hadrons, like neutrons. Although protons were originally considered fundamental or elementary particles, they are now known to be composed of three valence quarks: two up quarks and one down quark. The rest masses of the quarks contribute only about 1% of a proton's mass; the remainder is due to quantum chromodynamics binding energy, which includes the kinetic energy of the quarks and the energy of the gluon fields that bind the quarks together. At sufficiently low temperatures, free protons will bind to electrons; however, the character of such bound protons does not change, and they remain protons. A fast proton moving through matter will slow by interactions with electrons and nuclei until it is captured by the electron cloud of an atom. The result is a protonated atom, which is a chemical compound of hydrogen. In vacuum, when free electrons are present, a sufficiently slow proton may pick up a single free electron, becoming a neutral hydrogen atom. Such free hydrogen atoms tend to react chemically with many other types of atoms at sufficiently low energies.
When free hydrogen atoms react with each other, they form neutral hydrogen molecules. Protons are spin-½ fermions and are composed of three quarks, making them baryons. Protons have an approximately exponentially decaying positive charge distribution with a mean square radius of about 0.8 fm. Protons and neutrons are both nucleons, which may be bound together by the nuclear force to form atomic nuclei. The nucleus of the most common isotope of the hydrogen atom is a lone proton.
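The quark composition stated above fixes the proton's charge: two up quarks at +2/3 e and one down quark at −1/3 e sum to exactly +1 e. A minimal sketch of that arithmetic, using exact fractions (the function name hadron_charge is ours):

```python
from fractions import Fraction

# Standard Model electric charges of the up and down quarks, in units of e.
QUARK_CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def hadron_charge(quarks: str) -> Fraction:
    """Total electric charge (in units of e) of a set of valence quarks."""
    return sum(QUARK_CHARGE[q] for q in quarks)

proton_q = hadron_charge("uud")   # 2/3 + 2/3 - 1/3 = +1
neutron_q = hadron_charge("udd")  # 2/3 - 1/3 - 1/3 = 0
```

Using Fraction rather than floats keeps the thirds exact, so the proton comes out at precisely +1 and the neutron at precisely 0.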
16.
Electron
–
The electron is a subatomic particle, symbol e− or β−, with a negative elementary electric charge. Electrons belong to the first generation of the lepton particle family; the electron has a mass that is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant. As it is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. Since an electron has charge, it has a surrounding electric field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law, and electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields, and special telescopes can detect electron plasma in outer space. Electrons are involved in applications such as electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers, and gaseous ionization detectors. Interactions involving electrons with other particles are of interest in fields such as chemistry. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons outside allows the composition of the two known as atoms; ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of electrons between two or more atoms is the main cause of chemical bonding. In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms.
Irish physicist George Johnstone Stoney named this charge electron in 1891. Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electric charge of the opposite sign. When an electron collides with a positron, both particles can be totally annihilated, producing gamma ray photons. The ancient Greeks noticed that amber attracted small objects when rubbed with fur; along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electricus; both electric and electricity are derived from the Latin ēlectrum, which came from the Greek word for amber, ἤλεκτρον.
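The annihilation mentioned above releases the combined rest energy of the electron-positron pair, E = 2·m_e·c², which appears as gamma-ray photons. A small numerical sketch using CODATA constants (variable names are ours):

```python
# Rest energy released in electron-positron annihilation: E = 2 * m_e * c^2.
M_E = 9.1093837015e-31  # electron mass, kg
C = 299792458.0         # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

rest_energy_ev = M_E * C**2 / EV  # rest energy of one electron, ~511 keV
pair_energy_ev = 2 * rest_energy_ev  # total released by the pair, ~1.022 MeV
```

The ~511 keV figure is why positron annihilation produces a characteristic pair of 511 keV gamma photons (in the simplest two-photon channel), a signature exploited, for example, in PET imaging.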
17.
Coulomb force
–
Coulomb's law, or Coulomb's inverse-square law, is a law of physics that describes the force between static electrically charged particles. The force of interaction between the charges is attractive if the charges have opposite signs and repulsive if like-signed. The law was first published in 1785 by French physicist Charles-Augustin de Coulomb and was essential to the development of the theory of electromagnetism. It is analogous to Isaac Newton's inverse-square law of universal gravitation; Coulomb's law can be used to derive Gauss's law, and vice versa. The law has been tested extensively, and all observations have upheld its principle. Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus was incorrect in believing this attraction was due to a magnetic effect; much later, William Gilbert coined the New Latin word electricus to refer to the property of attracting small objects after being rubbed, an association that gave rise to the English words electric and electricity. However, these early observers did not generalize or elaborate on the force law. In 1767, Joseph Priestley conjectured that the force between charges varied as the inverse square of the distance. In 1769, Scottish physicist John Robison announced that, according to his measurements, the force of repulsion between like charges varied approximately as the inverse square of the distance. In the early 1770s, the dependence of the force between charged bodies upon both distance and charge had already been discovered, but not published, by Henry Cavendish of England. Finally, in 1785, the French physicist Charles-Augustin de Coulomb published his first three reports of electricity and magnetism where he stated his law; this publication was essential to the development of the theory of electromagnetism. The torsion balance consists of a bar suspended from its middle by a thin fiber; the fiber acts as a very weak torsion spring. In Coulomb's experiment, the balance was an insulating rod with a metal-coated ball attached to one end.
The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through a certain angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls and derive his inverse-square proportionality law. The force acts along the straight line joining the two charges. If the two charges have the same sign, the electrostatic force between them is repulsive; if they have different signs, the force between them is attractive. Coulomb's law can also be stated as a mathematical expression. The vector form of the equation calculates the force F1 applied on q1 by q2; if r12 is used instead, then the effect on q2 can be found. It can also be obtained using Newton's third law: F2 = −F1.
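The scalar form of the law described above, F = k·|q1·q2|/r², is easy to evaluate numerically. As an illustrative sketch (constants from CODATA; the function name coulomb_force is ours), here is the force between the proton and electron at one Bohr radius:

```python
# Coulomb's inverse-square law: F = k * |q1 * q2| / r^2.
K = 8.9875517873681764e9  # Coulomb constant, N m^2 / C^2
E = 1.602176634e-19       # elementary charge, C
A0 = 5.29177210903e-11    # Bohr radius, m

def coulomb_force(q1: float, q2: float, r: float) -> float:
    """Magnitude of the electrostatic force between two point charges."""
    return K * abs(q1 * q2) / r**2

# Electron and proton one Bohr radius apart: ~8.2e-8 N, attractive
# (the charges have opposite signs).
f = coulomb_force(E, -E, A0)
```

Note that doubling the separation r reduces the force by a factor of four, the defining feature of an inverse-square law.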
18.
Abundance of the chemical elements
–
The abundance of a chemical element is a measure of the occurrence of the element relative to all other elements in a given environment. Abundance is measured in one of three ways: by mass-fraction, by mole-fraction, or by volume-fraction; most abundance values in this article are given as mass-fractions. For example, the abundance of oxygen in water can be measured in two ways: the mass fraction is about 89%, because that is the fraction of water's mass which is oxygen, while the mole-fraction is about 33.33%, because only 1 atom of every 3 in water is oxygen. The abundance of chemical elements in the universe is dominated by the large amounts of hydrogen and helium which were produced in the Big Bang. The remaining elements, making up only about 2% of the universe, were largely produced by supernovae. Lithium, beryllium and boron are rare because, although they are produced by nuclear fusion, they are then destroyed by other reactions in stars. The elements from carbon to iron are relatively more common in the universe because of the ease of making them in supernova nucleosynthesis. Elements of higher atomic number than iron become progressively more rare in the universe, because they increasingly absorb stellar energy in being produced. Elements with even atomic numbers are generally more common than their neighbors in the periodic table. The abundance of elements in the Sun and outer planets is similar to that in the universe. Due to solar heating, the elements of Earth and the inner rocky planets of the Solar System have undergone an additional depletion of the volatile elements hydrogen, helium, neon, and nitrogen. The crust, mantle, and core of the Earth show evidence of chemical segregation plus some sequestration by density: lighter silicates of aluminum are found in the crust, with more magnesium silicate in the mantle, while metallic iron and nickel compose the core.
The abundance of elements in specialized environments, such as atmospheres or oceans, can differ greatly from the cosmic pattern. The elements – that is, ordinary matter made of protons, neutrons, and electrons – are only a small part of the content of the Universe. Cosmological observations suggest that only 4.6% of the universe's energy comprises the visible baryonic matter that constitutes stars and planets; the rest is made up of dark energy and dark matter. Hydrogen is the most abundant element in the Universe and helium is second; however, after this, the rank of abundance does not continue to correspond to the atomic number (oxygen has abundance rank 3, but atomic number 8). All others are substantially less common. Heavier elements were mostly produced much later, inside of stars. Hydrogen and helium are estimated to make up roughly 74% and 24% of all baryonic matter in the universe respectively. Despite comprising only a small fraction of the universe, the remaining heavy elements can greatly influence astronomical phenomena.
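The 89% versus 33% figures for oxygen in water quoted above follow directly from the atomic weights and the H2O formula. A quick check (atomic weights are standard values; variable names are ours):

```python
# Mass-fraction vs mole-fraction of oxygen in water (H2O).
M_H = 1.008   # standard atomic weight of hydrogen
M_O = 15.999  # standard atomic weight of oxygen

# Mass-fraction: oxygen's share of the molecule's mass, ~0.888.
mass_fraction_o = M_O / (2 * M_H + M_O)

# Mole-fraction: 1 oxygen atom out of 3 atoms total, ~0.333.
mole_fraction_o = 1 / 3
```

The same element can therefore rank very differently depending on which abundance measure a table uses, which is why the measurement convention is always stated.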
19.
Baryon
–
A baryon is a composite subatomic particle made up of three quarks. Baryons and mesons belong to the hadron family of particles, which are the quark-based particles. The name baryon comes from the Greek word for heavy, because, at the time of their naming, most known elementary particles had lower masses than the baryons. As quark-based particles, baryons participate in the strong interaction, whereas leptons, which are not quark-based, do not. The most familiar baryons are the protons and neutrons that make up most of the mass of the visible matter in the universe. Each baryon has a corresponding antiparticle in which quarks are replaced by their corresponding antiquarks; for example, a proton is made of two up quarks and one down quark, and its corresponding antiparticle, the antiproton, is made of two up antiquarks and one down antiquark. Baryons are fermions and obey the Pauli exclusion principle; this is in contrast to the bosons, which do not obey the exclusion principle. Baryons, along with mesons, are hadrons, meaning they are particles composed of quarks. Quarks have baryon numbers of B = 1/3 and antiquarks have baryon numbers of B = −1/3. The term baryon usually refers to triquarks—baryons made of three quarks. Other, exotic baryons have been proposed, such as pentaquarks, made of four quarks and one antiquark. The particle physics community as a whole did not view their existence as likely in 2006; however, in July 2015, the LHCb experiment observed two resonances consistent with pentaquark states in the Λ0b → J/ψK−p decay, with a combined statistical significance of 15σ. In theory, heptaquarks, nonaquarks, etc. could also exist. Nearly all matter that may be encountered or experienced in everyday life is baryonic matter, which includes atoms of any sort and provides them with the property of mass. Non-baryonic matter, as implied by the name, is any sort of matter that is not composed primarily of baryons; this might include neutrinos and free electrons, dark matter candidates such as supersymmetric particles and axions, and black holes.
The very existence of baryons is also a significant issue in cosmology; the process by which baryons came to outnumber their antiparticles is called baryogenesis. Some grand unified theories of particle physics also predict that a single proton can decay, changing the baryon number by one; however, this has not yet been observed. The excess of baryons over antibaryons in the present universe is thought to be due to non-conservation of baryon number in the very early universe. The concept of isospin was first proposed by Werner Heisenberg in 1932 to explain the similarities between protons and neutrons under the strong interaction: although they had different electric charges, their masses were so similar that physicists believed they were two states of the same particle. The different electric charges were explained as being the result of some unknown excitation similar to spin, and this unknown excitation was later dubbed isospin by Eugene Wigner in 1937. This belief lasted until Murray Gell-Mann proposed the quark model in 1964. The success of the isospin model is now understood to be the result of the similar masses of the u and d quarks.
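The baryon numbers quoted above (B = 1/3 per quark, B = −1/3 per antiquark) give B = 1 for baryons, B = −1 for antibaryons, and B = 0 for mesons. A small sketch of that bookkeeping (the trailing-tilde notation for antiquarks and the function name are our own conventions):

```python
from fractions import Fraction

def baryon_number(constituents: list) -> Fraction:
    """B = (number of quarks - number of antiquarks) / 3.

    Antiquarks are written here with a trailing '~', an ad-hoc notation.
    """
    quarks = sum(1 for c in constituents if not c.endswith("~"))
    antiquarks = sum(1 for c in constituents if c.endswith("~"))
    return Fraction(quarks - antiquarks, 3)

proton = baryon_number(["u", "u", "d"])                  # triquark: B = 1
antiproton = baryon_number(["u~", "u~", "d~"])           # B = -1
pion = baryon_number(["u", "d~"])                        # meson: B = 0
pentaquark = baryon_number(["u", "u", "d", "c", "c~"])   # still B = 1
```

The pentaquark line shows why four quarks plus one antiquark still counts as a baryon: the extra quark-antiquark pair cancels in B.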
20.
Diatomic
–
Diatomic molecules are molecules composed of only two atoms, of the same or different chemical elements. The prefix di- is of Greek origin, meaning two. If a diatomic molecule consists of two atoms of the same element, such as hydrogen or oxygen, then it is said to be homonuclear; otherwise, if it consists of two different atoms, such as carbon monoxide or nitric oxide, the molecule is said to be heteronuclear. The only chemical elements that form stable homonuclear diatomic molecules at standard temperature and pressure (STP) are the gases hydrogen, nitrogen, oxygen, fluorine, and chlorine. The noble gases are also gases at STP, but they are monatomic. The homonuclear diatomic gases and noble gases together are called elemental gases or molecular gases. At slightly elevated temperatures, the halogens bromine and iodine also form diatomic gases. All halogens have been observed as diatomic molecules, except for astatine. Other elements form diatomic molecules when evaporated, but these diatomic species repolymerize when cooled: heating elemental phosphorus gives diphosphorus, P2, and dilithium is known in the gas phase. Ditungsten and dimolybdenum form with sextuple bonds in the gas phase. The bond in a homonuclear diatomic molecule is non-polar. All other diatomic molecules are chemical compounds of two different elements; many elements can combine to form heteronuclear diatomic molecules, depending on temperature and pressure. Common examples include carbon monoxide and nitric oxide. Hundreds of diatomic molecules have been identified both in the environment of the Earth and in the laboratory. About 99% of the Earth's atmosphere is composed of two species of diatomic molecules, nitrogen and oxygen. The natural abundance of hydrogen in the Earth's atmosphere is only of the order of parts per million, but the interstellar medium is dominated by hydrogen atoms.
John Dalton's original atomic hypothesis assumed that all elements were monatomic; for example, Dalton assumed water's formula to be HO, giving the atomic weight of oxygen as eight times that of hydrogen, instead of the modern value of about 16. As a consequence, confusion existed regarding atomic weights and molecular formulas for about half a century. At the 1860 Karlsruhe Congress on atomic weights, Cannizzaro resurrected Avogadro's ideas and used them to produce a consistent table of atomic weights, which mostly agree with modern values. These weights were an important prerequisite for the discovery of the periodic law by Dmitri Mendeleev. Diatomic molecules are normally in their lowest or ground electronic state, which conventionally is also known as the X state. When a gas of diatomic molecules is bombarded by energetic electrons, some of the molecules may be excited to higher electronic states; such excitation can also occur when the gas absorbs light or other electromagnetic radiation. The excited states are unstable and naturally relax back to the ground state. Over various short time scales after the excitation, transitions occur from higher to lower electronic states and ultimately to the ground state, and in each transition a photon is emitted.
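Dalton's error described above is pure arithmetic: water is roughly eight parts oxygen to one part hydrogen by mass, so the inferred atomic weight of oxygen scales with how many hydrogen atoms you assume per oxygen. A minimal sketch (the measured ratio and function name are illustrative):

```python
# Why Dalton's HO formula halved oxygen's atomic weight: the mass ratio of
# oxygen to hydrogen in water is fixed by experiment (~7.94:1), but the
# atomic weight inferred for O depends on the assumed formula.
MASS_RATIO_O_TO_H = 7.94  # grams of oxygen per gram of hydrogen in water

def inferred_oxygen_weight(h_atoms_per_o: int, h_weight: float = 1.0) -> float:
    """Atomic weight of oxygen implied by an assumed HxO formula."""
    return MASS_RATIO_O_TO_H * h_atoms_per_o * h_weight

dalton = inferred_oxygen_weight(1)  # HO assumption  -> ~8
modern = inferred_oxygen_weight(2)  # H2O (correct)  -> ~16
```

The same experimental data thus supported two different atomic weights, which is exactly the half-century of confusion the paragraph describes; Cannizzaro's molecular formulas resolved the ambiguity.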
21.
History of quantum mechanics
–
The history of quantum mechanics is a fundamental part of the history of modern physics. In the years that followed its founding, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding. Ludwig Boltzmann suggested in 1877 that the energy levels of a physical system, such as a molecule, could be discrete; he was also a founder of the Austrian Mathematical Society, together with the mathematicians Gustav von Escherich and Emil Müller. The earlier Wien approximation may be derived from Planck's law by assuming hν ≫ kT. In 1900, Max Planck postulated that energy is radiated and absorbed in discrete quanta; this statement has been called the most revolutionary sentence written by a physicist of the twentieth century, and these energy quanta later came to be called photons, a term introduced by Gilbert N. Lewis in 1926. In 1913, Bohr explained the spectral lines of the hydrogen atom, again by using quantization, in his paper of July 1913, On the Constitution of Atoms and Molecules. These theories are collectively known as the old quantum theory; the phrase quantum physics was first used in Johnston's Planck's Universe in Light of Modern Physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics. This theory applied to a single particle and was derived from special relativity theory. Schrödinger subsequently showed that the wave-mechanical and matrix-mechanical approaches were equivalent. Heisenberg formulated his uncertainty principle in 1927, and the Copenhagen interpretation started to take shape at about the same time. Starting around 1927, Paul Dirac began the process of unifying quantum mechanics with special relativity by proposing the Dirac equation for the electron; the Dirac equation achieves the relativistic description of the wavefunction of an electron that Schrödinger failed to obtain. It predicts electron spin and led Dirac to predict the existence of the positron. He also pioneered the use of operator theory, including the influential bra–ket notation, as described in his famous 1930 textbook.
These, like other works from the founding period, still stand. The field of quantum chemistry was pioneered by physicists Walter Heitler and Fritz London. Beginning in 1927, researchers made attempts at applying quantum mechanics to fields instead of single particles; early workers in this area include P. A. M. Dirac, W. Pauli, V. Weisskopf, and P. Jordan, and this area of research culminated in the formulation of quantum electrodynamics by R. P. Feynman, F. Dyson, J. Schwinger, and S. I. Tomonaga during the 1940s. Quantum electrodynamics describes a quantum theory of electrons, positrons, and the electromagnetic field. The theory of quantum chromodynamics was formulated beginning in the early 1960s; the theory as we know it today was formulated by Politzer, Gross, and Wilczek in 1975. Founding experiments for quantum mechanics include Thomas Young's double-slit experiment demonstrating the wave nature of light, J. J. Thomson's cathode ray tube experiments, and the study of black-body radiation between 1850 and 1900, which could not be explained without quantum concepts.
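The claim above that the Wien approximation follows from Planck's law in the hν ≫ kT limit can be checked numerically: Planck's spectral radiance is (2hν³/c²)/(exp(hν/kT) − 1), and Wien's is (2hν³/c²)·exp(−hν/kT), so their ratio is 1 − exp(−hν/kT). A sketch with standard constants (the chosen temperature is arbitrary):

```python
import math

# Comparing Planck's law with the Wien approximation at high frequency.
H = 6.62607015e-34  # Planck constant, J s
KB = 1.380649e-23   # Boltzmann constant, J/K
C = 299792458.0     # speed of light, m/s

def planck(nu: float, t: float) -> float:
    """Planck spectral radiance per unit frequency."""
    return (2 * H * nu**3 / C**2) / (math.exp(H * nu / (KB * t)) - 1)

def wien(nu: float, t: float) -> float:
    """Wien approximation: drops the -1 in the denominator."""
    return (2 * H * nu**3 / C**2) * math.exp(-H * nu / (KB * t))

t = 5000.0                 # arbitrary temperature, K
nu = 20 * KB * t / H       # frequency chosen so that h*nu = 20 kT
ratio = wien(nu, t) / planck(nu, t)  # 1 - exp(-20): agreement to ~2e-9
```

At low frequencies (hν ≪ kT) the two expressions diverge, which is exactly the regime where Planck's quantum hypothesis was needed.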
22.
Isotopes of hydrogen
–
Hydrogen has three naturally occurring isotopes, sometimes denoted 1H, 2H, and 3H. The first two of these are stable, while 3H has a half-life of 12.32 years. All heavier isotopes are synthetic and have half-lives of less than one zeptosecond; of these, 5H is the most stable and 7H is the least. Hydrogen is the only element whose isotopes have different names that remain in common use today. The 2H isotope is usually called deuterium, while the 3H isotope is usually called tritium; the symbols D and T are sometimes used for deuterium and tritium. The IUPAC states in the 2005 Red Book that while the use of D and T is common, it is not preferred. The ordinary isotope of hydrogen, with no neutrons, is sometimes called protium. 1H is the most common hydrogen isotope, with an abundance of more than 99.98%. Because the nucleus of this isotope consists of only a single proton, it is given the descriptive formal name protium. The proton has never been observed to decay, and hydrogen-1 is therefore considered a stable isotope. Some grand unified theories proposed in the 1970s predict that proton decay can occur with a half-life between 10^31 and 10^36 years; if this prediction is found to be true, then hydrogen-1 is only observationally stable. To date, however, experiments have shown that the minimum proton half-life is in excess of 10^34 years. 2H, the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in its nucleus; the nucleus of deuterium is called a deuteron. Deuterium comprises 0.0026–0.0184% of hydrogen samples on Earth, with the lower number tending to be found in samples of hydrogen gas and the higher enrichment typical of ocean water. Deuterium on Earth has been enriched with respect to its concentration in the Big Bang. Deuterium is not radioactive, and does not represent a significant toxicity hazard. Water enriched in molecules that include deuterium instead of protium is called heavy water.
Deuterium and its compounds are used as a non-radioactive label in chemical experiments. Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion. 3H is known as tritium and contains one proton and two neutrons in its nucleus. It is radioactive, decaying into helium-3 through β− decay with a half-life of 12.32 years. Small amounts of tritium occur naturally because of the interaction of cosmic rays with atmospheric gases; tritium has also been released during nuclear weapons tests. It is used in thermonuclear weapons and as a tracer in isotope geochemistry.
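The 12.32-year half-life quoted above determines how quickly a tritium sample disappears: the fraction remaining after t years is 2^(−t/12.32). A minimal sketch (function name is ours):

```python
# Radioactive decay of tritium, using the 12.32-year half-life stated above.
T_HALF = 12.32  # half-life of tritium, years

def fraction_remaining(t_years: float) -> float:
    """Fraction of an initial tritium sample left after t_years."""
    return 2 ** (-t_years / T_HALF)

after_one_half_life = fraction_remaining(12.32)  # exactly 0.5
after_50_years = fraction_remaining(50.0)        # only ~6% remains
```

This steady loss is why tritium reservoirs in thermonuclear weapons must be periodically replenished, and why tritium works as a dating tracer in geochemistry on decade timescales.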
23.
Isotope
–
Isotopes are variants of a particular chemical element which differ in neutron number. All isotopes of a given element have the same number of protons in each atom. The number of protons within the atom's nucleus is called the atomic number and is equal to the number of electrons in the neutral atom. Each atomic number identifies a specific element, but not the isotope: an atom of a given element may have a wide range in its number of neutrons. The number of nucleons in the nucleus is the atom's mass number, and each isotope of a given element has a different mass number. For example, carbon-12, carbon-13 and carbon-14 are three isotopes of the element carbon with mass numbers 12, 13 and 14 respectively. The atomic number of carbon is 6, which means that every carbon atom has 6 protons. Nuclide refers to a nucleus rather than to an atom: identical nuclei belong to one nuclide; for example, each nucleus of the carbon-13 nuclide is composed of 6 protons and 7 neutrons. The nuclide concept emphasizes nuclear properties over chemical properties, whereas the isotope concept emphasizes chemical over nuclear. The neutron number has large effects on nuclear properties, but its effect on chemical properties is negligible for most elements. Because isotope is the older term, it is better known than nuclide. A nuclide or isotope is specified by the name of the particular element followed by a hyphen and the mass number. When a chemical symbol is used, e.g. C for carbon, standard notation is to indicate the mass number with a superscript at the upper left of the chemical symbol. Because the atomic number is given by the element symbol, it is common to state only the mass number in the superscript. The letter m is sometimes appended after the mass number to indicate a nuclear isomer. For example, 14C is a radioactive form of carbon, whereas 12C and 13C are stable isotopes. There are about 339 naturally occurring nuclides on Earth, of which 286 are primordial nuclides. Primordial nuclides include 32 nuclides with very long half-lives and 254 that are formally considered as stable nuclides, because they have not been observed to decay.
In most cases, if an element has stable isotopes, those isotopes predominate in the elemental abundance found on Earth and in the Solar System. Theory predicts that many apparently stable isotopes/nuclides are actually radioactive, with extremely long half-lives; of the 254 nuclides never observed to decay, only 90 are theoretically stable to all known forms of decay.
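The notation just described makes the neutron number trivially computable: since the element symbol fixes Z and the written number is the mass number A, the neutron count is N = A − Z. A small sketch (the hyphenated input format and the tiny symbol table are our own conveniences):

```python
# Neutron number from nuclide notation: N = A - Z.
ATOMIC_NUMBER = {"H": 1, "C": 6, "O": 8, "Pb": 82}  # small illustrative table

def neutron_count(nuclide: str) -> int:
    """Neutrons in a nuclide written as 'Symbol-A', e.g. 'C-14'."""
    symbol, mass = nuclide.split("-")
    return int(mass) - ATOMIC_NUMBER[symbol]

n_c12 = neutron_count("C-12")      # 12 - 6 = 6 neutrons
n_c14 = neutron_count("C-14")      # 14 - 6 = 8 neutrons
n_pb208 = neutron_count("Pb-208")  # 208 - 82 = 126 neutrons
```

This is why the superscript alone suffices in symbols like 14C: the atomic number is redundant once the element symbol is given.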
24.
Neutron
–
The neutron is a subatomic particle, symbol n or n0, with no net electric charge and a mass slightly larger than that of a proton. Protons and neutrons, each with a mass of approximately one atomic mass unit, constitute the nucleus of an atom, and their properties and interactions are described by nuclear physics. The nucleus consists of Z protons, where Z is called the atomic number, and N neutrons, where N is the neutron number. The atomic number defines the chemical properties of the atom, and the neutron number determines the isotope or nuclide. The terms isotope and nuclide are often used synonymously, but they refer to chemical and nuclear properties, respectively. The atomic mass number, symbol A, equals Z+N. For example, carbon has atomic number 6, and its abundant carbon-12 isotope has 6 neutrons. Some elements occur in nature with only one stable isotope, such as fluorine; other elements occur with many stable isotopes, such as tin with ten stable isotopes. Even though it is not a chemical element, the neutron is included in the table of nuclides. Within the nucleus, protons and neutrons are bound together through the nuclear force. Neutrons are produced copiously in nuclear fission and fusion, and they are a contributor to the nucleosynthesis of chemical elements within stars through fission, fusion and neutron capture processes. The neutron is essential to the production of nuclear power. In the decade after the neutron was discovered in 1932, neutrons were used to induce many different types of nuclear transmutations. These events and findings led to the first self-sustaining nuclear reactor. Free neutrons, or individual neutrons free of the nucleus, are effectively a form of ionizing radiation, and as such are a biological hazard, depending upon dose. A small natural background flux of free neutrons exists on Earth, caused by cosmic ray showers.
Dedicated neutron sources like neutron generators, research reactors and spallation sources produce free neutrons for use in irradiation and neutron scattering experiments. Neutrons and protons are both nucleons, which are attracted and bound together by the nuclear force to form atomic nuclei. The nucleus of the most common isotope of the hydrogen atom is a lone proton, while the nuclei of the heavy hydrogen isotopes deuterium and tritium contain one proton bound to one and two neutrons, respectively. All other types of nuclei are composed of two or more protons and various numbers of neutrons. The most common nuclide of the chemical element lead, 208Pb, has 82 protons and 126 neutrons. The free neutron has a mass of about 1.675×10^−27 kg. The neutron has a mean square radius of about 0.8×10^−15 m, or 0.8 fm, and it is a spin-½ fermion.
25.
Proton decay
–
In particle physics, proton decay is a hypothetical form of radioactive decay in which the proton decays into lighter subatomic particles, such as a neutral pion and a positron. There is currently no experimental evidence that proton decay occurs. According to the Standard Model, protons, a type of baryon, are stable because baryon number is conserved; therefore, protons will not decay into other particles on their own, because they are the lightest baryon. Positron emission, a form of radioactive decay in which a proton becomes a neutron, is not proton decay, since the conversion takes place inside a nucleus. To date, all attempts to observe new phenomena predicted by GUTs have failed. Quantum gravity may also provide a venue for proton decay at magnitudes or lifetimes well beyond the GUT-scale decay range above, as may extra dimensions in supersymmetry. There are other methods of baryon number violation than proton decay, including interactions with changes of baryon and/or lepton number other than 1; these include B and/or L violations of 2, 3, or other numbers. Such examples include neutron oscillations and the electroweak sphaleron anomaly at high energies and temperatures, which can convert baryons into antileptons or vice versa. One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universe: the universe, as a whole, seems to have a nonzero positive baryon number density; that is, matter exists. This has led to a number of proposed mechanisms for symmetry breaking that favour the creation of normal matter under certain conditions. Experiments reported in 2010 at Fermilab, however, seem to show that this imbalance is much greater than previously assumed: in an experiment involving a series of particle collisions, the amount of generated matter was approximately 1% larger than the amount of generated antimatter. The reason for this discrepancy is not yet known. Grand unified theories that break baryon number symmetry yield estimates predicting that a large volume of material will occasionally exhibit a spontaneous proton decay.
Proton decay is one of the key predictions of the various grand unified theories proposed in the 1970s, another major one being the existence of magnetic monopoles. Both concepts have been the focus of major experimental physics efforts since the early 1980s; to date, all attempts to observe these events have failed. The best results come from the Super-Kamiokande water Cherenkov radiation detector in Japan; an upgraded version, Hyper-Kamiokande, will probably have sensitivity 5–10 times better than Super-Kamiokande. Despite the lack of observational evidence for proton decay, some grand unification theories, such as the SU(5) Georgi–Glashow model and SO(10), along with their supersymmetric variants, require it. Additional decay modes are available in such theories, both directly and when catalyzed via interaction with GUT-predicted magnetic monopoles. Though this process has not been observed experimentally, it is within the realm of experimental testability for future planned very large-scale detectors on the megaton scale. As further experiments and calculations were performed in the 1990s, it became clear that the proton half-life could not lie below 10^32 years. Many books from that period refer to this figure for the possible decay time for baryonic matter.
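A back-of-the-envelope calculation shows why water Cherenkov detectors like Super-Kamiokande can probe such enormous half-lives: a large tank contains so many protons that even a 10^34-year half-life would yield detectable decays. This sketch uses an illustrative round detector mass of 50 kilotons and ignores detection efficiency; all numbers here are for illustration, not a description of the actual experiment.

```python
import math

# Expected proton decays per year in a water detector, if the proton
# half-life were 1e34 years. Each H2O molecule holds 10 protons
# (2 from hydrogen, 8 in the oxygen nucleus).
AVOGADRO = 6.02214076e23
WATER_MOLAR_MASS_G = 18.015
PROTONS_PER_WATER = 10

def expected_decays_per_year(mass_kg: float, half_life_years: float) -> float:
    """Decays/year for a tank of water, assuming exponential decay."""
    molecules = mass_kg * 1000 / WATER_MOLAR_MASS_G * AVOGADRO
    protons = molecules * PROTONS_PER_WATER
    decay_rate = math.log(2) / half_life_years  # per proton per year
    return protons * decay_rate

rate = expected_decays_per_year(50e6, 1e34)  # ~1 decay per year in 50 kt
```

Seeing no decays over many years of watching ~10^34 protons is precisely how the experimental lower bounds on the half-life are pushed upward.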
26.
Deuterium
–
Deuterium is one of two stable isotopes of hydrogen. The nucleus of deuterium, called a deuteron, contains one proton and one neutron, whereas the far more common hydrogen isotope, protium, has no neutrons in its nucleus. Deuterium has a natural abundance in Earth's oceans of about one atom in 6420 of hydrogen; thus deuterium accounts for approximately 0.0156% of all the naturally occurring hydrogen in the oceans. The abundance of deuterium changes slightly from one kind of natural water to another. The deuterium isotope's name is formed from the Greek deuteros, meaning second. Deuterium was discovered and named in 1931 by Harold Urey; when the neutron was discovered in 1932, this made the nuclear structure of deuterium obvious. Soon after deuterium's discovery, Urey and others produced samples of water in which the deuterium content had been highly concentrated. Deuterium is destroyed in the interiors of stars faster than it is produced, and other natural processes are thought to produce only an insignificant amount of deuterium. Nearly all deuterium found in nature was produced in the Big Bang 13.8 billion years ago, and the primordial ratio of deuterium to hydrogen has been essentially preserved in the gas giant planets, such as Jupiter. However, other astronomical bodies are found to have different ratios of deuterium to hydrogen-1. This is thought to be a result of natural isotope separation processes that occur from solar heating of ices in comets; like the water-cycle in Earth's weather, such heating processes may enrich deuterium with respect to protium. The analysis of deuterium-to-protium ratios in comets has found results very similar to the mean ratio in Earth's oceans, which reinforces theories that much of Earth's ocean water is of cometary origin. However, the deuterium-to-protium ratio of the comet 67P/Churyumov-Gerasimenko, as measured by the Rosetta space probe, is about three times that of Earth's water; this figure is the highest yet measured in a comet. Deuterium-to-protium ratios thus continue to be an active topic of research in both astronomy and climatology.
Deuterium is frequently represented by the chemical symbol D; since it is an isotope of hydrogen with mass number 2, it is also represented by 2H. IUPAC allows both D and 2H, although 2H is preferred; the distinct chemical symbol is used for convenience because of the isotope's common use in various scientific processes. In quantum mechanics, the energy levels of electrons in atoms depend on the reduced mass of the electron–nucleus system. For hydrogen, the correction factor is about 1837/1836, or 1.000545, and for deuterium it is smaller; the energies of spectroscopic lines for deuterium and light hydrogen therefore differ by the ratio of these two corrections, which is 1.000272. The wavelengths of all deuterium spectroscopic lines are accordingly shorter than the corresponding lines of light hydrogen
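The 1.000272 figure can be reproduced from the reduced masses. The sketch below assumes round nuclear masses in units of the electron mass (proton ≈ 1836.15, deuteron ≈ 3670.48); these values are illustrative, not authoritative.

```python
m_e = 1.0
m_p = 1836.15   # proton mass in electron masses (approximate)
m_d = 3670.48   # deuteron mass in electron masses (approximate)

def reduced_mass(m_nucleus):
    # mu = m_e * M / (m_e + M): the effective mass in the two-body problem
    return m_e * m_nucleus / (m_e + m_nucleus)

mu_H = reduced_mass(m_p)
mu_D = reduced_mass(m_d)
ratio = mu_D / mu_H   # hydrogen-like energy levels scale with the reduced mass
print(round(ratio, 6))  # ~1.000272, the deuterium/hydrogen line shift factor
```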
27.
Nuclear reactor
–
This article is a subarticle of Nuclear power. A nuclear reactor, formerly known as an atomic pile, is a device used to initiate and control a sustained nuclear chain reaction. Nuclear reactors are used at power plants for electricity generation and in nuclear marine propulsion. Heat from nuclear fission is passed to a working fluid, which runs through steam turbines; these either drive a ship's propellers or turn electrical generators. Nuclear-generated steam can in principle be used for industrial process heat or for district heating. Some reactors are used to produce isotopes for medical and industrial use; some are run only for research. As of April 2014, the IAEA reports there are 435 nuclear power reactors in operation. When a large fissile atomic nucleus such as uranium-235 or plutonium-239 absorbs a neutron, it may undergo nuclear fission. The heavy nucleus splits into two or more lighter nuclei, releasing kinetic energy, gamma radiation, and free neutrons. A portion of these neutrons may later be absorbed by other fissile atoms and trigger further fission events, which release more neutrons. This is known as a nuclear chain reaction. To control such a chain reaction, neutron poisons and neutron moderators can change the portion of neutrons that will go on to cause more fission. Nuclear reactors generally have automatic and manual systems to shut the fission reaction down if monitoring detects unsafe conditions. Commonly used moderators include regular (light) water, solid graphite, and heavy water; some experimental types of reactor have used beryllium, and hydrocarbons have been suggested as another possibility. The reactor core generates heat in a number of ways: the kinetic energy of fission products is converted to thermal energy when these nuclei collide with nearby atoms; the reactor absorbs some of the gamma rays produced during fission; and heat is produced by the radioactive decay of fission products and of materials that have been activated by neutron absorption. This decay heat source will remain for some time even after the reactor is shut down. 
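The control of a chain reaction is often summarized by an effective multiplication factor k, the average number of neutrons from one fission that cause another fission. A minimal sketch (the function name and values are illustrative, not from any reactor-physics library):

```python
def neutron_population(k, n0, generations):
    """Neutron count after each generation of a simple chain-reaction model.

    k > 1: supercritical (growth), k = 1: critical (steady),
    k < 1: subcritical (dying out). Moderators and poisons adjust k.
    """
    pops = [n0]
    for _ in range(generations):
        pops.append(pops[-1] * k)  # each generation multiplies by k
    return pops

print(neutron_population(1.05, 1000, 3))  # slowly supercritical
print(neutron_population(0.90, 1000, 3))  # dying out
```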
A kilogram of uranium-235 converted via nuclear processes releases approximately three million times more energy than a kilogram of coal burned conventionally. A nuclear reactor coolant — usually water but sometimes a gas, a liquid metal, or molten salt — is circulated past the reactor core to absorb the heat that it generates
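The "three million times" figure can be checked to order of magnitude, assuming round values of roughly 200 MeV per U-235 fission and roughly 24 MJ per kilogram of coal (both assumed figures for illustration):

```python
# Order-of-magnitude check only; all constants are round assumed values.
MEV_TO_J = 1.602e-13
ENERGY_PER_FISSION = 200 * MEV_TO_J     # ~200 MeV released per U-235 fission
AVOGADRO = 6.022e23
ATOMS_PER_KG_U235 = AVOGADRO / 0.235    # molar mass of U-235 ~ 235 g/mol

energy_u235 = ATOMS_PER_KG_U235 * ENERGY_PER_FISSION  # J per kg, ~8e13
energy_coal = 24e6                                    # ~24 MJ per kg of coal
ratio = energy_u235 / energy_coal

print(f"U-235: {energy_u235:.2e} J/kg, ratio to coal: {ratio:.1e}")
```

The ratio comes out a few million, consistent with the figure in the text.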
28.
Tritium
–
Tritium is a radioactive isotope of hydrogen. The nucleus of tritium contains one proton and two neutrons, whereas the nucleus of protium contains one proton and no neutrons. Naturally occurring tritium is extremely rare on Earth, where trace amounts are formed by the interaction of the atmosphere with cosmic rays. It can be produced by irradiating lithium metal or lithium-bearing ceramic pebbles in a nuclear reactor. The name of this isotope is derived from the Greek word τρίτος, meaning "third". Tritium has several different experimentally determined values of its half-life, of about 12.32 years. It decays into helium-3 by beta decay, as in this nuclear equation, and releases 18.6 keV of energy in the process. The electron's kinetic energy varies, with an average of 5.7 keV; beta particles from tritium can penetrate only about 6.0 mm of air, and they are incapable of passing through the dead outermost layer of human skin. The unusually low energy released in tritium beta decay makes the decay appropriate for absolute neutrino mass measurements in the laboratory. The low energy of tritium's radiation makes it difficult to detect tritium-labeled compounds except by using liquid scintillation counting. Tritium is produced in nuclear reactors by neutron activation of lithium-6. This is possible with neutrons of any energy, and is an exothermic reaction yielding 4.8 MeV; in comparison, the fusion of deuterium with tritium releases about 17.6 MeV of energy. High-energy neutrons can also produce tritium from lithium-7 in an endothermic reaction; this was discovered when the 1954 Castle Bravo nuclear test produced an unexpectedly high yield. High-energy neutrons irradiating boron-10 will also occasionally produce tritium, though a more common result of boron-10 neutron capture is 7Li. The reactions requiring high neutron energies are not attractive production methods for peaceful applications. Tritium is also produced in heavy water-moderated reactors whenever a deuterium nucleus captures a neutron. 
This reaction has a quite small absorption cross section, which is part of what makes heavy water a good neutron moderator. Even so, cleaning tritium from the moderator may be desirable after several years to reduce the risk of its escaping to the environment. Ontario Power Generation's Tritium Removal Facility processes up to 2,500 tonnes of heavy water a year. Deuterium's absorption cross section for thermal neutrons is about 0.52 millibarns, whereas that of oxygen-16 is about 0.19 millibarns and that of oxygen-17 is about 240 millibarns. Tritium is also a product of the nuclear fission of uranium-235 and plutonium-239. The release or recovery of tritium needs to be considered in the operation of nuclear reactors, especially in the reprocessing of nuclear fuels
29.
Half-life
–
Half-life is the time required for a quantity to reduce to half its initial value. The term is used in nuclear physics to describe how quickly unstable atoms undergo radioactive decay, and more generally to characterize any type of exponential (or approximately exponential) decay. For example, the medical sciences refer to the biological half-life of drugs. The converse of half-life is doubling time. The original term, half-life period, dating to Ernest Rutherford's discovery of the principle in 1907, was shortened to half-life in the early 1950s. Rutherford applied the principle of a radioactive element's half-life to studies of age determination of rocks by measuring the decay period of radium to lead-206. Half-life is constant over the lifetime of an exponentially decaying quantity; the accompanying table shows the reduction of a quantity as a function of the number of half-lives elapsed. A half-life usually describes the decay of discrete entities, such as radioactive atoms. In that case, it does not work to use the definition that states "half-life is the time required for exactly half of the entities to decay". For example, if there are only 3 radioactive atoms with a half-life of one second, there will not be 1.5 atoms left after one second. Instead, the half-life is defined in terms of probability: half-life is the time required for exactly half of the entities to decay on average. In other words, the probability of a radioactive atom decaying within its half-life is 50%. For example, the accompanying image is a simulation of many identical atoms undergoing radioactive decay. Note that after one half-life there are not exactly one-half of the atoms remaining, only approximately. Nevertheless, when there are many identical atoms decaying, the law of large numbers suggests that it is a good approximation to say that half of the atoms remain after one half-life. There are various simple exercises that demonstrate probabilistic decay, for example involving flipping coins or running a computer program. 
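The coin-flip demonstration mentioned above can be sketched directly: each half-life, every surviving atom independently "decays" with probability 1/2. The function name and counts are illustrative.

```python
import random

def decay_simulation(n_atoms, half_lives, seed=0):
    """Simulate probabilistic decay: per half-life, each surviving atom
    decays with probability 1/2 (a coin flip). Returns survivor counts."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    counts = [n_atoms]
    for _ in range(half_lives):
        survivors = sum(1 for _ in range(counts[-1]) if rng.random() >= 0.5)
        counts.append(survivors)
    return counts

# Roughly 10000, 5000, 2500, 1250, 625 — approximately halving each step,
# but not exactly, illustrating the law-of-large-numbers point above.
print(decay_simulation(10000, 4))
```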
The three parameters t1/2 (half-life), τ (mean lifetime), and λ (decay constant) are all related by t1/2 = τ ln 2 = (ln 2)/λ, so that the decaying amount N(t) = N0 e^(−λt) approaches zero as t approaches infinity, as expected. Some quantities decay by two exponential-decay processes simultaneously. There is a half-life describing any exponential-decay process; for example, the current flowing through an RC circuit or RL circuit decays with a half-life of RC ln 2 or (ln 2)L/R, respectively. For this example, the term half time might be used instead of half-life. In a first-order chemical reaction, the half-life of the reactant is (ln 2)/λ, where λ is the reaction rate constant. In radioactive decay, the half-life is the length of time after which there is a 50% chance that an atom will have undergone nuclear decay
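The parameter relations above, and the RC-circuit example, translate directly into code (the component values are illustrative):

```python
import math

def from_half_life(t_half):
    """Return (mean lifetime tau, decay constant lam) from a half-life,
    using t_half = tau * ln 2 = ln 2 / lam."""
    tau = t_half / math.log(2)
    lam = math.log(2) / t_half
    return tau, lam

# RC circuit: current decays exponentially with time constant tau = R*C,
# so its half-life is R*C*ln(2).
R, C = 1000.0, 1e-6                      # 1 kOhm, 1 uF (illustrative values)
half_life_rc = R * C * math.log(2)
print(half_life_rc)                      # on the order of 7e-4 seconds
```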
30.
Particle accelerator
–
A particle accelerator is a machine that uses electromagnetic fields to propel charged particles to nearly the speed of light and to contain them in well-defined beams. Large accelerators are used in particle physics as colliders, or as synchrotron light sources for the study of condensed matter physics. There are currently more than 30,000 accelerators in operation around the world. There are two basic classes of accelerators: electrostatic and electrodynamic. Electrostatic accelerators use static electric fields to accelerate particles. The most common types are the Cockcroft–Walton generator and the Van de Graaff generator; a small-scale example of this class is the cathode ray tube in an ordinary old television set. The achievable kinetic energy for particles in these devices is determined by the accelerating voltage, which is limited by electrical breakdown. Electrodynamic or electromagnetic accelerators, on the other hand, use changing electromagnetic fields to accelerate particles; since in these types the particles can pass through the same accelerating field multiple times, the output energy is not limited by the strength of a single accelerating field. This class, which was first developed in the 1920s, is the basis for most modern large-scale accelerators. Because colliders can give evidence of the structure of the subatomic world, accelerators were commonly referred to as atom smashers in the 20th century; despite the fact that most accelerators actually propel subatomic particles, the term persists in popular usage when referring to particle accelerators in general. Beams of high-energy particles are useful for both fundamental and applied research in the sciences, and also in many technical and industrial fields unrelated to fundamental research. It has been estimated that there are approximately 30,000 accelerators worldwide. 
The bar graph shows the breakdown of the number of industrial accelerators according to their applications. For the most basic inquiries into the dynamics and structure of matter, space, and time, physicists seek the simplest kinds of interactions at the highest possible energies. These typically entail particle energies of many GeV, and the interactions of the simplest kinds of particles: leptons and quarks for the matter, photons and gluons for the field quanta. The largest and highest-energy particle accelerator used for elementary particle physics is the Large Hadron Collider at CERN, operating since 2009. Investigations of nuclear matter often involve collisions of heavy nuclei – of atoms like iron or gold – at energies of several GeV per nucleon; the largest such particle accelerator is the Relativistic Heavy Ion Collider at Brookhaven National Laboratory. An example of this type of machine is LANSCE at Los Alamos. A large number of synchrotron light sources exist worldwide; the ESRF in Grenoble, France, for example, has been used to extract detailed 3-dimensional images of insects trapped in amber. Thus there is a demand for electron accelerators of moderate energy. Everyday examples of particle accelerators are the cathode ray tubes found in television sets; these low-energy accelerators use a single pair of electrodes with a DC voltage of a few thousand volts between them
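For the electrostatic class, the energy bookkeeping is simple: a particle of charge q crossing a potential difference V gains kinetic energy qV, which is why such machines are rated by their voltage. A minimal sketch (the CRT voltage is an illustrative round number):

```python
E_CHARGE = 1.602e-19  # coulombs per elementary charge

def kinetic_energy_ev(charge_multiple, voltage):
    """Kinetic energy in eV gained by a particle of charge q = n*e
    accelerated through a potential difference of `voltage` volts."""
    return charge_multiple * voltage

# An electron in an old CRT television, accelerated through ~20 kV (assumed):
ev = kinetic_energy_ev(1, 20_000)
joules = ev * E_CHARGE
print(ev, "eV =", joules, "J")
```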
31.
Hydron (chemistry)
–
Unlike other ions, the hydron consists only of a bare atomic nucleus. The free hydron is too reactive to occur as such in many liquids, even though it is sometimes pictured that way by students of chemistry; a free hydron would react with a molecule of the liquid to form a more complicated cation. Examples are the hydronium ion in water-based acids, and H2F+, the unstable cation of fluoroantimonic acid. For this reason, in such liquids, including liquid acids, hydrons diffuse by contact from one complex cation to another. The hydrated form of the hydrogen cation, the hydronium ion H3O+, is a key object of Arrhenius's definition of acid. Other hydrated forms exist as well, such as the Zundel cation H5O2+, which is formed from a proton and two water molecules. The hydron itself is crucial in the more general Brønsted–Lowry acid–base theory, which extends the concept of acid–base chemistry beyond aqueous solutions. The negatively charged counterpart of the hydron is the hydride anion, H−. The proton, having the symbol p or 1H+, is the +1 ion of protium, 1H. The deuteron, having the symbol 2H+ or D+, is the +1 ion of deuterium, 2H or D. The triton, having the symbol 3H+ or T+, is the +1 ion of tritium, 3H or T. Other isotopes of hydrogen are too unstable to be relevant in chemistry. The name proton refers to isotopically pure 1H+; on the other hand, referring to the hydron as simply "hydrogen ion" is not recommended because hydrogen anions also exist. The term hydron was defined by IUPAC in 1988. Traditionally, the term proton was and is used in place of hydron; the latter term is generally used only in contexts where comparisons between the various isotopes of hydrogen are important. Otherwise, referring to hydrons as protons is still considered acceptable: the transfer of H+ in an acid–base reaction is usually referred to as proton transfer, and acids and bases are referred to as proton donors and acceptors correspondingly. This is reasonable because, although 99.9844% of natural hydrogen nuclei are protons, the remainder are deuterons. 
See also: Deprotonation, Superacid, Dihydrogen cation, Trihydrogen cation, Hydrogen ion cluster
32.
Hydrochloric acid
–
Hydrochloric acid is a corrosive, strong mineral acid with many industrial uses. It is a colorless, highly pungent solution of hydrogen chloride in water. Free hydrochloric acid was first formally described in the 16th century by Libavius; later, it was used by chemists such as Glauber, Priestley, and Davy in their scientific research. It has numerous applications, including household cleaning, the production of gelatin and other food additives, and descaling. About 20 million tonnes of hydrochloric acid are produced worldwide annually. It is also found naturally in gastric acid. Hydrochloric acid was known to European alchemists as spirits of salt or acidum salis. Both names are still used, especially in other languages, such as German (Salzsäure), Dutch (zoutzuur), Swedish (saltsyra), Turkish (tuz ruhu), Polish (kwas solny), and Chinese. Gaseous HCl was called marine acid air; the old name muriatic acid has the same origin, and this name is still sometimes used. The name hydrochloric acid was coined by the French chemist Joseph Louis Gay-Lussac in 1814. Aqua regia, a mixture consisting of hydrochloric and nitric acids, prepared by dissolving sal ammoniac in nitric acid, was described in the works of Pseudo-Geber, a 13th-century European alchemist; other references suggest that the first mention of aqua regia is in Byzantine manuscripts dating to the end of the 13th century. Free hydrochloric acid was first formally described in the 16th century by Libavius, who prepared it by heating salt in clay crucibles. Joseph Priestley of Leeds, England prepared pure hydrogen chloride in 1772. During the Industrial Revolution in Europe, demand for alkaline substances increased; a new industrial process developed by Nicolas Leblanc of Issoudun, France enabled cheap large-scale production of sodium carbonate. In this Leblanc process, common salt is converted to soda ash using sulfuric acid, limestone, and coal, releasing hydrogen chloride as a by-product. 
Until the British Alkali Act 1863 and similar legislation in other countries, the excess hydrogen chloride was often vented into the air. After the passage of the act, soda ash producers were obliged to absorb the waste gas in water, producing hydrochloric acid on an industrial scale. In the 20th century, the Leblanc process was effectively replaced by the Solvay process, which does not yield hydrochloric acid as a by-product. Since hydrochloric acid was by then already settled as an important chemical in numerous applications, other production methods remained in use. After the year 2000, hydrochloric acid has mostly been made by absorbing by-product hydrogen chloride from industrial organic compound production. Hydrochloric acid is the salt of the hydronium ion, H3O+, and chloride, and it is usually prepared by treating HCl with water: HCl + H2O ⟶ H3O+ + Cl−. Hydrochloric acid can therefore be used to prepare salts called chlorides. Hydrochloric acid is a strong acid, since it is completely dissociated in water
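Because hydrochloric acid dissociates completely, the hydronium concentration of a dilute solution equals the acid concentration, which makes the pH a one-line calculation. A minimal sketch (the concentrations are illustrative):

```python
import math

def ph_strong_acid(conc_mol_per_l):
    """pH of a dilute strong monoprotic acid such as HCl, assuming complete
    dissociation (valid when conc is well above water's own ~1e-7 M)."""
    return -math.log10(conc_mol_per_l)

print(ph_strong_acid(0.1))    # ~1.0
print(ph_strong_acid(0.001))  # ~3.0
```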
33.
Hydronium ion
–
In chemistry, hydronium is the common name for the aqueous cation H3O+, the type of oxonium ion produced by protonation of water. It is the positive ion present when an Arrhenius acid is dissolved in water. It is the amount of hydronium ions relative to hydroxide ions that determines a solution's pH: at 25 °C, pure water has a pH of 7, and a pH value less than 7 indicates an acidic solution. According to IUPAC nomenclature of organic chemistry, the hydronium ion should be referred to as oxonium; hydroxonium may also be used unambiguously to identify it. A draft IUPAC proposal also recommends the use of oxonium and oxidanium in organic and inorganic chemistry contexts, respectively. An oxonium ion is any ion with a trivalent oxygen cation; for example, a protonated hydroxyl group is an oxonium ion, but not a hydronium ion. Since O+ and N have the same number of electrons, H3O+ is isoelectronic with ammonia. As shown in the images above, H3O+ has a trigonal pyramidal molecular geometry with the oxygen atom at its apex. The H–O–H bond angle is approximately 113°, and the center of mass is close to the oxygen atom. Because the base of the pyramid is made up of three identical hydrogen atoms, the H3O+ molecule's symmetric top configuration is such that it belongs to the C3v point group. Because of this symmetry and the fact that it has a dipole moment, the transition dipole lies along the c-axis; since the negative charge is localized near the oxygen atom, the dipole moment points toward the apex, perpendicular to the base plane. Hydronium is the cation that forms from water in the presence of hydrogen ions. These hydrons do not exist in a free state: they are extremely reactive and are immediately solvated by water. An acidic solute is generally the source of these hydrons; however, hydronium ions also occur in pure water, and this special case of water reacting with water to produce hydronium ions is commonly known as the self-ionization of water. 
The resulting hydronium ions are few and short-lived. pH is a measure of the relative activity of hydronium and hydroxide ions in aqueous solutions. In acidic solutions, hydronium is the more active of the two, its excess proton being readily available for reaction with basic species. The hydronium ion is very acidic: at 25 °C, its pKa is 0, and it is the most acidic species that can exist in water; any stronger acid will ionize and protonate a water molecule to form hydronium. pH was originally conceived to be a measure of the hydrogen ion concentration of an aqueous solution, but we now know that all such free protons quickly react with water to form hydronium
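The self-ionization mentioned above fixes the hydronium level of pure water: with the ion product Kw = 1.0 × 10⁻¹⁴ at 25 °C and equal hydronium and hydroxide concentrations, each must be the square root of Kw, giving pH 7.

```python
import math

KW_25C = 1.0e-14  # ion product of water at 25 degrees C

# In pure water, [H3O+] = [OH-], so each equals sqrt(Kw).
h3o = math.sqrt(KW_25C)
ph = -math.log10(h3o)
print(h3o, ph)  # ~1e-7 mol/L and pH ~7, the neutral point at 25 degrees C
```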
34.
Oxygen
–
Oxygen is a chemical element with symbol O and atomic number 8. It is a member of the chalcogen group on the periodic table and is a highly reactive nonmetal. By mass, oxygen is the third-most abundant element in the universe, after hydrogen and helium. At standard temperature and pressure, two atoms of the element bind to form dioxygen, a colorless and odorless diatomic gas with the formula O2. This is an important part of the atmosphere, and diatomic oxygen gas constitutes 20.8% of the Earth's atmosphere; additionally, as oxides, the element makes up almost half of the Earth's crust. Most of the mass of living organisms is oxygen, as a component of water. Oxygen is continuously replenished by photosynthesis, which uses the energy of sunlight to produce oxygen from water and carbon dioxide; indeed, oxygen is too reactive to remain a free element in air without being continuously replenished by the photosynthetic action of living organisms. Another form of oxygen, ozone, strongly absorbs ultraviolet (UVB) radiation, but ozone is a pollutant near the surface, where it is a by-product of smog. At low Earth orbit altitudes, sufficient atomic oxygen is present to cause corrosion of spacecraft. The name oxygen was coined in 1777 by Antoine Lavoisier, whose experiments with oxygen helped to discredit the then-popular phlogiston theory of combustion and corrosion. One of the first known experiments on the relationship between combustion and air was conducted by the 2nd-century BCE Greek writer on mechanics, Philo of Byzantium. In his work Pneumatica, Philo observed that inverting a vessel over a burning candle and surrounding the vessel's neck with water resulted in some water rising into the neck; Philo incorrectly surmised that parts of the air in the vessel were converted into the classical element fire and thus were able to escape through pores in the glass. 
Many centuries later, Leonardo da Vinci built on Philo's work by observing that a portion of air is consumed during combustion and respiration. Oxygen was discovered by the Polish alchemist Sendivogius, who considered it the philosopher's stone. In the late 17th century, Robert Boyle proved that air is necessary for combustion. English chemist John Mayow refined this work by showing that fire requires only a part of air that he called spiritus nitroaereus. From this he surmised that nitroaereus is consumed in both respiration and combustion. Mayow observed that antimony increased in weight when heated, and inferred that the nitroaereus must have combined with it. Accounts of these and other experiments and ideas were published in 1668 in his work Tractatus duo, in the tract De respiratione. Robert Hooke, Ole Borch, Mikhail Lomonosov, and Pierre Bayen all produced oxygen in experiments in the 17th and the 18th century, but none of them recognized it as a chemical element. This may have been in part due to the prevalence of the philosophy of combustion and corrosion called the phlogiston theory, which was then the favored explanation of those processes. Established in 1667 by the German alchemist J. J. Becher, the phlogiston theory held that combustible materials were made of two parts: one part, called phlogiston, was given off when the substance containing it was burned, while the dephlogisticated part was thought to be its true form, or calx. The fact that a substance like wood actually gains overall weight in burning was hidden by the buoyancy of the gaseous combustion products
35.
Interstellar medium
–
In astronomy, the interstellar medium (ISM) is the matter that exists in the space between the star systems in a galaxy. This matter includes gas in ionic, atomic, and molecular form, as well as dust and cosmic rays; it fills interstellar space and blends smoothly into the surrounding intergalactic space. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field. The interstellar medium is composed of multiple phases, distinguished by whether matter is ionic, atomic, or molecular, and by its temperature and density. It is composed primarily of hydrogen, followed by helium, with trace amounts of carbon, oxygen, and nitrogen. The thermal pressures of these phases are in rough equilibrium with one another; magnetic fields and turbulent motions also provide pressure in the ISM. In all phases, the interstellar medium is extremely tenuous by terrestrial standards: in cool, dense regions of the ISM, matter is primarily in molecular form, while in hot, diffuse regions, matter is primarily ionized and the number density may be as low as 10^−4 ions per cm3. Compare this with a number density of roughly 10^19 molecules per cm3 for air at sea level. By mass, 99% of the ISM is gas in any form, and 1% is dust. Of the gas in the ISM, by number 91% of atoms are hydrogen and 9% are helium, with 0.1% being atoms of elements heavier than hydrogen or helium; by mass this amounts to 70% hydrogen, 28% helium, and 1.5% heavier elements. The hydrogen and helium are primarily a result of primordial nucleosynthesis. The ISM plays a crucial role in astrophysics precisely because of its intermediate role between stellar and galactic scales: stars form within the densest regions of the ISM, molecular clouds, and replenish the ISM with matter and energy through planetary nebulae, stellar winds, and supernovae. 
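The number fractions and mass fractions quoted above are consistent with each other, as a quick check shows: weighting 91% hydrogen and 9% helium by approximate atomic masses of 1 u and 4 u gives roughly 72% and 28% by mass, which reduces to the quoted ~70% hydrogen once the 1.5% of heavier elements is included.

```python
n_h, n_he = 0.91, 0.09   # number fractions of H and He atoms in the gas
m_h, m_he = 1.0, 4.0     # approximate atomic masses in u (heavier elements ignored)

total = n_h * m_h + n_he * m_he
frac_h = n_h * m_h / total
frac_he = n_he * m_he / total
print(f"H: {frac_h:.0%}  He: {frac_he:.0%}")  # ~72% / ~28% ignoring metals
```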
This interplay between stars and the ISM helps determine the rate at which a galaxy depletes its gaseous content. Voyager 1 reached the ISM on August 25, 2012, making it the first artificial object from Earth to do so; interstellar plasma and dust will be studied until the mission's end in 2025. Table 1 shows a breakdown of the properties of the components of the ISM of the Milky Way. Field, Goldsmith & Habing (1969) put forward the static two-phase equilibrium model to explain the observed properties of the ISM. Their modeled ISM consisted of a cold dense phase, consisting of clouds of neutral and molecular hydrogen, and a warm intercloud phase, consisting of rarefied neutral and ionized gas. McKee & Ostriker (1977) added a third phase that represented the very hot gas which had been shock-heated by supernovae. These phases correspond to the temperatures where heating and cooling can reach a stable equilibrium. Their paper formed the basis for further study over the past three decades
36.
Solar wind
–
The solar wind is a stream of charged particles released from the upper atmosphere of the Sun. This plasma consists of electrons, protons and alpha particles with thermal energies between 1.5 and 10 keV; embedded within the plasma is the interplanetary magnetic field. The solar wind varies in density, temperature and speed over time. Its particles can escape the Sun's gravity because of their high energy, resulting from the high temperature of the corona, which in turn is a result of the coronal magnetic field. At a distance of more than a few solar radii from the Sun, the solar wind is supersonic; the flow is no longer supersonic beyond the termination shock. The Voyager 2 spacecraft crossed the shock more than five times between 30 August and 10 December 2007; Voyager 2 crossed the shock about a billion kilometers closer to the Sun than the 13.5 billion kilometer distance where Voyager 1 came upon the termination shock. The spacecraft moved outward through the shock into the heliosheath. Other related phenomena include the aurora and the tails of comets, which always point away from the Sun. The existence of particles flowing outward from the Sun to the Earth was first suggested by British astronomer Richard C. Carrington. In 1859, Carrington and Richard Hodgson independently made the first observation of what would later be called a solar flare. George FitzGerald later suggested that matter was being regularly accelerated away from the Sun and was reaching the Earth after several days. In 1910, British astrophysicist Arthur Eddington essentially suggested the existence of the solar wind, without naming it. The idea never caught on, even though Eddington had also made a similar suggestion at a Royal Institution address the previous year. In the latter case, he postulated that the outflowing material consisted of electrons, while in his study of Comet Morehouse he supposed them to be ions. The first person to suggest that the material consisted of both ions and electrons was Kristian Birkeland. 
His geomagnetic surveys showed that auroral activity was nearly uninterrupted. In 1916, Birkeland proposed that "from a physical point of view it is most probable that solar rays are neither exclusively negative nor positive rays, but of both kinds"; in other words, the solar wind consists of both negative electrons and positive ions. Three years later, in 1919, Frederick Lindemann also suggested that particles of both polarities, protons as well as electrons, come from the Sun
37.
Quantum mechanics
–
Quantum mechanics, including quantum field theory, is a branch of physics which is the fundamental theory of nature at the small scales and low energies of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, derives from quantum mechanics as an approximation valid only at large (macroscopic) scales. Early quantum theory was profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms; in one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled On the nature of light; this experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. Max Planck's hypothesis that energy is radiated and absorbed in discrete quanta precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation; Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect. Among the first to study quantum phenomena in nature were Arthur Compton, C. V. Raman, and Pieter Zeeman, each of whom has a quantum effect named after him. Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits. 
This phase is known as the old quantum theory. According to Planck, each energy element is proportional to its frequency: E = hν, where h is Planck's constant. Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself; in fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. Einstein, however, interpreted the quanta realistically in his explanation of the photoelectric effect, and he won the 1921 Nobel Prize in Physics for this work. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle, with a discrete quantum of energy that was dependent on its frequency. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics, and the Copenhagen interpretation of Niels Bohr became widely accepted. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons. From Einstein's simple postulation was born a flurry of debating, theorizing, and testing; thus, the entire field of quantum physics emerged, leading to its wider acceptance at the Fifth Solvay Conference in 1927
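The relation E = hν (equivalently E = hc/λ for a photon of wavelength λ) is easy to evaluate numerically. The sketch below uses rounded constants; the green-light wavelength is an illustrative example.

```python
H_PLANCK = 6.626e-34   # Planck's constant, J*s (rounded)
C_LIGHT = 2.998e8      # speed of light, m/s (rounded)
E_CHARGE = 1.602e-19   # J per eV

def photon_energy_ev(wavelength_m):
    """Photon energy E = h*nu = h*c / wavelength, converted to eV."""
    return H_PLANCK * C_LIGHT / wavelength_m / E_CHARGE

# Green light (~550 nm, illustrative) carries a couple of eV per quantum.
print(round(photon_energy_ev(550e-9), 2))
# A ~91 nm ultraviolet photon carries ~13.6 eV, the hydrogen ground-state energy.
print(round(photon_energy_ev(91e-9), 1))
```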
38.
Quantum field theory
–
QFT treats particles as excited states of an underlying physical field, so these are called field quanta. In quantum field theory, quantum mechanical interactions among particles are described by interaction terms among the corresponding underlying quantum fields. These interactions are conveniently visualized by Feynman diagrams, which are a formal tool of relativistically covariant perturbation theory, serving to evaluate particle processes. The first achievement of quantum field theory, namely quantum electrodynamics, is still the paradigmatic example of a successful quantum field theory. Ordinarily, quantum mechanics cannot give an account of photons, which constitute the prime case of relativistic particles; since photons have rest mass zero, and correspondingly travel in the vacuum at the speed c, a non-relativistic theory such as ordinary QM cannot give even an approximate description. Photons are implicit in the emission and absorption processes which have to be postulated; for instance, the formalism of QFT is needed for an explicit description of photons. In fact, most topics in the early development of quantum theory were related to the interaction of radiation and matter. However, quantum mechanics as formulated by Dirac, Heisenberg, and Schrödinger in 1926–27 started from atomic spectra. As soon as the conceptual framework of quantum mechanics was developed, a small group of theoreticians tried to extend quantum methods to electromagnetic fields; a good example is the paper by Born, Jordan & Heisenberg. The basic idea was that in QFT the electromagnetic field should be represented by matrices in the same way that position and momentum were represented in QM by matrices. The ideas of QM were thus extended to systems having an infinite number of degrees of freedom. The inception of QFT is usually considered to be Dirac's famous 1927 paper on The quantum theory of the emission and absorption of radiation; here Dirac coined the name quantum electrodynamics for the part of QFT that was developed first. 
Employing the theory of the harmonic oscillator, Dirac gave a theoretical description of how photons appear in the quantization of the electromagnetic radiation field. Later, Dirac's procedure became a model for the quantization of other fields as well. These first approaches to QFT were further developed during the following three years. P. Jordan introduced creation and annihilation operators for fields obeying Fermi–Dirac statistics; these differ from the corresponding operators for Bose–Einstein statistics in that the former satisfy anti-commutation relations while the latter satisfy commutation relations. The methods of QFT could be applied to derive equations resulting from the quantum-mechanical treatment of particles, e.g. the Dirac equation and the Klein–Gordon equation. Schweber points out that the idea and procedure of second quantization go back to Jordan, in a number of papers from 1927. Some difficult problems concerning commutation relations, statistics, and Lorentz invariance were eventually solved. The first comprehensive account of a theory of quantum fields, in particular
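The contrast drawn above between commutation and anti-commutation relations can be made concrete with small matrices. A minimal numerical sketch (illustrative, not from the source; it uses a single fermionic mode and a truncated bosonic Fock space):

```python
import numpy as np

# One fermionic mode: occupation 0 or 1, so 2x2 matrices suffice.
c = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # annihilation operator
cdag = c.T                   # creation operator

# Fermi-Dirac statistics: anti-commutation relations {c, c†} = 1, {c, c} = 0
assert np.allclose(c @ cdag + cdag @ c, np.eye(2))
assert np.allclose(c @ c, np.zeros((2, 2)))

# One bosonic mode on a Fock space truncated at occupation N - 1.
N = 6
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)   # annihilation operator
adag = a.T                                     # creation operator

# Bose-Einstein statistics: commutation relation [a, a†] = 1, exact except
# at the artificial truncation edge of the finite matrix.
comm = a @ adag - adag @ a
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))
```

The truncation caveat is unavoidable: the bosonic commutator can only equal the identity exactly on an infinite-dimensional space, which is precisely the "infinite number of degrees of freedom" mentioned above.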
39.
Two-body problem
–
In classical mechanics, the two-body problem is to determine the motion of two point particles that interact only with each other. Common examples include a satellite orbiting a planet and a planet orbiting a star. The two-body problem can be re-formulated as two one-body problems: a trivial one and one that involves solving for the motion of one particle in an external potential. Since many one-body problems can be solved exactly, the corresponding two-body problem can also be solved; by contrast, the three-body problem cannot be solved in terms of first integrals, except in special cases. Let x1 and x2 be the positions of the two bodies, and m1 and m2 be their masses. The goal is to determine the trajectories x1(t) and x2(t) for all times t, given the initial positions and velocities. The two dots on top of the x position vectors denote their second derivative with respect to time. Adding and subtracting the two equations of motion decouples them into two one-body problems, which can be solved independently. Adding the equations results in an equation describing the center of mass motion; by contrast, subtracting one equation from the other results in an equation that describes how the vector r = x1 − x2 between the masses changes with time. The solutions of these independent one-body problems can be combined to obtain the solutions for the trajectories x1(t) and x2(t). The resulting equation R̈ = 0 shows that the velocity V = dR/dt of the center of mass is constant; hence, the position R of the center of mass can be determined at all times from the initial positions and velocities. The motion of two bodies with respect to each other always lies in a plane. Introducing the assumption that the force between the two particles acts along the line between their positions, it follows that r × F = 0 and the angular momentum vector L is constant.
We now have μ r̈ = F(r) r̂, the equation of motion for a single particle of reduced mass μ in a central field.
See also: Kepler orbit, energy drift, equation of the center, Euler's three-body problem, gravitational two-body problem, Kepler problem, n-body problem, virial theorem.
Further reading: Landau LD, Lifshitz EM; "Two-body problem" at Eric Weisstein's World of Physics
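The decoupling described above can be checked numerically: under any internal force pair F and −F, the centre of mass moves with constant velocity. A minimal sketch, with arbitrary illustrative masses, force law, and initial conditions:

```python
import numpy as np

# Two bodies coupled only by an internal force pair F and -F (here a
# hypothetical inverse-square attraction of unit strength).
m1, m2 = 1.0, 2.0
x1, x2 = np.array([1.0, 0.0]), np.array([-0.5, 0.0])
v1, v2 = np.array([0.0, 0.5]), np.array([0.0, -0.25])

dt, steps = 1e-4, 10000
R0 = (m1 * x1 + m2 * x2) / (m1 + m2)   # initial centre of mass
V = (m1 * v1 + m2 * v2) / (m1 + m2)    # centre-of-mass velocity; should stay constant

for _ in range(steps):                   # semi-implicit Euler integration
    rel = x1 - x2
    F = -rel / np.linalg.norm(rel) ** 3  # force on body 1; body 2 feels -F
    v1 = v1 + F / m1 * dt
    v2 = v2 - F / m2 * dt
    x1 = x1 + v1 * dt
    x2 = x2 + v2 * dt

R = (m1 * x1 + m2 * x2) / (m1 + m2)
# R(t) = R0 + V t: the trivial one-body problem, solved exactly
assert np.allclose(R, R0 + V * dt * steps, atol=1e-6)
```

Because the internal forces cancel in the sum m1ẍ1 + m2ẍ2, the centre of mass obeys R̈ = 0 regardless of the force law chosen.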
40.
Rutherford model
–
The Rutherford model is a model of the atom devised by Ernest Rutherford. Rutherford directed the famous Geiger–Marsden experiment in 1909, which suggested, upon Rutherford's 1911 analysis, that the atom's charge and most of its mass are concentrated in a very small central region; this region would become known as the nucleus of the atom. Rutherford overturned Thomson's model in 1911 with his well-known gold foil experiment, in which he demonstrated that the atom has a tiny, massive nucleus. Rutherford designed an experiment to use the alpha particles emitted by a radioactive element as probes to the unseen world of atomic structure. If Thomson was correct, the beam would go straight through the gold foil. Most of the beam went through the foil, but a few particles were deflected. Rutherford presented his own model for subatomic structure as an interpretation of the unexpected experimental results. In it, the atom is made up of a central charge surrounded by a cloud of orbiting electrons; in his May 1911 paper, Rutherford only commits himself to a small central region of very high positive or negative charge in the atom. For concreteness, consider the passage of a high-speed α particle through an atom having a central charge Ne. This was in a gold atom known to be 10⁻¹⁰ meters or so in radius, a very surprising finding, as it implied a strong central charge confined to less than 1/3000th of the diameter of the atom. The Rutherford model served to concentrate a great deal of the atom's charge and mass in a very small core. The paper did mention the atomic model of Hantaro Nagaoka, in which the electrons are arranged in one or more rings; the plum pudding model of J. J. Thomson also had rings of orbiting electrons. Jean Baptiste Perrin claimed in his Nobel lecture that he was the first to suggest the model, in his paper dated 1901. The Rutherford paper suggested that the central charge of an atom might be proportional to its atomic mass in hydrogen mass units u.
Thus, Rutherford did not formally suggest the two numbers might be exactly the same. These are the key indicators: the atom's electron cloud does not influence alpha particle scattering; much of the atom's positive charge is concentrated in a relatively tiny volume at the center of the atom; and the magnitude of this charge is proportional to the atom's atomic mass (the remaining mass is now known to be mostly attributed to neutrons). This concentrated central mass and charge is responsible for deflecting both alpha and beta particles. The atom itself is about 100,000 times the diameter of the nucleus; this can be likened to putting a grain of sand in the middle of a football field. After Rutherford's discovery, scientists started to realize that the atom is not ultimately a single particle, but is made up of far smaller subatomic particles. Subsequent research determined the exact atomic structure, which led on from Rutherford's gold foil experiment. Scientists eventually discovered that atoms have a positively charged nucleus in the center; electrons were found to be even smaller
41.
Ernest Rutherford
–
Ernest Rutherford, 1st Baron Rutherford of Nelson, OM, FRS, was a New Zealand-born British physicist who came to be known as the father of nuclear physics. Encyclopædia Britannica considers him to be the greatest experimentalist since Michael Faraday. His early work on radioactivity was done at McGill University in Canada. Rutherford moved in 1907 to the Victoria University of Manchester in the UK, where he performed his most famous work after he became a Nobel laureate. He conducted research that led to the first splitting of the atom in 1917, in a reaction between nitrogen and alpha particles, in which he also discovered the proton. Rutherford became Director of the Cavendish Laboratory at the University of Cambridge in 1919. After his death in 1937, he was honoured by being interred with the greatest scientists of the United Kingdom, near Sir Isaac Newton's tomb in Westminster Abbey. The chemical element rutherfordium was named after him in 1997. Ernest Rutherford was the son of James Rutherford, a farmer, and his wife Martha Thompson, originally from Hornchurch, Essex, England. James had emigrated to New Zealand from Perth, Scotland, to raise a little flax. Ernest was born at Brightwater, near Nelson, New Zealand; his first name was mistakenly spelled Earnest when his birth was registered. Rutherford's mother Martha Thompson was a schoolteacher. He studied at Havelock School and then Nelson College, and won a scholarship to study at Canterbury College, University of New Zealand. In 1898 Thomson recommended Rutherford for a position at McGill University in Montreal, Canada; he was to replace Hugh Longbourne Callendar, who held the chair of Macdonald Professor of Physics and was coming to Cambridge. In 1901 he gained a DSc from the University of New Zealand. In 1907 Rutherford returned to Britain to take the chair of physics at the Victoria University of Manchester. During World War I, he worked on a top secret project to solve the practical problems of submarine detection by sonar.
In 1916 he was awarded the Hector Memorial Medal, and in 1919 he returned to the Cavendish, succeeding J. J. Thomson as Cavendish Professor and Director. Between 1925 and 1930 he served as President of the Royal Society. In 1933, Rutherford was one of the two inaugural recipients of the T. K. Sidey Medal, set up by the Royal Society of New Zealand as an award for outstanding scientific research. For some time before his death, Rutherford had a hernia, which he had neglected to have fixed. Despite an emergency operation in London, he died four days afterwards of what physicians termed intestinal paralysis. After cremation at Golders Green Crematorium, he was given the high honour of burial in Westminster Abbey, near Isaac Newton and other illustrious British scientists. At Cambridge, Rutherford started to work with J. J. Thomson on the effects of X-rays on gases. Hearing of Becquerel's experience with uranium, Rutherford started to explore its radioactivity. Continuing his research in Canada, he coined the terms alpha ray and beta ray in 1899 to describe the two distinct types of radiation. He then discovered that thorium gave off a gas which produced an emanation that was itself radioactive, and he found that a sample of this radioactive material of any size invariably took the same amount of time for half the sample to decay, its half-life.
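The half-life property described above, that any amount of a radioactive substance takes the same time to halve, follows directly from exponential decay. A minimal sketch (the half-life value is an arbitrary illustration):

```python
import math

half_life = 3.82   # days; an arbitrary illustrative value, not a specific nuclide

def remaining(n0, t):
    """Amount of a decaying sample left after time t: n0 * (1/2)^(t / half_life)."""
    return n0 * 0.5 ** (t / half_life)

# Any sample size halves in the same time, the property Rutherford observed
for n0 in (1e6, 2.5e9, 7.0):
    assert math.isclose(remaining(n0, half_life), n0 / 2)
```

The invariance holds because the decay law is multiplicative: the fraction remaining depends only on the elapsed time, never on the starting amount.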
42.
Classical electromagnetism
–
The theory provides an excellent description of electromagnetic phenomena whenever the relevant length scales and field strengths are large enough that quantum mechanical effects are negligible. For small distances and low field strengths, such interactions are better described by quantum electrodynamics. Fundamental physical aspects of classical electrodynamics are presented in many texts, such as those by Feynman, Leighton and Sands; Griffiths; and Panofsky and Phillips. The physical phenomena that electromagnetism describes have been studied as separate fields since antiquity; for example, there were many advances in the field of optics centuries before light was understood to be an electromagnetic wave. For a detailed historical account, consult Pauli, Whittaker, and Pais. The equation for the Lorentz force illustrates that it is the sum of two vectors. One is the cross product of the velocity and magnetic field vectors; based on the properties of the cross product, this produces a vector that is perpendicular to both the velocity and magnetic field vectors. The other vector is in the same direction as the electric field. The sum of these two vectors is the Lorentz force. In the absence of an electric field, the force is perpendicular to the velocity of the particle; if both electric and magnetic fields are present, the Lorentz force is the sum of both contributions. The electric field E is defined such that, on a stationary charge, F = q0 E, where q0 is what is known as a test charge. The size of the charge doesn't really matter, as long as it is small enough not to influence the field by its mere presence. What is plain from this definition, though, is that the unit of E is N/C; this unit is equal to V/m (see below). In electrostatics, where charges are not moving, both of the above equations become cumbersome around a distribution of point charges, especially if one wants to determine E as a function of position. A scalar function called the electric potential can help.
Electric potential, also called voltage, is defined by the line integral φ = −∫C E · dl, where φ is the electric potential and C is the path over which the integral is taken. Unfortunately, this definition has a caveat. From Maxwell's equations, it is clear that ∇ × E is not always zero; as a result, one must add a correction factor, which is generally done by subtracting the time derivative of the vector potential A, described below. Whenever the charges are quasistatic, however, this condition will be essentially met. The scalar φ will add to other potentials as a scalar
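The decomposition of the Lorentz force into an electric part and a magnetic part, described earlier in this entry, can be sketched numerically (charge and field values are illustrative):

```python
import numpy as np

# Lorentz force F = q(E + v x B), the sum of the two vectors described above.
q = 1.6e-19                        # test charge, C (illustrative value)
E = np.array([0.0, 0.0, 1.0e3])    # electric field, V/m
B = np.array([0.0, 2.0, 0.0])      # magnetic field, T
v = np.array([1.0e5, 0.0, 0.0])    # particle velocity, m/s

F_electric = q * E                 # along the electric field
F_magnetic = q * np.cross(v, B)    # perpendicular to both v and B
F = F_electric + F_magnetic        # total Lorentz force

# The magnetic part does no work: it is orthogonal to the velocity
assert abs(np.dot(F_magnetic, v)) < 1e-20
assert abs(np.dot(F_magnetic, B)) < 1e-20
```

The orthogonality checks mirror the cross-product property stated in the text: with no electric field, the force on the particle is perpendicular to its velocity.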
43.
Larmor formula
–
The Larmor formula is used to calculate the total power radiated by a non-relativistic point charge as it accelerates or decelerates. This is used in the branch of physics known as electrodynamics and is not to be confused with the Larmor precession from classical nuclear magnetic resonance. It was first derived by J. J. Larmor in 1897. When any charged particle accelerates, it radiates away energy in the form of electromagnetic waves; a relativistic generalization is given by the Liénard–Wiechert potentials. In that formulation, the terms on the right are evaluated at the retarded time t_r = t − R/c. The right-hand side is the sum of the electric fields associated with the velocity and the acceleration of the charged particle. The velocity field depends only upon β, while the acceleration field depends on both β and β̇ and on the angular relationship between the two. The velocity field is proportional to 1/R²; the acceleration field, on the other hand, is proportional to 1/R, which means that it falls off much more slowly with distance. Because of this, the acceleration field is representative of the radiation field and is responsible for carrying most of the energy away from the charge. We can find the energy flux density of the radiation field by computing its Poynting vector: S = (c/4π) E_a × B_a. Substituting in the relation between the magnetic and electric fields, while assuming that the particle is instantaneously at rest at time t_r, the total power radiated is found by integrating this quantity over all solid angles. This gives (in Gaussian units) P = (2 q² a²)/(3 c³), which relates the power radiated by the particle to its acceleration. It clearly shows that the faster the charge accelerates, the greater the radiation will be; we would expect this, since the radiation field depends upon acceleration. The full derivation can be found in the references; here is an explanation which can help in understanding it. This approach is based on the finite speed of light. A charge moving with constant velocity has a radial electric field E_r, always emerging from the future position of the charge.
This future position is completely deterministic as long as the velocity is constant. When the velocity of the charge changes, the future position jumps, so from this moment on the radial electric field E_r emerges from a new position. Given the fact that the electric field must be continuous, a non-zero tangential component of the electric field E_t appears. The tangential component comes out to E_t = e a sin θ / (4π ε0 c² R); this is mathematically equivalent to P = μ0 e² a² / (6π c)
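The final SI formula above, P = μ0 e² a²/(6π c), can be evaluated directly; since μ0 = 1/(ε0 c²), it is equivalent to P = q² a²/(6π ε0 c³). A minimal sketch with an illustrative acceleration value:

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, N/A^2
eps0 = 8.854187817e-12     # vacuum permittivity, F/m
c = 2.99792458e8           # speed of light, m/s

def larmor_power(q, a):
    """P = mu0 q^2 a^2 / (6 pi c), the SI form of the Larmor formula."""
    return mu0 * q**2 * a**2 / (6 * math.pi * c)

# An electron undergoing a = 1e20 m/s^2 (an illustrative value)
q, a = 1.602176634e-19, 1.0e20
P1 = larmor_power(q, a)
P2 = q**2 * a**2 / (6 * math.pi * eps0 * c**3)   # equivalent form via mu0 = 1/(eps0 c^2)
assert math.isclose(P1, P2, rel_tol=1e-8)
```

The agreement of the two forms is just the identity μ0 ε0 c² = 1 from classical electrodynamics.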
44.
Niels Bohr
–
Niels Henrik David Bohr was a Danish physicist who made foundational contributions to understanding atomic structure and quantum theory, for which he received the Nobel Prize in Physics in 1922. Bohr was also a philosopher and a promoter of scientific research. Although the Bohr model has been supplanted by other models, its underlying principles remain valid. He conceived the principle of complementarity: that items could be analysed in terms of contradictory properties. The notion of complementarity dominated Bohr's thinking in both science and philosophy. Bohr founded the Institute of Theoretical Physics at the University of Copenhagen, now known as the Niels Bohr Institute, and mentored and collaborated with physicists including Hans Kramers, Oskar Klein, George de Hevesy, and Werner Heisenberg. He predicted the existence of a new element, which was named hafnium, after the Latin name for Copenhagen; later, the element bohrium was named after him. During the 1930s, Bohr helped refugees from Nazism. After Denmark was occupied by the Germans, he had a meeting with Heisenberg. In September 1943, word reached Bohr that he was about to be arrested by the Germans, and he fled to Sweden; from there, he was flown to Britain, where he joined the British Tube Alloys nuclear weapons project and was part of the British mission to the Manhattan Project. After the war, Bohr called for international cooperation on nuclear energy. He had a sister, Jenny, and a younger brother, Harald. Jenny became a teacher, while Harald became a mathematician and Olympic footballer who played for the Danish national team at the 1908 Summer Olympics in London. Bohr was a footballer as well, and the two brothers played several matches for the Copenhagen-based Akademisk Boldklub, with Bohr as goalkeeper. Bohr was educated at Gammelholm Latin School, starting when he was seven. In 1903, Bohr enrolled as an undergraduate at Copenhagen University.
His major was physics, which he studied under Professor Christian Christiansen; he also studied astronomy and mathematics under Professor Thorvald Thiele, and philosophy under Professor Harald Høffding, a friend of his father. He took part in a prize competition on the surface tension of water; this involved measuring the frequency of oscillation of the radius of a water jet. Bohr conducted a series of experiments using his father's laboratory in the university; the university itself had no physics laboratory. To complete his experiments, he had to make his own glassware. His essay, which he submitted at the last minute, won the prize. He later submitted a version of the paper to the Royal Society in London for publication in the Philosophical Transactions of the Royal Society
45.
Planck constant
–
The Planck constant is a physical constant that is the quantum of action, central in quantum mechanics. The light quantum behaved in some respects as an electrically neutral particle; it was eventually called the photon. The Planck–Einstein relation connects the particulate photon energy E with its associated wave frequency f: E = hf. This energy is extremely small in terms of ordinarily perceived everyday objects. Since the frequency f, wavelength λ, and speed of light c are related by f = c/λ, the relation can also be expressed as E = hc/λ. This leads to another relationship involving the Planck constant: with p denoting the linear momentum of a particle, the de Broglie wavelength λ of the particle is given by λ = h/p. In applications where it is natural to use the angular frequency, it is often useful to absorb a factor of 2π into the Planck constant. The resulting constant is called the reduced Planck constant or Dirac constant; it is equal to the Planck constant divided by 2π, and is denoted ħ: ℏ = h/2π. The energy of a photon with angular frequency ω, where ω = 2πf, is given by E = ℏω, while its linear momentum relates to the wavevector k by p = ℏk. These relations were confirmed by experiments soon afterwards, and they hold throughout quantum theory, including electrodynamics. The two relations are the temporal and spatial component parts of the special relativistic expression using 4-vectors: Pμ = (E/c, p) = ℏKμ = ℏ(ω/c, k). Classical statistical mechanics requires the existence of h. Eventually, following upon Planck's discovery, it was recognized that physical action cannot take on an arbitrary value; instead, it must be some multiple of a very small quantity, the quantum of action. This is the old quantum theory developed by Bohr and Sommerfeld, in which particle trajectories exist but are hidden. Thus there is no value of the action as classically defined. Related to this is the concept of energy quantization, which existed in old quantum theory and also exists in altered form in modern quantum physics.
Classical physics cannot explain either the quantization of energy or the lack of classical particle motion. In many cases, such as for light or for atoms, quantization of energy also implies that only certain energy levels are allowed. The Planck constant has dimensions of physical action, i.e. energy multiplied by time, or momentum multiplied by distance. In SI units, the Planck constant is expressed in joule-seconds (J·s), equivalently N·m·s or kg·m²·s⁻¹. The value of the Planck constant is h = 6.626070040×10⁻³⁴ J·s = 4.135667662×10⁻¹⁵ eV·s. The value of the reduced Planck constant is ℏ = h/2π = 1.054571800×10⁻³⁴ J·s = 6.582119514×10⁻¹⁶ eV·s
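The relations E = hf and f = c/λ quoted above combine to give the photon energy from the wavelength. A minimal sketch using the constant values listed above:

```python
h = 6.626070040e-34    # Planck constant, J*s (value quoted in this entry)
c = 2.99792458e8       # speed of light, m/s
eV = 1.602176634e-19   # joules per electronvolt

def photon_energy_eV(wavelength_m):
    """E = h f = h c / wavelength, converted from joules to electronvolts."""
    return h * c / wavelength_m / eV

# Green light near 550 nm carries roughly 2.25 eV per photon
assert 2.2 < photon_energy_eV(550e-9) < 2.3
```

The tiny magnitude of h is what makes single-photon energies imperceptible for everyday objects, as the text notes.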
46.
Centripetal force
–
A centripetal force is a force that makes a body follow a curved path. Its direction is always orthogonal to the motion of the body, toward the instantaneous center of curvature of the path. Isaac Newton described it as "a force by which bodies are drawn or impelled, or in any way tend, towards a point as to a centre". In Newtonian mechanics, gravity provides the centripetal force responsible for astronomical orbits. One common example involving centripetal force is the case in which a body moves with uniform speed along a circular path; the centripetal force is directed at right angles to the motion and also along the radius towards the centre of the circular path. The mathematical description was derived in 1659 by the Dutch physicist Christiaan Huygens. The direction of the force is toward the center of the circle in which the object is moving, or the osculating circle. The speed in the formula is squared, so twice the speed needs four times the force; the inverse relationship with the radius of curvature shows that half the radial distance requires twice the force. Expressed using the orbital period T for one revolution of the circle, the force magnitude is F = 4π²mr/T². The rope example is an example involving a "pull" force; the centripetal force can also be supplied as a "push" force. Newton's idea of a centripetal force corresponds to what is nowadays referred to as a central force. Another example of centripetal force arises in the helix that is traced out when a charged particle moves in a uniform magnetic field in the absence of other external forces. In this case, the magnetic force is the centripetal force that acts towards the helix axis. Below are three examples of increasing complexity, with derivations of the formulas governing velocity and acceleration. Uniform circular motion refers to the case of constant rate of rotation; here are two approaches to describing this case. Assume uniform circular motion, which requires three things. The object moves only on a circle, and the radius of the circle r does not change in time.
The object moves with constant angular velocity ω around the circle; therefore, θ = ωt, where t is time. Now find the velocity v and acceleration a of the motion by taking derivatives of position with respect to time. Consequently, a = −ω²r. The negative sign shows that the acceleration is pointed towards the center of the circle, hence it is called centripetal. While objects naturally follow a straight path (due to inertia), this centripetal acceleration describes the circular motion path caused by a centripetal force. The image at right shows the vector relationships for uniform circular motion. In this subsection, dθ/dt is assumed constant, independent of time. Consequently, dr/dt = lim_{Δt→0} [r(t + Δt) − r(t)]/Δt = dℓ/dt
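The result a = −ω²r for uniform circular motion can be verified by differentiating the position numerically. A minimal sketch (radius, angular velocity, and sample time are arbitrary illustrative values):

```python
import math

# Uniform circular motion: position (r cos wt, r sin wt); differentiating
# twice gives a = -w^2 r, pointing at the centre, with magnitude v^2 / r.
r, omega = 2.0, 3.0
t, dt = 0.7, 1e-5

def pos(t):
    return (r * math.cos(omega * t), r * math.sin(omega * t))

# Central-difference approximation of the second derivative of position
(x0, y0), (x1, y1), (x2, y2) = pos(t - dt), pos(t), pos(t + dt)
ax = (x0 - 2 * x1 + x2) / dt**2
ay = (y0 - 2 * y1 + y2) / dt**2

v = omega * r   # speed along the circle
assert math.isclose(math.hypot(ax, ay), v**2 / r, rel_tol=1e-4)
assert math.isclose(ax, -omega**2 * x1, rel_tol=1e-4)   # acceleration points inward
```

The two assertions check both claims from the text: the magnitude v²/r and the inward direction along −r.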
47.
Coulomb's law
–
Coulomb's law, or Coulomb's inverse-square law, is a law of physics that describes the force interacting between static electrically charged particles. The force of interaction between the charges is attractive if the charges have opposite signs and repulsive if like-signed. The law was first published in 1785 by French physicist Charles-Augustin de Coulomb and was essential to the development of the theory of electromagnetism. It is analogous to Isaac Newton's inverse-square law of universal gravitation. Coulomb's law can be used to derive Gauss's law, and vice versa. The law has been tested extensively, and all observations have upheld the law's principle. Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales was incorrect in believing the attraction was due to a magnetic effect. In 1600, the English scientist William Gilbert coined the New Latin word electricus to refer to the property of attracting small objects after being rubbed; this association gave rise to the English words electric and electricity. In 1767, Joseph Priestley conjectured that the force between charges varied as the inverse square of the distance, although he did not generalize or elaborate on this. In 1769, Scottish physicist John Robison announced that, according to his measurements, the force of repulsion between like charges varied approximately as the inverse square of the distance. In the early 1770s, the dependence of the force between charged bodies upon both distance and charge had already been discovered, but not published, by Henry Cavendish of England. Finally, in 1785, the French physicist Charles-Augustin de Coulomb published his first three reports of electricity and magnetism, where he stated his law; this publication was essential to the development of the theory of electromagnetism. The torsion balance consists of a bar suspended from its middle by a thin fiber; the fiber acts as a very weak torsion spring. In Coulomb's experiment, the torsion balance was an insulating rod with a metal-coated ball attached to one end.
The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through an angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls. The force is along the straight line joining them. If the two charges have the same sign, the electrostatic force between them is repulsive; if they have different signs, the force between them is attractive. Coulomb's law can also be stated as a mathematical expression. The vector form of the equation calculates the force F1 applied on q1 by q2; if r12 is used instead, then the effect on q2 can be found. It can also be calculated using Newton's third law: F2 = −F1
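The inverse-square law and the sign convention described above can be sketched in a few lines (the example charges and distances are illustrative):

```python
import math

k = 8.9875517873681764e9   # Coulomb constant, N*m^2/C^2
e = 1.602176634e-19        # elementary charge, C

def coulomb_force(q1, q2, r):
    """Signed force along the line joining the charges:
    positive = repulsive (like signs), negative = attractive."""
    return k * q1 * q2 / r**2

F = coulomb_force(e, e, 1e-9)          # two like charges 1 nm apart repel
assert F > 0
assert coulomb_force(e, -e, 1e-9) < 0  # opposite signs attract
# Inverse square: halving the distance quadruples the force
assert math.isclose(coulomb_force(e, e, 0.5e-9), 4 * F)
```

The scalar sign here encodes attraction versus repulsion; the full vector form, with F2 = −F1 by Newton's third law, follows by multiplying by the unit vector along the line joining the charges.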
48.
Permeability (electromagnetism)
–
In electromagnetism, permeability is the measure of the ability of a material to support the formation of a magnetic field within itself: the degree of magnetization that a material obtains in response to an applied magnetic field. Magnetic permeability is typically represented by the Greek letter µ; the term was coined in September 1885 by Oliver Heaviside. The reciprocal of magnetic permeability is magnetic reluctivity. In SI units, permeability is measured in henries per meter (H/m), or equivalently in newtons per ampere squared (N/A²). The magnetic constant µ0 has the exact value 4π × 10⁻⁷ H/m. Permeability relates the magnetic flux density B to the magnetic field strength H through B = µH, where the permeability µ is a scalar if the medium is isotropic or a second-rank tensor for an anisotropic medium. In general, permeability is not a constant, as it can vary with the position in the medium, the frequency of the applied field, humidity, and temperature. In a nonlinear medium, the permeability can depend on the strength of the magnetic field. Permeability as a function of frequency can take on real or complex values. In ferromagnetic materials, the relationship between B and H exhibits both non-linearity and hysteresis: B is not a single-valued function of H, but depends also on the history of the material. For these materials it is useful to consider the incremental permeability, defined by ΔB = µΔ ΔH. Permeability is the inductance per unit length. In SI units, permeability is measured in henries per metre; the auxiliary magnetic field H has dimensions of current per unit length and is measured in units of amperes per metre. The product µH thus has dimensions of inductance times current per unit area. But inductance is magnetic flux per unit current, so the product has dimensions of magnetic flux per unit area, that is, magnetic flux density. This is the magnetic field B, which is measured in webers per square metre. B is related to the Lorentz force on a moving charge q: F = q(E + v × B). A magnetic dipole is a circulation of electric current.
The dipole moment has dimensions of current times area, with units ampere square-metre. The H field at a distance from a dipole has magnitude proportional to the dipole moment divided by the distance cubed, which has dimensions of current per unit length. Relative permeability, denoted by the symbol µr, is the ratio of the permeability of a medium to the permeability of free space µ0: µr = µ/µ0. In terms of relative permeability, the magnetic susceptibility is χm = µr − 1. The number χm is a dimensionless quantity, sometimes called volumetric or bulk susceptibility, to distinguish it from χp. Diamagnetism is the property of an object which causes it to create a magnetic field in opposition to an externally applied magnetic field
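The relations B = µH and χm = µr − 1 quoted in this entry can be sketched as follows (the relative permeability used for iron is only an order-of-magnitude illustration):

```python
import math

mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def flux_density(H, mu_r):
    """B = mu H with mu = mu_r * mu0, valid for a linear, isotropic medium."""
    return mu_r * mu0 * H

def susceptibility(mu_r):
    """chi_m = mu_r - 1."""
    return mu_r - 1.0

H = 100.0                         # applied field strength, A/m
B_vac = flux_density(H, 1.0)      # in free space, B = mu0 H
B_iron = flux_density(H, 5000.0)  # mu_r ~ 5000: an order-of-magnitude value for soft iron

assert math.isclose(B_vac, mu0 * H)
assert susceptibility(1.0) == 0.0   # free space has zero susceptibility
```

For a real ferromagnet this linear relation is only a local approximation, since, as noted above, B also depends on the material's history through hysteresis.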
49.
Hydrogen spectral series
–
The emission spectrum of atomic hydrogen is divided into a number of spectral series, with wavelengths given by the Rydberg formula. These observed spectral lines are due to the electron making transitions between two energy levels in the atom. The classification of the series by the Rydberg formula was important in the development of quantum mechanics. The spectral series are important in astronomical spectroscopy for detecting the presence of hydrogen and calculating red shifts. A hydrogen atom consists of an electron orbiting its nucleus; the electromagnetic force between the electron and the nuclear proton leads to a set of quantum states for the electron, each with its own energy. These states were visualized by the Bohr model of the atom as being distinct orbits around the nucleus. Each energy state, or orbit, is designated by an integer, n. Spectral emission occurs when an electron transitions, or jumps, from a higher energy state to a lower energy state. To distinguish the two states, the lower energy state is commonly designated as n′, and the higher energy state is designated as n. The energy of an emitted photon corresponds to the energy difference between the two states. Because the energy of each state is fixed, the energy difference between them is fixed, and the transition will always produce a photon with the same energy. The spectral lines are grouped into series according to n′. Lines are named sequentially starting from the longest wavelength/lowest frequency of the series, using Greek letters within each series; for example, the 2 → 1 line is called Lyman-alpha. There are emission lines from hydrogen that fall outside of these series, such as the 21 cm line; these emission lines correspond to much rarer atomic events such as hyperfine transitions. The fine structure also results in single spectral lines appearing as two or more closely grouped thinner lines, due to relativistic corrections.
Meaningful values are returned only when n is greater than n′. Note that this equation is valid for all hydrogen-like species, i.e. atoms having only a single electron; the particular case of the hydrogen spectral lines is given by setting Z = 1. The Lyman series is named after its discoverer, Theodore Lyman, who discovered its spectral lines from 1906 to 1914; all the wavelengths in the Lyman series are in the ultraviolet band. The Balmer series is named after Johann Balmer, who discovered the Balmer formula, an empirical equation to predict the series, in 1885. Balmer lines are historically referred to as H-alpha, H-beta, H-gamma, and so on. Four of the Balmer lines are in the visible part of the spectrum, with wavelengths longer than 400 nm. Parts of the Balmer series can be seen in the solar spectrum; H-alpha is an important line used in astronomy to detect the presence of hydrogen
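The Rydberg formula mentioned above, 1/λ = R Z²(1/n′² − 1/n²), reproduces the series boundaries described here. A minimal sketch:

```python
R_inf = 1.0973731568508e7   # Rydberg constant, 1/m

def wavelength_nm(n_prime, n, Z=1):
    """Rydberg formula 1/lambda = R Z^2 (1/n'^2 - 1/n^2) for hydrogen-like species."""
    inv_lambda = R_inf * Z**2 * (1.0 / n_prime**2 - 1.0 / n**2)
    return 1e9 / inv_lambda   # metres to nanometres

# Lyman-alpha (2 -> 1) is ultraviolet; Balmer H-alpha (3 -> 2) is visible red
assert 121 < wavelength_nm(1, 2) < 122
assert 656 < wavelength_nm(2, 3) < 657
```

As the text notes, only n greater than n′ gives a physical (positive) wavelength, and Z = 1 recovers the hydrogen case.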
50.
Atomic physics
–
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. It is primarily concerned with the arrangement of electrons around the nucleus and the processes by which these arrangements change. This comprises ions and neutral atoms; unless otherwise stated, it can be assumed that the term atom includes ions. The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics, which deals with the atom as a system consisting of a nucleus and electrons, and nuclear physics, which considers atomic nuclei alone. As with many scientific fields, strict delineation can be highly contrived, and atomic physics is often considered in the wider context of atomic, molecular, and optical physics; physics research groups are usually so classified. Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules, nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles. This means that the individual atoms can be treated as if each were in isolation. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics. Electrons form notional shells around the nucleus. These are normally in a ground state but can be excited by the absorption of energy from light, magnetic fields, or interaction with a colliding particle. Electrons that populate a shell are said to be in a bound state; the energy necessary to remove an electron from its shell is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy, and the atom is said to have undergone the process of ionization. If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state.
After a certain time, an electron in an excited state will jump to a lower state. In a neutral atom, the system will emit a photon of the difference in energy between the two levels. If an inner electron has absorbed more than the binding energy, then a more outer electron may undergo a transition to fill the inner orbital; the Auger effect allows one to multiply ionize an atom with a single photon. There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light; however, there are no such rules for excitation by collision processes.
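The bookkeeping of binding energy, photon emission and ionization described above can be sketched numerically for hydrogen; the −13.6 eV ground-state binding energy is the value quoted in the introduction, and the Bohr-model scaling E_n = E₁/n² is a standard assumption, not something stated in this section:

```python
# Bound-state energies of hydrogen in the Bohr model: E_n = -13.6 eV / n^2.
E1 = -13.6  # ground-state energy in eV (from the article's introduction)

def level(n):
    """Energy of the n-th hydrogen level, in eV."""
    return E1 / n**2

def photon_energy(n_upper, n_lower):
    """Energy of the photon emitted when the electron drops between levels."""
    return level(n_upper) - level(n_lower)

def ejected_kinetic_energy(n, absorbed_ev):
    """Kinetic energy of the ejected electron when a photon above the binding
    energy is absorbed; None means no ionization (only possible excitation)."""
    binding = -level(n)
    if absorbed_ev < binding:
        return None
    return absorbed_ev - binding

print(photon_energy(2, 1))             # Lyman-alpha: 10.2 eV
print(ejected_kinetic_energy(1, 20.0)  # 20 eV photon ionizes, leaving 6.4 eV
      )
```

The first print reproduces the 2→1 Lyman-alpha transition energy; the second shows the "excess energy becomes kinetic energy" rule from the text.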
51.
Fine structure
–
In atomic physics, the fine structure describes the splitting of the spectral lines of atoms due to electron spin and relativistic corrections to the non-relativistic Schrödinger equation. The gross structure of spectra is the line spectra predicted by the quantum mechanics of non-relativistic electrons with no spin. For a hydrogenic atom, the gross-structure energy levels depend only on the principal quantum number n. However, an accurate model takes into account relativistic and spin effects. The fine-structure energy corrections can be obtained by using perturbation theory; to do this one adds three corrective terms to the Hamiltonian: the leading-order relativistic correction to the kinetic energy, the correction due to the spin-orbit coupling, and the Darwin term. These corrections can also be obtained from the non-relativistic limit of the Dirac equation, since Dirac's theory naturally incorporates relativity. Classically, the kinetic-energy term of the Hamiltonian is T = p²/2m, where p is the momentum. However, when considering the more accurate relativistic theory of nature, the kinetic energy is T = √(p²c² + m²c⁴) − mc², whose expansion in powers of p yields the leading-order correction −p⁴/8m³c²; here r denotes the distance of the electron from the nucleus in the potential term of the Hamiltonian. The spin-orbit correction can be understood by shifting from the standard frame of reference, in which the electron orbits the nucleus, into one where the electron is stationary; in this case the orbiting nucleus functions as a current loop, which generates a magnetic field B. However, the electron itself has a magnetic moment μ_s due to its intrinsic spin angular momentum. The two magnetic vectors B and μ_s couple together, so there is a certain energy cost depending on their relative orientation. Remark: for energy levels with the same n and j, which the fine-structure formula predicts to be degenerate, the calculated levels differ depending on whether the electron g-factor is taken to be 2.0031904622 or exactly 2. Only by using 2 as the g-factor can we match the levels in the first-order approximation of the relativistic correction; when using higher-order approximations for the spin-orbit term, the 2.0031904622 g-factor may bring the levels into agreement.
However, if we use the g-factor 2.0031904622, the result does not agree with the first-order formula. There is one last term in the non-relativistic expansion of the Dirac equation, the Darwin term, which affects only s-orbitals; this is because the wave function of an electron with l > 0 vanishes at the origin, where the Darwin term acts. For example, the Darwin term gives the 2s-orbital the same energy as the 2p-orbital by raising the 2s-state by 9.057×10⁻⁵ eV; it changes the effective potential at the nucleus.
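The combined first-order effect of the three corrections can be sketched numerically. The closed-form level formula below, E(n, j) = E_n·[1 + (α²/n²)(n/(j + 1/2) − 3/4)], is the standard textbook result and is assumed here rather than quoted from this text:

```python
# First-order fine-structure energies of hydrogen (standard textbook formula,
# assumed here): E(n, j) = E_n * [1 + (alpha^2/n^2) * (n/(j + 1/2) - 3/4)].
ALPHA = 0.0072973525664   # fine-structure constant
RYD = 13.6                # hydrogen binding-energy scale, eV

def fine_structure_energy(n, j):
    """Energy of hydrogen level (n, j) including the fine-structure shift, in eV."""
    e_n = -RYD / n**2
    return e_n * (1 + (ALPHA**2 / n**2) * (n / (j + 0.5) - 0.75))

# The result depends on n and j only, so 2S1/2 and 2P1/2 remain degenerate at
# this order, while 2P1/2 and 2P3/2 split by about 4.5e-5 eV:
split = fine_structure_energy(2, 1.5) - fine_structure_energy(2, 0.5)
print(split)  # ≈ 4.53e-5 eV
```

The degeneracy of states sharing n and j is exactly the point made in the remark above: lifting it (the Lamb shift) requires physics beyond these three corrections.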
52.
Hyperfine structure
–
In atomic physics, hyperfine structure refers to the different effects leading to small shifts and splittings in the energy levels of atoms, molecules and ions. The name is a reference to the fine structure, which results from the interaction between the magnetic moments associated with electron spin and the electrons' orbital angular momentum. Optical hyperfine structure was observed in 1881 by Albert Abraham Michelson; it could, however, only be explained in terms of quantum mechanics when Wolfgang Pauli proposed the existence of a small nuclear magnetic moment in 1924. In 1935, H. Schüler and Theodor Schmidt proposed the existence of a nuclear quadrupole moment in order to explain anomalies in the hyperfine structure. The theory of hyperfine structure comes directly from electromagnetism, consisting of the interaction of the nuclear multipole moments with internally generated fields. The theory is derived first for the atomic case, but can be applied to each nucleus in a molecule; following this there is a discussion of the additional effects unique to the molecular case. The dominant term in the hyperfine Hamiltonian is typically the magnetic dipole term. Atomic nuclei with a nuclear spin I have a magnetic dipole moment given by μ_I = g_I μ_N I, where g_I is the nuclear g-factor and μ_N is the nuclear magneton. There is an energy associated with a magnetic dipole moment in the presence of a magnetic field; for a nuclear magnetic moment μ_I placed in a magnetic field B, the relevant term in the Hamiltonian is −μ_I · B. Electron orbital angular momentum results from the motion of the electron about some fixed point that we shall take to be the location of the nucleus. Written in terms of the Bohr magneton, the magnetic field of that motion at the nucleus is B_el^l = −(μ_0/4π)(2μ_B/r³)(r × m_e v)/ħ. Recognizing that m_e v is the momentum p, and that r × p/ħ is the orbital angular momentum in units of ħ, l, we can write B_el^l = −(μ_0/4π)(2μ_B/r³) l. The electron spin angular momentum is a different property that is intrinsic to the particle; nonetheless it is angular momentum, and any angular momentum associated with a charged particle results in a magnetic dipole moment.
The magnetic field of a spin magnetic moment μ_s is that of a dipole. The magnetic-dipole hyperfine energy thus has two contributions: the first term gives the energy of the nuclear dipole in the field due to the electronic orbital angular momentum, and the second term gives the energy of the finite-distance interaction of the nuclear dipole with the field due to the electron spin magnetic moments.
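The scale separation between hyperfine and fine structure comes down to the ratio of the magnetons μ_B and μ_N introduced above. A minimal sketch, using CODATA-level constants that are assumptions of this example rather than values given in the text:

```python
# Bohr magneton mu_B = e*hbar/(2 m_e) and nuclear magneton mu_N = e*hbar/(2 m_p),
# the scales of the electronic and nuclear dipole moments in the text.
E_CHARGE = 1.602176634e-19   # elementary charge, C
HBAR = 1.054571817e-34       # reduced Planck constant, J·s
M_E = 9.1093837015e-31       # electron mass, kg
M_P = 1.67262192369e-27      # proton mass, kg

mu_B = E_CHARGE * HBAR / (2 * M_E)   # Bohr magneton, J/T
mu_N = E_CHARGE * HBAR / (2 * M_P)   # nuclear magneton, J/T

# Nuclear moments are smaller by the mass ratio m_p/m_e ≈ 1836, which is why
# hyperfine splittings are so much smaller than fine-structure splittings.
print(mu_B)          # ≈ 9.274e-24 J/T
print(mu_B / mu_N)   # ≈ 1836.15
```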
53.
Fine-structure constant
–
The fine-structure constant α is a fundamental physical constant characterizing the strength of the electromagnetic interaction. It is related to the elementary charge e, which characterizes the strength of the coupling of an elementary charged particle with the electromagnetic field, by the formula 4πε₀ħcα = e². Being a dimensionless quantity, it has the same numerical value of about 1/137 in all systems of units. Arnold Sommerfeld introduced the fine-structure constant in 1916. The definition reflects the relationship between α and the elementary charge e, which equals √(4πα ε₀ħc). In electrostatic cgs units, the unit of charge, the statcoulomb, is defined so that the Coulomb constant k_e, or the permittivity factor 4πε₀, is 1; then the expression of the fine-structure constant, as commonly found in older physics literature, is α = e²/ħc. In natural units, commonly used in high-energy physics, where ε₀ = c = ħ = 1, the value of the fine-structure constant is α = e²/4π. As such, the fine-structure constant is just another, albeit dimensionless, quantity determining the elementary charge. The 2014 CODATA recommended value of α is α = e²/(4πε₀ħc) = 0.0072973525664, with a relative standard uncertainty of 0.32 parts per billion. For reasons of convenience, historically the value of the reciprocal of the fine-structure constant is often specified; the 2014 CODATA recommended value is α⁻¹ = 137.035999139. The theory of QED predicts a relationship between the dimensionless magnetic moment of the electron and the fine-structure constant α; the most precise value of α obtained this way corresponds to α⁻¹ = 137.035999173. This measurement of α has a precision of 0.25 parts per billion, and this value and uncertainty are about the same as the latest experimental results. The fine-structure constant α has several physical interpretations. α is: the square of the ratio of the elementary charge to the Planck charge, α = (e/q_P)²; and the ratio of the velocity of the electron in the first circular orbit of the Bohr model of the atom to the speed of light in vacuum, which is Sommerfeld's original physical interpretation.
The square of α is the ratio between the Hartree energy and the electron rest energy. The theory does not predict the value of α; therefore, α must be determined experimentally. In fact, α is one of the roughly 20 empirical parameters in the Standard Model of particle physics whose value is not determined within the Standard Model. In the electroweak theory unifying the weak interaction with electromagnetism, α is absorbed into two other coupling constants associated with the electroweak gauge fields; in this theory, the electromagnetic interaction is treated as a mixture of interactions associated with the electroweak fields. The strength of the electromagnetic interaction varies with the energy scale. Because of this simple relation to α, the constant also appears in graphene optics: the absorption for normal-incident light on graphene in vacuum is given by πα, about 2.3%, and the transmission by 1 − πα, about 97.7%.
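The defining formula α = e²/(4πε₀ħc) can be checked directly against the CODATA value quoted above; the SI constant values in the block are assumptions of this example:

```python
import math

# Compute alpha = e^2 / (4*pi*eps0*hbar*c) from SI constants and compare with
# the CODATA 2014 value quoted in the text.
E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
HBAR = 1.054571817e-34   # reduced Planck constant, J·s
C = 299792458.0          # speed of light, m/s

alpha = E**2 / (4 * math.pi * EPS0 * HBAR * C)
print(alpha)            # ≈ 0.00729735...
print(1 / alpha)        # ≈ 137.036
print(math.pi * alpha)  # graphene absorption pi*alpha ≈ 0.023 (about 2.3%)
```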
54.
Arnold Sommerfeld
–
He served as PhD supervisor for many Nobel Prize winners in physics and chemistry. He introduced the 2nd quantum number (azimuthal quantum number) and the 4th quantum number (spin quantum number); he also introduced the fine-structure constant and pioneered X-ray wave theory. Sommerfeld studied mathematics and the physical sciences at the Albertina University of his native city, Königsberg. His dissertation advisor was the mathematician Ferdinand von Lindemann, and he also benefited from classes with the mathematicians Adolf Hurwitz and David Hilbert. His participation in the student fraternity Deutsche Burschenschaft resulted in a dueling scar on his face. He received his Ph.D. on October 24, 1891. After receiving his doctorate, Sommerfeld remained at Königsberg to work on his teaching diploma; he passed the exam in 1892 and then began a year of military service. He completed his military service in September 1893, and for the next eight years continued voluntary eight-week military service. With his turned-up moustache, his stocky build, his Prussian bearing, and the scar on his face, he gave the impression of an officer rather than a scholar. In October, Sommerfeld went to the University of Göttingen, which was the center of mathematics in Germany. Sommerfeld's Habilitationsschrift was completed under Klein in 1895, which allowed Sommerfeld to become a Privatdozent at Göttingen. As a Privatdozent, Sommerfeld lectured on a wide range of mathematical and mathematical-physics topics. Lectures by Klein in 1895 and 1896 on rotating bodies led Klein and Sommerfeld to write a four-volume text, Die Theorie des Kreisels, a 13-year collaboration; the first two volumes were on theory, and the latter two were on applications in geophysics, astronomy, and technology. The association with Klein turned Sommerfeld's mind toward applied mathematics. While at Göttingen, Sommerfeld met Johanna Höpfner, daughter of Ernst Höpfner, curator at Göttingen.
In October 1897, Sommerfeld began his appointment to the Chair of Mathematics at the Bergakademie in Clausthal-Zellerfeld; this appointment provided enough income for him to eventually marry Johanna. At Klein's request, Sommerfeld took on the position of editor of Volume V of the Enzyklopädie der mathematischen Wissenschaften. In 1900, Sommerfeld started his appointment to the Chair of Applied Mechanics at the Königliche Technische Hochschule Aachen as extraordinarius professor, which was arranged through Klein's efforts. At Aachen, he developed the theory of hydrodynamics, which would retain his interest for a long time; later, at the University of Munich, Sommerfeld's students Ludwig Hopf and Werner Heisenberg would write their Ph.D. theses on this topic. From 1906 Sommerfeld established himself as professor of physics and director of the new Theoretical Physics Institute at the University of Munich. He was selected for these positions by Wilhelm Röntgen, Director of the Physics Institute at Munich.
55.
Orbital eccentricity
–
The orbital eccentricity of an astronomical object is a parameter that determines the amount by which its orbit around another body deviates from a perfect circle. A value of 0 is a circular orbit; values between 0 and 1 form an elliptical orbit; 1 is a parabolic escape orbit; and greater than 1 is a hyperbola. The term derives its name from the parameters of conic sections, and it is normally used for the isolated two-body problem, but extensions exist for objects following a rosette orbit through the galaxy. In a two-body problem with inverse-square-law force, every orbit is a Kepler orbit; the eccentricity of this Kepler orbit is a non-negative number that defines its shape. The limit case between an ellipse and a hyperbola, when e equals 1, is a parabola. Radial trajectories are classified as elliptic, parabolic, or hyperbolic based on the energy of the orbit, not the eccentricity; radial orbits have zero angular momentum and hence eccentricity equal to one. Keeping the energy constant and reducing the angular momentum, elliptic, parabolic, and hyperbolic orbits each tend to the corresponding type of radial trajectory while e tends to 1. For a repulsive force only the hyperbolic trajectory, including the radial version, is applicable. For elliptical orbits, a simple proof shows that arcsin(e) yields the projection angle of a perfect circle to an ellipse of eccentricity e. For example, to view the eccentricity of the planet Mercury (e = 0.2056), calculate the inverse sine to find a projection angle of about 11.86 degrees; next, tilt any circular object by that angle and the apparent ellipse projected to your eye will be of that same eccentricity. The word derives from Medieval Latin eccentricus, in turn from Greek ἔκκεντρος ekkentros "out of the center", from ἐκ- ek-. "Eccentric" first appeared in English in 1551, with the definition "a circle in which the earth, sun, etc. deviates from its centre"; five years later, in 1556, an adjectival form of the word was added. The eccentricity of an orbit can be calculated from the orbital state vectors as the magnitude of the eccentricity vector: e = |e|.
For elliptical orbits it can also be calculated from the periapsis and apoapsis, since r_p = a(1 − e) and r_a = a(1 + e), where a is the semi-major axis: e = (r_a − r_p)/(r_a + r_p) = 1 − 2/((r_a/r_p) + 1), where r_p is the radius at periapsis and r_a the radius at apoapsis. For Earth's annual orbit path, the ratio r_a/r_p (longest radius divided by shortest radius, relative to the focus of the path) is about 1.034. The eccentricity of the Earth's orbit is currently about 0.0167, so the Earth's orbit is nearly circular; Venus and Neptune have even lower eccentricity. Over hundreds of thousands of years, the eccentricity of the Earth's orbit varies from nearly 0.0034 to almost 0.058 as a result of gravitational attractions among the planets. The table lists the values for all planets and dwarf planets. Mercury has the greatest orbital eccentricity of any planet in the Solar System; such eccentricity is sufficient for Mercury to receive twice as much solar irradiation at perihelion compared to aphelion. Before its demotion from planet status in 2006, Pluto was considered to be the planet with the most eccentric orbit.
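The periapsis/apoapsis formula above can be checked against the Earth figures quoted in the text (r_a/r_p ≈ 1.034 and e ≈ 0.0167):

```python
def eccentricity(r_apoapsis, r_periapsis):
    """Orbital eccentricity from apoapsis and periapsis radii:
    e = (ra - rp) / (ra + rp)."""
    return (r_apoapsis - r_periapsis) / (r_apoapsis + r_periapsis)

# Earth: the text gives ra/rp ≈ 1.034, which reproduces e ≈ 0.0167.
print(round(eccentricity(1.034, 1.0), 4))  # 0.0167

# A circular orbit (ra == rp) has eccentricity 0, as expected.
print(eccentricity(1.0, 1.0))  # 0.0
```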
56.
Declination
–
In astronomy, declination is one of the two angles that locate a point on the celestial sphere in the equatorial coordinate system, the other being hour angle. Declination's angle is measured north or south of the celestial equator. The root of the word declination means "a bending away" or "a bending down"; it comes from the same root as the words incline and recline. Declination in astronomy is comparable to geographic latitude, projected onto the celestial sphere: points north of the celestial equator have positive declinations, while those south have negative declinations. Any units of angular measure can be used for declination, but it is customarily measured in degrees, minutes, and seconds of arc. Declinations with magnitudes greater than 90° do not occur, because the poles are the northernmost and southernmost points of the celestial sphere. The Earth's axis rotates slowly westward about the poles of the ecliptic, completing one circuit in about 26,000 years; this effect, known as precession, causes the coordinates of stationary celestial objects to change continuously. Therefore, equatorial coordinates are inherently relative to the year of their observation, and astronomers specify them with reference to a particular year, known as an epoch; coordinates from different epochs must be rotated to match each other. The currently used standard epoch is J2000.0, which is January 1, 2000 at 12:00 TT; the prefix J indicates that it is a Julian epoch. Prior to J2000.0, astronomers used the successive Besselian epochs B1875.0, B1900.0, and B1950.0. The declinations of Solar System objects change very rapidly compared to those of stars, due to orbital motion and close proximity. As seen from latitude φ in the Northern Hemisphere, objects with declinations greater than 90° − φ always remain above the horizon; this similarly occurs in the Southern Hemisphere for objects with declinations less than −90° − φ. An extreme example is the pole star, which has a declination near to +90°. Circumpolar stars never dip below the horizon; conversely, there are other stars that never rise above the horizon, as seen from any given point on the Earth's surface.
Generally, if a star whose declination is δ is circumpolar for some observer, then a star whose declination is −δ never rises above the horizon, as seen by the same observer. Likewise, if a star is circumpolar for an observer at latitude φ, it never rises above the horizon as seen by an observer at latitude −φ. Neglecting atmospheric refraction, declination is always 0° at the east and west points of the horizon. At the north point, it is 90° − |φ|, and at the south point, −90° + |φ|; from the poles, declination is uniform around the entire horizon, approximately 0°. Non-circumpolar stars are visible only during certain days or seasons of the year. The Sun's declination varies with the seasons. As seen from arctic or antarctic latitudes, the Sun is circumpolar near the local summer solstice, leading to the phenomenon of it being above the horizon at midnight.
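The circumpolar rules above reduce to two simple comparisons; a sketch for a northern-hemisphere observer, neglecting atmospheric refraction as the text does (the Polaris and London figures are illustrative approximations, not values from the text):

```python
def is_circumpolar(declination_deg, latitude_deg):
    """True if the star never sets for an observer at northern latitude phi:
    the condition delta > 90 - phi from the text."""
    return declination_deg > 90.0 - latitude_deg

def never_rises(declination_deg, latitude_deg):
    """True if the star never rises for the same observer: the mirrored rule
    delta < -(90 - phi)."""
    return declination_deg < -(90.0 - latitude_deg)

# Polaris (delta ≈ +89.3°) seen from London (phi ≈ +51.5°) never sets,
# and a hypothetical star at delta = -89.3° never rises there:
print(is_circumpolar(89.3, 51.5))   # True
print(never_rises(-89.3, 51.5))     # True
print(is_circumpolar(0.0, 51.5))    # False: equatorial stars rise and set
```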
57.
Angular momentum
–
In physics, angular momentum is the rotational analog of linear momentum. It is an important quantity in physics because it is a conserved quantity: the angular momentum of a system remains constant unless acted on by an external torque. The definition of angular momentum for a point particle is the pseudovector L = r × p, the cross product of the particle's position vector r (relative to some origin) and its momentum p. This definition can be applied to each point in continua like solids or fluids. Unlike linear momentum, angular momentum does depend on where the origin is chosen, since the particle's position is measured from it. The angular momentum of a rigid object can also be connected to the angular velocity ω of the object via the moment of inertia I, as L = Iω. However, while ω always points in the direction of the rotation axis, the angular momentum L may point in a different direction, depending on how the mass is distributed. Angular momentum is additive: the total angular momentum of a system is the vector sum of the angular momenta of its parts; for continua or fields one uses integration. Torque can be defined as the rate of change of angular momentum, analogous to force. Applications include the gyrocompass, control moment gyroscope, inertial navigation systems, reaction wheels, and flying discs or Frisbees. In general, conservation does limit the possible motion of a system. In quantum mechanics, angular momentum is an operator with quantized eigenvalues; angular momentum is subject to the Heisenberg uncertainty principle, meaning only one component can be measured with definite precision while the other two cannot. Also, the spin of elementary particles does not correspond to literal spinning motion. Angular momentum is a vector quantity that represents the product of a body's rotational inertia and rotational velocity about a particular axis, and can be considered an analog of linear momentum. Thus, where linear momentum is proportional to mass m and linear speed v, p = mv, angular momentum is proportional to moment of inertia I and angular speed ω. Unlike mass, which depends only on the amount of matter, moment of inertia is also dependent on the position of the axis of rotation.
Unlike linear speed, which occurs along a straight line, angular speed occurs about a center of rotation. Therefore, strictly speaking, L should be referred to as the angular momentum relative to that center. This simple analysis can also apply to non-circular motion if only the component of the motion which is perpendicular to the radius vector is considered. In that case, L = r m v⊥, where v⊥ = v sin θ is the component of the velocity perpendicular to the radius vector. It is this definition, (length of moment arm) × (linear momentum), to which the term moment of momentum refers.
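The vector definition L = r × p and the scalar form L = r·m·v·sin θ from the text can be compared directly; the numbers below are an illustrative example, not data from the text:

```python
import numpy as np

# L = r x p for a point particle, compared with the scalar form
# L = r * m * v_perp = r * m * v * sin(theta) given in the text.
m = 2.0                        # mass, kg
r = np.array([3.0, 0.0, 0.0])  # position, m
v = np.array([0.0, 4.0, 0.0])  # velocity, m/s (here v is perpendicular to r)

L_vec = np.cross(r, m * v)
L_scalar = np.linalg.norm(r) * m * np.linalg.norm(v)  # r*m*v*sin(90°)

print(L_vec)  # [ 0.  0. 24.] : L points along the rotation axis (z)
print(np.isclose(np.linalg.norm(L_vec), L_scalar))  # True
```

Because v is perpendicular to r in this example, sin θ = 1 and the two forms agree exactly; for general motion only the perpendicular component v⊥ contributes.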
58.
Zeeman effect
–
The Zeeman effect, named after the Dutch physicist Pieter Zeeman, is the effect of splitting a spectral line into several components in the presence of a static magnetic field. It is analogous to the Stark effect, the splitting of a spectral line into several components in the presence of an electric field. Also similar to the Stark effect, transitions between different components have, in general, different intensities, with some being entirely forbidden. The Zeeman effect is very important in applications such as nuclear magnetic resonance spectroscopy, electron spin resonance spectroscopy, magnetic resonance imaging and Mössbauer spectroscopy. It may also be utilized to improve accuracy in atomic absorption spectroscopy. A theory about the magnetic sense of birds assumes that a protein in the retina is changed due to the Zeeman effect. When the spectral lines are absorption lines, the effect is called the inverse Zeeman effect. Historically, one distinguishes between the normal and the anomalous Zeeman effect. The anomalous effect appears on transitions where the net spin of the electrons is an odd half-integer, so that the number of Zeeman sub-levels is even; it was called "anomalous" because the electron spin had not yet been discovered. At higher magnetic fields the effect ceases to be linear. At even higher field strength, when the strength of the external field is comparable to the strength of the atom's internal field, electron coupling is disturbed and the spectral lines rearrange; this is called the Paschen-Back effect. In the modern scientific literature, these terms are rarely used, with a tendency to use just the "Zeeman effect". The total magnetic moment of the atom consists of electronic and nuclear parts; however, the nuclear part is many orders of magnitude smaller and will be neglected here. Therefore, μ ≈ −μ_B g_J J/ħ, where μ_B is the Bohr magneton, J is the total electronic angular momentum, and g_J is the Landé g-factor (for the electron spin alone, g_S ≈ 2.0023192).
If the interaction term V_M is small, it can be treated as a perturbation; in the Paschen-Back effect, described below, V_M exceeds the LS coupling significantly. In ultrastrong magnetic fields, the magnetic interaction may exceed H_0, in which case the atom can no longer exist in its normal meaning. There are, of course, intermediate cases which are more complex than these limit cases. If the spin-orbit interaction dominates over the effect of the external magnetic field, L and S are not separately conserved; the spin and orbital angular momentum vectors can be thought of as precessing about the total angular momentum vector J. Thus, ⟨V_M⟩ = (μ_B/ħ) g_J ⟨J⟩ · B. In the presence of an external magnetic field, the weak-field Zeeman effect splits the 1S1/2 and 2P1/2 levels into 2 states each and the 2P3/2 level into 4 states. Note in particular that the size of the splitting is different for the different orbitals. On the left, fine-structure splitting is depicted; this splitting occurs even in the absence of a magnetic field, as it is due to spin-orbit coupling.
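The weak-field splitting of the 2P3/2 level into 4 equally spaced states can be sketched with the standard first-order shift ΔE = m_j·g_J·μ_B·B, which is assumed here as the textbook result rather than quoted from the text; the Landé factor g_J = 4/3 for 2P3/2 and the 1 T field are illustrative assumptions:

```python
MU_B = 9.2740100783e-24  # Bohr magneton, J/T
EV = 1.602176634e-19     # J per eV

def zeeman_shift_ev(m_j, g_j, b_tesla):
    """Weak-field Zeeman energy shift dE = m_j * g_J * mu_B * B, in eV."""
    return m_j * g_j * MU_B * b_tesla / EV

# Hydrogen 2P3/2 (g_J = 4/3) in a 1 T field: four equally spaced sub-levels.
for m_j in (-1.5, -0.5, 0.5, 1.5):
    print(m_j, zeeman_shift_ev(m_j, 4.0 / 3.0, 1.0))
```

The spacing between adjacent m_j values is g_J·μ_B·B ≈ 7.7e-5 eV here; levels with different g_J split by different amounts, which is the point made above about the splitting differing between orbitals.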
59.
Dirac equation
–
In particle physics, the Dirac equation is a relativistic wave equation derived by British physicist Paul Dirac in 1928. In its free form, or including electromagnetic interactions, it describes all spin-1/2 massive particles such as electrons and quarks. It was validated by accounting for the fine details of the hydrogen spectrum in a completely rigorous way. The equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved. Moreover, in the limit of zero mass, the Dirac equation reduces to the Weyl equation. This accomplishment has been described as fully on a par with the works of Newton and Maxwell. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-1/2 particles. The Dirac equation in the form originally proposed by Dirac is (βmc² + c(α₁p₁ + α₂p₂ + α₃p₃)) ψ = iħ ∂ψ/∂t. The p1, p2, p3 are the components of the momentum; also, c is the speed of light, and ħ is the Planck constant divided by 2π. These fundamental physical constants reflect special relativity and quantum mechanics, respectively. Dirac's purpose in casting this equation was to explain the behavior of the relativistically moving electron, and so to allow the atom to be treated in a manner consistent with relativity. His rather modest hope was that the corrections introduced this way might have a bearing on the problem of atomic spectra. The new elements in this equation are the 4×4 matrices αk and β, and the four-component wave function ψ. There are four components in ψ because the evaluation of it at any point in configuration space is a bispinor; it is interpreted as a superposition of a spin-up electron, a spin-down electron, a spin-up positron, and a spin-down positron. These matrices and the form of the wave function have a deep mathematical significance. The algebraic structure represented by the matrices had been created some 50 years earlier by the English mathematician W. K. Clifford.
In turn, Clifford's ideas had emerged from the work of the German mathematician Hermann Grassmann in his Lineale Ausdehnungslehre; the latter had been regarded as well-nigh incomprehensible by most of his contemporaries. The appearance of something so seemingly abstract, at such a late date, and in such a direct physical manner, is one of the most remarkable chapters in the history of physics. The Dirac equation is superficially similar to the Schrödinger equation for a massive free particle, iħ ∂φ/∂t = −(ħ²/2m)∇²φ, whose left side represents the square of the momentum operator divided by twice the mass. A naive relativistic generalization (the Klein-Gordon equation) has space and time derivatives both entering to second order. This has a consequence for the interpretation of such an equation: because it is second order in the time derivative, one must specify initial values both of the wave function itself and of its first time derivative in order to solve definite problems.
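The Clifford-algebra structure the matrices must satisfy (αᵢαⱼ + αⱼαᵢ = 2δᵢⱼ, αₖβ + βαₖ = 0, β² = 1) can be verified numerically. The explicit 4×4 matrices below use the conventional Dirac representation built from Pauli matrices, which is a standard choice assumed here rather than given in the text:

```python
import numpy as np

# Pauli matrices and the Dirac-representation alpha_k and beta matrices.
I2 = np.eye(2)
Z2 = np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

alpha = [np.block([[Z2, s], [s, Z2]]) for s in sigma]   # alpha_k = offdiag(sigma_k)
beta = np.block([[I2, Z2], [Z2, -I2]])                  # beta = diag(I, -I)

def anticommutator(a, b):
    return a @ b + b @ a

# Verify the algebra the Dirac equation requires of its matrices:
for i in range(3):
    assert np.allclose(anticommutator(alpha[i], beta), np.zeros((4, 4)))
    for j in range(3):
        expected = 2 * np.eye(4) if i == j else np.zeros((4, 4))
        assert np.allclose(anticommutator(alpha[i], alpha[j]), expected)
assert np.allclose(beta @ beta, np.eye(4))
print("Dirac matrix algebra verified")
```

These anticommutation relations are exactly what forces the matrices to be at least 4×4, which is why ψ has four components.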
60.
Separation of variables
–
Suppose a differential equation can be written in the form d/dx f(x) = g(x) h(f(x)), which we can write more simply by letting y = f(x): dy/dx = g(x) h(y). As long as h(y) ≠ 0, we can rearrange terms to obtain dy/h(y) = g(x) dx, so that the two variables have been separated. Here dx can be viewed, at a simple level, as just a convenient notation, which provides a handy mnemonic aid for assisting with manipulations; a formal definition of dx as a differential is somewhat advanced. Those who dislike Leibniz's notation may prefer to write this as (1/h(y)) dy/dx = g(x), but that fails to make it quite as obvious why this is called "separation of variables". Integrating both sides gives ∫ dy/h(y) = ∫ g(x) dx; if one can evaluate the two integrals, one can find a solution to the differential equation. Observe that this process effectively allows us to treat the derivative dy/dx as a fraction which can be separated. This allows us to solve separable differential equations more conveniently, as demonstrated in the example below. Separation of variables may be used to solve the logistic population equation, whose general solution is P(t) = K/(1 + A e^(−kt)). Then we have P(0) = P₀ = K/(1 + A e⁰); noting that e⁰ = 1 and solving for A, we get A = (K − P₀)/P₀. As a second example, consider the one-dimensional heat equation ∂u/∂t = α ∂²u/∂x² with boundary conditions u(0, t) = u(L, t) = 0. Attempting a product solution u(x, t) = T(t) X(x) and substituting gives T′/(αT) = X″/X = −λ, where −λ is the eigenvalue for both operators, and T and X are corresponding eigenfunctions. We will now show that solutions for X for values of λ ≤ 0 cannot occur. 1. Suppose λ < 0. Then there exist real numbers B, C such that X(x) = B e^(√(−λ) x) + C e^(−√(−λ) x); from the boundary conditions we get B = 0 = C, which implies u is identically 0. 2. Suppose λ = 0. Then there exist real numbers B, C such that X(x) = Bx + C; from the boundary conditions we conclude in the same manner as in 1 that u is identically 0. Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that T(t) = A e^(−λαt) and X(x) = B sin(√λ x) + C cos(√λ x). From the boundary conditions we get C = 0 and that, for some positive integer n, √λ = nπ/L. This solves the heat equation in the special case that the dependence of u has the product form above. In general, the sum of solutions which satisfy the boundary conditions also satisfies the equation; hence a complete solution can be given as u(x, t) = Σ_{n=1}^∞ D_n sin(nπx/L) exp(−(nπ/L)² αt), where D_n are coefficients determined by the initial condition.
Given the initial condition u|_{t=0} = f(x), we obtain f(x) = Σ_{n=1}^∞ D_n sin(nπx/L); this is the sine series expansion of f. Multiplying both sides by sin(nπx/L) and integrating over [0, L] results in D_n = (2/L) ∫₀^L f(x) sin(nπx/L) dx. This method requires that the eigenfunctions of x, here {sin(nπx/L)}_{n=1}^∞, are orthogonal; in general this is guaranteed by Sturm-Liouville theory. Suppose next that the equation is nonhomogeneous, with the boundary conditions the same as above.
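The coefficient formula D_n = (2/L) ∫₀^L f(x) sin(nπx/L) dx can be checked numerically; the initial condition f(x) = x(L − x) below is a hypothetical example chosen because its sine coefficients are known in closed form (8L²/(nπ)³ for odd n, 0 for even n):

```python
import numpy as np

# Numerically compute D_n = (2/L) * integral of f(x) sin(n pi x / L) over [0, L]
# for the assumed example initial condition f(x) = x(L - x), L = 1.
L = 1.0
x = np.linspace(0.0, L, 20001)
f = x * (L - x)

def trapezoid(y, xs):
    """Composite trapezoid rule on sample points xs."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(xs)) / 2.0)

def D(n):
    return (2.0 / L) * trapezoid(f * np.sin(n * np.pi * x / L), x)

print(D(1), 8 / np.pi**3)  # both ≈ 0.2580: matches the closed form for n = 1
print(abs(D(2)))           # ≈ 0: even coefficients vanish by symmetry
```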
61.
Partial differential equation
–
In mathematics, a partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. PDEs are used to formulate problems involving functions of several variables. PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid dynamics, elasticity, or quantum mechanics; these seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations. Partial differential equations are equations that involve rates of change with respect to continuous variables. The dynamics of a rigid body take place in a finite-dimensional configuration space, while the configuration of a fluid is infinite-dimensional; this distinction usually makes PDEs much harder to solve than ordinary differential equations. Classic domains where PDEs are used include acoustics, fluid dynamics, and electrodynamics. A partial differential equation for the function u is an equation of the form f(x₁, …, xₙ; u, ∂u/∂x₁, …) = 0, where f is a function of u and its partial derivatives. Common examples of linear PDEs include the heat equation, the wave equation, Laplace's equation, the Helmholtz equation, and the Klein-Gordon equation. A relatively simple PDE is ∂u/∂x = 0. This relation implies that the function u(x, y) is independent of x; however, the equation gives no information on the function's dependence on the variable y. Hence the general solution of this equation is u(x, y) = f(y), where f is an arbitrary function of y. The analogous ordinary differential equation is du/dx = 0, which has the solution u(x) = c, where c is any constant. These two examples illustrate that general solutions of ordinary differential equations involve arbitrary constants, but solutions of PDEs involve arbitrary functions. A solution of a PDE is generally not unique; additional conditions must generally be specified on the boundary of the region where the solution is defined.
For instance, in the simple example above, the function f(y) can be determined if u is specified on the line x = 0. Even if the solution of a partial differential equation exists and is unique, it may nevertheless have undesirable properties; the mathematical study of such questions is usually in the more powerful context of weak solutions. In a classic example due to Hadamard, the derivative of u with respect to y approaches 0 uniformly in x as n increases, and yet the solution approaches infinity if nx is not an integer multiple of π, for any non-zero value of y.
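The "arbitrary constants versus arbitrary functions" contrast above can be demonstrated symbolically; this is a sketch using sympy, with `f` standing in for an arbitrary function:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

# Any function of y alone solves the PDE du/dx = 0:
assert sp.diff(f(y), x) == 0

# The analogous ODE du/dx = 0 admits only constant solutions u(x) = C1:
u = sp.Function('u')
sol = sp.dsolve(sp.Derivative(u(x), x), u(x))
print(sol)  # Eq(u(x), C1)
```

One arbitrary constant appears in the ODE solution; the PDE solution instead carries a whole arbitrary function f(y), which is why boundary data on a line (such as x = 0) is needed to pin it down.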
62.
Wavefunction
–
A wave function in quantum physics is a description of the quantum state of a system. The wave function is a complex-valued probability amplitude, and the probabilities for the possible results of measurements made on the system can be derived from it. The most common symbols for a wave function are the Greek letters ψ or Ψ. The wave function is a function of the degrees of freedom corresponding to some maximal set of commuting observables; once such a representation is chosen, the wave function can be derived from the quantum state. For a given system, the choice of which commuting degrees of freedom to use is not unique. Some particles, like electrons and photons, have nonzero spin, and the wave function for such particles includes spin as an intrinsic, discrete degree of freedom; other discrete variables can also be included, such as isospin. When a system has discrete degrees of freedom, the wave function assigns a complex number for each possible value of those variables, and these values are often displayed in a column matrix. According to the superposition principle of quantum mechanics, wave functions can be added together and multiplied by complex numbers to form new wave functions. The Schrödinger equation determines how wave functions evolve over time. A wave function behaves qualitatively like other waves, such as water waves or waves on a string, because the Schrödinger equation is mathematically a type of wave equation; this explains the name "wave function", and gives rise to wave-particle duality. However, the wave function in quantum mechanics describes a kind of physical phenomenon, still open to different interpretations, which fundamentally differs from that of classic mechanical waves. In the statistical interpretation, the squared modulus of the wave function is the probability density for finding the system in a given configuration; the integral of this quantity, over all the system's degrees of freedom, must equal 1. This general requirement a wave function must satisfy is called the normalization condition. Since the wave function is complex-valued, only its relative phase and relative magnitude can be measured.
In 1905 Einstein postulated the proportionality between the frequency of a photon and its energy, E = hf, and in 1916 the corresponding relation between photon momentum and wavelength, λ = h/p; these equations represent wave-particle duality for both massless and massive particles. In the 1920s and 1930s, quantum mechanics was developed using calculus and linear algebra. Those who used the techniques of calculus included Louis de Broglie, Erwin Schrödinger, and others, developing wave mechanics; those who applied the methods of linear algebra included Werner Heisenberg, Max Born, and others, developing matrix mechanics. Schrödinger subsequently showed that the two approaches were equivalent. However, no one was clear on how to interpret the wave function. At first, Schrödinger and others thought that wave functions represent particles that are spread out, with most of the particle being where the wave function is large. This was shown to be incompatible with the scattering of a wave packet representing a particle off a target: while a scattered particle may scatter in any direction, it does not break up.
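The Planck-Einstein and de Broglie relations quoted above are easy to evaluate; the green-light frequency and electron speed below are illustrative assumptions, not values from the text:

```python
H = 6.62607015e-34   # Planck constant, J·s
C = 299792458.0      # speed of light, m/s

def photon_energy(frequency_hz):
    """E = h*f (Planck-Einstein relation)."""
    return H * frequency_hz

def de_broglie_wavelength(momentum):
    """lambda = h/p (de Broglie relation)."""
    return H / momentum

# Green light at f ≈ 5.45e14 Hz carries about 3.6e-19 J per photon (~2.25 eV):
print(photon_energy(5.45e14))

# An electron (m_e ≈ 9.109e-31 kg) moving at 1e6 m/s has lambda ≈ 0.73 nm,
# comparable to atomic spacings, which is why electrons diffract in crystals:
print(de_broglie_wavelength(9.10938e-31 * 1e6))
```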
63.
Spherical coordinates
–
In mathematics, a spherical coordinate system is a coordinate system for three-dimensional space in which the position of a point is specified by three numbers: the radial distance of that point from a fixed origin, its polar angle measured from a fixed zenith direction, and the azimuthal angle of its projection on a reference plane through the origin. It can be seen as the three-dimensional version of the polar coordinate system. The radial distance is called the radius or radial coordinate. The polar angle may be called colatitude, zenith angle, or normal angle; the use of symbols and the order of the coordinates differs between sources. In both systems ρ is often used instead of r, and other conventions are also used, so great care needs to be taken to check which one is being used. A number of different spherical coordinate systems following other conventions are used outside mathematics. In a geographical coordinate system, positions are measured in latitude, longitude, and height or altitude. There are a number of different celestial coordinate systems based on different fundamental planes; in these, the polar angle is often replaced by the elevation angle measured from the reference plane, so that an elevation angle of zero is at the horizon. The spherical coordinate system generalises the two-dimensional polar coordinate system. It can also be extended to higher-dimensional spaces and is then referred to as a hyperspherical coordinate system. To define a spherical coordinate system, one must choose two orthogonal directions, the zenith and the azimuth reference, and an origin point in space. These choices determine a reference plane that contains the origin and is perpendicular to the zenith. The spherical coordinates of a point P are then defined as follows: the inclination is the angle between the zenith direction and the line segment OP; the azimuth is the angle measured from the azimuth reference direction to the orthogonal projection of the line segment OP on the reference plane. The sign of the azimuth is determined by choosing what is a positive sense of turning about the zenith. This choice is arbitrary, and is part of the coordinate system's definition. The elevation angle is 90 degrees minus the inclination angle. If the inclination is zero or 180 degrees, the azimuth is arbitrary; if the radius is zero, both azimuth and inclination are arbitrary.
In linear algebra, the vector from the origin O to the point P is often called the position vector of P. Several different conventions exist for representing the three coordinates, and for the order in which they should be written. The use of (r, θ, φ) to denote radial distance, inclination, and azimuth, respectively, is common practice in physics, and is specified by ISO standard 80000-2:2009, and earlier in ISO 31-11.
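Under the physics/ISO convention (r, θ, φ) described above, the conversion to and from Cartesian coordinates can be sketched as follows (the function names are illustrative):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    # Physics/ISO convention: theta = inclination from the zenith, phi = azimuth.
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

def cartesian_to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0  # inclination is arbitrary at the origin
    phi = math.atan2(y, x)                      # azimuth, measured in the reference plane
    return r, theta, phi

# Round trip: convert a point and recover the original coordinates.
p = spherical_to_cartesian(2.0, 0.7, 1.2)
print(cartesian_to_spherical(*p))  # ≈ (2.0, 0.7, 1.2)
```

The degenerate cases mentioned in the text show up here directly: at the origin the inclination is conventionally set to zero, and on the zenith axis (x = y = 0) atan2 returns an arbitrary azimuth of 0.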
64.
Laguerre polynomial
–
In mathematics, the Laguerre polynomials, named after Edmond Laguerre, are solutions of Laguerre's equation, x y″ + (1 − x) y′ + n y = 0, which is a second-order linear differential equation. This equation has polynomial solutions only if n is a non-negative integer. More generally, the name Laguerre polynomials is used for solutions of x y″ + (α + 1 − x) y′ + n y = 0; these are also named generalized Laguerre polynomials, as will be done here. The Laguerre polynomials are used for Gauss–Laguerre quadrature to numerically compute integrals of the form ∫_0^∞ f(x) e^(−x) dx. These polynomials, usually denoted L_0, L_1, …, are a polynomial sequence which may be defined by the Rodrigues formula L_n(x) = (e^x / n!) d^n/dx^n (e^(−x) x^n), reducing to the closed form of a following section. They are orthogonal polynomials with respect to the inner product ⟨f, g⟩ = ∫_0^∞ f(x) g(x) e^(−x) dx. The sequence of Laguerre polynomials n! L_n is a Sheffer sequence, d/dx L_n = (d/dx − 1) L_(n−1). The rook polynomials in combinatorics are more or less the same as Laguerre polynomials, up to elementary changes of variables. The Laguerre polynomials arise in quantum mechanics, in the radial part of the solution of the Schrödinger equation for a one-electron atom. They also describe the static Wigner functions of oscillator systems in quantum mechanics in phase space, and they further enter in the quantum mechanics of the Morse potential. Physicists sometimes use a definition for the Laguerre polynomials which is larger by a factor of n! than the definition used here. In the solution of some boundary value problems, the characteristic values can be useful: L_k(0) = 1 and L_k′(0) = −k. The closed form is L_n(x) = Σ_{k=0}^{n} (n choose k) ((−1)^k / k!) x^k, and the generating function likewise follows: Σ_{n=0}^∞ t^n L_n(x) = (1/(1 − t)) e^(−t x/(1 − t)). Polynomials of negative index can be expressed using the ones with positive index. For arbitrary real α the polynomial solutions of the differential equation x y″ + (α + 1 − x) y′ + n y = 0 are called generalized Laguerre polynomials, or associated Laguerre polynomials.
The simple Laguerre polynomials are the special case α = 0 of the generalized Laguerre polynomials. The Rodrigues formula for them is L_n^(α)(x) = (x^(−α) e^x / n!) d^n/dx^n (e^(−x) x^(n+α)). The generating function for them is Σ_{n=0}^∞ t^n L_n^(α)(x) = (1/(1 − t)^(α+1)) e^(−t x/(1 − t)). Laguerre functions are defined by confluent hypergeometric functions and Kummer's transformation as L_n^(α)(x) = (n + α choose n) M(−n, α + 1, x), where (n + α choose n) is a generalized binomial coefficient. When n is an integer the function reduces to a polynomial of degree n, and it has the alternative expression L_n^(α)(x) = ((−1)^n / n!) U(−n, α + 1, x) in terms of Kummer's function of the second kind. The closed form for these generalized Laguerre polynomials of degree n is L_n^(α)(x) = Σ_{i=0}^{n} (−1)^i (n + α choose n − i) x^i / i!, derived by applying Leibniz's theorem for differentiation of a product to the Rodrigues formula.
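The simple Laguerre polynomials can be evaluated numerically with the standard three-term recurrence (k + 1) L_{k+1}(x) = (2k + 1 − x) L_k(x) − k L_{k−1}(x), a known identity not quoted in the text above. A minimal sketch (the function name is illustrative):

```python
def laguerre(n, x):
    """Evaluate the simple Laguerre polynomial L_n at x using the
    three-term recurrence (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 1.0 - x  # L_0 and L_1
    for k in range(1, n):
        prev, cur = cur, ((2 * k + 1 - x) * cur - k * prev) / (k + 1)
    return cur

# L_2(x) = 1 - 2x + x^2/2, so L_2(1) = -0.5; also L_n(0) = 1 for every n.
print(laguerre(2, 1.0))  # -0.5
print(laguerre(5, 0.0))  # 1.0
```

The two printed values check the closed form for n = 2 and the characteristic value L_k(0) = 1 mentioned above.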
65.
Spherical harmonic
–
Spherical Harmonic is a science fiction novel in the Saga of the Skolian Empire by Catherine Asaro. It tells the story of Dyhianna Selei, the Ruby Pharaoh of the Skolian Imperialate, as she strives to reform her government. Spherical Harmonic is a first-person narrative told from the viewpoint of Dyhianna Selei. Although an elected Assembly governs the Imperialate, in ages past the Ruby Pharaoh ruled as absolute sovereign; Selei is the descendant of the ancient pharaohs, and is considered the titular ruler of modern Skolia. Spherical Harmonic takes place following the Radiance War, a conflict fought between the Imperialate and the Eubian Concord, an empire ruled by a caste of narcissists called Aristos. The Eubian economy is based on the slave trade, which the Aristos seek to expand to the Imperialate. Just prior to the opening scene of Spherical Harmonic, Dyhianna Selei escapes a Eubian military force by stepping into a Lock; in mathematical terms, she has entered an alternate dimension defined by the functions known as spherical harmonics. As the book opens, she is coalescing on a moon called Opalite, reforming in partial waves that transfer her from one universe to the other. Some prose in the book is written in the shape of the functions found in the spherical harmonics. As Selei fades in and out of existence, in danger of disappearing, she recovers her memories about her identity. She manages to activate an emergency protocol secretly established on the moon for her protection; as a result she is found by Jon Casestar, an admiral in the Skolian Fleet, and Commander Vaz Majda, an elite fighter pilot who is also her sister-in-law. Once aboard an ISC battle cruiser, Selei strives to reunite the Ruby Dynasty, and the book follows her attempts to resurrect the Skolian military and government.
Selei also struggles to discover what has happened to her son Taquinil. Unable to trust anyone, Selei ends up seeking to overthrow the elected government of her own empire so she can rebuild it from the ashes of the war. In one sense, Spherical Harmonic is an adventure about the recovery of a civilization from a war that had no winner. Both acclaimed and criticized for the complexity of her plotting and world building, Asaro is also known for the use of mathematics in her novels. Spherical Harmonic involves an imagined universe based on the Hilbert space described by the wave functions that solve the Laplace equation. The spherical harmonics are a set of eigenfunctions used in many areas of mathematics and physics. A theoretical physicist by training, Asaro uses the concepts of the Hilbert space described by the harmonics to create the universe called Kyle Space.
66.
Dirac notation
–
In quantum mechanics, bra–ket notation is a standard notation for describing quantum states. The notation uses angle brackets and vertical bars, and it can also be used to denote abstract vectors and linear functionals in mathematics. The relevant quantity is actually |⟨φ∣ψ⟩|² = |⟨ψ∣φ⟩|², which is interpreted according to the fundamental Born rule. Bra–ket notation is widespread in quantum mechanics; many phenomena that are explained using quantum mechanics are most clearly demonstrated with its help. It is only when a bra appears unpaired in an expression, such as ⟨φ|, that it must be read as a linear functional, and it is in handling expressions containing such objects that the Dirac notation comes into its own. The notation does not introduce or imply any new physics. In physics, basis vectors allow any Euclidean vector to be represented geometrically using angles and lengths in different directions, i.e. in terms of the spatial orientations. The vector A can be represented using any set of basis vectors. Informally, basis vectors are like the building blocks of a vector: they are added together to compose a vector. Two useful representations of a vector are a linear combination of basis vectors and a column matrix. The vector A is still represented by a linear combination of basis vectors or a column matrix. Even more generally, A can be a vector in a complex Hilbert space. Some Hilbert spaces, like ℂ^N, have finite dimension, while others have infinite dimension; in an infinite-dimensional space, the representation of A would be a list of infinitely many complex numbers. When this notation is used, these vectors are called kets, and this applies to all vectors, the resultant vector and the basis. Note how any symbols, letters, numbers, or even words, whatever serves as a convenient label, can be used as the label inside a ket. In other words, the symbol |A⟩ has a specific and universal mathematical meaning, while just the A by itself does not.
In this context, one should best use a symbol different from the equal sign, for example the symbol ≐. An inner product is a generalization of the dot product; the inner product of two vectors is a scalar. Bras are linear functionals, i.e. linear transformations that input a ket and output a complex number, and the bra linear functionals are defined to be consistent with the inner product. In mathematical terminology, the space of bras is the dual space to the vector space of kets. Bra–ket notation can be used even if the vector space is not a Hilbert space.
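In a finite-dimensional space like ℂ^N, the bra–ket machinery reduces to familiar matrix operations: a ket is a column vector, its bra is the conjugate transpose, and ⟨φ∣ψ⟩ is a matrix product. A minimal sketch (the particular vectors are illustrative):

```python
import numpy as np

# Kets as column vectors in C^2; bras as their conjugate transposes.
ket_psi = np.array([[1.0], [1j]]) / np.sqrt(2)  # |psi>, normalized
ket_phi = np.array([[1.0], [0.0]])              # |phi>

bra_phi = ket_phi.conj().T             # <phi| is the dual (conjugate transpose)
amplitude = (bra_phi @ ket_psi)[0, 0]  # <phi|psi>, a complex scalar
probability = abs(amplitude) ** 2      # Born rule: |<phi|psi>|^2

print(probability)  # ≈ 0.5
```

Note that the conjugate transpose makes ⟨φ∣ψ⟩ = ⟨ψ∣φ⟩* hold automatically, so |⟨φ∣ψ⟩|² = |⟨ψ∣φ⟩|² as stated above.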
67.
Gegenbauer polynomial
–
In mathematics, Gegenbauer polynomials or ultraspherical polynomials C_n^(α)(x) are orthogonal polynomials on the interval [−1, 1] with respect to the weight function (1 − x²)^(α−1/2). They generalize Legendre polynomials and Chebyshev polynomials, and are special cases of Jacobi polynomials; they are named after Leopold Gegenbauer. A variety of characterizations of the Gegenbauer polynomials are available. The polynomials can be defined in terms of their generating function, 1/(1 − 2 x t + t²)^α = Σ_{n=0}^∞ C_n^(α)(x) t^n. The polynomials satisfy the recurrence relation C_0^(α)(x) = 1, C_1^(α)(x) = 2 α x, n C_n^(α)(x) = 2 x (n + α − 1) C_{n−1}^(α)(x) − (n + 2α − 2) C_{n−2}^(α)(x). Gegenbauer polynomials are solutions of the Gegenbauer differential equation (1 − x²) y″ − (2α + 1) x y′ + n (n + 2α) y = 0. When α = 1/2, the equation reduces to the Legendre equation, and the Gegenbauer polynomials reduce to the Legendre polynomials. They are given as Gaussian hypergeometric series in certain cases where the series is in fact finite: C_n^(α)(x) = ((2α)_n / n!) ₂F₁(−n, 2α + n; α + 1/2; (1 − x)/2), where (2α)_n is the rising factorial. Explicitly, C_n^(α)(x) = Σ_{k=0}^{⌊n/2⌋} (−1)^k (Γ(n − k + α) / (Γ(α) k! (n − 2k)!)) (2x)^(n−2k). They are special cases of the Jacobi polynomials: C_n^(α)(x) = ((2α)_n / (α + 1/2)_n) P_n^(α−1/2, α−1/2)(x), in which (2α)_n again represents the rising factorial. One therefore also has the Rodrigues formula C_n^(α)(x) = ((−1)^n / (2^n n!)) (Γ(α + 1/2) Γ(n + 2α) / (Γ(2α) Γ(α + n + 1/2))) (1 − x²)^(−α+1/2) d^n/dx^n [(1 − x²)^(n+α−1/2)]. For a fixed α, the polynomials are orthogonal on [−1, 1] with respect to the weighting function w(x) = (1 − x²)^(α−1/2). To wit, for n ≠ m, ∫_{−1}^{1} C_n^(α)(x) C_m^(α)(x) (1 − x²)^(α−1/2) dx = 0, and they are normalized by ∫_{−1}^{1} [C_n^(α)(x)]² (1 − x²)^(α−1/2) dx = π 2^(1−2α) Γ(n + 2α) / (n! (n + α) [Γ(α)]²). The Gegenbauer polynomials appear naturally as extensions of Legendre polynomials in the context of potential theory; in dimension n = 3, this gives the Legendre polynomial expansion of the gravitational potential. Similar expressions are available for the expansion of the Poisson kernel in a ball, and it follows that the quantities C_{n,k} are spherical harmonics, when regarded as functions of x only. They are, in fact, exactly the zonal spherical harmonics. Gegenbauer polynomials also appear in the theory of positive-definite functions.
The Askey–Gasper inequality reads Σ_{j=0}^{n} C_j^(α)(x) ≥ 0. See also: Rogers polynomials, the q-analogue of Gegenbauer polynomials; Chebyshev polynomials; Romanovski polynomials.
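The recurrence relation quoted above can be turned directly into an evaluator; a minimal sketch (the function name is illustrative), including a check of the α = 1/2 Legendre special case P_2(x) = (3x² − 1)/2:

```python
def gegenbauer(n, alpha, x):
    """Evaluate C_n^(alpha)(x) via the three-term recurrence
    n C_n = 2x(n + alpha - 1) C_{n-1} - (n + 2 alpha - 2) C_{n-2}."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, 2.0 * alpha * x  # C_0 and C_1
    for k in range(2, n + 1):
        prev, cur = cur, (2 * x * (k + alpha - 1) * cur
                          - (k + 2 * alpha - 2) * prev) / k
    return cur

# alpha = 1/2 reproduces the Legendre polynomials:
# C_2^(1/2)(0.3) should equal P_2(0.3) = (3 * 0.09 - 1) / 2 = -0.365.
print(gegenbauer(2, 0.5, 0.3))  # ≈ -0.365
```

Setting alpha = 1 instead yields the Chebyshev polynomials of the second kind, the other classical special case mentioned above.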
68.
Energy levels
–
A quantum mechanical system or particle that is bound, that is, confined spatially, can only take on certain discrete values of energy. This contrasts with classical particles, which can have any energy; the discrete values are called energy levels. The energy spectrum of a system with such discrete energy levels is said to be quantized. In chemistry and atomic physics, an electron shell, or a principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the 1 shell, followed by the 2 shell, then the 3 shell, and so on; the shells correspond with the principal quantum numbers or are labeled alphabetically with the letters used in X-ray notation. Each shell can contain only a fixed number of electrons: the first shell can hold up to two electrons, the second shell can hold up to eight electrons, and the third shell can hold up to 18. The general formula is that the nth shell can in principle hold up to 2n² electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. For an explanation of why electrons exist in these shells, see electron configuration. If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited. If more than one quantum state is at the same energy, that energy level is degenerate, and the states are then called degenerate energy levels. Quantized energy levels result from the relation between a particle's energy and its wavelength.
For a confined particle such as an electron in an atom, only stationary states with energies corresponding to integral numbers of wavelengths can exist; for other states the waves interfere destructively, resulting in zero probability density. Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator. The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by Danish physicist Niels Bohr in the Bohr theory of the atom. The modern quantum mechanical theory giving an explanation of these levels in terms of the Schrödinger equation was advanced by Erwin Schrödinger and Werner Heisenberg in 1926. When the electron is bound to the atom at any closer value of n, the electron's energy is lower and is considered negative; in the following, assume there is one electron in a given atomic orbital in a hydrogen-like atom.
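For a hydrogen-like atom, the Bohr-model levels follow E_n = −13.6 Z²/n² eV, consistent with the −13.6 eV ground-state energy quoted earlier for hydrogen. A minimal sketch (the constant and function name are illustrative):

```python
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy in eV

def energy_level(n, z=1):
    """Bohr-model energy of principal quantum number n
    for a hydrogen-like atom with nuclear charge z."""
    return -RYDBERG_EV * z ** 2 / n ** 2

# Ground state of hydrogen, and the photon energy of a 2 -> 1 transition.
print(energy_level(1))                    # ≈ -13.6 eV
print(energy_level(2) - energy_level(1))  # ≈ 10.2 eV (Lyman-alpha)
```

The transition energy computed here is what shows up observationally as a spectral line, the topic of the next entry.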
69.
Spectral line
–
A spectral line is a dark or bright line in an otherwise uniform and continuous spectrum. Spectral lines are often used to identify atoms and molecules from their characteristic wavelengths. Spectral lines are the result of interaction between a quantum system and a single photon: when a photon has about the right amount of energy to allow a change in the energy state of the system, the photon is absorbed or emitted. A spectral line may be observed either as an emission line or an absorption line; which type of line is observed depends on the type of material and its surroundings. An absorption line is produced when photons from a hot, broad spectrum source pass through a cold material: the intensity of light, over a narrow frequency range, is reduced due to absorption by the material. By contrast, a bright emission line is produced when photons from a hot material are detected in the presence of a broad spectrum from a cold source: the intensity of light, over a narrow frequency range, is increased due to emission by the material. Spectral lines are highly atom-specific, and can be used to identify the chemical composition of any medium capable of letting light pass through it. Several elements were discovered by spectroscopic means, such as helium and thallium. Mechanisms other than atom-photon interaction can produce spectral lines; depending on the exact physical interaction, the frequency of the involved photons will vary widely, and lines can be observed across the electromagnetic spectrum, from radio waves to gamma rays. In other cases the lines are designated according to the level of ionization by adding a Roman numeral to the designation of the chemical element, so that Ca+ also has the designation Ca II: neutral atoms are denoted with the Roman numeral I, singly ionized atoms with II, and so on. More detailed designations usually include the line wavelength and may include a multiplet number or band designation. Many spectral lines of hydrogen also have designations within their respective series. A spectral line extends over a range of frequencies, not a single frequency; in addition, its center may be shifted from its nominal central wavelength.
There are several reasons for this broadening and shift, and they may be divided into two general categories: broadening due to local conditions and broadening due to extended conditions. Broadening due to local conditions is due to effects which hold in a small region around the emitting element. Broadening due to extended conditions may result from changes to the spectral distribution of the radiation as it traverses its path to the observer.
70.
Anisotropic
–
Anisotropy /ˌænaɪˈsɒtrəpi/ is the property of being directionally dependent, which implies different properties in different directions, as opposed to isotropy. It can be defined as a difference, when measured along different axes, in a material's physical or mechanical properties. An example is wood, which is easier to split along its grain than against it. In the field of computer graphics, an anisotropic surface changes in appearance as it rotates about its geometric normal. Anisotropic filtering is a method of enhancing the image quality of textures on surfaces that are far away and viewed at oblique angles. Older techniques, such as bilinear and trilinear filtering, do not take into account the angle a surface is viewed from; by reducing detail in one direction more than another, these blurring effects can be reduced. In NMR spectroscopy, the orientation of nuclei with respect to the applied magnetic field determines their chemical shift. In this context, anisotropic systems refer to the electron distribution of molecules with abnormally high electron density; this abnormal electron density affects the applied magnetic field and causes the observed chemical shift to change. In fluorescence spectroscopy, anisotropy measurements reveal the average angular displacement of the fluorophore that occurs between absorption and subsequent emission of a photon. Physicists from the University of California, Berkeley reported their detection of the anisotropy in cosmic microwave background radiation in 1977; their experiment demonstrated the Doppler shift caused by the movement of the Earth with respect to the matter of the early Universe. Cosmic anisotropy has also been seen in the alignment of galaxies' rotation axes and the polarisation angles of quasars. Physicists use the term anisotropy to describe direction-dependent properties of materials; magnetic anisotropy, for example, may occur in a plasma, so that its magnetic field is oriented in a preferred direction. Plasmas may also show filamentation that is directional. Liquid crystals are examples of anisotropic liquids.
Some materials conduct heat in a way that is isotropic, that is, independent of spatial orientation around the heat source. Heat conduction is more commonly anisotropic, which implies that detailed geometric modeling of the typically diverse materials being thermally managed is required; the materials used to transfer and reject heat from the source in electronics are often anisotropic. Many crystals are anisotropic to light, and exhibit properties such as birefringence. Crystal optics describes light propagation in these media. An axis of anisotropy is defined as the axis along which isotropy is broken.