1.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental disciplines, its main goal is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; in recognition of this, the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences; the stars and planets were often a target of worship, believed to represent the gods. While the explanations for these phenomena were often unscientific and lacking in evidence, according to Asger Aaboe the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. The most notable innovations were in the field of optics and vision, which came from the works of many scientists such as Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision but also came up with a new theory. In the book he was also the first to study the phenomenon of the pinhole camera. Many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt. 
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title, and the translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build devices like those Ibn al-Haytham had built, from which developed such important things as eyeglasses, magnifying glasses and telescopes. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac. From this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities. In many ways, physics stems from ancient Greek philosophy
2.
Joule
–
The joule, symbol J, is a derived unit of energy in the International System of Units. It is equal to the energy transferred to an object when a force of one newton acts on that object in the direction of its motion through a distance of one metre. It is also the energy dissipated as heat when an electric current of one ampere passes through a resistance of one ohm for one second. It is named after the English physicist James Prescott Joule. One joule can also be defined as: the work required to move an electric charge of one coulomb through an electrical potential difference of one volt, or one coulomb-volt (this relationship can be used to define the volt); or the work required to produce one watt of power for one second, or one watt-second (this relationship can be used to define the watt). This SI unit is named after James Prescott Joule. As with every International System of Units unit named for a person, the first letter of its symbol is uppercase (J), while the name of the unit itself is written in lowercase (joule); note that "degree Celsius" conforms to this rule because the "d" is lowercase. — Based on The International System of Units, section 5.2. The CGPM has given the unit of energy the name joule; the use of newton-metres for torque and joules for energy is helpful to avoid misunderstandings and miscommunications. The distinction may also be seen in the fact that energy is a scalar, the dot product of a force vector and a displacement vector. By contrast, torque is a vector, the cross product of a distance vector and a force vector. Torque and energy are related to one another by the equation E = τθ, where E is energy, τ is the magnitude of the torque, and θ is the angle swept, in radians. Since radians are dimensionless, it follows that torque and energy have the same dimensions. One joule in everyday life represents approximately: the energy required to lift a medium-size tomato 1 m vertically from the surface of the Earth; the energy released when that same tomato falls back down to the ground; the energy required to accelerate a 1 kg mass at 1 m·s−2 through a 1 m distance in space. 
The heat required to raise the temperature of 1 g of water by 0.24 °C; the typical energy released as heat by a person at rest every 1/60 s; the kinetic energy of a 50 kg human moving very slowly; the kinetic energy of a 56 g tennis ball moving at 6 m/s; the kinetic energy of an object with mass 1 kg moving at √2 ≈ 1.4 m/s; the amount of electricity required to light a 1 W LED for 1 s. Since the joule is also a watt-second, and the unit for electricity sales to homes is the kW·h, one kW·h equals 3,600,000 J or 3.6 MJ. For additional examples, see Orders of magnitude (energy). The zeptojoule is equal to one sextillionth of one joule; 160 zeptojoules is equivalent to one electronvolt. The nanojoule is equal to one billionth of one joule; one nanojoule is about 1/160 of the kinetic energy of a flying mosquito
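The everyday joule examples above can be checked with a few lines of arithmetic. This is an illustrative sketch: the tomato mass (about 100 g) and g = 9.8 m/s² are assumed values not stated in the text.

```python
def kinetic_energy(mass_kg, speed_m_s):
    """KE = (1/2) m v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s**2

def lift_energy(mass_kg, height_m, g=9.8):
    """Work done against gravity, E = m g h, in joules (g is an assumed value)."""
    return mass_kg * g * height_m

tomato = lift_energy(0.1, 1.0)            # lifting a ~100 g tomato 1 m: ~0.98 J
tennis = kinetic_energy(0.056, 6.0)       # 56 g tennis ball at 6 m/s: ~1.008 J
unit_mass = kinetic_energy(1.0, 2**0.5)   # 1 kg at sqrt(2) m/s: exactly 1 J
```

Each comes out within a few percent of one joule, which is the point of the examples.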
3.
Electron
–
The electron is a subatomic particle, symbol e− or β−, with a negative elementary electric charge. Electrons belong to the first generation of the lepton particle family; the electron has a mass that is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant. As it is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. Since an electron has charge, it has a surrounding electric field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law; electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields, and special telescopes can detect electron plasma in outer space. Electrons are involved in applications such as electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers and gaseous ionization detectors. Interactions involving electrons with other particles are of interest in fields such as chemistry. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons without allows the composition of the two known as atoms; ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of electrons between two or more atoms is the cause of chemical bonding. In 1838, British natural philosopher Richard Laming first hypothesized the concept of a quantity of electric charge to explain the chemical properties of atoms. 
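The Coulomb attraction between nucleus and electrons mentioned above can be put in numbers. A minimal sketch for hydrogen, evaluating F = k·e²/r² at the Bohr radius; the choice of the Bohr radius as the separation is an illustrative assumption.

```python
# Magnitude of the Coulomb force between the proton and the electron
# in a hydrogen atom, F = k * e^2 / r^2.
K_E = 8.9875517873681764e9       # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602176634e-19       # elementary charge, C
BOHR_RADIUS = 5.29177210903e-11  # m (assumed separation for this example)

force = K_E * E_CHARGE**2 / BOHR_RADIUS**2  # ~8.2e-8 N
```

Tiny on a human scale, but enormous relative to the electron's mass, which is why atoms are so tightly bound.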
Irish physicist George Johnstone Stoney named this charge "electron" in 1891. Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electrical charge of the opposite sign. When an electron collides with a positron, both particles can be totally annihilated, producing gamma ray photons. The ancient Greeks noticed that amber attracted small objects when rubbed with fur; along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electricus. Both electric and electricity are derived from the Latin ēlectrum, which came from the Greek word for amber, ἤλεκτρον
4.
Voltage
–
Voltage, electric potential difference, electric pressure or electric tension is the difference in electric potential energy between two points per unit electric charge. The voltage between two points is equal to the work done per unit of charge against an electric field to move the test charge between the two points, and is measured in units of volts. Voltage can be caused by static electric fields, by electric current through a magnetic field, by time-varying magnetic fields, or some combination of these three. A voltmeter can be used to measure the voltage between two points in a system; often a reference potential such as the ground of the system is used as one of the points. A voltage may represent either a source of energy (electromotive force) or lost, used, or stored energy (potential drop). Given two points in space, x_A and x_B, voltage is the difference in electric potential between those two points. Electric potential must be distinguished from electric potential energy by noting that the potential is a per-unit-charge quantity. Like mechanical potential energy, the zero of electric potential can be chosen at any point, so the difference in potential, i.e. the voltage, is the quantity which is physically meaningful. The voltage from point A to point B is equal to the work which would have to be done, per unit charge, against or by the electric field to move the charge from A to B. The voltage between the two ends of a path is the energy required to move a small electric charge along that path, divided by the magnitude of the charge. Mathematically this is expressed as the line integral of the electric field along that path. In the general case, both a static electric field and a dynamic electromagnetic field must be included in determining the voltage between two points. Historically this quantity has also been called "tension" and "pressure". Pressure is now obsolete but tension is still used, for example within the phrase "high tension" which is commonly used in thermionic valve based electronics. 
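The line-integral definition above can be sketched numerically. This is a minimal illustration, assuming a uniform 5 V/m field along x and a straight 2 m path; both values are made up for the example.

```python
def voltage(e_field_fn, x_a, x_b, steps=10_000):
    """Approximate V_AB = -integral of E dx from x_a to x_b (midpoint rule)."""
    dx = (x_b - x_a) / steps
    total = sum(e_field_fn(x_a + (i + 0.5) * dx) for i in range(steps))
    return -total * dx

# Uniform 5 V/m field: moving 2 m along the field direction drops 10 V.
v_ab = voltage(lambda x: 5.0, 0.0, 2.0)   # -10.0 V
```

For a uniform field the numeric result matches the closed form V = -E·d exactly; for a non-uniform field the same function approximates the integral.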
Voltage is defined so that negatively charged objects are pulled towards higher voltages, while positively charged objects are pulled towards lower voltages; therefore, the conventional current in a wire or resistor always flows from higher voltage to lower voltage. Current can flow from lower voltage to higher voltage, but only when a source of energy is present to push it against the opposing electric field. This is the case within any electric power source; for example, inside a battery, chemical reactions provide the energy needed for ion current to flow from the negative to the positive terminal. The electric field is not the only factor determining charge flow in a material, and the electric potential of a material is not even a well-defined quantity, since it varies on the subatomic scale. A more convenient definition of voltage can be found instead in the concept of the Fermi level; in this case the voltage between two bodies is the thermodynamic work required to move a unit of charge between them
5.
Volt
–
The volt is the derived unit for electric potential, electric potential difference (voltage), and electromotive force. One volt is defined as the difference in electric potential between two points of a conducting wire when an electric current of one ampere dissipates one watt of power between those points. It is also equal to the potential difference between two parallel, infinite planes spaced 1 metre apart that create an electric field of 1 newton per coulomb. Additionally, it is the potential difference between two points that will impart one joule of energy per coulomb of charge that passes through it. It can also be expressed as amperes times ohms, watts per ampere, or joules per coulomb. For the Josephson constant, K_J = 2e/h, the conventional value K_J-90 = 0.4835979 GHz/µV is used. This standard is typically realized using an array of several thousand or tens of thousands of junctions. Empirically, several experiments have shown that the method is independent of device design, material, measurement setup, etc. In the water-flow analogy sometimes used to explain electric circuits by comparing them with water-filled pipes, voltage is likened to difference in water pressure. Current is proportional to the diameter of the pipe, or the amount of water flowing at that pressure; a resistor would be a reduced diameter somewhere in the piping. The relationship between voltage and current is defined by Ohm's law. Ohm's law is analogous to the Hagen–Poiseuille equation, as both are linear models relating flux and potential in their respective systems. The voltage produced by each electrochemical cell in a battery is determined by the chemistry of that cell, and cells can be combined in series for multiples of that voltage. Mechanical generators can usually be constructed to any voltage in a range of feasibility. High-voltage electric power lines: 110 kV and up. Lightning: varies greatly. Volta had determined that the most effective pair of metals to produce electricity was zinc and silver. 
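The Josephson standard mentioned above works because a junction driven at microwave frequency f develops a voltage of exactly n·f/K_J on its n-th quantized step. A rough sketch using the K_J-90 value from the text; the drive frequency, step number, and array size below are illustrative assumptions.

```python
# Josephson voltage standard: V = n * f / K_J per junction.
K_J90 = 483597.9e9    # conventional Josephson constant, Hz per volt (0.4835979 GHz/uV)
f = 75e9              # microwave drive frequency, Hz (assumed)
n_step = 1            # quantized step number (assumed)
n_junctions = 20_000  # junctions in the series array (assumed)

v_per_junction = n_step * f / K_J90       # ~155 microvolts
v_total = n_junctions * v_per_junction    # ~3.1 V for the whole array
```

This is why arrays of thousands of junctions are needed: a single junction yields only microvolts.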
In 1861, Latimer Clark and Sir Charles Bright coined the name "volt" for the unit of resistance; by 1873, the British Association for the Advancement of Science had defined the volt, ohm, and farad. In 1881, the International Electrical Congress, now the International Electrotechnical Commission, approved the volt as the unit for electromotive force; they made the volt equal to 10^8 cgs units of voltage, the cgs system at the time being the customary system of units in science. At that time, the volt was defined as the potential difference across a conductor when a current of one ampere dissipates one watt of power. The "international volt" was defined in 1893 as 1/1.434 of the emf of a Clark cell. This definition was abandoned in 1908 in favor of a definition based on the international ohm and international ampere, until the entire set of reproducible units was abandoned in 1948. Prior to the development of the Josephson junction voltage standard, the volt was maintained in laboratories using specially constructed batteries called standard cells
6.
Coulomb
–
The coulomb is the International System of Units unit of electric charge. +1 C is equivalent to the charge of approximately 6.242×10^18 protons, and −1 C is equivalent to the charge of approximately 6.242×10^18 electrons. This SI unit is named after Charles-Augustin de Coulomb; as with every International System of Units unit named for a person, the first letter of its symbol is upper case. Note that "degree Celsius" conforms to this rule because the "d" is lowercase. — Based on The International System of Units. The SI system defines the coulomb in terms of the ampere and second: 1 C = 1 A × 1 s. The second is defined in terms of a frequency emitted by caesium atoms. The ampere is defined using Ampère's force law; the definition relies in part on the mass of the prototype kilogram. In practice, the watt balance is used to measure amperes with the highest possible accuracy. One coulomb is the magnitude of the charge in 6.24150934×10^18 protons or electrons. The inverse of this gives the elementary charge of 1.6021766208×10^−19 C. The magnitude of the charge of one mole of elementary charges is known as a faraday unit of charge. In terms of Avogadro's number (N_A), one coulomb is equal to approximately 1.036 × N_A × 10^−5 elementary charges. One ampere-hour = 3600 C; 1 mA⋅h = 3.6 C. One statcoulomb, the obsolete CGS electrostatic unit of charge, is approximately 3.3356×10^−10 C, or about one-third of a nanocoulomb. The elementary charge, the charge of a proton, is approximately 1.6021766208×10^−19 C. In SI, the elementary charge in coulombs is an approximate, measured value. However, in some other systems of units, the elementary charge has an exact value by definition; specifically, the conventional charge e90 has an exact value in coulombs. SI itself may someday change its definitions in a similar way: for example, one possible proposed redefinition is that the ampere is such that the value of the elementary charge e is exactly 1.602176487×10^−19 coulombs. This proposal is not yet accepted as part of the SI. The charges in static electricity from rubbing materials together are typically a few microcoulombs. 
The amount of charge that travels through a lightning bolt is typically around 15 C; the amount of charge that travels through a typical alkaline AA battery from being fully charged to discharged is about 5 kC = 5000 C ≈ 1400 mA⋅h. The hydraulic analogy uses everyday terms to illustrate movement of charge: the analogy equates charge to a volume of water, and voltage to pressure
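The battery figure above is just a unit conversion: 1 A·h = 3600 C, so a ~1400 mA·h AA cell delivers about 5 kC. A short sketch of that conversion, plus the count of elementary charges per coulomb:

```python
# Convert a battery capacity in mA*h to coulombs (1 A*h = 3600 C).
capacity_mah = 1400
charge_c = capacity_mah / 1000 * 3600   # 5040 C, i.e. about 5 kC

# Number of elementary charges in one coulomb: 1 C / e ~ 6.24e18.
E_CHARGE = 1.602176634e-19  # C
n_elementary = 1.0 / E_CHARGE
```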
7.
Particle accelerator
–
A particle accelerator is a machine that uses electromagnetic fields to propel charged particles to nearly light speed and to contain them in well-defined beams. Large accelerators are used in particle physics as colliders, or as synchrotron light sources for the study of condensed matter physics. There are currently more than 30,000 accelerators in operation around the world. There are two basic classes of accelerators: electrostatic and electrodynamic accelerators. Electrostatic accelerators use static electric fields to accelerate particles. The most common types are the Cockcroft–Walton generator and the Van de Graaff generator; a small-scale example of this class is the cathode ray tube in an ordinary old television set. The achievable kinetic energy for particles in these devices is determined by the accelerating voltage. Electrodynamic or electromagnetic accelerators, on the other hand, use changing electromagnetic fields to accelerate particles. Since in these types the particles can pass through the accelerating field multiple times, the output energy is not limited by the strength of a single accelerating field. This class, which was first developed in the 1920s, is the basis for most modern large-scale accelerators. Because colliders can give evidence of the structure of the subatomic world, accelerators were commonly referred to as "atom smashers" in the 20th century. Despite the fact that most accelerators actually propel subatomic particles, the term persists in popular usage when referring to particle accelerators in general. Beams of high-energy particles are useful for both fundamental and applied research in the sciences, and also in many technical and industrial fields unrelated to fundamental research; it has been estimated that there are approximately 30,000 accelerators worldwide. 
The bar graph shows the breakdown of the number of industrial accelerators according to their applications. For the most basic inquiries into the dynamics and structure of matter, space, and time, physicists seek the simplest kinds of interactions at the highest possible energies. These typically entail particle energies of many GeV, and the interactions of the simplest kinds of particles: leptons and quarks for the matter. The largest and highest-energy particle accelerator used for elementary particle physics is the Large Hadron Collider at CERN, operating since 2009. These investigations often involve collisions of heavy nuclei – of atoms like iron or gold – at energies of several GeV per nucleon; the largest such particle accelerator is the Relativistic Heavy Ion Collider at Brookhaven National Laboratory. An example of this type of machine is LANSCE at Los Alamos. A large number of synchrotron light sources exist worldwide. The ESRF in Grenoble, France has been used to extract detailed 3-dimensional images of insects trapped in amber. Thus there is a demand for electron accelerators of moderate energy. Everyday examples of particle accelerators are the cathode ray tubes found in television sets; these low-energy accelerators use a single pair of electrodes with a DC voltage of a few thousand volts between them
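For the electrostatic class described above, the energy gained is simply charge times accelerating voltage, E = qV. A minimal sketch for an electron in a cathode ray tube; the 20 kV figure is an assumed, illustrative accelerating voltage.

```python
# Kinetic energy an electron gains crossing a potential difference: E = q * V.
E_CHARGE = 1.602176634e-19  # elementary charge, C
accel_voltage = 20_000      # V (assumed value for a television tube)

energy_j = E_CHARGE * accel_voltage   # ~3.2e-15 J
energy_ev = energy_j / E_CHARGE       # 20,000 eV = 20 keV, by construction
```

The electronvolt is convenient precisely because of this relation: an electron accelerated through V volts gains V electronvolts of kinetic energy.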
8.
International System of Units
–
The International System of Units (SI) is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units; the system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative that began in 1948. It is based on the metre-kilogram-second system of units rather than any variant of the centimetre-gram-second system. The motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution with just the metre and kilogram as standards of length and mass. In the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram to international bodies. In 1921, the Treaty was extended to include all physical quantities, including electrical units originally defined in 1893. The units associated with these quantities were the metre, kilogram, second, ampere, kelvin and candela; in 1971, a seventh base quantity, amount of substance represented by the mole, was added to the definition of SI. On 11 July 1792, the commission proposed the names metre, are, litre and grave for the units of length, area, capacity and mass, respectively. 
The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Prior to this, the strength of the earth's magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a magnet of known mass by the earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions to the magnetic field based on mass, length and time. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International prototype metre and International prototype kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the prototypes to serve as the national prototype for that country. Initially its prime purpose was a periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the definitive version of all official documents published by or on behalf of the CGPM is the French-language version
9.
Litre
–
The litre (or liter) is an SI-accepted metric system unit of volume equal to 1 cubic decimetre, 1,000 cubic centimetres or 1/1,000 cubic metre. A cubic decimetre occupies a volume of 10×10×10 centimetres and is thus equal to one-thousandth of a cubic metre. The original French metric system used the litre as a base unit. The word litre is derived from an older French unit, the litron, whose name came from Greek (where it was a unit of weight, not volume) via Latin, and which equalled approximately 0.831 litres. The litre was also used in subsequent versions of the metric system and is accepted for use with the SI. The spelling used by the International Bureau of Weights and Measures is "litre"; the spelling "liter" is predominantly used in American English. One litre of water has a mass of almost exactly one kilogram, although subsequent redefinitions of the metre and kilogram mean that this relationship is no longer exact. A litre is defined as a special name for a cubic decimetre, or 10 centimetres × 10 centimetres × 10 centimetres; hence 1 L ≡ 0.001 m3 ≡ 1000 cm3. From 1901 to 1964, the litre was defined as the volume of one kilogram of pure water at maximum density and standard pressure. The kilogram was in turn specified as the mass of a platinum/iridium cylinder held at Sèvres in France and was intended to be of the same mass as the 1 litre of water referred to above. It was subsequently discovered that the cylinder was around 28 parts per million too large and thus, during this time, a litre was slightly larger than a cubic decimetre. Additionally, the mass-volume relationship of water depends on temperature, pressure, purity and isotopic uniformity. In 1964, the definition relating the litre to mass was abandoned in favour of the current one. Although the litre is not an official SI unit, it is accepted by the CGPM for use with the SI, and the CGPM defines the litre and its acceptable symbols. A litre is equal in volume to the millistere, an obsolete non-SI metric unit customarily used for dry measure. 
The litre is often used in some calculated measurements, such as density (kg/L). One litre of water has a mass of almost exactly one kilogram when measured at its maximal density. Similarly, 1 millilitre of water has a mass of about 1 g, and 1,000 litres of water has a mass of about 1,000 kg. It is now known that the density of water also depends on the isotopic ratios of the oxygen and hydrogen atoms in a particular sample. The litre, though not an official SI unit, may be used with SI prefixes. The most commonly used derived unit is the millilitre, defined as one-thousandth of a litre, and also often referred to by the SI derived unit name "cubic centimetre". It is a commonly used measure, especially in medicine and cooking. Other units may be found in the table below, where the more often used terms are in bold
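The volume and mass relationships above reduce to a handful of exact conversion factors. A short sketch; the density figure used for water at maximum density (about 0.999972 kg/L near 4 °C) is an approximate assumed value.

```python
# Litre relationships: 1 L = 1 dm^3 = 1000 cm^3 = 0.001 m^3.
LITRE_IN_M3 = 0.001
LITRE_IN_CM3 = 1000.0

# Mass of water per litre at maximum density (approximate assumed value).
WATER_DENSITY_KG_PER_L = 0.999972

mass_1_ml_g = WATER_DENSITY_KG_PER_L          # ~1 g per millilitre
mass_1000_l_kg = 1000 * WATER_DENSITY_KG_PER_L  # ~1000 kg, i.e. about a tonne
```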
10.
Planck constant
–
The Planck constant is a physical constant that is the quantum of action, central in quantum mechanics. The light quantum behaved in some respects as an electrically neutral particle, and was eventually called the photon. The Planck–Einstein relation connects the particulate photon energy E with its associated wave frequency f: E = hf. This energy is extremely small in terms of ordinarily perceived everyday objects. Since the frequency f, wavelength λ, and speed of light c are related by f = c/λ, the relation can also be expressed as E = hc/λ. This leads to another relationship involving the Planck constant: with p denoting the linear momentum of a particle, the de Broglie wavelength λ of the particle is given by λ = h/p. In applications where it is natural to use the angular frequency, it is often useful to absorb a factor of 2π into the Planck constant. The resulting constant is called the reduced Planck constant or Dirac constant; it is equal to the Planck constant divided by 2π, and is denoted ħ: ℏ = h/2π. The energy of a photon with angular frequency ω, where ω = 2πf, is given by E = ℏω, while its linear momentum relates to the angular wavenumber k by p = ℏk. This was confirmed by experiments soon afterwards, and it holds throughout quantum theory, including electrodynamics. These two relations are the temporal and spatial component parts of the special relativistic expression using 4-vectors: P^μ = (E/c, p) = ℏK^μ = ℏ(ω/c, k). Classical statistical mechanics requires the existence of h but does not define its value. Eventually, following upon Planck's discovery, it was recognized that physical action cannot take on an arbitrary value; instead, it must be some integer multiple of a very small quantity. This is the old quantum theory developed by Bohr and Sommerfeld, in which particle trajectories exist but are hidden. Thus there is no value of the action as classically defined. Related to this is the concept of energy quantization, which existed in old quantum theory and also exists in altered form in modern quantum physics. 
Classical physics cannot explain either quantization of energy or the lack of classical particle motion. In many cases, such as for light or for atoms, quantization of energy also implies that only certain energy levels are allowed. The Planck constant has dimensions of physical action, i.e. energy multiplied by time, or momentum multiplied by distance. In SI units, the Planck constant is expressed in joule-seconds (J⋅s), equivalently N⋅m⋅s or kg⋅m2⋅s−1. The value of the Planck constant is h = 6.626070040×10^−34 J⋅s = 4.135667662×10^−15 eV⋅s. The value of the reduced Planck constant is ℏ = h/2π = 1.054571800×10^−34 J⋅s = 6.582119514×10^−16 eV⋅s
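The Planck–Einstein relation E = hf can be made concrete with a worked example. A minimal sketch using the h value quoted above; the choice of green light at 540 THz is an illustrative assumption.

```python
# Photon energy from the Planck-Einstein relation E = h * f.
H = 6.626070040e-34   # Planck constant, J*s (value quoted in the text)
C = 299_792_458.0     # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

f = 540e12                      # 540 THz, green light (assumed example)
energy_j = H * f                # ~3.58e-19 J
energy_ev = energy_j / EV       # ~2.23 eV
wavelength_nm = C / f * 1e9     # ~555 nm, via f = c / wavelength
```

The result, a few times 10^−19 J per photon, illustrates why quantum effects are invisible at everyday energy scales.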
11.
Fine-structure constant
–
The fine-structure constant, commonly denoted α, is a dimensionless physical constant characterizing the strength of the electromagnetic interaction between elementary charged particles. It is related to the elementary charge e, which characterizes the strength of the coupling of an elementary charged particle with the electromagnetic field, by the formula 4πε0ħcα = e². Being a dimensionless quantity, it has the same numerical value of about 1⁄137 in all systems of units. Arnold Sommerfeld introduced the fine-structure constant in 1916. The definition reflects the relationship between α and the elementary charge e, which equals √(4παε0ħc). In electrostatic cgs units, the unit of charge, the statcoulomb, is defined so that the Coulomb constant, ke, or the permittivity factor, 4πε0, is 1. Then the expression of the fine-structure constant, as commonly found in older physics literature, becomes α = e²/ħc. In natural units, commonly used in high energy physics, where ε0 = c = ħ = 1, the value of the fine-structure constant is α = e²/4π. As such, the fine-structure constant is just another, albeit dimensionless, quantity determining the elementary charge. The 2014 CODATA recommended value of α is α = e²/(4πε0ℏc) = 0.0072973525664; this has a relative standard uncertainty of 0.32 parts per billion. For reasons of convenience, historically the value of the reciprocal of the fine-structure constant is often specified. The 2014 CODATA recommended value is given by α−1 = 137.035999139. The theory of QED predicts a relationship between the dimensionless magnetic moment of the electron and the fine-structure constant α; the most precise measurement based on this relationship gives α−1 = 137.035999173. This measurement of α has a precision of 0.25 parts per billion, and this value and uncertainty are about the same as the latest experimental results. The fine-structure constant, α, has several physical interpretations. α is: the square of the ratio of the elementary charge to the Planck charge, α = (e/q_P)²; the ratio of the velocity of the electron in the first circular orbit of the Bohr model of the atom to the speed of light in vacuum (this is Sommerfeld's original physical interpretation). 
Then the square of α is the ratio between the Hartree energy and the electron rest energy. The theory does not predict its value; therefore, α must be determined experimentally. In fact, α is one of the about 20 empirical parameters in the Standard Model of particle physics, whose value is not determined within the Standard Model. In the electroweak theory unifying the weak interaction with electromagnetism, α is absorbed into two other coupling constants associated with the electroweak gauge fields. In this theory, the electromagnetic interaction is treated as a mixture of interactions associated with the electroweak fields. The strength of the electromagnetic interaction varies with the energy scale. The absorption value for normal-incident light on graphene in vacuum would then be given by πα/(1 + πα/2)², or 2.24%, and the transmission by 1/(1 + πα/2)², or 97.75%
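The defining formula α = e²/(4πε0ħc) can be evaluated directly from the constants. A minimal numeric check; the CODATA figures below are entered by hand for illustration.

```python
import math

# Fine-structure constant from its definition: alpha = e^2 / (4 pi eps0 hbar c).
E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 299_792_458.0        # speed of light, m/s

alpha = E**2 / (4 * math.pi * EPS0 * HBAR * C)
inverse_alpha = 1.0 / alpha   # ~137.036
```

The dimensionlessness is visible in the arithmetic: coulombs, farads, joule-seconds, and metres per second cancel completely.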
12.
Speed of light
–
The speed of light in vacuum, commonly denoted c, is a universal physical constant important in many areas of physics. Its exact value is 299792458 metres per second; it is exact because the unit of length, the metre, is defined from this constant. According to special relativity, c is the maximum speed at which all matter and hence information in the universe can travel. It is the speed at which all massless particles and changes of the associated fields travel in vacuum. Such particles and waves travel at c regardless of the motion of the source or the reference frame of the observer. In the theory of relativity, c interrelates space and time. The speed at which light propagates through transparent materials, such as glass or air, is less than c; similarly, the speed of radio waves in wire cables is slower than c. The ratio between c and the speed v at which light travels in a material is called the refractive index n of the material. In communicating with distant space probes, it can take minutes to hours for a message to get from Earth to the spacecraft. The light seen from stars left them many years ago, allowing the study of the history of the universe by looking at distant objects. The finite speed of light also limits the theoretical maximum speed of computers. The speed of light can be used in time-of-flight measurements to measure large distances to high precision. Ole Rømer first demonstrated in 1676 that light travels at a finite speed by studying the apparent motion of Jupiter's moon Io. In 1865, James Clerk Maxwell proposed that light was an electromagnetic wave. In 1905, Albert Einstein postulated that the speed of light c with respect to any inertial frame is a constant and is independent of the motion of the light source. He explored the consequences of that postulate by deriving the theory of relativity, and in doing so showed that the parameter c had relevance outside of the context of light and electromagnetism. 
After centuries of increasingly precise measurements, by 1975 the speed of light was known to be 299792458 m/s with a measurement uncertainty of 4 parts per billion. In 1983, the metre was redefined in the International System of Units as the distance travelled by light in vacuum in 1/299792458 of a second; as a result, the numerical value of c in metres per second is now fixed exactly by the definition of the metre. The speed of light in vacuum is usually denoted by a lowercase c. Historically, the symbol V was used as an alternative symbol for the speed of light, introduced by James Clerk Maxwell in 1865. In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used c for a different constant, later shown to equal √2 times the speed of light in vacuum; in 1894, Paul Drude redefined c with its modern meaning. Einstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c. Sometimes c is used for the speed of waves in any material medium, and c0 for the speed of light in vacuum. This article uses c exclusively for the speed of light in vacuum
13.
Solid-state physics
–
Solid-state physics is the study of rigid matter, or solids, through methods such as quantum mechanics, crystallography, electromagnetism, and metallurgy. It is the largest branch of condensed matter physics. Solid-state physics studies how the large-scale properties of solid materials result from their atomic-scale properties; thus, solid-state physics forms a basis of materials science. It also has direct applications, for example in the technology of transistors and semiconductors. Solid materials are formed from densely packed atoms, which interact intensely, and these interactions produce the mechanical, thermal, electrical, magnetic and optical properties of solids. Depending on the material involved and the conditions in which it was formed, the atoms may be arranged in a regular, geometric pattern (crystalline solids) or irregularly (amorphous solids). The bulk of solid-state physics, as a general theory, is focused on crystals. Primarily, this is because the periodicity of atoms in a crystal, its defining characteristic, facilitates mathematical modeling; likewise, crystalline materials often have electrical, magnetic, optical, or mechanical properties that can be exploited for engineering purposes. The forces between the atoms in a crystal can take a variety of forms. For example, a crystal of sodium chloride is made up of ionic sodium and chlorine held together with ionic bonds. In others, the atoms share electrons and form covalent bonds; in metals, electrons are shared amongst the whole crystal in metallic bonding. Finally, the noble gases do not undergo any of these types of bonding; in solid form, they are held together with van der Waals forces resulting from the polarisation of the electronic charge cloud on each atom. The differences between the types of solid result from the differences between their bonding. In the United States, the American Physical Society's Division of Solid State Physics (DSSP) was founded in 1947; the DSSP catered to industrial physicists, and solid-state physics became associated with the technological applications made possible by research on solids. 
By the early 1960s, the DSSP was the largest division of the American Physical Society. Large communities of solid state physicists also emerged in Europe after World War II, in particular in England, Germany, and the Soviet Union. In the United States and Europe, solid state became a prominent field through its investigations into semiconductors, superconductivity, and nuclear magnetic resonance. Today, solid-state physics is broadly considered to be the subfield of condensed matter physics that focuses on the properties of solids with regular crystal lattices. Many properties of materials are affected by their crystal structure, and this structure can be investigated using a range of crystallographic techniques, including X-ray crystallography, neutron diffraction and electron diffraction. The sizes of the individual crystals in a crystalline solid material vary depending on the material involved. Real crystals feature defects or irregularities in the ideal arrangements. Properties of materials such as electrical conduction and heat capacity are investigated by solid state physics. An early model of electrical conduction was the Drude model, which applied kinetic theory to the electrons in a solid
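The Drude model mentioned above leads to a simple formula for the DC conductivity, σ = ne²τ/m. A minimal sketch, assuming typical textbook values for copper's carrier density and relaxation time (not figures from the article):

```python
# Drude model: DC conductivity sigma = n e^2 tau / m
e = 1.602176634e-19     # elementary charge, C
m_e = 9.1093837015e-31  # electron mass, kg

def drude_conductivity(n: float, tau: float) -> float:
    """DC conductivity (S/m) for carrier density n (1/m^3) and relaxation time tau (s)."""
    return n * e**2 * tau / m_e

n_cu = 8.5e28     # conduction-electron density of copper, 1/m^3 (approximate)
tau_cu = 2.5e-14  # relaxation time, s (approximate)
sigma = drude_conductivity(n_cu, tau_cu)
print(f"sigma ~ {sigma:.2e} S/m")  # same order as copper's measured ~6e7 S/m
```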
14.
Atomic physics
–
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. It is primarily concerned with the arrangement of electrons around the nucleus and the processes by which these arrangements change. This comprises ions as well as neutral atoms; unless otherwise stated, it can be assumed that the term atom includes ions. The term atomic physics is often associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics, which deals with the atom as a system consisting of a nucleus and electrons, and nuclear physics, which considers atomic nuclei alone. As with many scientific fields, strict delineation can be highly contrived, and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified. Atomic physics primarily considers atoms in isolation: atomic models consist of a nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules, nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles. This means that the individual atoms can be treated as if each were in isolation. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics. Electrons form notional shells around the nucleus; these are normally in a ground state but can be excited by the absorption of energy from light, magnetic fields, or interaction with a colliding particle. Electrons that populate a shell are said to be in a bound state, and the energy necessary to remove an electron from its shell is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy, and the atom is said to have undergone the process of ionization. If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state. 
After a certain time, an electron in an excited state will jump to a lower state. In a neutral atom, the system will emit a photon carrying the difference in energy. If an inner electron has absorbed more than the binding energy, then a more outer electron may undergo a transition to fill the inner orbital. The Auger effect allows one to multiply ionize an atom with a single photon. There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light; however, there are no such rules for excitation by collision processes
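The photon emitted in such a downward jump carries exactly the energy difference between the two levels. A sketch for hydrogen, using the standard Bohr formula E_n = −13.6 eV/n² (a textbook result, not taken from the article):

```python
# Photon emitted in a transition between hydrogen energy levels,
# using the Bohr formula E_n = -13.6 eV / n^2 (standard constants assumed).
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.841984  # h*c in eV*nm

def level_energy(n: int) -> float:
    """Energy of hydrogen level n, in eV (negative = bound)."""
    return -RYDBERG_EV / n**2

def photon_wavelength_nm(n_upper: int, n_lower: int) -> float:
    """Wavelength of the photon emitted when the electron jumps down."""
    e_photon = level_energy(n_upper) - level_energy(n_lower)  # positive, eV
    return HC_EV_NM / e_photon

print(f"{photon_wavelength_nm(3, 2):.1f} nm")  # H-alpha line, ~656 nm
print(f"{photon_wavelength_nm(2, 1):.1f} nm")  # Lyman-alpha line, ~122 nm
```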
15.
Nuclear physics
–
Nuclear physics is the field of physics that studies atomic nuclei and their constituents and interactions. Other forms of nuclear matter are also studied. Nuclear physics should not be confused with atomic physics, which studies the atom as a whole, including its electrons. Discoveries in nuclear physics have led to applications in many fields; such applications are studied in the field of nuclear engineering. Particle physics evolved out of nuclear physics, and the two fields are typically taught in close association. Nuclear astrophysics, the application of nuclear physics to astrophysics, is crucial in explaining the inner workings of stars. The history of the field begins with the discovery of radioactivity by Henri Becquerel in 1896; the discovery of the electron by J. J. Thomson a year later was an indication that the atom had internal structure. In the years that followed, radioactivity was extensively investigated, notably by Marie and Pierre Curie as well as by Ernest Rutherford and his collaborators. By the turn of the 20th century, physicists had also discovered three types of radiation emanating from atoms, which they named alpha, beta, and gamma radiation. Experiments by Otto Hahn in 1911 and by James Chadwick in 1914 discovered that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a continuous range of energies, rather than the discrete amounts of energy observed in gamma and alpha decays. This was a problem for nuclear physics at the time, because it seemed to indicate that energy was not conserved in these decays. The 1903 Nobel Prize in Physics was awarded jointly to Becquerel, for his discovery, and to Marie and Pierre Curie for their subsequent research into radioactivity. Rutherford was awarded the Nobel Prize in Chemistry in 1908 for his investigations into the disintegration of the elements and the chemistry of radioactive substances. In 1905, Albert Einstein formulated the idea of mass–energy equivalence. In 1906, Ernest Rutherford published Retardation of the α Particle from Radium in passing through matter. 
Hans Geiger expanded on this work in a communication to the Royal Society with experiments he and Rutherford had done, passing alpha particles through air, aluminum foil and gold leaf. More work was published in 1909 by Geiger and Ernest Marsden, and in 1911–1912 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it. The plum pudding model had predicted that the alpha particles should come out of the foil with their trajectories being at most slightly bent. But Rutherford instructed his team to look for something that it shocked him to observe: particles scattered back at large angles. He likened it to firing a bullet at tissue paper and having it bounce off. As an example, in this model nitrogen-14 consisted of a nucleus with 14 protons and 7 electrons. The Rutherford model worked well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929
16.
Particle physics
–
Particle physics is the branch of physics that studies the nature of the particles that constitute matter and radiation. By our current understanding, these particles are excitations of the quantum fields that also govern their interactions. The currently dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model; in more technical terms, the particles are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. All particles and their interactions observed to date can be described almost entirely by this quantum field theory. The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature. In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model. The idea that all matter is composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. Throughout the 1950s and 1960s, a bewildering variety of particles were found in collisions of particles from increasingly high-energy beams; it was referred to informally as the particle zoo. The current state of the classification of all elementary particles is explained by the Standard Model, which describes the strong, weak, and electromagnetic fundamental interactions. The species of gauge bosons are the gluons, W−, W+ and Z bosons, and the photon. 
The Standard Model also contains 24 fundamental fermions, which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. Early in the morning on 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson. The world's major particle physics laboratories include Brookhaven National Laboratory, whose main facility is the Relativistic Heavy Ion Collider, which collides heavy ions such as gold ions; it is the world's first heavy ion collider and the world's only polarized proton collider. At the Budker Institute of Nuclear Physics, the main projects are now the electron-positron colliders, including VEPP-2000, operated since 2006. CERN's main project is now the Large Hadron Collider, which had its first beam circulation on 10 September 2008 and is now the world's most energetic collider of protons; it also became the most energetic collider of heavy ions after it began colliding lead ions. DESY's main facility was the Hadron Elektron Ring Anlage (HERA), which collided electrons and positrons with protons
17.
Metric prefix
–
A metric prefix is a unit prefix that precedes a basic unit of measure to indicate a multiple or fraction of the unit. While all metric prefixes in common use today are decadic, historically there have been a number of binary metric prefixes as well. Each prefix has a unique symbol that is prepended to the unit symbol. The prefix kilo-, for example, may be added to gram to indicate multiplication by one thousand; the prefix milli-, likewise, may be added to metre to indicate division by one thousand, so one millimetre is equal to one thousandth of a metre. Decimal multiplicative prefixes have been a feature of all forms of the metric system, with six of these dating back to the system's introduction in the 1790s. Metric prefixes have even been prepended to non-metric units. The SI prefixes are standardized for use in the International System of Units by the International Bureau of Weights and Measures (BIPM) in resolutions dating from 1960 to 1991. Since 2009, they have formed part of the International System of Quantities. The BIPM specifies twenty prefixes for the International System of Units. Each prefix name has a symbol which is used in combination with the symbols for units of measure. For example, the symbol for kilo- is k, and is used to produce km, kg, and kW, which are the SI symbols for kilometre, kilogram, and kilowatt. Prefixes corresponding to an integer power of one thousand are generally preferred; hence 100 m is preferred over 1 hm or 10 dam. The prefixes hecto, deca, deci, and centi are nonetheless commonly used for everyday purposes, and the centimetre is especially common. However, some building codes require that the millimetre be used in preference to the centimetre, because use of centimetres leads to extensive usage of decimal points. Prefixes may not be used in combination. This also applies to mass, for which the SI base unit, the kilogram, already contains a prefix. 
For example, milligram is used instead of microkilogram. In the arithmetic of measurements having units, the units are treated as multiplicative factors to values; if they have prefixes, all but one of the prefixes must be expanded to their numeric multiplier. A prefix attached to a unit symbol is included when the unit is raised to a power: 1 km2 means one square kilometre, the area of a square of 1000 m by 1000 m, and not 1000 square metres; 2 Mm3 means two cubic megametres, the volume of two cubes of 1000000 m by 1000000 m by 1000000 m, or 2×1018 m3, and not 2000000 cubic metres. Examples: 5 cm = 5×10−2 m = 5×0.01 m = 0.05 m. The prefixes, including those introduced after 1960, are used with any metric unit, and metric prefixes may also be used with some non-metric units. The choice of prefixes with a given unit is usually dictated by convenience of use. Unit prefixes for amounts that are much larger or smaller than those actually encountered are seldom used
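The rule that a prefix is included when a unit is raised to a power can be sketched as follows; the dictionary and helper function are illustrative, not part of any standard library:

```python
# SI prefix multipliers (a small illustrative subset)
PREFIXES = {"k": 1e3, "M": 1e6, "c": 1e-2, "m": 1e-3}

def to_base_units(value: float, prefix: str, power: int = 1) -> float:
    """Convert e.g. 1 km^2 -> m^2: the prefix factor is raised to `power` too."""
    return value * PREFIXES[prefix] ** power

print(to_base_units(1, "k", 2))  # 1 km^2 = 1,000,000 m^2, not 1000 m^2
print(to_base_units(2, "M", 3))  # 2 Mm^3 = 2e18 m^3
print(to_base_units(5, "c"))     # 5 cm = 0.05 m
```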
18.
Bevatron
–
The Bevatron was a particle accelerator, specifically a weak-focusing proton synchrotron, at Lawrence Berkeley National Laboratory, U.S., which began operating in 1954. The antiproton was discovered there in 1955, resulting in the 1959 Nobel Prize in Physics for Emilio Segrè and Owen Chamberlain. The machine accelerated protons into a fixed target, and was named for its ability to impart energies of billions of eV. The anti-electron, or positron, had been first observed in the early 1930s. Following World War II, positive and negative muons and pions were observed in cosmic-ray interactions seen in cloud chambers and stacks of nuclear photographic emulsions. The Bevatron was built to be energetic enough to create antiprotons, and in 1955 the antiproton was discovered using it; the antineutron was discovered soon thereafter by Oreste Piccioni and co-workers, also at the Bevatron. Confirmation of the charge symmetry conjecture in 1955 led to the Nobel Prize in Physics being awarded to Emilio Segrè and Owen Chamberlain in 1959. In order to create antiprotons in collisions with nucleons in a stationary target while conserving both energy and momentum, a proton beam energy of approximately 6.2 GeV is required. At the time it was built, there was no known way to confine a particle beam to a narrow aperture, and the combination of beam aperture and energy required a huge, 10,000-ton iron magnet. A large motor/generator system was used to ramp up the magnetic field for each cycle of acceleration; the characteristic rising and falling, wailing sound of the system could be heard in the entire complex when the machine was in operation. Particle events recorded in the detectors were measured and transferred to punched cards; the card decks were then analyzed by computers, which reconstructed the three-dimensional tracks through the magnetic fields. Computer programs, extremely complex for their time, then fitted the data associated with a given event to estimate the energies, masses, and identities of the particles produced. 
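The beam-energy requirement quoted above follows from relativistic kinematics: producing a p p̄ pair on a stationary proton requires the invariant mass √s to reach four proton masses. A sketch of this standard threshold calculation (constants assumed, not from the article); the Bevatron's roughly 6.2 GeV design energy comfortably exceeded it:

```python
# Kinematic threshold for p + p -> p + p + p + pbar on a stationary proton.
M_P = 0.938272  # proton rest energy, GeV

# For a beam proton of total energy E hitting a proton at rest (c = 1):
#   s = 2*M_P^2 + 2*M_P*E
# Threshold: sqrt(s) = 4*M_P  =>  E = 7*M_P, kinetic energy T = 6*M_P.
E_threshold = (16 * M_P**2 - 2 * M_P**2) / (2 * M_P)  # = 7 * M_P
T_threshold = E_threshold - M_P                        # = 6 * M_P

print(f"beam kinetic energy at threshold: {T_threshold:.2f} GeV")  # ~5.63 GeV
```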
This period, when hundreds of new particles and excited states were suddenly revealed, marked the beginning of a new era in elementary particle physics. Luis Alvarez inspired and directed much of this work, for which he received the Nobel Prize in Physics in 1968. The Bevatron received a new lease on life in 1971, when it was joined to the SuperHILAC linear accelerator as an injector for heavy ions; the combination was conceived by Albert Ghiorso, who named it the Bevalac. It could accelerate a wide range of stable nuclei to relativistic energies. It was finally decommissioned in 1993. The next generation of accelerators used strong focusing, and required much smaller apertures. The demolition of the Bevatron began in 2009 by Clauss Construction of Lakeside, CA, and was completed in 2011.
19.
Mass
–
In physics, mass is a property of a physical body. It is the measure of an object's resistance to acceleration when a net force is applied. It also determines the strength of its gravitational attraction to other bodies. The basic SI unit of mass is the kilogram. Mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale, rather than comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity; this is because weight is a force, while mass is the property that determines the strength of this force. In Newtonian physics, mass can be generalized as the amount of matter in an object. However, at very high speeds, special relativity shows that kinetic energy becomes a significant additional source of mass; thus, any body having mass has an equivalent amount of energy. In addition, matter is a loosely defined term in science, and thus cannot be precisely measured. There are several distinct phenomena which can be used to measure mass. Active gravitational mass measures the gravitational force exerted by an object; passive gravitational mass measures the gravitational force exerted on an object in a known gravitational field; and inertial mass determines an object's acceleration in the presence of an applied force. According to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates or is affected by a gravitational field; this is sometimes referred to as gravitational mass. The standard International System of Units unit of mass is the kilogram. The kilogram is 1000 grams, first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. Then, in 1889, the kilogram was redefined as the mass of the international prototype kilogram. As of January 2013, there were proposals for redefining the kilogram yet again. 
In particle physics, mass often has units of eV/c2; the electronvolt and its multiples, such as the MeV, are commonly used. The atomic mass unit is 1/12 of the mass of a carbon-12 atom, and is convenient for expressing the masses of atoms and molecules. Outside the SI system, other units of mass include the slug, an Imperial unit of mass, and the pound, a unit of both mass and force, used mainly in the United States
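Converting between the mass units named above is a matter of fixed conversion factors; the values below are standard ones, assumed here rather than taken from the article:

```python
# Mass unit conversions (standard conversion factors, assumed for illustration)
KG_PER_U = 1.66053906660e-27    # 1 atomic mass unit, in kg
KG_PER_EVC2 = 1.78266192e-36    # 1 eV/c^2, in kg

m_electron_kg = 9.1093837015e-31  # electron mass, kg

# Electron mass in MeV/c^2, as used in particle physics
m_electron_mev = m_electron_kg / KG_PER_EVC2 / 1e6
print(f"electron mass: {m_electron_mev:.3f} MeV/c^2")  # ~0.511 MeV/c^2

# Mass of a carbon-12 atom, by definition 12 atomic mass units
print(f"carbon-12 atom: {12 * KG_PER_U:.4e} kg")
```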
20.
Natural units
–
In physics, natural units are physical units of measurement based only on universal physical constants. For example, the elementary charge e is a natural unit of electric charge. Setting such constants to 1 simplifies expressions, but it precludes the interpretation of an expression in terms of the physical constants, such as e and c, unless it is known which system of natural units is being used; in that case, the reinsertion of the correct powers of e, c, and so on can be uniquely determined by dimensional analysis. Natural units are natural because the origin of their definition comes only from properties of nature. Planck units are often, without qualification, called natural units, although they constitute only one of several systems of natural units, albeit the best known such system. As with other systems of units, the units of a set of natural units will include definitions and values for length, mass, time, and temperature. It is possible to disregard temperature as a fundamental physical quantity, since it expresses the energy per degree of freedom of a particle; virtually every system of natural units normalizes the Boltzmann constant kB to 1. There are two common ways to relate charge to mass, length, and time: in Lorentz–Heaviside units, Coulomb's law is F = q1q2/4πr2, and in Gaussian units, Coulomb's law is F = q1q2/r2. Both possibilities are incorporated into different natural unit systems. Here α is the fine-structure constant, α ≈ 0.007297, and αG is the gravitational coupling constant, αG ≈ 1.752×10−45. Natural units are most commonly used by setting the units to one. For example, many natural unit systems include the equation c = 1 in the unit-system definition, where c is the speed of light. If a velocity v is half the speed of light, then as v = c/2 and c = 1, the equation v = 1/2 means the velocity v has the value one-half when measured in Planck units, or the velocity v is one-half the Planck unit of velocity. The equation c = 1 can be plugged in anywhere else; for example, Einstein's equation E = mc2 can be rewritten in Planck units as E = m. 
This equation means: the energy of a particle, measured in Planck units of energy, equals the mass of the particle, measured in Planck units of mass. For example, the special relativity equation E2 = p2c2 + m2c4 appears somewhat complicated, but in natural units with c = 1 it simplifies to E2 = p2 + m2. Physical interpretation: natural unit systems automatically subsume dimensional analysis. For example, the Planck units are defined by properties of quantum mechanics and gravity; not coincidentally, the Planck unit of length is approximately the distance at which quantum gravity effects become important. Likewise, atomic units are based on the mass and charge of an electron. No prototypes: a prototype is a physical object that defines a unit, such as the International Prototype Kilogram, a physical cylinder of metal whose mass is by definition exactly one kilogram. A prototype definition always has imperfect reproducibility between different places and between different times, and it is an advantage of natural systems that they use no prototypes. Less precise measurements: SI units are designed to be used in precision measurements. For example, the second is defined by an atomic transition frequency in cesium atoms, because this transition frequency can be precisely reproduced with atomic clock technology
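The simplification from E² = p²c² + m²c⁴ to E² = p² + m² can be made concrete by computing the same energy both ways; the proton values are standard figures assumed for illustration:

```python
# Energy-momentum relation in SI units vs. natural units (c = 1).
C = 2.99792458e8  # m/s

def energy_si(p_si: float, m_kg: float) -> float:
    """E^2 = p^2 c^2 + m^2 c^4, inputs in kg*m/s and kg, result in joules."""
    return (p_si**2 * C**2 + m_kg**2 * C**4) ** 0.5

def energy_natural(p: float, m: float) -> float:
    """Same relation with c = 1: E^2 = p^2 + m^2 (e.g. everything in GeV)."""
    return (p**2 + m**2) ** 0.5

# A proton (rest energy ~0.938 GeV) with momentum 1 GeV/c:
print(f"E = {energy_natural(1.0, 0.938272):.3f} GeV")  # ~1.371 GeV
```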
21.
Positron
–
The positron or antielectron is the antiparticle or the antimatter counterpart of the electron. The positron has an electric charge of +1 e, a spin of 1/2, and the same mass as an electron. When a low-energy positron collides with a low-energy electron, annihilation occurs, resulting in the production of two or more photons. Positrons may be generated by positron emission radioactive decay, or by pair production from a sufficiently energetic photon which is interacting with an atom in a material. In 1928, Paul Dirac published a paper proposing that electrons can have both a positive charge and negative energy. This paper introduced the Dirac equation, a unification of quantum mechanics, special relativity, and the then-new concept of electron spin to explain the Zeeman effect. The paper did not explicitly predict a new particle, but did allow for electrons having either positive or negative energy as solutions. Hermann Weyl then published a paper discussing the mathematical implications of the negative energy solution. The positive-energy solution explained experimental results, but Dirac was puzzled by the equally valid negative-energy solution that the model allowed; the dual solution implied the possibility of an electron spontaneously jumping between positive and negative energy states. However, no such transition had yet been observed experimentally, and he referred to the issues raised by this conflict between theory and observation as difficulties that were unresolved. Dirac wrote a follow-up paper in December 1929 that attempted to explain the unavoidable negative-energy solution for the relativistic electron, proposing that the vacuum is a sea of filled negative-energy states. An electron with negative energy would move in an electromagnetic field as though it carries a positive charge. The paper also explored the possibility of the proton being an island in this sea; Dirac acknowledged that the proton having a much greater mass than the electron was a problem, but expressed hope that a future theory would resolve the issue. Robert Oppenheimer argued strongly against the proton being the negative-energy electron solution to Dirac's equation; he asserted that if it were, the hydrogen atom would rapidly self-destruct. 
Feynman, and earlier Stueckelberg, proposed an interpretation of the positron as an electron moving backward in time; electrons moving backward in time would have a positive electric charge. Wheeler invoked this concept to explain the identical properties shared by all electrons, suggesting that they are all the same electron with a complex, self-intersecting worldline. Dmitri Skobeltsyn first observed the positron in 1929. Carl David Anderson discovered the positron on August 2, 1932, for which he won the Nobel Prize for Physics in 1936. Anderson did not coin the term positron, but allowed it at the suggestion of the Physical Review journal editor to whom he submitted his paper in late 1932. The positron was the first evidence of antimatter, and was discovered when Anderson allowed cosmic rays to pass through a cloud chamber. A magnet surrounded this apparatus, causing particles to bend in different directions based on their electric charge. The ion trail left by each positron appeared on the photographic plate with a curvature matching the mass-to-charge ratio of an electron, but in a direction that showed its charge was positive
22.
Annihilation
–
In particle physics, annihilation is the process that occurs when a subatomic particle collides with its respective antiparticle to produce other particles. The total energy and momentum of the initial pair are conserved in the process and are distributed among a set of other particles in the final state. Antiparticles have exactly opposite additive quantum numbers from particles, so the sums of all such numbers for an original particle-antiparticle pair are zero. Hence, any set of particles may be produced whose total quantum numbers are also zero, as long as conservation of energy and conservation of momentum are obeyed. During a low-energy annihilation, photon production is favored, since these particles have no mass; high-energy particle colliders, however, produce annihilations where a wide variety of exotic heavy particles are created. The word annihilation may also be used informally for the interaction of two particles that are not mutual antiparticles. Some quantum numbers may then not sum to zero in the initial state, but they must be conserved, with the same totals in the final state. An example is the annihilation of an electron antineutrino with an electron to produce a W−. If the annihilating particles are composite, such as mesons or baryons, then several different particles are typically produced in the final state. If the initial two particles are elementary, then they may combine to produce only a single elementary boson, such as a photon, gluon, Z, or a Higgs boson. This requires the total energy in the centre-of-momentum frame to be equal to the rest energy of a real boson; otherwise, the process is understood as the creation of a boson that is virtual. This is called an s-channel process. An example is the annihilation of an electron with a positron to produce a virtual photon, which converts into a muon and anti-muon. If the energy is large enough, a Z could replace the photon. Both the annihilating electron and positron particles have a rest energy of about 0.511 million electron volts; if their kinetic energies are relatively negligible, this total rest energy appears as the photon energy of the gamma rays produced. 
Each of the gamma rays then has an energy of about 0.511 MeV. Momentum and energy are both conserved, with 1.022 MeV of gamma rays moving in opposite directions. If one or both charged particles carry a larger amount of kinetic energy, various other particles can be produced. The inverse process, pair production by a real photon, is also possible, but only in the electromagnetic field of a third particle. When a proton encounters its antiparticle, the reaction is not as simple as electron-positron annihilation. Unlike an electron, a proton is a composite particle consisting of three valence quarks and an indeterminate number of sea quarks bound by gluons; one of its quarks may annihilate with an antiquark of the antiproton, while the remaining quarks and antiquarks rearrange into a number of mesons. This type of reaction will occur between any baryon and any antibaryon consisting of three antiquarks, at least one of which corresponds to a quark in the baryon. Antiprotons can and do annihilate with neutrons, and likewise antineutrons can annihilate with protons. Reactions in which proton-antiproton annihilation produces as many as nine mesons have been observed; the generated mesons leave the site of the annihilation at moderate fractions of the speed of light, and decay with whatever lifetime is appropriate for their type of meson
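The 0.511 MeV gamma energies quoted above follow directly from the electron rest energy; a short sketch, using standard constants (assumed, not from the article), also gives the corresponding photon wavelength:

```python
# Two-photon annihilation of a slow electron-positron pair: momentum
# conservation forces the gammas back-to-back, each carrying one
# particle's rest energy. (Standard constants assumed.)
M_E_C2 = 0.51099895  # electron rest energy, MeV
HC = 1.239841984     # h*c, in MeV * picometre

e_gamma = M_E_C2           # energy of each gamma ray, MeV
total = 2 * M_E_C2         # 1.022 MeV shared by the pair
wavelength = HC / e_gamma  # ~2.43 pm (the electron Compton wavelength)

print(f"each photon: {e_gamma:.3f} MeV, total: {total:.3f} MeV")
print(f"photon wavelength: {wavelength:.3f} pm")
```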
23.
Proton
–
A proton is a subatomic particle, symbol p or p+, with a positive electric charge of +1e elementary charge and a mass slightly less than that of a neutron. Protons and neutrons, each with masses of approximately one atomic mass unit, are collectively referred to as nucleons. One or more protons are present in the nucleus of every atom; the number of protons in the nucleus is the defining property of an element, and is referred to as the atomic number. Since each element has a unique number of protons, each element has its own unique atomic number. The word proton is Greek for first, and this name was given to the hydrogen nucleus by Ernest Rutherford in 1920. In previous years, Rutherford had discovered that the hydrogen nucleus could be extracted from the nuclei of nitrogen by atomic collisions. Protons were therefore a candidate to be a fundamental particle, and hence a building block of nitrogen and all other heavier atomic nuclei. In the modern Standard Model of particle physics, protons are hadrons, like neutrons. Although protons were originally considered fundamental or elementary particles, they are now known to be composed of three valence quarks: two up quarks and one down quark. The rest masses of the quarks contribute only about 1% of a proton's mass; the remainder of a proton's mass is due to quantum chromodynamics binding energy, which includes the kinetic energy of the quarks and the energy of the gluon fields that bind the quarks together. At sufficiently low temperatures, free protons will bind to electrons; however, the character of such bound protons does not change, and they remain protons. A fast proton moving through matter will slow by interactions with electrons and nuclei. The result is a protonated atom, which is a chemical compound of hydrogen. In vacuum, when free electrons are present, a sufficiently slow proton may pick up a single free electron, becoming a neutral hydrogen atom. Such free hydrogen atoms tend to react chemically with other types of atoms at sufficiently low energies. 
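The claim that quark rest masses supply only about 1% of the proton's mass can be checked with approximate current-quark masses; the values below are typical published figures, assumed here for illustration:

```python
# Valence-quark rest mass vs. total proton mass (approximate values assumed)
M_PROTON = 938.272      # MeV/c^2
M_UP, M_DOWN = 2.2, 4.7  # MeV/c^2, approximate current-quark masses

quark_rest_mass = 2 * M_UP + M_DOWN  # proton = uud
fraction = quark_rest_mass / M_PROTON
print(f"valence-quark rest mass: {quark_rest_mass:.1f} MeV/c^2 ({fraction:.1%})")
# The remaining ~99% is QCD binding energy: quark kinetic energy plus gluon fields.
```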
When free hydrogen atoms react with each other, they form neutral hydrogen molecules. Protons are spin-½ fermions and are composed of three quarks, making them baryons. Protons have an exponentially decaying positive charge distribution with a mean square radius of about 0.8 fm. Protons and neutrons are both nucleons, which may be bound together by the nuclear force to form atomic nuclei. The nucleus of the most common isotope of the hydrogen atom is a lone proton
24.
Hadron
–
In particle physics, a hadron /ˈhædrɒn/ is a composite particle made of quarks held together by the strong force in a similar way as molecules are held together by the electromagnetic force. Hadrons are categorized into two families: baryons, made of three quarks, and mesons, made of one quark and one antiquark. Protons and neutrons are examples of baryons; pions are an example of a meson. Hadrons containing more than three valence quarks have been discovered in recent years: a tetraquark state, named the Z(4430)−, was discovered in 2007 by the Belle Collaboration and confirmed as a resonance in 2014 by the LHCb collaboration. Two pentaquark states, named Pc(4380)+ and Pc(4450)+, were discovered in 2015 by the LHCb collaboration; there are several more exotic hadron candidates, and other colour-singlet quark combinations may also exist. Of the hadrons, protons are stable, and neutrons bound within atomic nuclei are stable; other hadrons are unstable under ordinary conditions, and free neutrons decay with a half-life of about 611 seconds. Experimentally, hadron physics is studied by colliding protons or nuclei of heavy elements such as lead. The term hadron was introduced by Lev B. Okun in a plenary talk at the 1962 International Conference on High Energy Physics. In this talk he said: Notwithstanding the fact that this report deals with weak interactions, we shall frequently have to speak of strongly interacting particles. These particles pose not only numerous scientific problems, but also a terminological problem. The point is that strongly interacting particles is a very clumsy term which does not yield itself to the formation of an adjective; for this reason, to take but one instance, decays into strongly interacting particles are called non-leptonic. This definition is not exact because non-leptonic may also signify photonic. In this report I shall call strongly interacting particles hadrons, and the corresponding decays hadronic. 
I hope that this terminology will prove to be convenient. (Okun, 1962) According to the quark model, the properties of hadrons are primarily determined by their so-called valence quarks. For example, a proton is composed of two up quarks and one down quark; adding their electric charges together yields the proton charge of +1. Although quarks also carry color charge, hadrons must have zero total color charge because of a phenomenon called color confinement. That is, hadrons must be colorless or white, and there are two simplest ways to accomplish this: three quarks of different colors, or a quark of one color and an antiquark carrying the corresponding anticolor. Hadrons with the first arrangement are called baryons, and those with the second arrangement are mesons. Hadrons, however, are not composed of just three or two quarks, because of the strength of the strong force; more accurately, strong force gluons have enough energy to produce resonances composed of massive quarks. Thus, virtual quarks and antiquarks appear in a 1:1 ratio, and the two or three quarks that compose a hadron are the excess of quarks vs. antiquarks, and so too in the case of anti-hadrons. Massless virtual gluons compose the majority of particles inside hadrons
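The valence-quark charge bookkeeping described above can be sketched with fractional charges. The uppercase-for-antiquark convention here is an assumption of this illustration, not standard notation:

```python
from fractions import Fraction

# Electric charges of the two lightest quark flavors, in units of e.
CHARGE = {
    "u": Fraction(2, 3),   # up quark
    "d": Fraction(-1, 3),  # down quark
}

def hadron_charge(quarks):
    """Sum valence-quark charges; an uppercase letter marks an antiquark,
    whose charge is the negative of the corresponding quark's charge."""
    return sum(-CHARGE[q.lower()] if q.isupper() else CHARGE[q]
               for q in quarks)

print(hadron_charge("uud"))  # proton:  2/3 + 2/3 - 1/3 = 1
print(hadron_charge("udd"))  # neutron: 2/3 - 1/3 - 1/3 = 0
print(hadron_charge("uD"))   # pi+ meson (up + anti-down) = 1
```

The baryon examples use three quarks and the meson example a quark-antiquark pair, matching the two color-singlet arrangements described in the text.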
25.
Avogadro constant
–
In chemistry and physics, the Avogadro constant is the number of constituent particles, usually atoms or molecules, that are contained in the amount of substance given by one mole. Thus, it is the proportionality factor that relates the molar mass of a compound to the mass of a sample. Avogadro's constant, often designated with the symbol NA or L, has the value 6.022140857×10²³ mol⁻¹ in the International System of Units; this number is also known as the Loschmidt constant in German literature. The constant was later redefined as the number of atoms in 12 grams of the isotope carbon-12. For instance, to a first approximation, 1 gram of the element hydrogen, having the atomic number 1, has 6.022×10²³ hydrogen atoms. Similarly, 12 grams of carbon-12, with the mass number 12, has the same number of carbon atoms. Avogadro's number is a dimensionless quantity, and has the same numerical value as the Avogadro constant given in base units. In contrast, the Avogadro constant has the dimension of reciprocal amount of substance. The Avogadro constant can also be expressed as 0.602214 mL·mol⁻¹·Å⁻³, which can be used to convert from volume per molecule in cubic ångströms to molar volume in millilitres per mole. Revisions in the base set of SI units necessitated redefinitions of the concepts of chemical quantity: Avogadro's number, and its definition, was deprecated in favor of the Avogadro constant. The French physicist Jean Perrin in 1909 proposed naming the constant in honor of Avogadro. Perrin won the 1926 Nobel Prize in Physics, largely for his work in determining the Avogadro constant by several different methods. Accurate determinations of Avogadro's number require the measurement of a single quantity on both the atomic and macroscopic scales using the same unit of measurement. 
This became possible for the first time when American physicist Robert Millikan measured the charge on an electron in 1910. The electric charge per mole of electrons is a constant called the Faraday constant and had been known since 1834, when Michael Faraday published his works on electrolysis. By dividing the charge on a mole of electrons by the charge on a single electron, the value of Avogadro's number is obtained. Since 1910, newer calculations have more accurately determined the values for the Faraday constant and the elementary charge. Perrin originally proposed the name Avogadro's number to refer to the number of molecules in one gram-molecule of oxygen. With this recognition, the Avogadro constant was no longer a pure number, but had a unit of measurement, the reciprocal mole. While it is rare to use units of amount of substance other than the mole, the Avogadro constant can also be expressed in units such as the pound mole: NA = 2.73159734×10²⁶ (lb-mol)⁻¹ = 1.707248434×10²⁵ (oz-mol)⁻¹. Avogadro's constant is a scaling factor between macroscopic and microscopic observations of nature. As such, it provides the relationship between other physical constants and properties. The Avogadro constant also enters into the definition of the atomic mass unit. The earliest accurate method to measure the value of the Avogadro constant was based on coulometry
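The coulometric route described above amounts to one division. A minimal sketch, assuming 2014 CODATA-level values for the two constants:

```python
# Dividing the charge on a mole of electrons (the Faraday constant) by the
# charge on a single electron yields Avogadro's number.
# The numeric values below are assumed reference values.
FARADAY = 96485.33289                 # C/mol
ELEMENTARY_CHARGE = 1.6021766208e-19  # C

N_A = FARADAY / ELEMENTARY_CHARGE

print(f"N_A ≈ {N_A:.6e} mol^-1")  # ≈ 6.022141e+23
```

Historically the uncertainty in this determination was dominated by the uncertainty in the elementary charge, which is why Millikan's 1910 measurement was the enabling step.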
26.
Hydrogen atom
–
A hydrogen atom is an atom of the chemical element hydrogen. The electrically neutral atom contains a single positively charged proton and a single negatively charged electron bound to the nucleus by the Coulomb force. Atomic hydrogen constitutes about 75% of the baryonic mass of the universe. In everyday life on Earth, isolated hydrogen atoms are extremely rare; instead, hydrogen tends to combine with other atoms in compounds, or with itself to form ordinary hydrogen gas, H2. Atomic hydrogen and hydrogen atom in ordinary English use have overlapping, yet distinct, meanings; for example, a water molecule contains two hydrogen atoms, but does not contain atomic hydrogen. Attempts to develop a theoretical understanding of the hydrogen atom have been important to the history of quantum mechanics. The most abundant isotope, hydrogen-1, protium, or light hydrogen, contains no neutrons and is just a proton; protium is stable and makes up 99.9885% of naturally occurring hydrogen by absolute number. Deuterium contains one neutron and one proton; deuterium is stable, makes up 0.0115% of naturally occurring hydrogen, and is used in industrial processes like nuclear reactors and nuclear magnetic resonance. Tritium contains two neutrons and one proton and is not stable, decaying with a half-life of 12.32 years; because of this short half-life, tritium does not exist in nature except in trace amounts. Heavier isotopes of hydrogen are only created in artificial accelerators and reactors and have half-lives on the order of 10⁻²² seconds. The formulas below are valid for all three isotopes of hydrogen, but slightly different values of the Rydberg constant must be used for each hydrogen isotope. Hydrogen is not found without its electron in ordinary chemistry, as ionized hydrogen is highly chemically reactive. Ionized hydrogen is written as H+, as in the solvation of classical acids such as hydrochloric acid; in that case, the acid transfers the proton to H2O to form H3O+. 
Ionized hydrogen without its electron, or free protons, are common in the interstellar medium. Experiments by Ernest Rutherford in 1909 showed the structure of the atom to be a dense, positive nucleus with a light, negative charge orbiting around it. This immediately raised the problem of how such a system could be stable: classical electromagnetism had shown that any accelerating charge radiates energy, as described by the Larmor formula. If this were true, all atoms would instantly collapse; however, atoms seem to be stable. Furthermore, the spiral inward would release a smear of electromagnetic frequencies as the orbit got smaller; instead, atoms were observed to only emit discrete frequencies of radiation. The resolution would lie in the development of quantum mechanics. In 1913, Niels Bohr obtained the energy levels and spectral frequencies of the hydrogen atom after making a number of simple assumptions in order to correct the failed classical model. The assumptions included: electrons can only be in certain, discrete circular orbits or stationary states, thereby having a discrete set of possible radii
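Bohr's discrete energy levels can be sketched as E_n = −13.6 eV/n², valid for hydrogen-1; other isotopes need slightly different Rydberg values, as the text notes. The 13.605693 eV figure below is an assumed reference value:

```python
# Bohr-model energy levels of hydrogen-1 and the photon energies of
# transitions between them.
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, eV (assumed)

def energy_level(n):
    """Energy of the n-th Bohr orbit in eV (negative means bound)."""
    return -RYDBERG_EV / n ** 2

def photon_energy(n_upper, n_lower):
    """Energy of the photon emitted when the electron drops between orbits."""
    return energy_level(n_upper) - energy_level(n_lower)

print(energy_level(1))                # -13.605693 eV (ground state)
print(round(photon_energy(3, 2), 3))  # ≈ 1.89 eV, the red H-alpha line
```

The discrete differences between these levels are exactly the discrete emission frequencies that the classical model could not explain.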
27.
Momentum
–
In classical mechanics, linear momentum, translational momentum, or simply momentum is the product of the mass and velocity of an object, quantified in kilogram meters per second. It is dimensionally equivalent to impulse, the product of force and time; Newton's second law of motion states that the change in linear momentum of a body is equal to the net impulse acting on it. For example, a heavy truck moving rapidly has a large momentum; if the truck were lighter, or moving more slowly, then it would have less momentum. Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum does not change. In classical mechanics, conservation of momentum is implied by Newton's laws. It also holds in special relativity and, with appropriate definitions, a linear momentum conservation law holds in electrodynamics, quantum mechanics, and quantum field theory. It is ultimately an expression of one of the symmetries of space and time. Linear momentum depends on the frame of reference: observers in different frames would find different values of the linear momentum of a system, but each would observe that the value of linear momentum does not change with time. Momentum has a direction as well as magnitude. Quantities that have both a magnitude and a direction are known as vector quantities; because momentum has a direction, it can be used to predict the resulting direction of objects after they collide, as well as their speeds. Below, the properties of momentum are described in one dimension; the vector equations are almost identical to the scalar equations. The momentum of a particle is traditionally represented by the letter p. It is the product of two quantities, the mass and velocity: p = m v. The units of momentum are the product of the units of mass and velocity. In SI units, if the mass is in kilograms and the velocity in meters per second, then the momentum is in kilogram meters/second; in cgs units, if the mass is in grams and the velocity in centimeters per second, then the momentum is in gram centimeters/second. 
Being a vector, momentum has magnitude and direction. For example, a 1 kg model airplane, traveling due north at 1 m/s in straight and level flight, has a momentum of 1 kg⋅m/s due north measured from the ground. The momentum of a system of particles is the sum of their momenta: if two particles have masses m1 and m2, and velocities v1 and v2, the total momentum is p = p1 + p2 = m1 v1 + m2 v2. If all the particles are moving, the center of mass will generally be moving as well; if the center of mass is moving at velocity vcm, the momentum is p = m vcm. This is known as Euler's first law. If a force F is applied to a particle for a time interval Δt, the momentum of the particle changes by an amount Δp = F Δt
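The one-dimensional formulas above, p = m v, total momentum as a sum, and the impulse relation Δp = F Δt, can be sketched in a few lines; the particular masses, velocities, and force are arbitrary illustrative values:

```python
# One-dimensional momentum: sign encodes direction along the line.
def momentum(mass_kg, velocity_ms):
    """Linear momentum p = m v, in kg·m/s."""
    return mass_kg * velocity_ms

# Two particles moving in opposite directions: total momentum is the
# sum of the individual momenta.
p_total = momentum(2.0, 3.0) + momentum(1.0, -4.0)
print(p_total)  # 2.0 kg·m/s

# Impulse: a constant 5 N force applied for 0.4 s changes the particle's
# momentum by delta_p = F * delta_t = 2 kg·m/s.
delta_p = 5.0 * 0.4
print(delta_p)  # 2.0
```

In three dimensions the same arithmetic applies componentwise, which is why the vector equations are almost identical to these scalar ones.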
28.
Exponential decay
–
A quantity is subject to exponential decay if it decreases at a rate proportional to its current value. Symbolically, this process can be expressed by the differential equation dN/dt = −λN. The solution to this equation is N = N0 e^(−λt), where N is the quantity at time t, and N0 = N(0) is the initial quantity, i.e. the quantity at time t = 0. If the decaying quantity, N, is the number of discrete elements in a certain set, it is possible to compute the average length of time that an element remains in the set. This is called the mean lifetime, τ, and it can be shown that it relates to the decay rate, λ, as τ = 1/λ. For example, if the initial population of the assembly, N(0), is 1000, then the population at time τ is 368, since 1000/e ≈ 368. A very similar equation will be seen below, which arises when the base of the exponential is chosen to be 2; in that case the scaling time is the half-life. A more intuitive characteristic of exponential decay for many people is the time required for the quantity to fall to one half of its initial value. This time is called the half-life, and is often denoted by the symbol t1/2. The half-life can be written in terms of the decay constant, or the mean lifetime, as t1/2 = ln(2)/λ = τ ln(2). When this expression is inserted for τ in the equation above, and ln 2 is absorbed into the base, the amount of material left is 1/2 raised to the number of half-lives that have passed. Thus, after 3 half-lives there will be 1/2³ = 1/8 of the material left. Therefore, the mean lifetime τ is equal to the half-life divided by the natural log of 2: e.g. polonium-210 has a half-life of 138 days and a mean lifetime of about 199 days. The equation that describes exponential decay is dN/dt = −λN or, by rearranging, dN/N = −λ dt. This is the form of the equation that is most commonly used to describe exponential decay. Any one of decay constant, mean lifetime, or half-life is sufficient to characterise the decay. The notation λ for the decay constant is a remnant of the usual notation for an eigenvalue: in this case, λ is the eigenvalue of the negative of the differential operator with N as the corresponding eigenfunction. The units of the decay constant are s⁻¹
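The relations above, N(t) = N0 e^(−λt), t1/2 = ln(2)/λ, and τ = 1/λ, can be checked numerically using the polonium-210 half-life quoted in the text:

```python
import math

# Exponential decay: N(t) = N0 * exp(-lambda * t).
def remaining(n0, decay_const, t):
    """Quantity left after time t under exponential decay."""
    return n0 * math.exp(-decay_const * t)

half_life_days = 138.0              # polonium-210, from the text
lam = math.log(2) / half_life_days  # decay constant, day^-1
tau = 1 / lam                       # mean lifetime = t_half / ln(2)

print(round(tau))                                       # 199 days
print(round(remaining(1000, lam, half_life_days)))      # 500 after one half-life
print(round(remaining(1000, lam, 3 * half_life_days)))  # 125 = 1000/8
```

The three printed values illustrate the text directly: the mean lifetime is the half-life divided by ln 2, and after n half-lives a fraction 1/2ⁿ of the material remains.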
29.
Kelvin scale
–
The kelvin is a unit of measure for temperature based upon an absolute scale. It is one of the seven base units in the International System of Units and is assigned the unit symbol K. The kelvin is defined as the fraction 1⁄273.16 of the thermodynamic temperature of the triple point of water. In other words, it is defined such that the triple point of water is exactly 273.16 K. The Kelvin scale is named after the Belfast-born, Glasgow University engineer and physicist William Thomson, Lord Kelvin. Unlike the degree Fahrenheit and degree Celsius, the kelvin is not referred to or typeset as a degree. The kelvin is the primary unit of temperature measurement in the physical sciences, but is often used in conjunction with the degree Celsius. The definition implies that absolute zero is equivalent to −273.15 °C; Kelvin calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the Kelvin thermodynamic temperature scale. When spelled out or spoken, the unit is pluralised using the same grammatical rules as for other SI units such as the volt or ohm. When reference is made to the Kelvin scale, the word kelvin, which is normally a noun, functions adjectivally to modify the noun scale and is capitalized. As with most other SI unit symbols, there is a space between the numeric value and the kelvin symbol. Before the 13th CGPM in 1967–1968, the unit kelvin was called a degree; it was distinguished from the other scales with either the adjective suffix Kelvin or with absolute, and its symbol was °K. The latter term, which was the official name from 1948 until 1954, was ambiguous since it could also be interpreted as referring to the Rankine scale. Before the 13th CGPM, the plural form was degrees absolute. The 13th CGPM changed the unit name to simply kelvin. Its measured value was 0.01028 °C with an uncertainty of 60 µK. The use of SI prefixed forms of the degree Celsius to express a temperature interval has not been widely adopted. 
In 2005 the CIPM embarked on a program to redefine the kelvin using a more experimentally rigorous methodology; the current definition as of 2016 is unsatisfactory for temperatures below 20 K and above 1300 K. In particular, the committee proposed redefining the kelvin such that the Boltzmann constant takes the exact value 1.3806505×10⁻²³ J/K. From a scientific point of view, this will link temperature to the rest of the SI and result in a stable definition that is independent of any particular substance; from a practical point of view, the redefinition will pass unnoticed. The kelvin is often used in the measure of the colour temperature of light sources. Colour temperature is based upon the principle that a black body radiator emits light whose colour depends on the temperature of the radiator: black bodies with temperatures below about 4000 K appear reddish, whereas those above about 7500 K appear bluish
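The scale relations above can be sketched in two one-line conversions: the kelvin and the degree Celsius share the same increment, offset by exactly 273.15:

```python
# Kelvin/Celsius conversions implied by the definitions in the text.
def celsius_to_kelvin(t_c):
    """Convert a Celsius temperature to kelvins."""
    return t_c + 273.15

def kelvin_to_celsius(t_k):
    """Convert a temperature in kelvins to degrees Celsius."""
    return t_k - 273.15

print(celsius_to_kelvin(-273.15))         # 0.0 (absolute zero)
print(round(celsius_to_kelvin(0.01), 2))  # 273.16 (triple point of water)
```

Because the offset is exact, a temperature interval of 1 K and an interval of 1 °C are identical; only absolute readings differ.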
30.
Boltzmann constant
–
The Boltzmann constant, which is named after Ludwig Boltzmann, is a physical constant relating the average kinetic energy of particles in a gas with the temperature of the gas. It is the gas constant R divided by the Avogadro constant NA. The Boltzmann constant has the dimension of energy divided by temperature, the same as entropy. The accepted value in SI units is 1.38064852×10⁻²³ J/K. The Boltzmann constant, k, is a bridge between macroscopic and microscopic physics. Introducing the Boltzmann constant transforms the ideal gas law into an alternative form, pV = NkT; for n = 1 mol, N is equal to the number of particles in one mole. Given a thermodynamic system at an absolute temperature T, the average thermal energy carried by each microscopic degree of freedom in the system is on the order of magnitude of (1/2)kT. In classical statistical mechanics, this average is predicted to hold exactly for homogeneous ideal gases. Monatomic ideal gases possess three degrees of freedom per atom, corresponding to the three spatial directions, which means a thermal energy of (3/2)kT per atom; this corresponds very well with experimental data. The thermal energy can be used to calculate the root-mean-square speed of the atoms, which turns out to be inversely proportional to the square root of the atomic mass. The root mean square speeds found at room temperature accurately reflect this, ranging from 1370 m/s for helium down to slower speeds for heavier atoms. Kinetic theory gives the average pressure p for an ideal gas as p = (1/3) (N/V) m ⟨v²⟩. Combination with the ideal gas law pV = NkT shows that the average translational kinetic energy is (1/2) m ⟨v²⟩ = (3/2) kT. Considering that the translational motion velocity vector v has three degrees of freedom gives the average energy per degree of freedom equal to one third of that, i.e. (1/2)kT. Diatomic gases, for example, possess a total of six degrees of freedom per molecule that are related to atomic motion. 
Again, it is the energy-like quantity kT that takes central importance; consequences of this include the Arrhenius equation in chemical kinetics. In statistical mechanics, the entropy is given by S = k ln W, where W is the number of microstates; this equation, which relates the microscopic details, or microstates, of a system to its macroscopic state, is of central importance. Such is its importance that it is inscribed on Boltzmann's tombstone. The constant of proportionality k serves to make the statistical mechanical entropy equal to the classical thermodynamic entropy of Clausius, ΔS = ∫ dQ/T. One could choose instead a rescaled dimensionless entropy in microscopic terms, such that S′ = ln W and ΔS′ = ∫ dQ/(kT). This is a more natural form, and this rescaled entropy exactly corresponds to Shannon's subsequent information entropy. The characteristic energy kT is thus the energy required to increase the rescaled entropy by one nat. The iconic terse form of the equation S = k ln W on Boltzmann's tombstone is in fact due to Planck
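The kinetic-theory relation above, (1/2) m ⟨v²⟩ = (3/2) kT, gives the root-mean-square speed v_rms = sqrt(3kT/m). A minimal sketch for helium near room temperature; the helium atomic mass is an assumed reference value:

```python
import math

# Root-mean-square speed of a gas atom from (1/2) m v_rms^2 = (3/2) k T.
K_B = 1.38064852e-23      # Boltzmann constant, J/K
M_HELIUM = 6.6464764e-27  # mass of a helium-4 atom, kg (assumed value)

def v_rms(temp_k, mass_kg):
    """Root-mean-square speed sqrt(3 k T / m), in m/s."""
    return math.sqrt(3 * K_B * temp_k / mass_kg)

print(round(v_rms(300.0, M_HELIUM)))  # ≈ 1367 m/s near room temperature
```

The result is consistent with the roughly 1370 m/s quoted for helium above, and the 1/sqrt(m) dependence means heavier atoms such as xenon move several times more slowly at the same temperature.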
31.
Magnetic confinement fusion
–
Magnetic confinement fusion is an approach to generating fusion power that uses magnetic fields to confine the hot fusion fuel in the form of a plasma. Magnetic confinement is one of two major branches of fusion energy research, the other being inertial confinement fusion. The magnetic approach is more highly developed and is usually considered more promising for energy production; construction of a 500-MW heat-generating fusion plant using tokamak magnetic confinement geometry, ITER, is underway. Fusion reactions combine light atomic nuclei such as hydrogen to form heavier ones such as helium. In addition, sufficient density and energy confinement are required, as specified by the Lawson criterion. Magnetic confinement fusion attempts to create the conditions needed for fusion energy production by using the electrical conductivity of the plasma to contain it with magnetic fields. The basic concept can be thought of in a fluid picture as a balance between magnetic pressure and plasma pressure, or in terms of individual particles spiraling along magnetic field lines. The pressure achievable is usually on the order of one bar, with a confinement time up to a few seconds; in contrast, inertial confinement has a much higher pressure but a much lower confinement time. Most magnetic confinement schemes also have the advantage of being more or less steady state. The simplest magnetic configuration is a solenoid, a long cylinder wound with magnetic coils producing a field with the lines of force running parallel to the axis of the cylinder. Such a field would hinder ions and electrons from being lost radially, but not from being lost from the ends of the solenoid. There are two approaches to solving this problem: one is to try to stop up the ends with a magnetic mirror, the other is to bend the field lines around into a torus so that there are no ends. A simple toroidal field, however, provides poor confinement because the radial gradient of the field strength results in a drift in the direction of the axis. A major area of research in the early years of fusion energy research was the magnetic mirror. 
Most early mirror devices attempted to confine plasma near the focus of a magnetic field; in order to escape the confinement area, nuclei had to enter a small area near each magnet. It was known that nuclei would escape through this area. A highly developed form, the Mirror Fusion Test Facility, used two mirrors at either end of a solenoid to increase the internal volume of the reaction area. An early attempt to build a magnetic confinement system was the stellarator: essentially, the stellarator consists of a torus that has been cut in half and then attached back together with straight crossover sections to form a figure-8. This has the effect of propagating the nuclei from the inside to the outside as they orbit the device, thereby canceling out the drift across the axis, at least if the nuclei orbit fast enough. Newer versions of the stellarator design have replaced the mechanical drift cancellation with additional magnets that wind the field lines into a helix to cause the same effect. In 1968 Russian research on the toroidal tokamak was first presented in public, with results that far outstripped existing efforts from any competing design; since then the majority of effort in magnetic confinement has been based on the tokamak principle