1.
Speed of light
–
The speed of light in vacuum, commonly denoted c, is a universal physical constant important in many areas of physics. Its exact value is 299792458 metres per second; the value is exact because the unit of length, the metre, is defined from this constant. According to special relativity, c is the maximum speed at which all matter, and hence information, in the universe can travel. It is the speed at which all massless particles and changes of the associated fields travel in vacuum. Such particles and waves travel at c regardless of the motion of the source or the reference frame of the observer. In the theory of relativity, c interrelates space and time. The speed at which light propagates through transparent materials, such as glass or air, is less than c; similarly, the speed of radio waves in wire cables is slower than c. The ratio between c and the speed v at which light travels in a material is called the refractive index n of the material (n = c/v). In communicating with distant space probes, it can take minutes to hours for a message to get from Earth to the spacecraft. The light seen from stars left them many years ago, allowing the study of the history of the universe by looking at distant objects. The finite speed of light also limits the theoretical maximum speed of computers. The speed of light can be used in time-of-flight measurements to measure large distances to high precision. Ole Rømer first demonstrated in 1676 that light travels at a finite speed by studying the apparent motion of Jupiter's moon Io. In 1865, James Clerk Maxwell proposed that light was an electromagnetic wave, and in 1905 Albert Einstein postulated that the speed of light c with respect to any inertial frame is a constant, independent of the motion of the light source. He explored the consequences of that postulate by deriving the theory of relativity, and in doing so showed that the parameter c had relevance outside the context of light and electromagnetism.
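The two relations above, the signal delay over a distance and the refractive index n = c/v, can be sketched numerically. This is a minimal illustration; the Earth–Mars distance used below is an assumed round figure, not one from the article:

```python
# Light-travel delay and refractive index, using the exact SI value of c.
C = 299_792_458  # speed of light in vacuum, m/s (exact by definition of the metre)

def travel_time(distance_m: float) -> float:
    """One-way light travel time in seconds over a given distance."""
    return distance_m / C

def refractive_index(v: float) -> float:
    """Refractive index n = c / v for light moving at speed v in a medium."""
    return C / v

# Illustrative: Earth-Mars distance near opposition (~0.52 AU) gives a delay
# of a few minutes, consistent with the minutes-to-hours figure for probes.
delay_s = travel_time(0.52 * 1.495978707e11)   # roughly 260 s, about 4.3 minutes
n_glass = refractive_index(2.0e8)              # ~1.5 for light slowed to 2e8 m/s
```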
After centuries of increasingly precise measurements, by 1975 the speed of light was known to be 299792458 m/s with a measurement uncertainty of 4 parts per billion. In 1983, the metre was redefined in the International System of Units as the distance travelled by light in vacuum in 1/299792458 of a second; as a result, the numerical value of c in metres per second is now fixed exactly by the definition of the metre. The speed of light in vacuum is usually denoted by a lowercase c. Historically, the symbol V was used as an alternative symbol for the speed of light, introduced by James Clerk Maxwell in 1865. In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used c for a different constant, later shown to equal √2 times the speed of light in vacuum; in 1894, Paul Drude redefined c with its modern meaning. Einstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c. Sometimes c is used for the speed of waves in any material medium, and c0 for the speed of light in vacuum; this article uses c exclusively for the speed of light in vacuum.
2.
Electron
–
The electron is a subatomic particle, symbol e− or β−, with a negative elementary electric charge. Electrons belong to the first generation of the lepton particle family, and the electron has a mass that is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant. As it is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. Since an electron has charge, it has a surrounding electric field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law, and electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons, as well as electron plasma, by the use of electromagnetic fields; special telescopes can detect electron plasma in outer space. Electrons are involved in many applications, such as electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers and gaseous ionization detectors. Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons outside them allows the composition of the two, known as atoms. Ionization, or differences in the proportions of negative electrons versus positive nuclei, changes the binding energy of an atomic system. The exchange or sharing of electrons between two or more atoms is the main cause of chemical bonding. In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms.
Irish physicist George Johnstone Stoney named this charge "electron" in 1891. Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electrical and other charges of the opposite sign. When an electron collides with a positron, both particles can be totally annihilated, producing gamma ray photons. The ancient Greeks noticed that amber attracted small objects when rubbed with fur; along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electricus; both electric and electricity are derived from the Latin ēlectrum, which came from the Greek word for amber, ἤλεκτρον.
3.
National Institute of Standards and Technology
–
The National Institute of Standards and Technology (NIST) is a measurement standards laboratory and a non-regulatory agency of the United States Department of Commerce. Its mission is to promote innovation and industrial competitiveness. In 1821, John Quincy Adams had declared that "Weights and measures may be ranked among the necessities of life to every individual of human society." From 1830 until 1901, the role of overseeing weights and measures was carried out by the Office of Standard Weights and Measures. President Theodore Roosevelt appointed Samuel W. Stratton as the first director. The budget for the first year of operation was $40,000; a laboratory site was constructed in Washington, DC, and instruments were acquired from the national physical laboratories of Europe. In addition to weights and measures, the Bureau developed instruments for electrical units, and in 1905 a meeting was called that would be the first National Conference on Weights and Measures. Quality standards were developed for products including some types of clothing, automobile brake systems and headlamps, and antifreeze. During World War I, the Bureau worked on multiple problems related to war production, even operating its own facility to produce optical glass when European supplies were cut off. Between the wars, Harry Diamond of the Bureau developed a blind-approach radio aircraft landing system. In 1948, financed by the Air Force, the Bureau began design and construction of SEAC, the Standards Eastern Automatic Computer. The computer went into operation in May 1950 using a combination of vacuum tubes and solid-state diode logic. About the same time, the Standards Western Automatic Computer (SWAC) was built at the Los Angeles office of the NBS and used for research there. A mobile version, DYSEAC, was built for the Signal Corps in 1954. Due to a changing mission, the National Bureau of Standards became the National Institute of Standards and Technology in 1988.
Following 9/11, NIST conducted the official investigation into the collapse of the World Trade Center buildings. NIST had a budget for fiscal year 2007 of about $843.3 million. NIST's 2009 budget was $992 million, and it also received $610 million as part of the American Recovery and Reinvestment Act. NIST employs about 2,900 scientists, engineers, technicians, and support and administrative personnel. About 1,800 NIST associates complement the staff; in addition, NIST partners with 1,400 manufacturing specialists and staff at nearly 350 affiliated centers around the country. NIST publishes Handbook 44, which provides the "Specifications, tolerances, and other technical requirements for weighing and measuring devices." The Congress of 1866 made use of the metric system in commerce a legally protected activity through the passage of the Metric Act of 1866. NIST is headquartered in Gaithersburg, Maryland, and operates a facility in Boulder, Colorado. NIST's activities are organized into laboratory programs and extramural programs. Effective October 1, 2010, NIST was realigned by reducing the number of NIST laboratory units from ten to six. NIST's Boulder laboratories are best known for NIST-F1, an atomic clock that serves as the source of the nation's official time. NIST also operates a neutron science user facility, the NIST Center for Neutron Research (NCNR); the NCNR provides scientists access to a variety of neutron scattering instruments, which they use in many research fields.
4.
Proposed redefinition of SI base units
–
The metric system was originally conceived as a system of measurement that was derivable from nature. When the metric system was first introduced in France in 1799, technical limitations necessitated the use of artefacts such as the prototype metre. In 1960 the metre was redefined in terms of the wavelength of light from a specified source, making it once again derivable from nature. If the proposed redefinition is accepted, the system will, for the first time, be wholly derivable from nature. The proposal can be summarised as follows: there will still be the seven base units. The second, metre and candela are already defined by physical constants; the new definitions will improve the SI without changing the size of any units, thus ensuring continuity with present measurements. Further details are found in the draft chapter of the Ninth SI Units Brochure. The last major overhaul of the system was in 1960, when the International System of Units (SI) was formally published as a coherent set of units of measure. SI is structured around seven base units that have apparently arbitrary definitions: although the set of units forms a coherent system, the definitions do not. The proposal before the CIPM seeks to remedy this by using the quantities of nature as the basis for deriving the base units. This will mean, amongst other things, that the prototype kilogram will cease to be used as the definition of the kilogram. The second and the metre are already defined in such a manner. The basic structure of SI was developed over a period of about 170 years, and since 1960 technological advances have made it possible to address weaknesses in SI. Originally, the metre was defined as one ten-millionth of the distance from the North Pole to the Equator; although such definitions were chosen so that nobody would own the units, they could not be measured with sufficient convenience or precision for practical use.
Instead, copies were created in the form of artefacts such as the mètre des Archives. In 1875, by which time the use of the metric system had become widespread in Europe and in Latin America, twenty industrially developed nations met for the Convention of the Metre. The bodies it established include the CGPM, a conference that meets every four to six years, and the CIPM, a committee of eighteen eminent scientists, each from a different country, nominated by the CGPM. The CIPM meets annually and is tasked to advise the CGPM; it has set up a number of sub-committees, each charged with a particular area of interest. One of these is the Consultative Committee for Units (CCU), which advises the CIPM on matters concerning units of measurement. Amongst other things, the first CGPM formally approved the use of 40 prototype metres and 40 prototype kilograms from the British firm Johnson Matthey as the standards mandated by the Convention of the Metre.
5.
Proton
–
A proton is a subatomic particle, symbol p or p+, with a positive electric charge of +1e (one elementary charge) and a mass slightly less than that of a neutron. Protons and neutrons, each with a mass of approximately one atomic mass unit, are collectively referred to as nucleons. One or more protons are present in the nucleus of every atom; the number of protons in the nucleus is the defining property of an element and is referred to as the atomic number. Since each element has a unique number of protons, each element has its own unique atomic number. The word proton is Greek for "first", and this name was given to the hydrogen nucleus by Ernest Rutherford in 1920. In previous years, Rutherford had discovered that the hydrogen nucleus could be extracted from the nuclei of nitrogen by atomic collisions. Protons were therefore a candidate to be a fundamental particle, and hence a building block of nitrogen and all other heavier atomic nuclei. In the modern Standard Model of particle physics, protons are hadrons and, like neutrons, are composed of quarks. Although protons were originally considered fundamental or elementary particles, they are now known to be composed of three valence quarks: two up quarks and one down quark. The rest masses of the quarks contribute only about 1% of a proton's mass; the remainder is due to quantum chromodynamics binding energy, which includes the kinetic energy of the quarks and the energy of the gluon fields that bind the quarks together. At sufficiently low temperatures, free protons will bind to electrons; however, the character of such bound protons does not change, and they remain protons. A fast proton moving through matter will slow by interactions with electrons and nuclei until it is captured by the electron cloud of an atom. The result is a protonated atom, which is a chemical compound of hydrogen. In a vacuum, when free electrons are present, a sufficiently slow proton may pick up a single free electron, becoming a neutral hydrogen atom. Such "free hydrogen atoms" tend to react chemically with many other types of atoms at sufficiently low energies.
When free hydrogen atoms react with each other, they form neutral hydrogen molecules (H2). Protons are spin-½ fermions and are composed of three quarks, making them baryons. Protons have an approximately exponentially decaying positive charge distribution with a mean square radius of about 0.8 fm. Protons and neutrons are both nucleons, which may be bound together by the nuclear force to form atomic nuclei. The nucleus of the most common isotope of the hydrogen atom is a lone proton.
6.
Planck constant
–
The Planck constant is a physical constant that is the quantum of action, central in quantum mechanics. The light quantum behaved in some respects as an electrically neutral particle; it was eventually called the photon. The Planck–Einstein relation connects the particulate photon energy E with its associated wave frequency f: E = hf. This energy is extremely small in terms of ordinarily perceived everyday objects. Since the frequency f, wavelength λ, and speed of light c are related by c = fλ, the relation can also be written E = hc/λ. With p denoting the linear momentum of a particle, the de Broglie wavelength λ of the particle is given by λ = h/p. In applications where it is natural to use the angular frequency, it is often useful to absorb a factor of 2π into the Planck constant. The resulting constant is called the reduced Planck constant or Dirac constant; it is equal to the Planck constant divided by 2π and is denoted ħ: ħ = h/(2π). The energy of a photon with angular frequency ω, where ω = 2πf, is given by E = ħω, while its linear momentum relates to p = ħk, where k is the angular wavenumber; this was confirmed by experiments soon afterwards. This holds throughout quantum theory, including electrodynamics. These two relations are the temporal and spatial component parts of the special relativistic expression using four-vectors: Pμ = (E/c, p) = ħKμ = ħ(ω/c, k). Classical statistical mechanics requires the existence of h. Eventually, following upon Planck's discovery, it was recognized that physical action cannot take on an arbitrary value; instead, it must be some integer multiple of a very small quantity, the quantum of action. This is the old quantum theory developed by Bohr and Sommerfeld, in which particle trajectories exist but are hidden. Thus there is no continuum of values of the action as classically defined. Related to this is the concept of energy quantization, which existed in old quantum theory and also exists in altered form in modern quantum physics.
Classical physics cannot explain either the quantization of energy or the lack of classical particle motion. In many cases, such as for monochromatic light or for atoms, quantization of energy also implies that only certain energy levels are allowed. The Planck constant has dimensions of physical action: energy multiplied by time, or equivalently momentum multiplied by distance. In SI units, the Planck constant is expressed in joule-seconds (J⋅s), or equivalently N⋅m⋅s or kg⋅m²⋅s⁻¹. The value of the Planck constant is h = 6.626070040×10⁻³⁴ J⋅s = 4.135667662×10⁻¹⁵ eV⋅s. The value of the reduced Planck constant is ħ = h/(2π) = 1.054571800×10⁻³⁴ J⋅s = 6.582119514×10⁻¹⁶ eV⋅s.
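As a numerical check on the relations in this section (E = hf, λ = h/p, ħ = h/2π), here is a minimal sketch; the 540 THz "green light" frequency is an illustrative choice, not a value from the article:

```python
import math

# CODATA 2014 values as quoted in the text.
H = 6.626070040e-34        # Planck constant, J*s
HBAR = H / (2 * math.pi)   # reduced Planck constant, J*s

def photon_energy(f: float) -> float:
    """Planck-Einstein relation E = h f, in joules."""
    return H * f

def de_broglie_wavelength(p: float) -> float:
    """de Broglie wavelength lambda = h / p, in metres."""
    return H / p

E_green = photon_energy(5.4e14)   # ~3.6e-19 J: tiny on everyday scales
# E = h f and E = hbar * omega agree, since omega = 2*pi*f:
same = abs(E_green - HBAR * 2 * math.pi * 5.4e14) < 1e-30
```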
7.
Wien's displacement law
–
Wien's displacement law states that the black-body radiation curve for different temperatures peaks at a wavelength inversely proportional to the temperature: λmax = b/T. The shift of that peak is a consequence of the Planck radiation law, which describes the spectral brightness of black-body radiation as a function of wavelength at any given temperature. Here b is a constant of proportionality called Wien's displacement constant, equal to 2.8977729×10⁻³ m⋅K, or more conveniently, to obtain the wavelength in micrometres, b ≈ 2900 μm·K. If one considers the peak of black-body emission per unit frequency or per proportional bandwidth, a different proportionality constant must be used; however, the form of the law remains the same: the peak wavelength is inversely proportional to temperature. Wien's displacement law may be referred to as "Wien's law", a term which is also used for the Wien approximation. One easily observes changes in the color of an incandescent light bulb as the temperature of its filament is varied by a light dimmer. As the light is dimmed and the filament temperature decreases, the distribution of color shifts toward longer wavelengths and the light appears redder, as well as dimmer. It is easy to calculate that a fire at 1500 K puts out peak radiation at about 2000 nm, and 98% of its radiation is at wavelengths beyond 1000 nm. Consequently, a campfire can keep one warm but is a poor source of visible light. The effective temperature of the Sun is 5778 K; using Wien's law, one finds a peak emission per nanometer at a wavelength of about 500 nm, in the green portion of the spectrum near the peak sensitivity of the human eye. On the other hand, in terms of power per unit optical frequency, the peak lies in the near infrared; in terms of power per percentage bandwidth, it is at about 635 nm, a red wavelength. Regardless of how one wants to plot the spectrum, about half of the Sun's radiation is at wavelengths shorter than 710 nm.
Of that, about 12% is at wavelengths shorter than 400 nm; it can be appreciated that a rather large share of the Sun's radiation falls in the fairly small visible spectrum. This preponderance of emission in the visible range, however, is not the case for most stars. The hot supergiant Rigel emits 60% of its light in the ultraviolet, while the cool supergiant Betelgeuse emits most of its light in the infrared. With both stars prominent in the constellation of Orion, one can easily appreciate the color difference between the blue-white Rigel and the red Betelgeuse. While few stars are as hot as Rigel, stars cooler than the Sun, or even as cool as Betelgeuse, are very commonplace. Mammals with a skin temperature of about 300 K emit peak radiation at around 10 μm in the far infrared; this is therefore the range of infrared wavelengths that pit viper snakes and passive infrared cameras must sense. When comparing the apparent color of lighting sources, it is customary to cite the color temperature. Note, counterintuitively, that the bluer light of higher color temperature is conventionally described as "cool" and the redder light of lower color temperature as "warm", opposite to the trend in physical temperature.
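Wien's law λmax = b/T can be applied directly to the temperatures discussed above (a 1500 K fire, the 5778 K Sun, 300 K mammal skin). A minimal sketch:

```python
B_WIEN = 2.8977729e-3  # Wien's displacement constant b, in m*K (value from the text)

def peak_wavelength_nm(T_kelvin: float) -> float:
    """Peak of the black-body curve per unit wavelength, lambda_max = b / T, in nm."""
    return B_WIEN / T_kelvin * 1e9

fire = peak_wavelength_nm(1500)   # ~1930 nm, infrared: warm but a poor light source
sun = peak_wavelength_nm(5778)    # ~501 nm, green, near peak eye sensitivity
skin = peak_wavelength_nm(300)    # ~9700 nm, i.e. roughly 10 micrometres
```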
8.
Boltzmann constant
–
The Boltzmann constant, which is named after Ludwig Boltzmann, is a physical constant relating the average kinetic energy of particles in a gas with the temperature of the gas. It is the gas constant R divided by the Avogadro constant NA. The Boltzmann constant has the dimension of energy divided by temperature, the same as entropy. The accepted value in SI units is 1.38064852×10⁻²³ J/K. The Boltzmann constant, k, is a bridge between macroscopic and microscopic physics. Introducing the Boltzmann constant transforms the ideal gas law into an alternative form, pV = NkT; for n = 1 mol, N is equal to the number of particles in one mole. Given a thermodynamic system at an absolute temperature T, the average thermal energy carried by each microscopic degree of freedom in the system is on the order of magnitude of kT/2. In classical statistical mechanics, this average is predicted to hold exactly for homogeneous ideal gases. Monatomic ideal gases possess three degrees of freedom per atom, corresponding to the three spatial directions, which means a thermal energy of (3/2)kT per atom. This corresponds very well with experimental data. The thermal energy can be used to calculate the root-mean-square speed of the atoms, which turns out to be inversely proportional to the square root of the atomic mass. The root-mean-square speeds found at room temperature accurately reflect this, ranging from about 1370 m/s for helium down to slower speeds for heavier gases. Kinetic theory gives the average pressure p for an ideal gas as p = (1/3)(N/V)m⟨v²⟩. Combination with the ideal gas law pV = NkT shows that the average translational kinetic energy is (1/2)m⟨v²⟩ = (3/2)kT. Considering that the translational motion velocity vector v has three degrees of freedom gives the energy per degree of freedom as one third of that, i.e. kT/2. Diatomic gases, for example, possess a total of six degrees of freedom per molecule that are related to atomic motion.
Again, it is the energy-like quantity kT that takes central importance; consequences of this include the Arrhenius equation in chemical kinetics. In statistical mechanics, the entropy is given by S = k ln W, which relates the macroscopic entropy to W, the number of microscopic states (microstates) consistent with the system's macroscopic state. Such is its importance that it is inscribed on Boltzmann's tombstone. The constant of proportionality k serves to make the statistical mechanical entropy equal to the classical thermodynamic entropy of Clausius, ΔS = ∫ dQ/T. One could instead choose a rescaled dimensionless entropy such that S′ = ln W and ΔS′ = ∫ dQ/(kT). This is a natural form, and this rescaled entropy exactly corresponds to Shannon's subsequent information entropy. The characteristic energy kT is thus the energy required to increase the rescaled entropy by one nat. The iconic terse form of the equation S = k ln W on Boltzmann's tombstone is in fact due to Planck, not Boltzmann.
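The monatomic-gas relations above, (1/2)m⟨v²⟩ = (3/2)kT and the resulting root-mean-square speed, can be sketched as follows; the helium atomic mass and the 300 K temperature are illustrative inputs:

```python
import math

K_B = 1.38064852e-23  # Boltzmann constant, J/K (value from the text)

def mean_translational_energy(T: float) -> float:
    """Average translational kinetic energy per atom, (3/2) k T, in joules."""
    return 1.5 * K_B * T

def v_rms(T: float, m: float) -> float:
    """Root-mean-square speed from (1/2) m <v^2> = (3/2) k T: sqrt(3 k T / m)."""
    return math.sqrt(3 * K_B * T / m)

# Helium (m ~ 6.646e-27 kg) at 300 K comes out near the ~1370 m/s quoted above;
# heavier atoms are slower in proportion to 1/sqrt(m).
he_speed = v_rms(300, 6.646e-27)
```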
9.
Avogadro constant
–
In chemistry and physics, the Avogadro constant is the number of constituent particles, usually atoms or molecules, that are contained in the amount of substance given by one mole. Thus, it is the proportionality factor that relates the molar mass of a compound to the mass of a sample. Avogadro's constant, often designated with the symbol NA or L, has the value 6.022140857×10²³ mol⁻¹ in the International System of Units; this number is also known as the Loschmidt constant in German literature. The constant was later redefined as the number of atoms in 12 grams of the isotope carbon-12. For instance, to a first approximation, 1 gram of hydrogen, having atomic number 1, has 6.022×10²³ hydrogen atoms. Similarly, 12 grams of 12C, with mass number 12, has the same number of carbon atoms. Avogadro's number is a dimensionless quantity, and has the same numerical value as the Avogadro constant given in base units; in contrast, the Avogadro constant has the dimension of reciprocal amount of substance. The Avogadro constant can also be expressed as 0.602214… mL⋅mol⁻¹⋅Å⁻³, which can be used to convert from volume per molecule in cubic ångströms to molar volume in millilitres per mole. Revisions in the base set of SI units necessitated redefinitions of the concepts of chemical quantity, and Avogadro's number, with its original definition, was deprecated in favor of the Avogadro constant. The French physicist Jean Perrin in 1909 proposed naming the constant in honor of Avogadro; Perrin won the 1926 Nobel Prize in Physics, largely for his work in determining the Avogadro constant by several different methods. Accurate determinations of Avogadro's number require the measurement of a single quantity on both the atomic and macroscopic scales using the same unit of measurement.
This became possible for the first time when American physicist Robert Millikan measured the charge on an electron in 1910. The electric charge per mole of electrons is a constant called the Faraday constant, and had been known since 1834, when Michael Faraday published his works on electrolysis. By dividing the charge on a mole of electrons by the charge on a single electron, the value of Avogadro's number is obtained. Since 1910, newer calculations have more accurately determined the values for the Faraday constant and the elementary charge. Perrin originally proposed the name "Avogadro's number" to refer to the number of molecules in one gram-molecule of oxygen. With the recognition of amount of substance as an independent quantity, the Avogadro constant was no longer a pure number, but had a unit of measurement, the reciprocal mole. While it is rare to use units of amount of substance other than the mole, the Avogadro constant can also be expressed in units such as the pound mole: NA = 2.73159734×10²⁶ (lb-mol)⁻¹ = 1.707248434×10²⁵ (oz-mol)⁻¹. Avogadro's constant is a scaling factor between macroscopic and microscopic observations of nature; as such, it provides the relationship between other physical constants and properties. The Avogadro constant also enters into the definition of the atomic mass unit. The earliest accurate method to measure the value of the Avogadro constant was based on coulometry.
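The coulometric route described above, NA = F/e, can be checked numerically. The Faraday constant value below is the CODATA 2014 figure, supplied here for illustration:

```python
FARADAY = 96485.33289        # Faraday constant F, C/mol (CODATA 2014, assumed input)
E_CHARGE = 1.6021766208e-19  # elementary charge e, C

# Dividing the charge on a mole of electrons by the charge on one electron
# yields Avogadro's number, as in the electrolysis-based determinations.
N_A = FARADAY / E_CHARGE     # ~6.0221e23 per mole
```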
10.
Electronvolt
–
In physics, the electronvolt (eV) is a unit of energy equal to approximately 1.6×10⁻¹⁹ joules. By definition, it is the amount of energy gained by the charge of a single electron moving across an electric potential difference of one volt. Thus it is 1 volt (1 joule per coulomb) multiplied by the elementary charge; therefore, one electronvolt is equal to 1.6021766208×10⁻¹⁹ J. The electronvolt is not an SI unit, and its definition is empirical; like the elementary charge on which it is based, it is not an independent quantity but is equal to 1 J/C × √(2hα/μ₀c₀). It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics. It is commonly used with the metric prefixes milli-, kilo-, mega-, giga- and so on. In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for "billion electronvolts"; it is equivalent to the GeV. By mass–energy equivalence, the electronvolt is also a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c², where c is the speed of light in vacuum, or even simply in terms of eV as a unit of mass. The mass equivalent of 1 eV/c² is 1 eV/c² = (1.6021766208×10⁻¹⁹ C)·(1 V)/c² = 1.783×10⁻³⁶ kg. For example, an electron and a positron, each with a mass of 0.511 MeV/c², can annihilate to yield 1.022 MeV of energy; the proton has a mass of 0.938 GeV/c². In general, the masses of all hadrons are of the order of 1 GeV/c². The unified atomic mass unit, 1 gram divided by Avogadro's number, is almost the mass of a hydrogen atom, which is mostly the mass of the proton. To convert to megaelectronvolts, use the formula 1 u = 931.4941 MeV/c² = 0.9314941 GeV/c². In high-energy physics, the electronvolt is often used as a unit of momentum. A potential difference of 1 volt causes an electron to gain a definite amount of energy, and this gives rise to the usage of eV as a unit of momentum, for the energy supplied results in acceleration of the particle. The dimensions of momentum units are LMT⁻¹; the dimensions of energy units are L²MT⁻².
Then, dividing a unit of energy by a fundamental constant that has units of velocity yields a unit of momentum. In the field of particle physics, the fundamental velocity unit is the speed of light in vacuum c; thus, dividing energy in eV by the speed of light gives momentum in units of eV/c. The fundamental velocity constant c is often dropped from the units of momentum by way of defining units of length such that the value of c is unity.
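The conversions in this section (eV to joules, eV/c² to kilograms, eV/c to SI momentum) can be sketched as:

```python
EV = 1.6021766208e-19   # 1 eV in joules (value from the text)
C = 299_792_458         # speed of light, m/s

def ev_to_joules(energy_ev: float) -> float:
    """Energy in joules from a value in electronvolts."""
    return energy_ev * EV

def ev_per_c2_to_kg(mass_ev: float) -> float:
    """Mass equivalent m = E / c^2 of an energy given in eV."""
    return mass_ev * EV / C**2

def ev_per_c_to_si(p_ev: float) -> float:
    """Momentum in SI units (kg*m/s) from a value quoted in eV/c."""
    return p_ev * EV / C

m_1ev = ev_per_c2_to_kg(1.0)     # ~1.783e-36 kg, as stated above
m_e = ev_per_c2_to_kg(0.511e6)   # electron, ~9.1e-31 kg
```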
11.
Gravitational constant
–
The gravitational constant, denoted G, is the constant of proportionality in Newton's law of universal gravitation; its measured value is 6.67408×10⁻¹¹ m³⋅kg⁻¹⋅s⁻². Colloquially, the gravitational constant is also called "Big G", for disambiguation with "small g", which is the local gravitational field of Earth. The two quantities are related by g = GME/rE², where ME is the mass of the Earth and rE its radius. In the general theory of relativity, G appears in the Einstein field equations, Rμν − (1/2)Rgμν = (8πG/c⁴)Tμν; the scaled gravitational constant κ = 8πG/c⁴ ≈ 2.071×10⁻⁴³ s²·m⁻¹·kg⁻¹ is also known as Einstein's constant. The gravitational constant is a physical constant that is difficult to measure with high accuracy, because the gravitational force is extremely weak compared with the other fundamental forces. In SI units, the 2014 CODATA-recommended value of the constant is 6.67408×10⁻¹¹ m³⋅kg⁻¹⋅s⁻². In cgs units, G can be written as G ≈ 6.674×10⁻⁸ cm³·g⁻¹·s⁻²; in Planck units, G has the numerical value of 1. In astrophysics, it is convenient to measure distances in parsecs and velocities in kilometres per second; in these units, the gravitational constant is G ≈ 4.302×10⁻³ pc⋅M⊙⁻¹⋅(km/s)². In orbital mechanics, the period P of an object in circular orbit around a spherical object obeys GM = 3πV/P², where V is the volume inside the radius of the orbit. It follows that P² = (3π/G)(V/M) ≈ 10.896 hr²·g·cm⁻³ · (V/M). This way of expressing G shows the relationship between the average density of a planet and the period of a satellite orbiting just above its surface. Cavendish measured G implicitly, using a torsion balance invented by the geologist Rev. John Michell. He used a horizontal torsion beam with lead balls whose inertia he could tell by timing the beam's oscillation; their faint attraction to other balls placed alongside the beam was detectable by the deflection it caused. Cavendish's aim was not actually to measure the gravitational constant, but rather to measure the Earth's density relative to water, through the precise knowledge of the gravitational interaction.
In modern units, the density that Cavendish calculated implied a value for G of 6.754×10⁻¹¹ m³·kg⁻¹·s⁻². The accuracy of the measured value of G has increased only modestly since the original Cavendish experiment: G is quite difficult to measure because gravity is much weaker than the other fundamental forces. Published values of G have varied rather broadly, and some recent measurements of high precision are, in fact, mutually exclusive. This led to the 2010 CODATA value by NIST having a 20% larger uncertainty than in 2006; for the 2014 update, CODATA reduced the uncertainty to less than half the 2010 value.
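Two of the relations above, g = GME/rE² and the skim-orbit period P² = 3π/(Gρ) (obtained by substituting M = ρV into GM = 3πV/P²), can be evaluated numerically. The Earth mass, radius and mean density below are standard figures supplied for illustration:

```python
import math

G = 6.67408e-11  # gravitational constant, m^3 kg^-1 s^-2 (value from the text)

def surface_gravity(mass_kg: float, radius_m: float) -> float:
    """Local gravitational field g = G M / r^2, in m/s^2."""
    return G * mass_kg / radius_m**2

def skim_orbit_period_s(density_kg_m3: float) -> float:
    """Period of a circular orbit just above the surface: P = sqrt(3 pi / (G rho))."""
    return math.sqrt(3 * math.pi / (G * density_kg_m3))

g_earth = surface_gravity(5.972e24, 6.371e6)  # ~9.8 m/s^2 ("small g")
p_earth = skim_orbit_period_s(5514)           # ~5060 s, about 1.4 hours
```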
12.
Fine-structure constant
–
The fine-structure constant, commonly denoted α, is related to the elementary charge e, which characterizes the strength of the coupling of an elementary charged particle with the electromagnetic field, by the formula 4πε₀ħcα = e². Being a dimensionless quantity, it has the same numerical value of about 1⁄137 in all systems of units. Arnold Sommerfeld introduced the fine-structure constant in 1916. The definition reflects the relationship between α and the elementary charge e, which equals √(4παε₀ħc). In electrostatic cgs units, the unit of electric charge, the statcoulomb, is defined so that the Coulomb constant, ke, or the permittivity factor, 4πε₀, is 1; the constant then takes the expression α = e²/(ħc), as commonly found in older physics literature. In natural units, commonly used in high energy physics, where ε₀ = c = ħ = 1, the value of the fine-structure constant is α = e²/(4π). As such, the constant is just another, albeit dimensionless, quantity determining the elementary charge. The 2014 CODATA recommended value of α is α = e²/(4πε₀ħc) = 0.0072973525664, with a relative standard uncertainty of 0.32 parts per billion. For reasons of convenience, historically the value of the reciprocal of the constant is often specified; the 2014 CODATA recommended value is α⁻¹ = 137.035999139. The theory of QED predicts a relationship between the dimensionless magnetic moment of the electron and the fine-structure constant α; a measurement along these lines gives α⁻¹ = 137.035999173. This measurement of α has a precision of 0.25 parts per billion, and this value and uncertainty are about the same as the latest experimental results. The fine-structure constant, α, has several physical interpretations. α is: the square of the ratio of the elementary charge to the Planck charge, α = (e/qP)²; and the ratio of the velocity of the electron in the first circular orbit of the Bohr model of the atom to the speed of light in vacuum, which is Sommerfeld's original physical interpretation.
The square of α is also the ratio between the Hartree energy and the electron rest energy. The theory does not predict its value; therefore, α must be determined experimentally. In fact, α is one of the roughly 20 empirical parameters in the Standard Model of particle physics, whose values are not determined within the Standard Model. In the electroweak theory unifying the weak interaction with electromagnetism, α is absorbed into two other coupling constants associated with the electroweak gauge fields; in this theory, the electromagnetic interaction is treated as a mixture of interactions associated with the electroweak fields. The strength of the electromagnetic interaction varies with the energy scale involved. For normal-incident light on graphene in vacuum, the absorption would be given by πα/(1 + πα/2)², or 2.24%, and the transmission by 1/(1 + πα/2)², or 97.75%.
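The defining formula α = e²/(4πε₀ħc) and the graphene expressions above can be evaluated directly; the value of ε₀ is supplied here as an extra input for the computation:

```python
import math

E_CHARGE = 1.6021766208e-19  # elementary charge, C
HBAR = 1.054571800e-34       # reduced Planck constant, J*s
C = 299_792_458              # speed of light, m/s
EPS0 = 8.854187817e-12       # vacuum permittivity, F/m (assumed input)

# alpha = e^2 / (4 pi eps0 hbar c): dimensionless, ~1/137.
alpha = E_CHARGE**2 / (4 * math.pi * EPS0 * HBAR * C)
inv_alpha = 1 / alpha        # ~137.036, the familiar reciprocal

# Normal-incidence graphene optics, as quoted above:
absorption = math.pi * alpha / (1 + math.pi * alpha / 2)**2   # ~2.24%
transmission = 1 / (1 + math.pi * alpha / 2)**2               # ~97.75%
```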