A proton is a subatomic particle, symbol p or p+, with a positive electric charge of +1e (one elementary charge) and a mass slightly less than that of a neutron. Protons and neutrons, each with a mass of approximately one atomic mass unit, are collectively referred to as "nucleons". One or more protons are present in the nucleus of every atom; the number of protons in the nucleus is the defining property of an element and is referred to as the atomic number. Since each element has a unique number of protons, each element has its own unique atomic number. The word proton is Greek for "first", and the name was given to the hydrogen nucleus by Ernest Rutherford in 1920. In earlier years, Rutherford had discovered that the hydrogen nucleus could be extracted from the nuclei of nitrogen by atomic collisions. Protons were therefore a candidate to be a fundamental particle, and hence a building block of nitrogen and all other heavier atomic nuclei. In the modern Standard Model of particle physics, protons are hadrons, and like neutrons, the other nucleon, they are composed of three quarks.
Although protons were originally considered fundamental or elementary particles, they are now known to be composed of three valence quarks: two up quarks of charge +2/3e and one down quark of charge −1/3e. The rest masses of the quarks contribute only about 1% of a proton's mass, however; the remainder of a proton's mass is due to quantum chromodynamics binding energy, which includes the kinetic energy of the quarks and the energy of the gluon fields that bind them together. Because protons are not fundamental particles, they possess a physical size, though not a definite one. At sufficiently low temperatures, free protons will bind to electrons; however, the character of such bound protons does not change, and they remain protons. A fast proton moving through matter will slow by interactions with electrons and nuclei until it is captured by the electron cloud of an atom; the result is a protonated atom, a chemical compound of hydrogen. In vacuum, when free electrons are present, a sufficiently slow proton may pick up a single free electron, becoming a neutral hydrogen atom, which is chemically a free radical.
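As a quick arithmetic sketch of the figures above: the valence-quark charges sum to the proton's +1e, and the quark rest masses come to roughly 1% of the proton's mass. The quark masses used here are illustrative approximate current-quark values (an assumption for this example), not precise figures.

```python
# Hedged sketch: charge and approximate rest-mass bookkeeping for the proton.
UP_CHARGE, DOWN_CHARGE = 2 / 3, -1 / 3    # in units of the elementary charge e
UP_MASS_MEV, DOWN_MASS_MEV = 2.2, 4.7     # illustrative current-quark masses, MeV/c^2
PROTON_MASS_MEV = 938.272

# Two up quarks and one down quark give the proton's +1e charge.
charge = 2 * UP_CHARGE + 1 * DOWN_CHARGE
print(charge)  # 1.0

# The valence-quark rest masses account for only ~1% of the proton's mass;
# the remainder is QCD binding energy (quark kinetic energy plus gluon fields).
quark_mass_fraction = (2 * UP_MASS_MEV + DOWN_MASS_MEV) / PROTON_MASS_MEV
print(f"{quark_mass_fraction:.1%}")  # roughly 1%
```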
Such "free hydrogen atoms" tend to react chemically with many other types of atoms at sufficiently low energies. When free hydrogen atoms react with each other, they form neutral hydrogen molecules, which are the most common molecular component of molecular clouds in interstellar space. Protons are composed of three valence quarks, making them baryons; the two up quarks and one down quark of a proton are held together by the strong force, mediated by gluons. A modern perspective has a proton composed of the valence quarks, the gluons, and transitory pairs of sea quarks. Protons have a positive charge distribution that decays approximately exponentially, with a root-mean-square charge radius of about 0.8 fm. Protons and neutrons are both nucleons, which may be bound together by the nuclear force to form atomic nuclei; the nucleus of the most common isotope of the hydrogen atom is a lone proton. The nuclei of the heavy hydrogen isotopes deuterium and tritium contain one proton bound to one and two neutrons, respectively. All other types of atomic nuclei are composed of two or more protons and various numbers of neutrons.
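The exponentially decaying charge distribution can be illustrated numerically: for a density ρ(r) ∝ exp(−r/a), the mean square radius works out to 12a², so an rms radius of about 0.8 fm corresponds to a decay length a of roughly 0.23 fm. The sketch below (the scale a is an assumption chosen to reproduce 0.8 fm) checks this by direct integration.

```python
# Illustrative check (not a fit to data): for rho(r) ~ exp(-r/a),
# <r^2> = 12 a^2, so the rms charge radius is sqrt(12)*a.
import math

def mean_square_radius(a, n=200000, rmax_factor=40.0):
    """Numerically integrate <r^2> for rho(r) ∝ exp(-r/a) over 4*pi*r^2 dr."""
    dr = a * rmax_factor / n
    num = den = 0.0
    for i in range(1, n + 1):
        r = i * dr
        w = math.exp(-r / a) * r * r   # rho * r^2 (the 4*pi cancels in the ratio)
        num += w * r * r * dr
        den += w * dr
    return num / den

a = 0.8 / math.sqrt(12)                # fm, chosen so the rms comes out ~0.8 fm
msr = mean_square_radius(a)
print(math.sqrt(msr))                  # ~0.8 fm
```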
The concept of a hydrogen-like particle as a constituent of other atoms was developed over a long period. As early as 1815, William Prout proposed that all atoms are composed of hydrogen atoms, based on a simplistic interpretation of early values of atomic weights; this hypothesis was disproved when more accurate values were measured. In 1886, Eugen Goldstein discovered canal rays and showed that they were positively charged particles produced from gases. However, since particles from different gases had different values of the charge-to-mass ratio, they could not be identified with a single particle, unlike the negative electrons discovered by J. J. Thomson. In 1898, Wilhelm Wien identified the hydrogen ion as the particle with the highest charge-to-mass ratio in ionized gases. Following the discovery of the atomic nucleus by Ernest Rutherford in 1911, Antonius van den Broek proposed that the place of each element in the periodic table is equal to its nuclear charge; this was confirmed experimentally by Henry Moseley in 1913 using X-ray spectra.
In 1917, Rutherford proved that the hydrogen nucleus is present in other nuclei, a result described as the discovery of protons. Rutherford had earlier learned to produce hydrogen nuclei as a type of radiation resulting from the impact of alpha particles on nitrogen gas, and to recognize them by their unique penetration signature in air and their appearance in scintillation detectors. These experiments began when Rutherford noticed that, when alpha particles were shot into air, his scintillation detectors showed the signatures of typical hydrogen nuclei as a product. After experimentation, Rutherford traced the reaction to the nitrogen in air and found that when alpha particles were shot into pure nitrogen gas, the effect was larger. Rutherford determined that this hydrogen could have come only from the nitrogen, and therefore nitrogen must contain hydrogen nuclei. One hydrogen nucleus was being knocked off by the impact of the alpha particle, producing oxygen-17 in the process: ¹⁴N + α → ¹⁷O + p.
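The reaction can be verified with simple bookkeeping: mass number and charge must each balance across ¹⁴N + α → ¹⁷O + p. A minimal sketch:

```python
# Bookkeeping check of Rutherford's reaction 14N + alpha -> 17O + p:
# both mass number (A) and atomic number / charge (Z) must balance.
reactants = [("N-14", 14, 7), ("He-4", 4, 2)]   # (label, A, Z)
products  = [("O-17", 17, 8), ("p",    1, 1)]

A_in  = sum(A for _, A, _ in reactants)
Z_in  = sum(Z for _, _, Z in reactants)
A_out = sum(A for _, A, _ in products)
Z_out = sum(Z for _, _, Z in products)
print(A_in == A_out and Z_in == Z_out)  # True: 18 == 18 and 9 == 9
```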
Renormalization is a collection of techniques in quantum field theory, the statistical mechanics of fields, and the theory of self-similar geometric structures that are used to treat infinities arising in calculated quantities by altering the values of these quantities to compensate for the effects of their self-interactions. Even if no infinities arose in loop diagrams in quantum field theory, it could be shown that renormalization of the mass and fields appearing in the original Lagrangian would still be necessary. For example, an electron theory may begin by postulating an electron with an initial mass and charge. In quantum field theory, a cloud of virtual particles, such as photons and others, surrounds and interacts with the initial electron. Accounting for the interactions of the surrounding particles shows that the electron-system behaves as if it had a different mass and charge than originally postulated. Renormalization, in this example, mathematically replaces the postulated mass and charge of the electron with the experimentally observed mass and charge.
Mathematics and experiments show that positrons, and more massive particles like protons, exhibit exactly the same observed charge as the electron, even in the presence of much stronger interactions and more intense clouds of virtual particles. Renormalization specifies relationships between parameters in the theory when the parameters describing large distance scales differ from the parameters describing small distance scales. As an analogy: in high-energy particle accelerators like the CERN Large Hadron Collider, the concept named pileup occurs when undesirable proton-proton collisions interfere with data collection for simultaneous, nearby desirable measurements. Physically, the pileup of contributions from an infinity of scales involved in a problem may result in further infinities. When describing space-time as a continuum, certain statistical and quantum mechanical constructions are not well defined; to define them, or make them unambiguous, a continuum limit must carefully remove the "construction scaffolding" of lattices at various scales.
Renormalization procedures are based on the requirement that certain physical quantities equal their observed values. That is, the observed values anchor the theory to experiment: parameters that cannot be computed from first principles are fixed empirically, which marks areas of quantum field theory that still await deeper derivation from theoretical bases. Renormalization was first developed in quantum electrodynamics to make sense of infinite integrals in perturbation theory. Initially viewed as a suspect provisional procedure even by some of its originators, renormalization eventually was embraced as an important and self-consistent actual mechanism of scale physics in several fields of physics and mathematics. Today, the point of view has shifted: on the basis of the breakthrough renormalization group insights of Nikolay Bogolyubov and Kenneth Wilson, the focus is on variation of physical quantities across contiguous scales, while distant scales are related to each other through "effective" descriptions. All scales are linked in a broadly systematic way, and the actual physics pertinent to each is extracted with the specific computational techniques appropriate for it.
Wilson clarified which variables of a system are crucial and which are redundant. Renormalization is distinct from regularization, another technique for controlling infinities by assuming the existence of new unknown physics at new scales. The problem of infinities first arose in the classical electrodynamics of point particles in the 19th and early 20th century. The mass of a charged particle should include the mass-energy in its electrostatic field. Assume that the particle is a charged spherical shell of radius r_e; the mass-energy in the field is

\[ m_\text{em} = \int \tfrac{1}{2} E^2 \, dV = \int_{r_e}^{\infty} \tfrac{1}{2} \left( \frac{q}{4\pi r^2} \right)^2 4\pi r^2 \, dr = \frac{q^2}{8\pi r_e}, \]

which becomes infinite as r_e → 0. This implies that the point particle would have infinite inertia, making it unable to be accelerated. Incidentally, the value of r_e that makes m_em equal to the electron mass is called the classical electron radius, which turns out to be

\[ r_e = \frac{e^2}{4\pi\varepsilon_0 m_e c^2} = \frac{\alpha \hbar}{m_e c} \approx 2.8 \times 10^{-15} \ \text{m}, \]

where α ≈ 1/137 is the fine-structure constant and ħ/(m_e c) is the reduced Compton wavelength of the electron.
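The closing formula can be checked numerically. The sketch below, using CODATA-style constants, evaluates the classical electron radius and the fine-structure constant from SI values and confirms the identity r_e = αħ/(m_e c):

```python
# Sketch: classical electron radius r_e = e^2/(4*pi*eps0*m_e*c^2) and the
# fine-structure constant alpha = e^2/(4*pi*eps0*hbar*c), from SI constants.
import math

e    = 1.602176634e-19      # C
eps0 = 8.8541878128e-12     # F/m
m_e  = 9.1093837015e-31     # kg
c    = 299792458.0          # m/s
hbar = 1.054571817e-34      # J*s

r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(r_e)        # ~2.82e-15 m
print(1 / alpha)  # ~137.036

# r_e also equals alpha times the reduced Compton wavelength hbar/(m_e*c).
assert abs(r_e - alpha * hbar / (m_e * c)) / r_e < 1e-9
```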
In physics, the fine-structure constant, also known as Sommerfeld's constant and commonly denoted by α, is a dimensionless physical constant characterizing the strength of the electromagnetic interaction between elementary charged particles. It is related to the elementary charge e, which characterizes the strength of the coupling of an elementary charged particle with the electromagnetic field, by the formula 4πε0ħcα = e². Being a dimensionless quantity, it has the same numerical value, approximately 1/137, in all systems of units. Some equivalent definitions of α in terms of other fundamental physical constants are

\[ \alpha = \frac{1}{4\pi\varepsilon_0} \frac{e^2}{\hbar c} = \frac{\mu_0}{4\pi} \frac{e^2 c}{\hbar} = \frac{k_e e^2}{\hbar c} = \frac{c \mu_0}{2 R_K} = \frac{e^2}{4\pi} \frac{Z_0}{\hbar}, \]

where e is the elementary charge, ħ is the reduced Planck constant, c is the speed of light, ε0 is the electric constant, µ0 is the magnetic constant, ke is the Coulomb constant, RK is the von Klitzing constant, and Z0 is the impedance of free space. The definition reflects the relationship between α and the permeability of free space µ0, which equals µ0 = 2hα/(ce²). In the 2019 redefinition of SI base units, µ0 took the value 4π × 1.00000000082×10⁻⁷ H⋅m⁻¹, based upon more accurate measurements of the fine-structure constant. In electrostatic cgs units, the unit of electric charge, the statcoulomb, is defined so that the Coulomb constant, ke, or the permittivity factor, 4πε0, is 1 and dimensionless.
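The stated relation µ0 = 2hα/(ce²) can be checked numerically. The sketch below uses the exact post-2019 values of h, e, and c together with a CODATA value of α (an assumption chosen for illustration):

```python
# Consistency check of mu0 = 2*h*alpha/(c*e^2); the result should be very
# close to the pre-2019 defined value 4*pi*1e-7 H/m.
import math

h     = 6.62607015e-34      # J*s (exact since 2019)
e     = 1.602176634e-19     # C (exact since 2019)
c     = 299792458.0         # m/s (exact)
alpha = 7.2973525693e-3     # measured; CODATA 2018 value, an assumption here

mu0 = 2 * h * alpha / (c * e**2)
print(mu0)  # close to 4*pi*1e-7 ≈ 1.2566e-6 H/m
```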
The expression of the fine-structure constant found in older physics literature is α = e²/(ħc). In natural units, used in high-energy physics, where ε0 = c = ħ = 1, the value of the fine-structure constant is α = e²/(4π); as such, the fine-structure constant is just another, albeit dimensionless, quantity determining the elementary charge: e = √(4πα) ≈ 0.30282212 in terms of such a natural unit of charge. In atomic units, the fine-structure constant is α = 1/c. The 2014 CODATA recommended value of α is α = e²/(4πε0ħc) = 0.0072973525664, with a relative standard uncertainty of 0.23 parts per billion. For reasons of convenience, the reciprocal of the fine-structure constant is often specified; the 2014 CODATA recommended value is α⁻¹ = 137.035999139. While the value of α can be estimated from the values of the constants appearing in any of its definitions, the theory of quantum electrodynamics provides a way to measure α directly using the quantum Hall effect or the anomalous magnetic moment of the electron.
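The natural-units relation e = √(4πα) and the reciprocal value can be reproduced directly from the CODATA figure quoted above:

```python
# Sketch of the natural-units relation e = sqrt(4*pi*alpha), using the
# CODATA 2014 value of alpha quoted in the text.
import math

alpha = 0.0072973525664        # CODATA 2014 recommended value
e_natural = math.sqrt(4 * math.pi * alpha)
print(e_natural)               # ~0.30282212
print(1 / alpha)               # ~137.035999139
```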
The theory of QED predicts a relationship between the dimensionless magnetic moment of the electron and the fine-structure constant α. The most precise value of α obtained experimentally is based on a measurement of g using a one-electron so-called "quantum cyclotron" apparatus, together with a calculation via the theory of QED that involved 12,672 tenth-order Feynman diagrams: α⁻¹ = 137.035999174. This measurement of α has a precision of 0.25 parts per billion; this value and its uncertainty are about the same as the latest experimental results. The fine-structure constant α has several physical interpretations. α is the square of the ratio of the elementary charge to the Planck charge, α = (e/qP)². It is also the ratio of two energies: the energy needed to overcome the electrostatic repulsion between two electrons a distance d apart, and the energy of a single photon of wavelength λ = 2πd:

\[ \alpha = \frac{e^2}{4\pi\varepsilon_0 d} \bigg/ \frac{hc}{\lambda} = \frac{e^2}{4\pi\varepsilon_0 d} \times \frac{2\pi d}{hc} = \frac{e^2}{4\pi\varepsilon_0 d} \times \frac{d}{\hbar c} = \frac{e^2}{4\pi\varepsilon_0 \hbar c}. \]
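The energy-ratio interpretation can be illustrated numerically: the separation d cancels, so the ratio comes out to α regardless of the distance chosen. A sketch:

```python
# Numerical illustration of the energy-ratio interpretation of alpha:
# Coulomb energy e^2/(4*pi*eps0*d) divided by the energy of a photon of
# wavelength 2*pi*d (i.e. hbar*c/d) equals alpha for any separation d.
import math

e, eps0 = 1.602176634e-19, 8.8541878128e-12
hbar, c = 1.054571817e-34, 299792458.0

def ratio(d):
    coulomb = e**2 / (4 * math.pi * eps0 * d)  # energy to bring two electrons to d
    photon  = hbar * c / d                     # photon with wavelength 2*pi*d
    return coulomb / photon

print(ratio(1e-10), ratio(1e-15))  # both ~0.0072973..., independent of d
```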
Hendrik Antoon Lorentz was a Dutch physicist who shared the 1902 Nobel Prize in Physics with Pieter Zeeman for the discovery and theoretical explanation of the Zeeman effect. He derived the transformation equations underpinning Albert Einstein's theory of special relativity. According to the biography published by the Nobel Foundation, "It may well be said that Lorentz was regarded by all theoretical physicists as the world's leading spirit, who completed what was left unfinished by his predecessors and prepared the ground for the fruitful reception of the new ideas based on the quantum theory." He received many honours and distinctions, including a term as chairman of the International Committee on Intellectual Cooperation, the forerunner of UNESCO, between 1925 and 1928. Hendrik Lorentz was born in Arnhem, Netherlands, the son of Gerrit Frederik Lorentz, a well-off nurseryman, and Geertruida van Ginkel. In 1862, after his mother's death, his father married Luberta Hupkes. Despite being raised as a Protestant, he was a freethinker in religious matters.
From 1866 to 1869, he attended the "Hogere Burger School" in Arnhem, a new type of public high school established by Johan Rudolph Thorbecke. His results in school were exemplary. In 1870, he passed the exams in classical languages, which were then required for admission to university. Lorentz studied physics and mathematics at Leiden University, where he was influenced by the teaching of astronomy professor Frederik Kaiser. After earning a bachelor's degree, he returned to Arnhem in 1871 to teach night school classes in mathematics, but he continued his studies in Leiden alongside his teaching position. In 1875, Lorentz earned a doctoral degree under Pieter Rijke with a thesis entitled "Over de theorie der terugkaatsing en breking van het licht" ("On the theory of reflection and refraction of light"), in which he refined the electromagnetic theory of James Clerk Maxwell. On 17 November 1877, at only 24 years of age, Hendrik Antoon Lorentz was appointed to the newly established chair in theoretical physics at the University of Leiden; the position had first been offered to Johan van der Waals, but he accepted a position at the Universiteit van Amsterdam.
On 25 January 1878, Lorentz delivered his inaugural lecture on "De moleculaire theoriën in de natuurkunde" ("The molecular theories in physics"). In 1881, he became a member of the Royal Netherlands Academy of Sciences. During his first twenty years in Leiden, Lorentz was primarily interested in the electromagnetic theory of electricity and light. After that, he extended his research to a much wider area while still focusing on theoretical physics. Lorentz made significant contributions to fields ranging from hydrodynamics to general relativity, but his most important contributions were in the areas of electromagnetism, the electron theory, and relativity. Lorentz theorized that atoms might consist of charged particles and suggested that the oscillations of these charged particles were the source of light. When a colleague and former student of Lorentz's, Pieter Zeeman, discovered the Zeeman effect in 1896, Lorentz supplied its theoretical interpretation. The experimental and theoretical work was honored with the Nobel Prize in Physics in 1902. Lorentz's name is now associated with the Lorentz–Lorenz formula, the Lorentz force, the Lorentzian distribution, and the Lorentz transformation.
In 1892 and 1895, Lorentz worked on describing electromagnetic phenomena in reference frames that move relative to the postulated luminiferous aether. He discovered that the transition from one reference frame to another could be simplified by using a new time variable that he called local time, which depended on universal time and the location under consideration. Although Lorentz did not give a detailed interpretation of the physical significance of local time, with it he could explain the aberration of light and the result of the Fizeau experiment. In 1900 and 1904, Henri Poincaré called local time Lorentz's "most ingenious idea" and illustrated it by showing that clocks in moving frames are synchronized by exchanging light signals that are assumed to travel at the same speed against and with the motion of the frame. In 1892, in an attempt to explain the Michelson–Morley experiment, Lorentz also proposed that moving bodies contract in the direction of motion. In 1899 and again in 1904, he added time dilation to his transformations and published what Poincaré in 1905 named the Lorentz transformations.
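The transformations named here can be sketched directly: a boost by velocity v mixes t and x, and the spacetime interval c²t² − x² is left unchanged. The event coordinates below are arbitrary illustrative values, not from the text.

```python
# Minimal sketch of a Lorentz boost (t, x) -> (t', x') and a check that the
# spacetime interval c^2*t^2 - x^2 is invariant under the transformation.
import math

C = 299792458.0  # speed of light, m/s

def boost(t, x, v):
    """Boost an event (t, x) to a frame moving at velocity v along x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C**2), gamma * (x - v * t)

t, x = 1.0e-6, 150.0                  # an arbitrary event: 1 microsecond, 150 m
tp, xp = boost(t, x, 0.6 * C)         # boost at 0.6c
interval  = (C * t) ** 2 - x ** 2
intervalp = (C * tp) ** 2 - xp ** 2
print(abs(interval - intervalp) / abs(interval))  # ~0: invariant up to rounding
```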
It was unknown to Lorentz that Joseph Larmor had used identical transformations to describe orbiting electrons in 1897. Larmor's and Lorentz's equations look somewhat dissimilar, but they are algebraically equivalent to those presented by Poincaré and Einstein in 1905. Lorentz's 1904 paper includes the covariant formulation of electrodynamics, in which electrodynamic phenomena in different reference frames are described by identical equations with well-defined transformation properties. The paper recognizes the significance of this formulation, namely that the outcomes of electrodynamic experiments do not depend on the relative motion of the reference frame. The 1904 paper also includes a detailed discussion of the increase of the inertial mass of moving objects, in an ultimately futile attempt to make momentum look like Newtonian momentum.
The Planck constant is a physical constant, the quantum of electromagnetic action, which relates the energy carried by a photon to its frequency. A photon's energy is equal to its frequency multiplied by the Planck constant. The Planck constant is of fundamental importance in quantum mechanics, and in metrology it is the basis for the definition of the kilogram. At the end of the 19th century, physicists were unable to explain why the observed spectrum of black-body radiation, which by then had been accurately measured, diverged at higher frequencies from that predicted by existing theories. In 1900, Max Planck empirically derived a formula for the observed spectrum. He assumed that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, E, proportional to the frequency of its associated electromagnetic wave. He was able to calculate the proportionality constant, h, from the experimental measurements, and that constant is named in his honor.
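The relation E = hν can be evaluated for a concrete case; the 500 nm wavelength below (green light) is an illustrative assumption:

```python
# Sketch of E = h*nu for a single photon of assumed wavelength 500 nm.
h = 6.62607015e-34   # Planck constant, J*s
c = 299792458.0      # speed of light, m/s

wavelength = 500e-9            # m (illustrative choice: green light)
nu = c / wavelength            # frequency, ~6e14 Hz
E = h * nu                     # photon energy
print(E)  # ~3.97e-19 J (about 2.5 eV)
```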
In 1905, the value E was associated by Albert Einstein with a "quantum" or minimal element of the energy of the electromagnetic wave itself. The light quantum behaved in some respects as an electrically neutral particle, as opposed to an electromagnetic wave; it was later called a photon. Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta". Since energy and mass are equivalent, the Planck constant also relates mass to frequency. By 2017, the Planck constant had been measured with sufficient accuracy in terms of the SI base units that it was central to replacing the metal cylinder, called the International Prototype of the Kilogram, that had defined the kilogram since 1889. The new definition was unanimously approved at the General Conference on Weights and Measures on 16 November 2018 as part of the 2019 redefinition of SI base units. For this new definition of the kilogram, the Planck constant, as defined by the ISO standard, was set to 6.62607015×10⁻³⁴ J⋅s exactly.
The kilogram was the last SI base unit to be redefined in terms of a fundamental physical property, replacing a physical artefact. In the last years of the 19th century, Max Planck was investigating the problem of black-body radiation first posed by Kirchhoff some 40 years earlier: every physical body continuously emits electromagnetic radiation. At low frequencies, Planck's law tends to the Rayleigh–Jeans law, while in the limit of high frequencies it tends to the Wien approximation, but there was no overall expression or explanation for the shape of the observed emission spectrum. Approaching this problem, Planck hypothesized that the equations of motion for light describe a set of harmonic oscillators, one for each possible frequency. He examined how the entropy of the oscillators varied with the temperature of the body, trying to match Wien's law, and was able to derive an approximate mathematical function for the black-body spectrum. To create Planck's law, which correctly predicts blackbody emissions by fitting the observed curves, he multiplied the classical expression by a factor that involves a constant, h, in both the numerator and the denominator, which subsequently became known as the Planck constant.
The spectral radiance of a body, Bν, describes the amount of energy it emits at different radiation frequencies. It is the power emitted per unit area of the body, per unit solid angle of emission, per unit frequency. Planck showed that the spectral radiance of a body for frequency ν at absolute temperature T is given by

\[ B_\nu = \frac{2 h \nu^3}{c^2} \frac{1}{e^{h\nu / k_B T} - 1}, \]

where kB is the Boltzmann constant, h is the Planck constant, and c is the speed of light in the medium, whether material or vacuum. The spectral radiance can also be expressed per unit wavelength λ instead of per unit frequency. In this case, it is given by

\[ B_\lambda = \frac{2 h c^2}{\lambda^5} \frac{1}{e^{hc / \lambda k_B T} - 1}, \]

showing how radiated energy emitted at shorter wavelengths increases more rapidly with temperature than energy emitted at longer wavelengths. The law may be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation. The SI units of Bν are W·sr⁻¹·m⁻²·Hz⁻¹, while those of Bλ are W·sr⁻¹·m⁻³.
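The frequency form of the law can be sketched in code, along with a check of the low-frequency (Rayleigh–Jeans) limit B_ν ≈ 2ν²k_BT/c². The temperature chosen below is an illustrative assumption, roughly that of the Sun's surface.

```python
# Sketch of Planck's spectral radiance B_nu and a check that it reduces to
# the Rayleigh-Jeans law 2*nu^2*kB*T/c^2 when h*nu << kB*T.
import math

H  = 6.62607015e-34    # Planck constant, J*s
C  = 299792458.0       # speed of light, m/s
KB = 1.380649e-23      # Boltzmann constant, J/K

def planck_radiance(nu, T):
    """Spectral radiance B_nu in W*sr^-1*m^-2*Hz^-1."""
    return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (KB * T))

def rayleigh_jeans(nu, T):
    """Low-frequency classical limit of B_nu."""
    return 2 * nu**2 * KB * T / C**2

T = 5800.0             # K; illustrative assumption (~solar surface temperature)
low_nu = 1e9           # 1 GHz, where h*nu << kB*T
print(planck_radiance(low_nu, T) / rayleigh_jeans(low_nu, T))  # ~1.0
```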
Planck soon realized that his solution was not unique. There were several different solutions, each of which gave a different value for the entropy of the oscillators. To save his theory, Planck resorted to using the then-controversial theory of statistical mechanics, which he described as "an act of despair … I was ready to sacrifice any of my previous convictions about physics." One of his new boundary conditions was to interpret UN [the vibrational energy of N oscillators] not as a continuous, infinitely divisible quantity, but as a discrete quantity composed of an integral number of finite equal parts.
Cambridge University Press
Cambridge University Press is the publishing business of the University of Cambridge. Granted letters patent by King Henry VIII in 1534, it is the world's oldest publishing house and the second-largest university press in the world, and it holds letters patent as the Queen's Printer. The press's mission is "to further the University's mission by disseminating knowledge in the pursuit of education and research at the highest international levels of excellence". Cambridge University Press is a department of the University of Cambridge and is both an academic and educational publisher. With a global sales presence, publishing hubs, and offices in more than 40 countries, it publishes over 50,000 titles by authors from over 100 countries. Its publishing includes academic journals, reference works, and English language teaching and learning publications. Cambridge University Press is a charitable enterprise that transfers part of its annual surplus back to the university.
It originated from the letters patent granted to the University of Cambridge by Henry VIII in 1534, and it has been producing books continuously since the first University Press book was printed. Cambridge is one of the two privileged presses. Authors published by Cambridge have included John Milton, William Harvey, Isaac Newton, Bertrand Russell, and Stephen Hawking. University printing began in Cambridge when the first practising University Printer, Thomas Thomas, set up a printing house on the site of what became the Senate House lawn – a few yards from where the press's bookshop now stands. In those days, the Stationers' Company in London jealously guarded its monopoly of printing, which explains the delay between the date of the university's letters patent and the printing of the first book. In 1591, Thomas's successor, John Legate, printed the first Cambridge Bible, an octavo edition of the popular Geneva Bible; the London Stationers objected strenuously. The university's response was to point out the provision in its charter to print "all manner of books".
Thus began the press's tradition of publishing the Bible, a tradition that has endured for over four centuries, beginning with the Geneva Bible and continuing with the Authorized Version, the Revised Version, the New English Bible and the Revised English Bible. The restrictions and compromises forced upon Cambridge by the dispute with the London Stationers did not come to an end until the scholar Richard Bentley was given the power to set up a 'new-style press' in 1696. In July 1697 the Duke of Somerset made a loan of £200 to the university "towards the printing house and presse" and James Halman, Registrary of the University, lent £100 for the same purpose. It was in Bentley's time, in 1698, that a body of senior scholars was appointed to be responsible to the university for the press's affairs. The Press Syndicate's publishing committee still meets, and its role still includes the review and approval of the press's planned output. John Baskerville became University Printer in the mid-eighteenth century.
Baskerville's concern was the production of the finest possible books using his own type-design and printing techniques. Baskerville wrote, "The importance of the work demands all my attention." Caxton would have found nothing to surprise him if he had walked into the press's printing house in the eighteenth century: all the type was still being set by hand. A technological breakthrough was badly needed, and it came when Lord Stanhope perfected the making of stereotype plates; this involved making a mould of the whole surface of a page of type and then casting plates from that mould. The press was the first to use this technique, and in 1805 it produced the technically successful and much-reprinted Cambridge Stereotype Bible. By the 1850s the press was using steam-powered machine presses, employing two to three hundred people, and occupying several buildings in the Silver Street and Mill Lane area, including the one that the press still occupies, the Pitt Building, built for the press and named in honour of William Pitt the Younger.
Under the stewardship of C. J. Clay, University Printer from 1854 to 1882, the press increased the size and scale of its academic and educational publishing operation. An important factor in this increase was the inauguration of its list of schoolbooks. During Clay's administration, the press also undertook a sizeable co-publishing venture with Oxford: the Revised Version of the Bible, begun in 1870 and completed in 1885. It was in this period as well that the Syndics of the press turned down what became the Oxford English Dictionary—a proposal brought to Cambridge by James Murray before he turned to Oxford. The appointment of R. T. Wright as Secretary of the Press Syndicate in 1892 marked the beginning of the press's development as a modern publishing business with a defined editorial policy and administrative structure. It was Wright who devised the plan for one of the most distinctive Cambridge contributions to publishing—the Cambridge Histories. The Cambridge Modern History was published between 1902 and 1912.
David J. Griffiths
David Jeffrey Griffiths is a U.S. physicist and educator. He worked at Reed College from 1978 through 2009, becoming the Howard Vollum Professor of Science before his retirement. He is not to be confused with the late physicist David J. Griffiths of Oregon State University. Griffiths was trained at Harvard University; his doctoral work on theoretical particle physics was supervised by Sidney Coleman. He is principally known as the author of three highly regarded textbooks for undergraduate physics students: Introduction to Elementary Particles, Introduction to Quantum Mechanics, and Introduction to Electrodynamics. He was the recipient of the 1997 Robert A. Millikan award, reserved for "those who have made outstanding scholarly contributions to physics education". In 2009 he was named a Fellow of the American Physical Society. Griffiths, David. Introduction to Elementary Particles. Wiley-VCH. ISBN 3-527-40601-8. Griffiths, David. Introduction to Electrodynamics. Addison-Wesley. ISBN 0-321-85656-2. Griffiths, David.
Introduction to Quantum Mechanics. Prentice Hall. ISBN 0-13-111892-7. Griffiths, David. Revolutions in Twentieth-Century Physics. Cambridge University Press. ISBN 978-1-107-60217-5. The most recent edition of each book is regarded as a standard undergraduate text.