Mathematics
Mathematics includes the study of such topics as quantity, structure and change. Mathematicians seek out patterns and use them to formulate new conjectures; when mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation and the systematic study of the shapes and motions of physical objects. Practical mathematics has been a human activity from as far back as written records exist; the research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Since the pioneering work of Giuseppe Peano, David Hilbert and others on axiomatic systems in the late 19th century, it has become customary to view mathematical research as establishing truth by rigorous deduction from appropriately chosen axioms and definitions. Mathematics developed at a relatively slow pace until the Renaissance, when mathematical innovations interacting with new scientific discoveries led to a rapid increase in the rate of mathematical discovery that has continued to the present day.
Mathematics is essential in many fields, including natural science, medicine and the social sciences. Applied mathematics has led to entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics without having any application in mind, but practical applications for what began as pure mathematics are often discovered later. The history of mathematics can be seen as an ever-increasing series of abstractions. The first abstraction, shared by many animals, was probably that of numbers: the realization that a collection of two apples and a collection of two oranges have something in common, namely the quantity of their members. As evidenced by tallies found on bone, in addition to recognizing how to count physical objects, prehistoric peoples may have also recognized how to count abstract quantities, like time: days and years. Evidence for more complex mathematics does not appear until around 3000 BC, when the Babylonians and Egyptians began using arithmetic and geometry for taxation and other financial calculations, for building and construction, and for astronomy.
The most ancient mathematical texts from Mesopotamia and Egypt date from 2000–1800 BC. Many early texts mention Pythagorean triples and so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical development after basic arithmetic and geometry. It is in Babylonian mathematics that elementary arithmetic first appears in the archaeological record. The Babylonians possessed a place-value system and used a sexagesimal numeral system, still in use today for measuring angles and time. Beginning in the 6th century BC with the Pythagoreans, the Ancient Greeks began a systematic study of mathematics as a subject in its own right. Around 300 BC, Euclid introduced the axiomatic method still used in mathematics today, consisting of definition, axiom, theorem and proof; his textbook Elements is widely considered the most successful and influential textbook of all time. The greatest mathematician of antiquity is often held to be Archimedes of Syracuse, who developed formulas for calculating the surface area and volume of solids of revolution and used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, in a manner not too dissimilar from modern calculus.
Other notable achievements of Greek mathematics are conic sections, trigonometry (Hipparchus of Nicaea) and the beginnings of algebra. The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, evolved over the course of the first millennium AD in India and were transmitted to the Western world via Islamic mathematics. Other notable developments of Indian mathematics include the modern definition of sine and cosine, and an early form of infinite series. During the Golden Age of Islam, during the 9th and 10th centuries, mathematics saw many important innovations building on Greek mathematics; the most notable achievement of Islamic mathematics was the development of algebra. Other notable achievements of the Islamic period are advances in spherical trigonometry and the addition of the decimal point to the Arabic numeral system. Many notable mathematicians from this period were Persian, such as Al-Khwarizmi, Omar Khayyam and Sharaf al-Dīn al-Ṭūsī. During the early modern period, mathematics began to develop at an accelerating pace in Western Europe.
The development of calculus by Newton and Leibniz in the 17th century revolutionized mathematics. Leonhard Euler was the most notable mathematician of the 18th century, contributing numerous theorems and discoveries; the foremost mathematician of the 19th century was the German mathematician Carl Friedrich Gauss, who made numerous contributions to fields such as algebra, differential geometry, matrix theory, number theory and statistics. In the early 20th century, Kurt Gödel transformed mathematics by publishing his incompleteness theorems, which show that any consistent axiomatic system powerful enough to describe arithmetic will contain true propositions that cannot be proved within the system. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both.
Hilbert space
The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space. It extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions. A Hilbert space is an abstract vector space possessing the structure of an inner product that allows length and angle to be measured. Furthermore, Hilbert spaces are complete: there are enough limits in the space to allow the techniques of calculus to be used. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as infinite-dimensional function spaces; the earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by David Hilbert, Erhard Schmidt and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis and ergodic theory. John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications.
The success of Hilbert space methods ushered in a fruitful era for functional analysis. Apart from the classical Euclidean spaces, examples of Hilbert spaces include spaces of square-integrable functions, spaces of sequences, Sobolev spaces consisting of generalized functions, and Hardy spaces of holomorphic functions. Geometric intuition plays an important role in many aspects of Hilbert space theory. Exact analogs of the Pythagorean theorem and parallelogram law hold in a Hilbert space. At a deeper level, perpendicular projection onto a subspace plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be uniquely specified by its coordinates with respect to a set of coordinate axes, in analogy with Cartesian coordinates in the plane; when that set of axes is countably infinite, the Hilbert space can be usefully thought of in terms of the space of infinite sequences that are square-summable. The latter space is in the older literature referred to as the Hilbert space.
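Both identities mentioned above are easy to check numerically. The following sketch (the dimension, random seed and variable names are arbitrary illustrative choices) verifies the parallelogram law and the Pythagorean theorem for vectors in ℝ⁵, a finite-dimensional Hilbert space under the Euclidean inner product.

```python
import numpy as np

# Two random vectors in R^5; the identities hold in any Hilbert space.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
norm = np.linalg.norm

# Parallelogram law: ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2
lhs = norm(x + y) ** 2 + norm(x - y) ** 2
rhs = 2 * norm(x) ** 2 + 2 * norm(y) ** 2
assert np.isclose(lhs, rhs)

# Pythagorean theorem: if x is perpendicular to y, ||x+y||^2 = ||x||^2 + ||y||^2.
# Make y perpendicular to x by projecting out its x-component.
y_perp = y - (x @ y) / (x @ x) * x
assert np.isclose(x @ y_perp, 0)
assert np.isclose(norm(x + y_perp) ** 2, norm(x) ** 2 + norm(y_perp) ** 2)
```

The same computation carries over unchanged to any inner product space, since both identities follow directly from expanding the inner products.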
Linear operators on a Hilbert space are likewise fairly concrete objects: in good cases, they are simply transformations that stretch the space by different factors in mutually perpendicular directions, in a sense made precise by the study of their spectrum. One of the most familiar examples of a Hilbert space is the Euclidean space consisting of three-dimensional vectors, denoted by ℝ³ and equipped with the dot product. The dot product takes two vectors x and y and produces a real number x · y. If x and y are represented in Cartesian coordinates, the dot product is defined by x ⋅ y = x₁y₁ + x₂y₂ + x₃y₃. The dot product satisfies three properties. It is symmetric in x and y: x · y = y · x. It is linear in its first argument: (ax₁ + bx₂) · y = a(x₁ · y) + b(x₂ · y) for any scalars a, b and vectors x₁, x₂, y. It is positive definite: for all vectors x, x · x ≥ 0, with equality if and only if x = 0. An operation on pairs of vectors that, like the dot product, satisfies these three properties is known as an inner product. A vector space equipped with such an inner product is known as an inner product space.
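The three axioms can be checked numerically for the dot product on ℝ³; the particular vectors and scalars below are arbitrary illustrative choices.

```python
import numpy as np

# Arbitrary test data for the three inner-product axioms on R^3.
x  = np.array([1.0, -2.0, 3.0])
x2 = np.array([0.5,  4.0, -1.0])
y  = np.array([2.0,  0.0, 5.0])
a, b = 3.0, -0.5

# Symmetry: x . y = y . x
assert np.isclose(x @ y, y @ x)

# Linearity in the first argument: (a x + b x2) . y = a (x . y) + b (x2 . y)
assert np.isclose((a * x + b * x2) @ y, a * (x @ y) + b * (x2 @ y))

# Positive definiteness: x . x >= 0, with equality only for the zero vector.
assert x @ x > 0
assert np.isclose(np.zeros(3) @ np.zeros(3), 0)
```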
Every finite-dimensional inner product space is also a Hilbert space. The basic feature of the dot product that connects it with Euclidean geometry is that it is related both to the length of a vector, denoted ‖x‖, and to the angle θ between two vectors x and y by means of the formula x ⋅ y = ‖x‖ ‖y‖ cos θ. Multivariable calculus in Euclidean space relies on the ability to compute limits and to have useful criteria for concluding that limits exist. A mathematical series ∑ₙ₌₀^∞ xₙ consisting of vectors in ℝ³ is absolutely convergent provided that the sum of the lengths converges as an ordinary series of real numbers: ∑ₖ₌₀^∞ ‖xₖ‖ < ∞. Just as with a series of scalars, a series of vectors that converges absolutely also converges to some limit vector L in the Euclidean space, in the sense that ‖L − ∑ₖ₌₀^N xₖ‖ → 0 as N → ∞. This property expresses the completeness of Euclidean space: a series that converges absolutely also converges in the ordinary sense.
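The completeness property can be illustrated numerically. In the sketch below (the vector v and the geometric decay rate are illustrative choices), the series with terms xₖ = v/2ᵏ converges absolutely, since the sum of the lengths is a geometric series, and its partial sums approach the limit L = 2v.

```python
import numpy as np

# A vector series x_k = v / 2^k in R^3 with closed-form limit L = 2v.
v = np.array([1.0, 2.0, 3.0])
L = 2 * v
N = 50

# The sum of the lengths converges (geometric series with ratio 1/2).
length_sum = sum(np.linalg.norm(v / 2**k) for k in range(N))
assert np.isclose(length_sum, 2 * np.linalg.norm(v))

# The partial sums converge to L: ||L - S_N|| -> 0 as N grows.
partial = sum(v / 2**k for k in range(N))
assert np.linalg.norm(L - partial) < 1e-12
```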
Spinor
In geometry and physics, spinors are elements of a vector space that can be associated with Euclidean space. Like geometric vectors and more general tensors, spinors transform linearly when the Euclidean space is subjected to a slight rotation. However, when a sequence of such small rotations is composed to form an overall final rotation, the resulting spinor transformation depends on which sequence of small rotations was used: unlike vectors and tensors, a spinor transforms to its negative when the space is rotated through a complete turn from 0° to 360°; this property characterizes spinors. It is also possible to associate a similar notion of spinor to Minkowski space, in which case the Lorentz transformations of special relativity play the role of rotations. Spinors were introduced in geometry by Élie Cartan in 1913. In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or "spin", of the electron and other subatomic particles. Spinors are characterized by the specific way in which they behave under rotations.
They change in different ways depending not just on the overall final rotation, but on the details of how that rotation was achieved. There are two topologically distinguishable classes of paths through rotations that result in the same overall rotation, as famously illustrated by the belt trick puzzle; these two inequivalent classes yield spinor transformations of opposite sign. The spin group is the group of all rotations keeping track of the class. It doubly covers the rotation group, since each rotation can be obtained in two inequivalent ways as the endpoint of a path. The space of spinors by definition is equipped with a linear representation of the spin group, meaning that elements of the spin group act as linear transformations on the space of spinors, in a way that genuinely depends on the homotopy class. In mathematical terms, spinors are described by a double-valued projective representation of the rotation group SO(3). Although spinors can be defined purely as elements of a representation space of the spin group, they are typically defined as elements of a vector space that carries a linear representation of the Clifford algebra.
The Clifford algebra is an associative algebra that can be constructed from Euclidean space and its inner product in a basis-independent way. Both the spin group and its Lie algebra are embedded inside the Clifford algebra in a natural way, and in applications the Clifford algebra is often the easiest to work with. After choosing an orthonormal basis of Euclidean space, a representation of the Clifford algebra is generated by gamma matrices, matrices that satisfy a set of canonical anti-commutation relations; the spinors are the column vectors on which these matrices act. In three Euclidean dimensions, for instance, the Pauli spin matrices are a set of gamma matrices, and the two-component complex column vectors on which these matrices act are spinors. However, the particular matrix representation of the Clifford algebra, and hence what precisely constitutes a "column vector", depends on the choice of basis and gamma matrices in an essential way. As a representation of the spin group, this realization of spinors as column vectors will either be irreducible if the dimension is odd, or it will decompose into a pair of so-called "half-spin" or Weyl representations if the dimension is even.
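The canonical anti-commutation relations, {σᵢ, σⱼ} = 2δᵢⱼI for the Pauli matrices, are easy to verify directly; the check below is a minimal numerical sketch for the three-dimensional case mentioned above.

```python
import numpy as np

# The three Pauli matrices, a concrete set of gamma matrices in 3D.
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),    # sigma_x
    np.array([[0, -1j], [1j, 0]]),                # sigma_y
    np.array([[1, 0], [0, -1]], dtype=complex),   # sigma_z
]
I2 = np.eye(2)

# Verify the Clifford relations {sigma_i, sigma_j} = 2 * delta_ij * I.
for i in range(3):
    for j in range(3):
        anti = sigma[i] @ sigma[j] + sigma[j] @ sigma[i]
        expected = 2 * I2 if i == j else np.zeros((2, 2))
        assert np.allclose(anti, expected)
```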
What characterizes spinors and distinguishes them from geometric vectors and other tensors is subtle. Consider applying a rotation to the coordinates of a system. No object in the system itself has moved, only the coordinates have, so there will always be a compensating change in the coordinate description of any object of the system. Geometrical vectors, for example, have components that will undergo the same rotation as the coordinates. More broadly, any tensor associated with the system has coordinate descriptions that adjust to compensate for changes to the coordinate system itself. Spinors do not appear at this level of the description of a physical system, when one is concerned only with the properties of a single isolated rotation of the coordinates. Rather, spinors appear when we imagine that instead of a single rotation, the coordinate system is gradually rotated between some initial and final configuration. For any of the familiar and intuitive quantities associated with the system, the transformation law does not depend on the precise details of how the coordinates arrived at their final configuration.
Spinors, on the other hand, are constructed in such a way that they are sensitive to how the gradual rotation of the coordinates arrived at the final configuration: they exhibit path-dependence. It turns out that, for any final configuration of the coordinates, there are two topologically inequivalent gradual rotations of the coordinate system that result in this same configuration; this ambiguity is called the homotopy class of the gradual rotation. The belt trick puzzle famously demonstrates two different rotations, one through an angle of 2π and the other through an angle of 4π, having the same final configurations but different classes. Spinors exhibit a sign-reversal that genuinely depends on this homotopy class; this distinguishes them from other tensors, none of which can feel the class. Spinors can be exhibited as concrete objects using a choice of Cartesian coordinates. In three Euclidean dimensions, for instance, spinors can be constructed by making a choice of Pauli spin matrices corresponding to the three coordinate axes.
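The sign-reversal under a full turn can be made concrete with the Pauli matrices. The sketch below uses the closed-form spin-1/2 rotation operator about the z-axis, U(θ) = cos(θ/2) I − i sin(θ/2) σ_z; the sample spinor is an arbitrary choice. A 2π rotation sends the spinor to its negative, while a 4π rotation restores it, exactly as in the belt trick.

```python
import numpy as np

# Pauli sigma_z and the 2x2 identity.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def spin_rotation(theta):
    """Spin-1/2 rotation about the z-axis: U = cos(t/2) I - i sin(t/2) sigma_z."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sz

psi = np.array([1.0, 0.0], dtype=complex)   # an arbitrary sample spinor

# A full 2*pi turn flips the sign; a 4*pi turn returns to the start.
assert np.allclose(spin_rotation(2 * np.pi) @ psi, -psi)
assert np.allclose(spin_rotation(4 * np.pi) @ psi, psi)
```

Ordinary vectors, by contrast, return to themselves after a 2π rotation, which is the distinction the surrounding text describes.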
Eigenvalues and eigenvectors
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector that changes by only a scalar factor when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v. This condition can be written as the equation T(v) = λv, where λ is a scalar in the field F, known as the eigenvalue, characteristic value, or characteristic root associated with the eigenvector v. If the vector space V is finite-dimensional, the linear transformation T can be represented as a square matrix A, and the vector v by a column vector, rendering the above mapping as a matrix multiplication on the left-hand side and a scaling of the column vector on the right-hand side in the equation Av = λv. There is a direct correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space to itself, given any basis of the vector space.
For this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations. Geometrically, an eigenvector corresponding to a real nonzero eigenvalue points in a direction in which it is stretched by the transformation, and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations; the prefix eigen- is adopted from the German word eigen for "proper" or "characteristic". Originally utilized to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition and matrix diagonalization. In essence, an eigenvector v of a linear transformation T is a non-zero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue.
This condition can be written as the equation T(v) = λv, referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. The classic Mona Lisa shear example provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point; the linear transformation in this example is called a shear mapping. Points in the top half are moved to the right and points in the bottom half are moved to the left in proportion to how far they are from the horizontal axis that goes through the middle of the painting; the vectors pointing to each point in the original image are therefore tilted right or left and made longer or shorter by the transformation. Notice that points along the horizontal axis do not move at all. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction.
Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like d/dx, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as (d/dx) e^(λx) = λ e^(λx). Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices that are also referred to as eigenvectors. If the linear transformation is expressed in the form of an n by n matrix A, the eigenvalue equation above can be rewritten as the matrix multiplication Av = λv, where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix, for example by diagonalizing it. Eigenvalues and eigenvectors give rise to many closely related mathematical concepts, and the prefix eigen- is applied liberally when naming them: The set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation.
The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T. If the set of eigenvectors of T forms a basis of the domain of T, this basis is called an eigenbasis. Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations. In the 18th century, Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.
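The shear mapping discussed above can be made concrete as a 2×2 matrix; the shear factor 0.5 below is an arbitrary illustrative choice. A horizontal vector is an eigenvector with eigenvalue 1, and a numerical eigensolver confirms that 1 is the only eigenvalue of the shear.

```python
import numpy as np

# A horizontal shear: points move right/left in proportion to their height.
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])

# A vector along the horizontal axis is unchanged: A v = 1 * v.
v = np.array([1.0, 0.0])
assert np.allclose(A @ v, 1.0 * v)

# np.linalg.eig recovers the same facts: both eigenvalues equal 1, and the
# shear has only the horizontal eigendirection (it is not diagonalizable).
eigenvalues, eigenvectors = np.linalg.eig(A)
assert np.allclose(eigenvalues, [1.0, 1.0])
```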
Quantum mechanics
Quantum mechanics, including quantum field theory, is a fundamental theory in physics which describes nature at the smallest scales of energy levels of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, describes nature at ordinary scale. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large scale. Quantum mechanics differs from classical physics in that energy, angular momentum and other quantities of a bound system are restricted to discrete values. Quantum mechanics arose from theories to explain observations which could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem, from the correspondence between energy and frequency in Albert Einstein's 1905 paper which explained the photoelectric effect. Early quantum theory was profoundly re-conceived in the mid-1920s by Erwin Schrödinger, Werner Heisenberg, Max Born and others; the modern theory is formulated in various specially developed mathematical formalisms.
In one of them, a mathematical function, the wave function, provides information about the probability amplitude of position and other physical properties of a particle. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the laser, the transistor and semiconductors such as the microprocessor and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he described in a paper titled On the nature of light and colours.
This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays; these studies were followed by the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck. Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation, known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, Wien's law underestimated the radiance at low frequencies. Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to the development of quantum mechanics. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect.
Around 1900–1910, the atomic theory and the corpuscular theory of light first came to be accepted as scientific fact. Among the first to study quantum phenomena in nature were Arthur Compton, C. V. Raman, Pieter Zeeman, each of whom has a quantum effect named after him. Robert Andrews Millikan studied the photoelectric effect experimentally, Albert Einstein developed a theory for it. At the same time, Ernest Rutherford experimentally discovered the nuclear model of the atom, for which Niels Bohr developed his theory of the atomic structure, confirmed by the experiments of Henry Moseley. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits, a concept introduced by Arnold Sommerfeld; this phase is known as old quantum theory. According to Planck, each energy element is proportional to its frequency: E = h ν, where h is Planck's constant. Planck cautiously insisted that this was an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.
In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material; he won the 1921 Nobel Prize in Physics for this work. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle, with a discrete quantum of energy dependent on its frequency. The foundations of quantum mechanics were established during the first half of the 20th century by Max Planck, Niels Bohr, Werner Heisenberg, Louis de Broglie, Arthur Compton, Albert Einstein, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Enrico Fermi, Wolfgang Pauli, Max von Laue, Freeman Dyson, David Hilbert and others.
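Planck's relation E = hν is a one-line computation; the frequency used below (roughly that of green light) is an illustrative value.

```python
# Planck's relation E = h * nu for a single quantum of light.
h = 6.62607015e-34          # Planck's constant in J*s (exact SI value)
nu = 5.45e14                # frequency of green light in Hz (illustrative)

E = h * nu                  # energy of one photon, in joules
E_eV = E / 1.602176634e-19  # the same energy expressed in electronvolts

# Visible-light photons carry energies of a few electronvolts.
assert 2.0 < E_eV < 2.5
```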
Quaternion
In mathematics, the quaternions are a number system that extends the complex numbers. They were first described by Irish mathematician William Rowan Hamilton in 1843 and applied to mechanics in three-dimensional space. A feature of quaternions is that multiplication of two quaternions is noncommutative. Hamilton defined a quaternion as the quotient of two directed lines in a three-dimensional space, or equivalently as the quotient of two vectors. Quaternions are represented in the form a + bi + cj + dk, where a, b, c, d are real numbers and i, j, k are the fundamental quaternion units. Quaternions find uses in both pure and applied mathematics, in particular for calculations involving three-dimensional rotations such as in three-dimensional computer graphics, computer vision and crystallographic texture analysis. In practical applications, they can be used alongside other methods, such as Euler angles and rotation matrices, or as an alternative to them, depending on the application. In modern mathematical language, quaternions form a four-dimensional associative normed division algebra over the real numbers, and therefore a domain.
In fact, the quaternions were the first noncommutative division algebra to be discovered. The algebra of quaternions is denoted by H, or in blackboard bold by ℍ; it can also be given by the Clifford algebra classifications Cℓ₀,₂(ℝ) ≅ Cℓ⁰₃,₀(ℝ). The algebra ℍ holds a special place in analysis since, according to the Frobenius theorem, it is one of only two finite-dimensional division rings containing the real numbers as a proper subring, the other being the complex numbers; these rings are also Euclidean Hurwitz algebras, of which the quaternions are the largest associative algebra. Further extending the quaternions yields the non-associative octonions, the last normed division algebra over the reals. The unit quaternions can be thought of as a choice of a group structure on the 3-sphere S³ that gives the group Spin(3), which is isomorphic to SU(2) and also to the universal cover of SO(3). Quaternions were introduced by Hamilton in 1843. Important precursors to this work included Euler's four-square identity and Olinde Rodrigues' parameterization of general rotations by four parameters, but neither of these writers treated the four-parameter rotations as an algebra.
Carl Friedrich Gauss had also discovered quaternions in 1819, but this work was not published until 1900. Hamilton knew that the complex numbers could be interpreted as points in a plane, and he was looking for a way to do the same for points in three-dimensional space. Points in space can be represented by their coordinates, which are triples of numbers, and for many years he had known how to add and subtract triples of numbers. However, Hamilton had been stuck on the problem of division for a long time: he could not figure out how to calculate the quotient of two points in space. The great breakthrough in quaternions came on Monday 16 October 1843 in Dublin, when Hamilton was on his way to the Royal Irish Academy, where he was going to preside at a council meeting. As he walked along the towpath of the Royal Canal with his wife, the concepts behind quaternions were taking shape in his mind. When the answer dawned on him, Hamilton could not resist the urge to carve the formula for the quaternions, i² = j² = k² = ijk = −1, into the stone of Brougham Bridge as he paused on it.
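Hamilton's bridge formula determines the entire multiplication table, and a few lines of code can confirm it. The tuple-based representation (a, b, c, d) for a + bi + cj + dk below is an illustrative sketch written directly from the defining rules, not standard library functionality.

```python
def qmul(p, q):
    """Multiply two quaternions (a, b, c, d) = a + b*i + c*j + d*k,
    using the rules i^2 = j^2 = k^2 = ijk = -1."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

one, i, j, k = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

# Hamilton's carved formula: i^2 = j^2 = k^2 = ijk = -1.
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == minus_one
assert qmul(qmul(i, j), k) == minus_one

# Multiplication is noncommutative: ij = k but ji = -k.
assert qmul(i, j) == k
assert qmul(j, i) == (0, 0, 0, -1)
```

The sign pattern in `qmul` is exactly the cyclic rule ij = k, jk = i, ki = j together with anticommutativity of the distinct units.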
Although the carving has since faded away, there has been an annual pilgrimage since 1989 called the Hamilton Walk for scientists and mathematicians who walk from Dunsink Observatory to the Royal Canal bridge in remembrance of Hamilton's discovery. On the following day, Hamilton wrote a letter to his friend and fellow mathematician, John T. Graves, describing the train of thought that led to his discovery; this letter was later published in a science magazine. In it, Hamilton recalled the moment of insight: "An electric circuit seemed to close, and a spark flashed forth." Hamilton called a quadruple with these rules of multiplication a quaternion, and he devoted most of the remainder of his life to studying and teaching them. Hamilton's treatment is more geometric than the modern approach, which emphasizes quaternions' algebraic properties. He founded a school of "quaternionists" and tried to popularize quaternions in several books. The last and longest of his books, Elements of Quaternions, was 800 pages long. After Hamilton's death, his student Peter Tait continued promoting quaternions.
At this time, quaternions were a mandatory examination topic in Dublin. Topics in physics and geometry that would now be described using vectors, such as kinematics in space and Maxwell's equations, were described entirely in terms of quaternions. There was even a professional research association, the Quaternion Society, devoted to the study of quaternions and other hypercomplex number systems. From the mid-1880s, quaternions began to be displaced by vector analysis, developed by Josiah Willard Gibbs, Oliver Heaviside and Hermann von Helmholtz. Vector analysis described the same phenomena as quaternions, so it borrowed some ideas and terminology liberally from the literature on quaternions.
Nobel Prize in Physics
The Nobel Prize in Physics is a yearly award given by the Royal Swedish Academy of Sciences to those who have made the most outstanding contributions to humankind in the field of physics. It is one of the five Nobel Prizes established by the will of Alfred Nobel in 1895 and awarded since 1901; the first Nobel Prize in Physics was awarded to physicist Wilhelm Röntgen in recognition of the extraordinary services he rendered by the discovery of the remarkable rays later named X-rays. This award is administered by the Nobel Foundation and is widely regarded as the most prestigious award that a scientist can receive in physics. It is presented in Stockholm at an annual ceremony on 10 December, the anniversary of Nobel's death. Through 2018, a total of 209 individuals have been awarded the prize. Only three women have won the Nobel Prize in Physics: Marie Curie in 1903, Maria Goeppert Mayer in 1963 and Donna Strickland in 2018. Alfred Nobel, in his last will and testament, stated that his wealth should be used to create a series of prizes for those who confer the "greatest benefit on mankind" in the fields of physics, chemistry, peace, physiology or medicine, and literature.
Though Nobel wrote several wills during his lifetime, the last one was written a year before he died and was signed at the Swedish-Norwegian Club in Paris on 27 November 1895. Nobel bequeathed 94% of his total assets, 31 million Swedish kronor, to establish and endow the five Nobel Prizes. Due to the level of skepticism surrounding the will, it was not until April 26, 1897 that it was approved by the Storting; the executors of his will were Ragnar Sohlman and Rudolf Lilljequist, who formed the Nobel Foundation to take care of Nobel's fortune and organise the prizes. The members of the Norwegian Nobel Committee who were to award the Peace Prize were appointed shortly after the will was approved; the prize-awarding organisations followed: the Karolinska Institutet on June 7, the Swedish Academy on June 9, the Royal Swedish Academy of Sciences on June 11. The Nobel Foundation reached an agreement on guidelines for how the Nobel Prize should be awarded. In 1900, the Nobel Foundation's newly created statutes were promulgated by King Oscar II.
According to Nobel's will, the Royal Swedish Academy of Sciences was to award the Prize in Physics. A maximum of three Nobel laureates and two different works may be selected for the Nobel Prize in Physics. Compared with other Nobel Prizes, the nomination and selection process for the prize in Physics is long and rigorous; this is a key reason why it has grown in importance over the years to become the most important prize in Physics. The Nobel laureates are selected by the Nobel Committee for Physics, a Nobel Committee that consists of five members elected by the Royal Swedish Academy of Sciences. In the first stage, which begins in September, around 3,000 people – selected university professors, Nobel Laureates in Physics and Chemistry, etc. – are sent confidential forms to nominate candidates. The completed nomination forms must arrive at the Nobel Committee no later than 31 January of the following year; these nominees are scrutinized and discussed by experts, who narrow the list to approximately fifteen names. The committee submits a report with recommendations on the final candidates to the Academy, where, in the Physics Class, it is further discussed.
The Academy makes the final selection of the Laureates in Physics through a majority vote. The names of the nominees are never publicly announced, neither are they told that they have been considered for the prize. Nomination records are sealed for fifty years. While posthumous nominations are not permitted, awards can be made if the individual died in the months between the decision of the prize committee and the ceremony in December. Prior to 1974, posthumous awards were permitted; the rules for the Nobel Prize in Physics require that the significance of achievements being recognized has been "tested by time". In practice, it means that the lag between the discovery and the award is on the order of 20 years and can be much longer. For example, half of the 1983 Nobel Prize in Physics was awarded to Subrahmanyan Chandrasekhar for his work on stellar structure and evolution, done during the 1930s; as a downside of this approach, not all scientists live long enough for their work to be recognized.
Some important scientific discoveries are never considered for a prize, as the discoverers may have died by the time the impact of their work is appreciated. A Physics Nobel Prize laureate receives a gold medal, a diploma bearing a citation, and a sum of money. The Nobel Prize medals, minted by Myntverket in Sweden and the Mint of Norway since 1902, are registered trademarks of the Nobel Foundation. Each medal has an image of Alfred Nobel in left profile on the obverse. The Nobel Prize medals for Physics, Physiology or Medicine, and Literature have identical obverses, showing the image of Alfred Nobel and the years of his birth and death. Nobel's portrait also appears on the obverse of the Nobel Peace Prize medal and the Medal for the Prize in Economics, but with a slightly different design. The image on the reverse of a medal varies according to the institution awarding the prize. The reverse sides of the Nobel Prize medals for Chemistry and Physics share the same design of Nature, as a Goddess, whose veil is held up by the Genius of Science.
These medals and the ones for Physiology/Medicine and Literature were designed by Erik Lindberg in 1902. Nobel laureates receive a diploma directly from the hands of the King of Sweden at the Stockholm ceremony.