Quantum mechanics

Quantum mechanics, including quantum field theory, is a fundamental theory in physics which describes nature at the smallest scales of energy levels of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, describes nature at ordinary scale. Most theories in classical physics can be derived from quantum mechanics as an approximation valid at large scale. Quantum mechanics differs from classical physics in that energy, angular momentum and other quantities of a bound system are restricted to discrete values. Quantum mechanics arose from theories developed to explain observations which could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem and the correspondence between energy and frequency in Albert Einstein's 1905 paper, which explained the photoelectric effect. Early quantum theory was profoundly re-conceived in the mid-1920s by Erwin Schrödinger, Werner Heisenberg, Max Born and others; the modern theory is formulated in various specially developed mathematical formalisms.

In one of them, a mathematical function, the wave function, provides information about the probability amplitude of position and other physical properties of a particle. Important applications of quantum theory include quantum chemistry, quantum optics, quantum computing, superconducting magnets, light-emitting diodes, the laser, the transistor and semiconductor devices such as the microprocessor, and medical and research imaging such as magnetic resonance imaging and electron microscopy. Explanations for many biological and physical phenomena are rooted in the nature of the chemical bond, most notably the macro-molecule DNA. Scientific inquiry into the wave nature of light began in the 17th and 18th centuries, when scientists such as Robert Hooke, Christiaan Huygens and Leonhard Euler proposed a wave theory of light based on experimental observations. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he described in a paper titled On the nature of light and colours.

This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays; these studies were followed by the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system can be discrete, and the 1900 quantum hypothesis of Max Planck. Planck's hypothesis that energy is radiated and absorbed in discrete "quanta" matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation, known as Wien's law in his honor. Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, Wien's law was accurate only at high frequencies and underestimated the radiance at low frequencies. Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law, which led to the development of quantum mechanics. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect.

Around 1900–1910, the atomic theory and the corpuscular theory of light first came to be accepted as scientific fact. Among the first to study quantum phenomena in nature were Arthur Compton, C. V. Raman and Pieter Zeeman, each of whom has a quantum effect named after him. Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. At the same time, Ernest Rutherford experimentally discovered the nuclear model of the atom, for which Niels Bohr developed his theory of atomic structure, confirmed by the experiments of Henry Moseley. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure by introducing elliptical orbits, a concept also introduced by Arnold Sommerfeld; this phase is known as the old quantum theory. According to Planck, each energy element is proportional to its frequency: E = hν, where h is Planck's constant. Planck cautiously insisted that this was an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.
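Planck's relation can be evaluated directly. A minimal illustrative computation (the frequency value below is an assumption chosen for the example, not taken from the text):

```python
# Planck's relation E = h * nu: each energy quantum is proportional to the
# frequency of the radiation.
h = 6.62607015e-34   # Planck's constant in J*s (exact SI value)
nu = 5.0e14          # frequency of green light in Hz (approximate)
E = h * nu           # energy of one quantum, roughly 3.3e-19 J
print(E)
```

The tiny size of this number compared to everyday energies is why the discreteness of energy goes unnoticed at ordinary scales.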

In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, in 1905 Albert Einstein interpreted Planck's quantum hypothesis realistically and used it to explain the photoelectric effect, in which shining light on certain materials can eject electrons from the material; he won the 1921 Nobel Prize in Physics for this work. Einstein further developed this idea to show that an electromagnetic wave such as light could also be described as a particle, with a discrete quantum of energy dependent on its frequency. The foundations of quantum mechanics were established during the first half of the 20th century by Max Planck, Niels Bohr, Werner Heisenberg, Louis de Broglie, Arthur Compton, Albert Einstein, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Enrico Fermi, Wolfgang Pauli, Max von Laue, Freeman Dyson, David Hilbert and others.

Weight (representation theory)

In the mathematical field of representation theory, a weight of an algebra A over a field F is an algebra homomorphism from A to F, or equivalently, a one-dimensional representation of A over F. It is the algebra analogue of a multiplicative character of a group; the importance of the concept stems from its application to representations of Lie algebras and hence to representations of algebraic and Lie groups. In this context, a weight of a representation is a generalization of the notion of an eigenvalue, and the corresponding eigenspace is called a weight space. Given a set S of matrices, each of which is diagonalizable and any two of which commute, it is always possible to simultaneously diagonalize all of the elements of S. Equivalently, for any set S of mutually commuting semisimple linear transformations of a finite-dimensional vector space V there exists a basis of V consisting of simultaneous eigenvectors of all elements of S; each of these common eigenvectors v ∈ V defines a linear functional on the subalgebra U of End(V) generated by the set of endomorphisms S.
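The simultaneous-diagonalization claim can be checked numerically. A minimal sketch using NumPy (the matrices are invented for illustration): two commuting diagonalizable matrices share a common eigenbasis, and the eigenvalue that each common eigenvector picks out for each matrix is a prototype of the "weight" discussed below.

```python
import numpy as np

# Build two commuting, diagonalizable matrices: both are diagonal in the
# same (hidden) basis P, hence they commute.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
A = P @ np.diag([2.0, 3.0]) @ np.linalg.inv(P)
B = P @ np.diag([5.0, 7.0]) @ np.linalg.inv(P)
assert np.allclose(A @ B, B @ A)  # the set S = {A, B} is commuting

# Since A has distinct eigenvalues, its eigenspaces are one-dimensional,
# so every operator commuting with A is diagonal in A's eigenbasis.
eigvals_A, V = np.linalg.eig(A)
B_diag = np.linalg.inv(V) @ B @ V
assert np.allclose(B_diag, np.diag(np.diag(B_diag)))

# Each column of V is a simultaneous eigenvector; the functional sending
# A and B to their eigenvalues on that column is the "generalized eigenvalue".
```

The asserts pass: B is indeed diagonalized by the eigenbasis of A, exhibiting a basis of simultaneous eigenvectors.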

This map is multiplicative and sends the identity to 1. This "generalized eigenvalue" is a prototype for the notion of a weight; the notion is related to the idea of a multiplicative character in group theory, a homomorphism χ from a group G to the multiplicative group of a field F. Thus χ: G → F× satisfies χ(e) = 1 and χ(gh) = χ(g)χ(h) for all g, h in G. Indeed, if G acts on a vector space V over F, each simultaneous eigenspace for every element of G, if such exists, determines a multiplicative character on G: the eigenvalue on this common eigenspace of each element of the group. The notion of multiplicative character can be extended to any algebra A over F, by replacing χ: G → F× by a linear map χ: A → F with χ(ab) = χ(a)χ(b) for all a, b in A. If an algebra A acts on a vector space V over F, then to any simultaneous eigenspace there corresponds an algebra homomorphism from A to F assigning to each element of A its eigenvalue. If A is a Lie algebra, then instead of requiring multiplicativity of a character, one requires that it map any Lie bracket to the corresponding commutator; but since F is commutative, this simply means that the map must vanish on Lie brackets.

A weight on a Lie algebra g over a field F is a linear map λ: g → F with λ([x, y]) = 0 for all x, y in g. Any weight on a Lie algebra g vanishes on the derived algebra and hence descends to a weight on the abelian Lie algebra g/[g, g]; thus weights are primarily of interest for abelian Lie algebras, where they reduce to the simple notion of a generalized eigenvalue for a space of commuting linear transformations. If G is a Lie group or an algebraic group, then a multiplicative character θ: G → F× induces a weight χ = dθ: g → F on its Lie algebra by differentiation. Let g be a complex semisimple Lie algebra and h a Cartan subalgebra of g. In this section, we describe the concepts needed to formulate the "theorem of the highest weight" classifying the finite-dimensional representations of g. Notably, we will explain the notion of a "dominant integral element." The representations themselves are described in the article linked to above. Let V be a representation of a Lie algebra g over C and let λ be a linear functional on h; the weight space of V with weight λ is the subspace V_λ given by V_λ := {v ∈ V : H⋅v = λ(H)v for all H ∈ h}.

A weight of the representation V is a linear functional λ such that the corresponding weight space is nonzero. Nonzero elements of the weight space are called weight vectors; that is to say, a weight vector is a simultaneous eigenvector for the action of the elements of h, with the corresponding eigenvalues given by λ. If V is the direct sum of its weight spaces V = ⨁_{λ ∈ h*} V_λ, it is called a weight module. If G is a group with Lie algebra g, every finite-dimensional representation of G induces a representation of g.
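As a concrete illustration (a standard example, not taken from the text above): the standard representation of sl(2, C) on C² decomposes into weight spaces for the Cartan subalgebra spanned by H = diag(1, −1).

```latex
% Cartan subalgebra h = \mathbb{C}H with H = \mathrm{diag}(1,-1),
% acting on the standard representation \mathbb{C}^2:
H e_1 = e_1, \qquad H e_2 = -e_2,
% so e_1, e_2 are weight vectors with weights \lambda_{\pm}(cH) = \pm c, and
\mathbb{C}^2 = V_{\lambda_+} \oplus V_{\lambda_-}.
% The standard representation is thus a weight module with weights +1 and -1.
```

The same pattern, with more weights, underlies the classification of all finite-dimensional representations of sl(2, C) by highest weight.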

Theoretical physics

Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize and predict natural phenomena. This is in contrast to experimental physics; the advancement of science depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. For example, while developing special relativity, Albert Einstein was concerned with the Lorentz transformation which left Maxwell's equations invariant, but was uninterested in the Michelson–Morley experiment on Earth's drift through a luminiferous aether. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, an experimental result lacking a theoretical formulation. A physical theory is a model of physical events; it is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations.

A physical theory differs from a mathematical theorem in that while both are based on some form of axioms, judgment of mathematical applicability is not based on agreement with any experimental results. A physical theory similarly differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms. A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water, and Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles and the quantum mechanical idea that energy is not continuously variable. Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance: "phenomenologists" might employ empirical formulas to agree with experimental results, often without deep physical understanding.

"Modelers" appear much like phenomenologists, but try to model speculative theories that have certain desirable features, or apply the techniques of mathematical modeling to physics problems. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. Other theorists may try to unify, reinterpret or generalise extant theories, or create new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled. Theoretical problems that need computational investigation are the concern of computational physics. Theoretical advances may consist in setting aside old, incorrect paradigms or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the known result. Sometimes though, advances may proceed along different paths. For example, an essentially correct theory may need some conceptual or factual revisions.

However, an exception to all the above is the wave–particle duality, a theory combining aspects of different, opposing models via the Bohr complementarity principle. Physical theories become accepted if they are able to make correct predictions and no incorrect ones; the theory should have, at least as a secondary objective, a certain economy and elegance, a notion sometimes called "Occam's razor" after the medieval English philosopher William of Occam, in which the simpler of two theories that describe the same matter just as adequately is preferred. Theories are more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method. Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories. Theoretical physics began at least 2,300 years ago, under Pre-Socratic philosophy, and was continued by Plato and Aristotle, whose views held sway for a millennium. During the rise of medieval universities, the only acknowledged intellectual disciplines were the seven liberal arts of the Trivium, like grammar and rhetoric, and of the Quadrivium, like arithmetic, geometry and astronomy.

During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon. As the Scientific Revolution gathered pace, the concepts of matter, space and causality began to acquire the form we know today, and other sciences spun off from the rubric of natural philosophy. Thus began the modern era of theory with the Copernican paradigm shift in astronomy, soon followed by Johannes Kepler's expressions for planetary orbits, which summarized the meticulous observations of Tycho Brahe.