1.
Quantum field theory
–
QFT treats particles as excited states of underlying physical fields, so they are called field quanta. In quantum field theory, quantum mechanical interactions among particles are described by interaction terms among the corresponding underlying quantum fields. These interactions are conveniently visualized by Feynman diagrams, a formal tool of relativistically covariant perturbation theory that serves to evaluate particle processes. The first achievement of quantum field theory, namely quantum electrodynamics (QED), is still the paradigmatic example of a successful quantum field theory. Ordinary quantum mechanics cannot give an account of photons, which constitute the prime case of relativistic particles: since photons have rest mass zero and correspondingly travel in the vacuum at the speed c, a non-relativistic theory such as ordinary QM cannot give even an approximate description. Photons are implicit in the emission and absorption processes which have to be postulated; the formalism of QFT is needed for an explicit description of photons. In fact, most topics in the early development of quantum theory were related to the interaction of radiation and matter. However, quantum mechanics as formulated by Dirac, Heisenberg, and Schrödinger in 1926–27 started from atomic spectra. As soon as the conceptual framework of quantum mechanics was developed, a small group of theoreticians tried to extend quantum methods to electromagnetic fields; a good example is the 1926 paper by Born, Jordan & Heisenberg. The basic idea was that in QFT the electromagnetic field should be represented by matrices in the same way that position and momentum are represented in QM. The ideas of QM were thus extended to systems having an infinite number of degrees of freedom. The inception of QFT is usually considered to be Dirac's famous 1927 paper on "The quantum theory of the emission and absorption of radiation", in which Dirac coined the name quantum electrodynamics for the part of QFT that was developed first.
Employing the theory of the harmonic oscillator, Dirac gave a theoretical description of how photons appear in the quantization of the electromagnetic radiation field. Later, Dirac's procedure became a model for the quantization of other fields as well. These first approaches to QFT were further developed during the following three years. P. Jordan introduced creation and annihilation operators for fields obeying Fermi–Dirac statistics; these differ from the corresponding operators for Bose–Einstein statistics in that the former satisfy anti-commutation relations while the latter satisfy commutation relations. The methods of QFT could be applied to derive equations resulting from the relativistic treatment of particles, e.g. the Dirac equation and the Klein–Gordon equation. Schweber points out that the idea and procedure of second quantization go back to Jordan. In a number of papers from 1927 onward, some difficult problems concerning commutation relations, statistics, and Lorentz invariance were eventually solved. The first comprehensive account of a general theory of quantum fields, in particular the method of canonical quantization, was developed by Heisenberg and Pauli in 1929.
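The distinction drawn above between Jordan's fermionic operators and the bosonic case can be made concrete. As a minimal sketch (not from the article), a single fermionic mode can be represented by 2×2 matrices whose annihilation operator c and creation operator c† satisfy the anti-commutation relations of Fermi–Dirac statistics:

```python
import numpy as np

# One fermionic mode: c annihilates a quantum, c_dag creates one.
c = np.array([[0.0, 1.0],
              [0.0, 0.0]])
c_dag = c.T  # Hermitian conjugate (real matrix, so just the transpose)

def anticommutator(a, b):
    """{a, b} = ab + ba, the defining bracket for Fermi-Dirac operators."""
    return a @ b + b @ a

# The Fermi-Dirac operators satisfy {c, c_dag} = 1 and {c, c} = 0 ...
assert np.allclose(anticommutator(c, c_dag), np.eye(2))
assert np.allclose(anticommutator(c, c), np.zeros((2, 2)))

# ... and creating two quanta in the same mode gives zero (Pauli exclusion).
assert np.allclose(c_dag @ c_dag, np.zeros((2, 2)))
```

Bosonic operators, by contrast, satisfy the commutation relation [a, a†] = 1 and can pile arbitrarily many quanta into one mode, which is why they require an infinite-dimensional representation rather than 2×2 matrices.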
2.
Feynman diagram
–
In theoretical physics, Feynman diagrams are pictorial representations of the mathematical expressions describing the behavior of subatomic particles. The scheme is named after its inventor, the American physicist Richard Feynman. The interaction of sub-atomic particles can be complex and difficult to understand intuitively; Feynman diagrams give a simple visualization of what would otherwise be an arcane and abstract formula. While the diagrams are applied primarily to quantum field theory, they can also be used in other fields, such as solid-state theory. Feynman used Ernst Stueckelberg's interpretation of the positron as if it were an electron moving backward in time; thus, antiparticles are represented as moving backward along the time axis in Feynman diagrams. The calculation of probability amplitudes in theoretical particle physics requires the use of rather large and complicated integrals over a large number of variables. These integrals do, however, have a regular structure, and may be represented graphically as Feynman diagrams. A Feynman diagram is a contribution of a particular class of particle paths, which join and split as described by the diagram. Within the canonical formulation of quantum field theory, a Feynman diagram represents a term in the Wick expansion of the perturbative S-matrix. The transition amplitude is given as the matrix element of the S-matrix between the initial and final states of the quantum system. The amplitude for scattering is the sum of each possible interaction history over all possible intermediate particle states; the number of times the interaction Hamiltonian acts is the order of the perturbation expansion, and the time-dependent perturbation theory for fields is known as the Dyson series. When the intermediate states at intermediate times are energy eigenstates, the series is called old-fashioned perturbation theory. Feynman diagrams are much easier to keep track of than old-fashioned terms.
Each Feynman diagram is the sum of exponentially many old-fashioned terms; in a non-relativistic theory, there are no antiparticles and there is no doubling, so each Feynman diagram includes only one term. Feynman gave a prescription for calculating the amplitude for any given diagram from a field theory Lagrangian: the Feynman rules. In addition to their value as a mathematical tool, Feynman diagrams provide deep physical insight into the nature of particle interactions. Particles interact in every way available; in fact, intermediate virtual particles are allowed to propagate faster than light. The probability of each final state is then obtained by summing over all such possibilities. This is closely tied to the path integral formulation of quantum mechanics. After renormalization, calculations using Feynman diagrams match experimental results with very high accuracy. Feynman diagram and path integral methods are also used in statistical mechanics and can even be applied to classical mechanics. Murray Gell-Mann always referred to Feynman diagrams as Stueckelberg diagrams, after the Swiss physicist Ernst Stueckelberg; Feynman had to lobby hard for the diagrams, which confused the establishment physicists trained in equations and graphs. In quantum field theories the Feynman diagrams are obtained from the Lagrangian by the Feynman rules. Dimensional regularization writes a Feynman integral as an integral depending on the spacetime dimension d and spacetime points.
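The combinatorial growth behind "exponentially many old-fashioned terms" can be illustrated with the Wick expansion mentioned above. As a loosely related sketch (the counting formula is standard, but not stated in the article), Wick's theorem reduces a free-field vacuum expectation value of 2n operators to a sum over complete pairings, and the number of such pairings is the double factorial (2n−1)!! = 1·3·5···(2n−1):

```python
def wick_pairings(n_fields: int) -> int:
    """Number of complete pairings (contractions) of n_fields operators.

    Wick's theorem writes a free-field vacuum expectation value as a sum
    over all ways of pairing the operators: (n-1)!! pairings for even n,
    and zero for odd n (an unpaired operator annihilates the vacuum).
    """
    if n_fields % 2 == 1:
        return 0
    result = 1
    for k in range(1, n_fields, 2):  # 1 * 3 * 5 * ... * (n_fields - 1)
        result *= k
    return result

# The number of contractions grows factorially with the number of fields:
print([wick_pairings(n) for n in (2, 4, 6, 8)])  # [1, 3, 15, 105]
```

This rapid growth is part of why organizing the perturbation series diagrammatically, rather than term by term, is such a practical advantage.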
3.
History of quantum field theory
–
In particle physics, the history of quantum field theory starts with its creation by Paul Dirac, when he attempted to quantize the electromagnetic field in the late 1920s. Major advances in the theory were made in the 1940s and 1950s; QED was so successful and accurately predictive that efforts were made to apply the same basic concepts to the other forces of nature. By the late 1970s, these efforts had succeeded in applying gauge theory to the strong nuclear force and the weak nuclear force. Efforts to describe gravity using the same techniques have, to date, been unsuccessful. The study of quantum field theory is still flourishing, as are applications of its methods to many physical problems; it remains one of the most vital areas of physics today. Quantum field theory originated in the 1920s from the problem of creating a quantum mechanical theory of the electromagnetic field. The earliest version of the theory assumed that no charges or currents were present; it is now understood that the ability to describe such processes is one of the most important features of quantum field theory. The final crucial step was Enrico Fermi's theory of β-decay. The need to put together relativity and quantum mechanics was the second major motivation in the development of quantum field theory. Pascual Jordan and Wolfgang Pauli showed in 1928 that quantum fields could be made to behave in the way predicted by special relativity during coordinate transformations. The Dirac equation accommodated the spin-1/2 value of the electron and accounted for its magnetic moment, as well as giving accurate predictions for the spectra of hydrogen. Further work was performed first by Dirac himself, with the invention of hole theory in 1930, and by Wendell Furry, Robert Oppenheimer, and Vladimir Fock. All relativistic wave equations that describe spin-zero particles are said to be of the Klein–Gordon type. This limitation is crucial for the formulation and interpretation of a quantum field theory of photons and electrons.
The analysis of Bohr and Rosenfeld explains fluctuations in the values of the field that differ from the classically allowed values distant from the sources of the field. Their analysis was crucial to showing that the limitations and physical implications of the uncertainty principle apply to all dynamical systems. The third thread in the development of quantum field theory was the need to handle the statistics of many-particle systems consistently and with ease. This thread of development was incorporated into many-body theory and strongly influenced condensed matter physics. Despite its early successes, quantum field theory was plagued by several serious theoretical difficulties. The situation was dire, and had features that reminded many of the Rayleigh–Jeans catastrophe.
4.
Field (physics)
–
In physics, a field is a physical quantity, typically a number or tensor, that has a value for each point in space and time. For example, on a weather map the surface wind velocity is described by assigning a vector to each point on the map; each vector represents the speed and direction of the movement of air at that point. As another example, an electric field can be thought of as a condition in space emanating from an electric charge and extending throughout the whole of space. When a test electric charge is placed in this electric field, it experiences a force. Physicists have found the notion of a field to be of such practical utility for the analysis of forces that they have come to think of a force as due to a field. In the modern framework of the theory of fields, even without referring to a test particle, a field occupies space and contains energy. This led physicists to consider electromagnetic fields to be a physical entity; the fact that the electromagnetic field can possess momentum and energy makes it very real. A particle makes a field, and a field acts on another particle. In practice, the strength of most fields diminishes with distance to the point of being undetectable; one consequence is that the Earth's gravitational field quickly becomes undetectable on cosmic scales. A field has a unique tensorial character at every point where it is defined, i.e. a field cannot be a scalar field somewhere and a vector field somewhere else. For example, the Newtonian gravitational field is a vector field. Moreover, within each category, a field can be either a classical field or a quantum field, depending on whether it is characterized by numbers or quantum operators respectively. In fact, in quantum field theory an equivalent representation of a field is a field particle. To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects.
In the eighteenth century, a new quantity was devised to simplify the bookkeeping of all these gravitational forces. This quantity, the gravitational field, gave at each point in space the total gravitational force which would be felt by an object with unit mass at that point. The development of the independent concept of a field began in the nineteenth century with the development of the theory of electromagnetism. In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became more natural to take the field approach and express these laws in terms of electric and magnetic fields. The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields propagated at a finite speed. Maxwell, at first, did not adopt the modern concept of a field as a fundamental quantity that could independently exist; instead, he supposed that the field expressed the deformation of some underlying medium, the luminiferous aether, much like the tension in a rubber membrane. If that were the case, the velocity of the electromagnetic waves should depend upon the velocity of the observer with respect to the aether. Despite much effort, no evidence of such an effect was ever found.
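The gravitational field described above, the force per unit mass at each point, can be sketched numerically. This is an illustrative computation under standard Newtonian assumptions (the constants and the 1/r² law are textbook values, not taken from the article):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of the Earth, kg

def gravitational_field(x, y, z, mass=M_EARTH):
    """Newtonian field vector g = -G*M*r_hat/r^2 at point (x, y, z),
    for a point mass at the origin: the force felt by a unit test mass."""
    r = math.sqrt(x * x + y * y + z * z)
    factor = -G * mass / r**3  # 1/r^2 magnitude combined with r_hat = r/|r|
    return (factor * x, factor * y, factor * z)

# At the Earth's surface (~6.371e6 m) the magnitude is the familiar ~9.8 m/s^2,
# and doubling the distance from the center cuts the field to a quarter:
g_surface = gravitational_field(6.371e6, 0.0, 0.0)
g_double = gravitational_field(2 * 6.371e6, 0.0, 0.0)
print(abs(g_surface[0]), abs(g_surface[0] / g_double[0]))
```

The rapid 1/r² falloff is exactly the sense in which the field "diminishes with distance to the point of being undetectable" on large scales.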
5.
Electromagnetism
–
Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles. The electromagnetic force is usually exhibited in electromagnetic fields such as electric fields, magnetic fields, and light. The other three fundamental interactions are the strong interaction, the weak interaction, and gravitation. The word electromagnetism is a compound form of two Greek terms, ἤλεκτρον (ēlektron, "amber") and μαγνῆτις λίθος (magnētis lithos, "magnesian stone"). The electromagnetic force plays a major role in determining the internal properties of most objects encountered in daily life. Ordinary matter takes its form as a result of forces between individual atoms and molecules in matter, which are a manifestation of the electromagnetic force. Electrons are bound to atomic nuclei by the electromagnetic force, which also determines their orbital shapes. The electromagnetic force governs the processes involved in chemistry, which arise from interactions between the electrons of neighboring atoms. There are numerous mathematical descriptions of the electromagnetic field; in classical electrodynamics, electric fields are described in terms of electric potential and electric current. Although electromagnetism is considered one of the four fundamental forces, at high energy the weak force and the electromagnetic force are unified as a single electroweak force. In the history of the universe, during the quark epoch, the unified force broke into the two separate forces as the universe cooled. Originally, electricity and magnetism were considered to be two separate forces. Magnetic poles attract or repel one another in a manner similar to positive and negative charges, and always exist as pairs: every north pole is yoked to a south pole. An electric current inside a wire creates a corresponding magnetic field outside the wire; its direction depends on the direction of the current in the wire.
A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or when a magnet is moved toward or away from it. While preparing for a lecture on 21 April 1820, Hans Christian Ørsted made a surprising observation. As he was setting up his materials, he noticed a compass needle deflect away from magnetic north when the electric current from the battery he was using was switched on. At the time of discovery, Ørsted did not suggest any explanation of the phenomenon. However, three months later he began more intensive investigations.
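The field that Ørsted's compass needle responded to has a simple quantitative form for a long straight wire. As a hedged sketch (the formula is the standard Ampère's-law result, not given in the article):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def wire_field_magnitude(current_amps: float, distance_m: float) -> float:
    """Magnitude of the magnetic field at distance r from a long straight
    wire carrying current I: B = mu_0 * I / (2 * pi * r)."""
    return MU_0 * current_amps / (2 * math.pi * distance_m)

# A 1 A current produces 2e-5 T at 1 cm, comparable to the Earth's own
# field, which is why the compass needle visibly deflects. Doubling the
# distance halves the field.
b_near = wire_field_magnitude(1.0, 0.01)
b_far = wire_field_magnitude(1.0, 0.02)
print(b_near, b_near / b_far)  # 2e-05, 2.0
```

The direction of this field circles the wire (right-hand rule), which is the directional dependence on the current noted above.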
6.
Weak interaction
–
In particle physics, the weak interaction is one of the four known fundamental interactions of nature, alongside the strong interaction, electromagnetism, and gravitation. The weak interaction is responsible for radioactive decay, which plays an essential role in nuclear fission. The theory of the weak interaction is sometimes called quantum flavourdynamics (QFD), in analogy with the term quantum chromodynamics (QCD) for the theory of the strong interaction. However, the term QFD is rarely used, because the weak force is best understood in terms of electroweak theory. The Standard Model of particle physics, which does not address gravity, provides a uniform framework for understanding the electromagnetic, weak, and strong interactions. An interaction occurs when two particles, typically but not necessarily half-integer spin fermions, exchange integer-spin, force-carrying bosons. The fermions involved in such exchanges can be either elementary or composite, although at the deepest levels all weak interactions ultimately are between elementary particles. In the case of the weak interaction, fermions can exchange three distinct types of force carriers known as the W+, W−, and Z bosons. The mass of each of these bosons is far greater than the mass of a proton or neutron. The force is in fact termed weak because its field strength over a given distance is typically several orders of magnitude less than that of the strong nuclear force or the electromagnetic force. During the quark epoch of the universe, the electroweak force separated into the electromagnetic and weak forces. Important examples of the weak interaction include beta decay, and the fusion of hydrogen into deuterium that powers the Sun's thermonuclear process. Most fermions will decay by a weak interaction over time. Such decay makes radiocarbon dating possible, as carbon-14 decays through the weak interaction to nitrogen-14.
It can also create radioluminescence, commonly used in tritium illumination. Quarks, which make up composite particles like neutrons and protons, come in six flavours (up, down, strange, charm, top, and bottom) which give those composite particles their properties. The weak interaction is unique in that it allows quarks to swap their flavour for another; the swapping of those properties is mediated by the force carrier bosons. The weak interaction is also the only fundamental interaction that breaks parity symmetry, and similarly the only one to break charge-parity (CP) symmetry. In 1933, Enrico Fermi proposed the first theory of the weak interaction, suggesting that beta decay could be explained by a four-fermion interaction involving a contact force with no range. However, the interaction is now better described as a non-contact force field having a finite, albeit very short, range. The existence of the W and Z bosons was not directly confirmed until 1983. The weak interaction is unique in a number of respects: it is the only interaction capable of changing the flavour of quarks, and it is the only interaction that violates P, or parity symmetry.
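The radiocarbon dating mentioned above rests on the exponential decay law of the weak process that turns carbon-14 into nitrogen-14. A minimal sketch, assuming the standard carbon-14 half-life of about 5730 years (a textbook value, not stated in the article):

```python
import math

def remaining_fraction(age_years: float, half_life_years: float = 5730.0) -> float:
    """Fraction of an initial carbon-14 sample left after age_years,
    via N(t)/N0 = 2 ** (-t / T_half)."""
    return 2.0 ** (-age_years / half_life_years)

def age_from_fraction(fraction: float, half_life_years: float = 5730.0) -> float:
    """Invert the decay law: estimate a sample's age from its measured
    remaining carbon-14 fraction."""
    return -half_life_years * math.log2(fraction)

# After one half-life, half the carbon-14 remains; a sample with a quarter
# of its original carbon-14 is about two half-lives (~11,460 years) old.
print(remaining_fraction(5730.0))      # 0.5
print(round(age_from_fraction(0.25)))  # 11460
```

A laboratory measures the remaining fraction and `age_from_fraction` recovers the time since the organism stopped exchanging carbon with its environment.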
7.
Strong interaction
–
At the range of 10⁻¹⁵ m, the strong force is approximately 137 times as strong as electromagnetism, a million times as strong as the weak interaction, and 10³⁸ times as strong as gravitation. The strong nuclear force holds most ordinary matter together because it confines quarks into hadron particles such as the proton and the neutron; in addition, the strong force binds neutrons and protons to create atomic nuclei. Most of the mass of a proton or neutron is the result of the strong force field energy. The strong interaction is observable at two ranges. On the larger scale, it is the force that binds protons and neutrons together to form the nucleus of an atom. On the smaller scale, it is the force that holds quarks together to form protons, neutrons, and other hadron particles; in the latter context, it is known as the color force. The strong force inherently has such a high strength that hadrons bound by it can produce new massive particles. Thus, if hadrons are struck by high-energy particles, they give rise to new hadrons instead of emitting freely moving radiation. This property of the strong force is called color confinement, and it prevents the free "emission" of the strong force: instead, in practice, jets of massive particles are produced. In the context of binding protons and neutrons together to form atomic nuclei, the strong interaction is the residuum of the strong interaction between the quarks that make up the protons and neutrons. As such, the residual strong interaction obeys a quite different distance-dependent behavior between nucleons from when it is acting to bind quarks within nucleons. The binding energy that is released on the breakup of a nucleus is related to the residual strong force and is harnessed as fission energy in nuclear power. The strong interaction is mediated by the exchange of particles called gluons that act between quarks, antiquarks, and other gluons. Gluons are thought to interact with quarks and other gluons by way of a type of charge called color charge.
Color charge is analogous to electromagnetic charge, but it comes in three types rather than one, which results in a different type of force with different rules of behavior. These rules are detailed in the theory of quantum chromodynamics (QCD), the theory of quark-gluon interactions. After the Big Bang, during the electroweak epoch of the universe, the electroweak force separated from the strong force. A Grand Unified Theory is hypothesized to describe this unification, but no such theory has yet been successfully formulated. Before the 1970s, physicists were uncertain as to how the atomic nucleus was bound together.
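The relative strengths quoted at the start of this section span an enormous range, which a short sketch makes explicit. These are the passage's own rough order-of-magnitude figures, normalized so the strong force is 1, not precise measurements:

```python
# Relative interaction strengths at ~1e-15 m, as quoted in the text above
# (strong force = 1). Rough order-of-magnitude values only.
RELATIVE_STRENGTH = {
    "strong": 1.0,
    "electromagnetic": 1.0 / 137,  # strong ~137x electromagnetism
    "weak": 1.0 / 1e6,             # strong ~a million times the weak force
    "gravitation": 1.0 / 1e38,     # strong ~1e38x gravitation
}

# Ranking from strongest to weakest at this range:
ranking = sorted(RELATIVE_STRENGTH, key=RELATIVE_STRENGTH.get, reverse=True)
print(ranking)  # ['strong', 'electromagnetic', 'weak', 'gravitation']
```

Note the ranking holds only at nuclear distances; at everyday scales, color confinement hides the strong force entirely.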
8.
Quantum mechanics
–
Quantum mechanics, including quantum field theory, is a branch of physics which is the fundamental theory of nature at the small scales and low energies of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, derives from quantum mechanics as an approximation valid only at large scales. Early quantum theory was profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms; in one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled "On the nature of light". This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. Planck's hypothesis that energy is radiated and absorbed in discrete quanta precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation; Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, Wien's law was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect. Among the first to study quantum phenomena in nature were Arthur Compton and C. V. Raman; Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits.
This phase is known as the old quantum theory. According to Planck, each energy element is proportional to its frequency: E = hν, where h is Planck's constant. Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. Planck won the 1918 Nobel Prize in Physics for this work. Lower energy/frequency means increased time and vice versa: photons of differing frequencies all deliver the same amount of action, but do so in varying time intervals. High frequency waves are damaging to human tissue because they deliver their action packets concentrated in time. The Copenhagen interpretation of Niels Bohr became widely accepted. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons.
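Planck's relation E = hν above can be applied directly. A minimal sketch using the standard value of Planck's constant (the example frequencies are illustrative choices, not from the article):

```python
H_PLANCK = 6.626e-34  # Planck's constant, J*s

def photon_energy(frequency_hz: float) -> float:
    """Energy of one quantum of radiation via Planck's relation E = h * nu."""
    return H_PLANCK * frequency_hz

# A quantum of green visible light (~5.5e14 Hz) carries ~3.6e-19 J,
# while an X-ray quantum (~1e18 Hz) carries proportionally more energy,
# which is why high-frequency radiation is the damaging kind.
e_green = photon_energy(5.5e14)
e_xray = photon_energy(1e18)
print(e_green, e_xray / e_green)
```

The linear proportionality to frequency is the whole content of the relation: each quantum's energy scales with ν, with h as the fixed constant of proportionality.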
9.
Special relativity
–
In physics, special relativity is the generally accepted and experimentally well-confirmed physical theory regarding the relationship between space and time. In Albert Einstein's original pedagogical treatment, it is based on two postulates: (1) the laws of physics are invariant in all inertial systems; and (2) the speed of light in a vacuum is the same for all observers, regardless of the motion of the light source. It was originally proposed in 1905 by Albert Einstein in the paper "On the Electrodynamics of Moving Bodies". As of today, special relativity is the most accurate model of motion at any speed; even so, the Newtonian mechanics model remains useful as an approximation at small velocities relative to the speed of light. The theory only became known as "special" after Einstein developed general relativity to incorporate general frames of reference; a translation that has often been used is "restricted relativity", since "special" really means "special case". Special relativity replaced the notion of an absolute universal time with the notion of a time that is dependent on reference frame. Rather than an invariant time interval between two events, there is an invariant spacetime interval. A defining feature of special relativity is the replacement of the Galilean transformations of Newtonian mechanics with the Lorentz transformations. Time and space cannot be defined separately from each other; rather, space and time are interwoven into a single continuum known as spacetime. Events that occur at the same time for one observer can occur at different times for another. The theory is "special" in that it applies only in the special case where the curvature of spacetime due to gravity is negligible. In order to include gravity, Einstein formulated general relativity in 1915. Special relativity, contrary to some outdated descriptions, is capable of handling accelerations as well as accelerated frames of reference.
In conditions of free fall, a locally Lorentz-invariant frame that abides by special relativity can be defined at sufficiently small scales, even in curved spacetime. Galileo Galilei had already postulated that there is no absolute and well-defined state of rest; Einstein extended this principle so that it accounted for the constant speed of light, a phenomenon that had been recently observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics. Einstein discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of the known laws of either mechanics or electrodynamics. These propositions were the constancy of the speed of light and the independence of physical laws from the choice of inertial system. The Principle of Invariant Light Speed: light is always propagated in empty space with a definite velocity c which is independent of the state of motion of the emitting body; that is, light in vacuum propagates with the speed c in at least one system of inertial coordinates. Following Einstein's original presentation of special relativity in 1905, many different sets of postulates have been proposed in various alternative derivations; however, the most common set of postulates remains those employed by Einstein in his original paper.
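The invariant spacetime interval mentioned above can be checked numerically. A minimal sketch in one spatial dimension, using the standard Lorentz boost formulas (the specific event coordinates and boost speed are arbitrary illustrative choices):

```python
import math

def lorentz_boost(ct: float, x: float, beta: float) -> tuple:
    """Boost the event (ct, x) along x by velocity v = beta*c:
    ct' = gamma*(ct - beta*x), x' = gamma*(x - beta*ct)."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return gamma * (ct - beta * x), gamma * (x - beta * ct)

def interval_sq(ct: float, x: float) -> float:
    """Spacetime interval s^2 = (ct)^2 - x^2 (one spatial dimension)."""
    return ct * ct - x * x

# The time and space coordinates of an event each change between frames,
# but the spacetime interval does not:
ct, x = 5.0, 3.0
ct2, x2 = lorentz_boost(ct, x, beta=0.6)
print((ct2, x2), interval_sq(ct, x), interval_sq(ct2, x2))
```

Repeating the check for any `beta` between -1 and 1 gives the same interval, which is exactly the sense in which the interval, rather than time or distance separately, is the frame-independent quantity.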
10.
General relativity
–
General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present; the relation is specified by the Einstein field equations, a system of partial differential equations. Some predictions of general relativity differ from those of classical physics; examples of such differences include gravitational time dilation, gravitational lensing, and the gravitational redshift of light. The predictions of general relativity have been confirmed in all observations to date. Although general relativity is not the only relativistic theory of gravity, it is the simplest theory that is consistent with experimental data. Einstein's theory has important astrophysical implications; for example, it implies the existence of black holes (regions of space in which space and time are distorted in such a way that nothing, not even light, can escape) as an end-state for massive stars. The bending of light by gravity can lead to the phenomenon of gravitational lensing. General relativity also predicts the existence of gravitational waves, which have since been observed directly by the LIGO collaboration. In addition, general relativity is the basis of current cosmological models of an expanding universe. Soon after publishing the special theory of relativity in 1905, Einstein started thinking about how to incorporate gravity into his new relativistic framework. He began in 1907 with a thought experiment involving an observer in free fall. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. The Einstein field equations are nonlinear and very difficult to solve.
Einstein used approximation methods in working out initial predictions of the theory, but as early as 1916 the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse. In 1917, Einstein applied his theory to the universe as a whole; in line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations, the cosmological constant, to match that observational presumption. By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life.
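The Schwarzschild solution above defines a characteristic radius for any mass. As a hedged sketch using standard constant values (the formula r_s = 2GM/c² is the textbook result for the Schwarzschild horizon, not derived in the article):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """r_s = 2*G*M/c^2: the horizon radius of the Schwarzschild solution,
    inside which not even light can escape."""
    return 2.0 * G * mass_kg / C**2

# For one solar mass the horizon is roughly 3 km. The Sun's actual radius
# (~7e8 m) is vastly larger, so the Sun is nowhere near being a black hole;
# only a star compressed inside its own r_s becomes one.
print(schwarzschild_radius(M_SUN))  # ~2.95e3 m
```

Comparing an object's size with its Schwarzschild radius is the simplest way to see why black holes arise only as an end-state of gravitational collapse.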
11.
Gauge theory
–
In physics, a gauge theory is a type of field theory in which the Lagrangian is invariant under a continuous group of local transformations. An invariant quantity is one that stays the same no matter which such transformation is applied; this is the concept behind gauge invariance. The idea of fields, as described by Michael Faraday in his study of electromagnetism, led to the postulate that fields could be described mathematically as scalars and vectors. When a field is transformed but the measurable result is not, applying gauge theory creates a unification: it describes mathematical formulas or models that hold good for all fields of the same class. The term gauge refers to any specific mathematical formalism to regulate redundant degrees of freedom in the Lagrangian. The transformations between possible gauges, called gauge transformations, form a Lie group, referred to as the symmetry group or the gauge group of the theory. Associated with any Lie group is the Lie algebra of group generators; for each group generator there necessarily arises a corresponding field called the gauge field. Gauge fields are included in the Lagrangian to ensure its invariance under the local group transformations, and when such a theory is quantized, the quanta of the gauge fields are called gauge bosons. If the symmetry group is non-commutative, the theory is referred to as non-abelian. Many powerful theories in physics are described by Lagrangians that are invariant under some symmetry transformation groups. When they are invariant under a transformation identically performed at every point in the spacetime in which the physical processes occur, they are said to have a global symmetry. Local symmetry, the cornerstone of gauge theories, is a stricter constraint; in fact, a global symmetry is just a local symmetry whose group's parameters are fixed in spacetime. Gauge theories are important as the field theories explaining the dynamics of elementary particles.
Quantum electrodynamics is a gauge theory with the symmetry group U(1) and has one gauge field, with the photon as its gauge boson. The Standard Model is a gauge theory with the symmetry group U(1)×SU(2)×SU(3) and has a total of twelve gauge bosons. Gauge theories are also important in explaining gravitation in the theory of general relativity; its case is unusual in that the gauge field is a tensor. Theories of quantum gravity, beginning with gauge gravitation theory, also postulate the existence of a gauge boson known as the graviton. Both gauge invariance and diffeomorphism invariance reflect a redundancy in the description of the system. An alternative theory of gravitation, gauge theory gravity, replaces the principle of covariance with a true gauge principle with new gauge fields. Historically, these ideas were first stated in the context of classical electromagnetism; however, the modern importance of gauge symmetries appeared first in the relativistic quantum mechanics of electrons, i.e. quantum electrodynamics.
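The U(1) symmetry of quantum electrodynamics can be given a toy numerical illustration. This sketch is an assumption-laden simplification (a bare complex field with no dynamics, not the QED Lagrangian): under a local U(1) transformation ψ(x) → e^{iα(x)}ψ(x), observables built from |ψ|² are unchanged even though the field itself changes from point to point.

```python
import cmath

def gauge_transform(psi_values, alpha):
    """Apply a position-dependent U(1) phase alpha(x) to samples of a
    complex field psi(x); x is just the sample index in this toy model."""
    return [cmath.exp(1j * alpha(x)) * psi for x, psi in enumerate(psi_values)]

psi = [1 + 2j, 0.5 - 1j, -3 + 0.25j]            # arbitrary field samples
psi_prime = gauge_transform(psi, alpha=lambda x: 0.7 * x * x)  # arbitrary alpha(x)

# |psi|^2 at each point is invariant under the local phase change:
densities = [abs(p) ** 2 for p in psi]
densities_prime = [abs(p) ** 2 for p in psi_prime]
print(all(abs(a - b) < 1e-12 for a, b in zip(densities, densities_prime)))  # True
```

In the full theory, terms with derivatives of ψ are not automatically invariant; restoring invariance is precisely what forces the introduction of the gauge field (the electromagnetic potential), as described above.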
12.
Symmetry (physics)
–
In physics, a symmetry of a physical system is a physical or mathematical feature of the system that is preserved or remains unchanged under some transformation. A family of particular transformations may be continuous or discrete, and continuous and discrete transformations give rise to corresponding types of symmetries. Continuous symmetries can be described by Lie groups while discrete symmetries are described by finite groups, and these two concepts, Lie and finite groups, are the foundation for the fundamental theories of modern physics. Symmetries are frequently amenable to mathematical formulations such as group representations. Another important example is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations, which is an important idea in general relativity. Invariance is specified mathematically by transformations that leave some property unchanged, and this idea can apply to basic real-world observations. For example, temperature may be homogeneous throughout a room; since the temperature does not depend on the position of an observer within the room, we say that the temperature is invariant under a shift in the observer's position within the room. Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation, so the sphere is said to exhibit spherical symmetry: a rotation about any axis of the sphere will preserve how the sphere looks. These ideas lead to the useful notion of invariance when discussing observed physical symmetry, and this can be applied to symmetries in forces as well. For example, the field of a uniformly charged wire exhibits cylindrical symmetry: rotating the wire about its own axis does not change its position or charge density, so the field strength at a rotated position is the same. This is not true in general for an arbitrary system of charges.
The total kinetic energy is preserved under a reflection in the y-axis. This last example illustrates another way of expressing symmetries, namely through the equations that describe some aspect of the physical system: since kinetic energy depends only on the squares of the velocity components, reflecting the velocities leaves it unchanged. Symmetries may be classified as global or local. Local symmetries play an important role in physics as they form the basis for gauge theories. The two examples of rotational symmetry described above, spherical and cylindrical, are each instances of continuous symmetry. These are characterised by invariance following a continuous change in the geometry of the system; for example, the wire may be rotated through any angle about its axis. Mathematically, continuous symmetries are described by continuous or smooth functions. An important subclass of continuous symmetries in physics are spacetime symmetries: continuous symmetries involving transformations of space and time. For example, in mechanics, a particle solely acted upon by gravity will have gravitational potential energy mgh when suspended from a height h above the Earth's surface.
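The invariance of kinetic energy under the reflection (vx, vy) ↦ (−vx, vy) can be checked directly, since kinetic energy depends only on squared velocity components. A minimal sketch with arbitrary masses and velocities:

```python
# Kinetic energy of a two-particle system before and after a reflection in the
# y-axis, which flips the sign of each x-velocity component: (vx, vy) -> (-vx, vy).

def kinetic_energy(masses, velocities):
    return sum(0.5 * m * (vx**2 + vy**2) for m, (vx, vy) in zip(masses, velocities))

masses = [1.0, 2.0]                       # arbitrary test masses
v = [(3.0, 1.0), (-1.5, 2.0)]             # arbitrary test velocities
v_reflected = [(-vx, vy) for vx, vy in v]

# The reflection only flips signs, so every squared component is unchanged.
assert kinetic_energy(masses, v) == kinetic_energy(masses, v_reflected)
```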
13.
Symmetry in quantum mechanics
–
In general, symmetry in physics, invariance, and conservation laws are fundamentally important constraints for formulating physical theories and models; in practice, they are powerful methods for solving problems and predicting what can happen. While conservation laws do not always give the answer to a problem directly, they form the correct constraints. The notational conventions used in this article are as follows: boldface indicates vectors, four-vectors, matrices, and vectorial operators, while wide hats are for operators and narrow hats are for unit vectors. The summation convention on repeated indices is used, unless stated otherwise. Generally, the correspondence between continuous symmetries and conservation laws is given by Noether's theorem, and this can be done for displacements, durations, and angles. Additionally, the invariance of certain quantities can be seen by making changes in lengths and angles. In what follows, transformations on only one-particle wavefunctions of the form ψ′ = Ω̂ψ are considered. Unitarity is generally required for operators representing transformations of space, time, and spin, since the norm of a state must be invariant under these transformations; the inverse is then the Hermitian conjugate, Ω̂⁻¹ = Ω̂†. The results can be extended to many-particle wavefunctions. Quantum operators representing observables are also required to be Hermitian so that their eigenvalues are real numbers, i.e. the operator equals its Hermitian conjugate. Following are the key points of group theory relevant to quantum theory; examples are given throughout the article (for an alternative approach using matrix groups, see the books of Hall). Let G be a Lie group parametrized by ξ₁, …, ξ_N; the dimension of the group, N, is the number of parameters it has. The generators X_a satisfy the commutator [X_a, X_b] = i f_abc X_c, where f_abc are the structure constants of the group.
This makes, together with the vector space property, the set of all generators of a group a Lie algebra; due to the antisymmetry of the bracket, the structure constants of the group are antisymmetric in the first two indices. The representations of the group are denoted using a capital D and are linear operators that take in group elements and preserve the composition rule: D(g₁)D(g₂) = D(g₁g₂). A representation which cannot be decomposed into a sum of other representations is called irreducible. It is conventional to label irreducible representations by a number n in brackets, as in D(n), or by several numbers if there is more than one. Representations also exist for the generators, and the notation of a capital D is used in this context as well; an example of this abuse of notation is to be found in the defining equation above.
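As a concrete instance of these commutation relations, the generators X_a = σ_a/2 of SU(2) built from the Pauli matrices have structure constants f_abc = ε_abc, the Levi-Civita symbol. A numerical verification sketch:

```python
import numpy as np

# Generators of su(2): X_a = sigma_a / 2, with structure constants f_abc = eps_abc.
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]
X = [s / 2 for s in sigma]

def eps(a, b, c):
    """Levi-Civita symbol for indices 0, 1, 2."""
    return (a - b) * (b - c) * (c - a) / 2

# Check [X_a, X_b] = i * eps_abc * X_c for all index pairs.
for a in range(3):
    for b in range(3):
        commutator = X[a] @ X[b] - X[b] @ X[a]
        rhs = sum(1j * eps(a, b, c) * X[c] for c in range(3))
        assert np.allclose(commutator, rhs)
```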
14.
Parity (physics)
–
In quantum mechanics, a parity transformation is the flip in the sign of one spatial coordinate. In three dimensions, it is often described as the simultaneous flip in the sign of all three spatial coordinates, P: (x, y, z) ↦ (−x, −y, −z). It can also be thought of as a test for chirality of a physical phenomenon, in that a parity transformation on something achiral can be viewed as an identity transformation. All fundamental interactions of particles, with the exception of the weak interaction, are symmetric under parity. The weak interaction is chiral and thus provides a means for probing chirality in physics; in interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions. A matrix representation of P has determinant equal to −1, and hence is distinct from a rotation, which has determinant 1. In a two-dimensional plane, a simultaneous flip of all coordinates in sign is not a parity transformation; it is the same as a 180° rotation. Under rotations, classical geometrical objects can be classified into scalars, vectors, and tensors of higher rank. In classical physics, physical configurations need to transform under representations of every symmetry group. Quantum theory predicts that states in a Hilbert space do not need to transform under ordinary representations of the group of rotations, but only under projective representations; the projective representations of any group are isomorphic to the ordinary representations of a central extension of the group. For example, projective representations of the 3-dimensional rotation group, which is the special orthogonal group SO(3), are ordinary representations of the special unitary group SU(2). Projective representations of the rotation group that are not ordinary representations are called spinors. If one adds to this a classification by parity, these can be extended, for example, into notions of vectors and axial vectors, which both transform as vectors under rotation but differ under parity.
One can define reflections such as V: (x, y, z) ↦ (−x, y, z), which also have negative determinant; combining them with rotations, one can recover the particular parity transformation defined earlier. The first parity transformation given does not work in an even number of dimensions, though, because it then has determinant +1 and is a rotation; in an even number of dimensions only the latter example of a parity transformation, the flip of a single coordinate, can be used. Parity forms the abelian group Z₂ due to the relation P² = 1. All abelian groups have only one-dimensional irreducible representations; for Z₂, there are two irreducible representations: one is even under parity, the other is odd. These are useful in quantum mechanics. Newton's equation of motion F = ma equates two vectors, and hence is invariant under parity. The law of gravity also involves only vectors and is also, therefore, invariant under parity. However, angular momentum L is an axial vector: L = r × p, so under parity P(L) = (−r) × (−p) = L.
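Both facts, that P has determinant −1 and that L = r × p is unchanged under parity, are easy to verify numerically. A quick sketch (the vectors r and p are arbitrary test values):

```python
import numpy as np

# Parity in three dimensions: P = -I, with det P = -1 (hence not a rotation).
P = -np.eye(3)
assert np.isclose(np.linalg.det(P), -1.0)

# Angular momentum L = r x p is an axial vector: unchanged under parity,
# since L -> (-r) x (-p) = r x p = L.
r = np.array([1.0, 2.0, 3.0])
p = np.array([-0.5, 0.4, 1.2])
L = np.cross(r, p)
L_parity = np.cross(P @ r, P @ p)
assert np.allclose(L, L_parity)

# In two dimensions, flipping both coordinates has determinant +1:
# it is a 180-degree rotation, not a parity transformation.
assert np.isclose(np.linalg.det(-np.eye(2)), 1.0)
```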
15.
T-symmetry
–
In theoretical physics, T-symmetry is the theoretical symmetry of physical laws under a time reversal transformation, T: t ↦ −t. Although in restricted contexts one may find this symmetry, the universe itself does not show symmetry under time reversal. Hence time is said to be non-symmetric, or asymmetric, except for equilibrium states, where the second law of thermodynamics predicts the time symmetry to hold. However, quantum measurements are predicted to violate time symmetry even in equilibrium, contrary to their classical counterparts. Time asymmetries are generally distinguished as between those intrinsic to the physical laws and those due to the initial conditions of our universe. Physicists also discuss the time-reversal invariance of local and/or macroscopic descriptions of physical systems, and our daily experience shows that T-symmetry does not hold for the behavior of bulk materials. Of these macroscopic laws, most notable is the second law of thermodynamics. Many other phenomena, such as the motion of bodies with friction, or the viscous motion of fluids, reduce to this. The question of whether this time-asymmetric dissipation is really inevitable has been considered by many physicists, often in the context of Maxwell's demon; the name comes from a thought experiment described by James Clerk Maxwell in which a microscopic demon guards a gate between two halves of a room. It only lets slow molecules into one half, and only fast ones into the other; by eventually making one side of the room cooler than before and the other hotter, it seems to reduce the entropy of the room and reverse the arrow of time. Many analyses have been made of this; all show that when the entropy of room and demon are taken together, the total entropy does increase. Modern analyses of this problem have taken into account Claude E. Shannon's relation between entropy and information. Many interesting results in modern computing are closely related to this problem; reversible computing, quantum computing, and physical limits to computing are examples.
These seemingly metaphysical questions are today, in some ways, slowly being converted into hypotheses of the physical sciences. The current consensus hinges upon the Boltzmann-Shannon identification of the logarithm of phase space volume with the negative of Shannon information, and hence with entropy. In this notion, a fixed initial state of a macroscopic system corresponds to relatively low entropy because the coordinates of the molecules of the body are constrained. As the system evolves in the presence of dissipation, the molecular coordinates can move into larger volumes of phase space, becoming more uncertain, and the entropy increases. One can, however, equally well imagine a state of the universe in which the motions of all of the particles at one instant were reversed; such a state would then evolve in reverse, so presumably entropy would decrease. Why is our state preferred over the other? One position is to say that the constant increase of entropy we observe happens only because of the initial state of our universe; other possible states of the universe would actually result in no increase of entropy. In this view, the apparent T-asymmetry of our universe is a problem in cosmology: why did the universe start with a low entropy?
16.
Translational symmetry
–
In geometry, a translation slides a figure by a vector a: T_a(p) = p + a. In physics and mathematics, continuous translational symmetry is the invariance of a system of equations under any translation; discrete translational symmetry is invariance under a discrete set of translations. More precisely, an operator A on functions is translationally invariant if it commutes with translations: A(T_δ f) = T_δ(A f) for every translation δ. Laws of physics are translationally invariant under a spatial translation if they do not distinguish different points in space; according to Noether's theorem, spatial translational symmetry of a physical system is equivalent to the momentum conservation law. Translational symmetry of an object means that a particular translation does not change the object. Fundamental domains are, e.g., H + [0, 1]a for any hyperplane H for which a has an independent direction. This is in 1D a line segment and in 2D an infinite strip; note that the strip and slab need not be perpendicular to the vector, hence can be narrower or thinner than the length of the vector. In spaces with dimension higher than 1, there may be multiple translational symmetries. For each set of k independent translation vectors, the symmetry group is isomorphic with Z^k; in particular, the multiplicity may be equal to the dimension. This implies that the object is infinite in all directions, and in this case the set of all translations forms a lattice. The absolute value of the determinant of the matrix formed by a set of translation vectors is the hypervolume of the n-dimensional parallelepiped the set subtends. This parallelepiped is a fundamental region of the symmetry; any pattern on or in it is possible. E.g. in 2D, instead of a and b we can take other pairs of vectors generating the same lattice: in general we can take pa + qb and ra + sb for integers p, q, r, and s with ps − qr equal to 1 or −1, which ensures that a and b themselves are integer linear combinations of the other two vectors. If not, not all translations are possible with the other pair. Each such pair a, b defines a parallelogram, all with the same area, the magnitude of the cross product.
One parallelogram fully defines the whole object; without further symmetry, this parallelogram is a fundamental domain. The vectors a and b can be represented by complex numbers; for two given lattice points, equivalence of choices of a third point to generate a lattice shape is represented by the modular group (see lattice). With rotational symmetry of order two of the pattern on the tile we have the group p2; the rectangle is then a more convenient unit to consider as fundamental domain than a parallelogram consisting of part of a tile and part of another one.
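That two bases related by an integer matrix with determinant ±1 span parallelograms of the same area follows from the multiplicativity of the determinant. A small numerical check, with arbitrary basis vectors a and b:

```python
import numpy as np

# Two bases generate the same 2D lattice when related by an integer matrix of
# determinant +-1; the parallelogram (fundamental domain) area is then unchanged.
a = np.array([2.0, 0.0])
b = np.array([0.5, 1.5])

# Change of basis: a' = 1*a + 1*b, b' = 1*a + 2*b (integer matrix, det = 1).
M = np.array([[1, 1], [1, 2]])
a2 = M[0, 0] * a + M[0, 1] * b
b2 = M[1, 0] * a + M[1, 1] * b

area = abs(np.linalg.det(np.column_stack([a, b])))
area2 = abs(np.linalg.det(np.column_stack([a2, b2])))
assert np.isclose(area, area2)  # same fundamental-domain area
```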
17.
Rotation symmetry
–
Rotational symmetry, also known as radial symmetry in biology, is the property a shape has when it looks the same after some rotation by a partial turn. An object's degree of rotational symmetry is the number of distinct orientations in which it looks the same. Formally, rotational symmetry is symmetry with respect to some or all rotations in m-dimensional Euclidean space; rotations are direct isometries, i.e. isometries preserving orientation. With the modified notion of symmetry for vector fields, the symmetry group can also be E⁺(m). For symmetry with respect to rotations about a point we can take that point as origin; these rotations form the special orthogonal group SO(m), the group of m×m orthogonal matrices with determinant 1. For m = 3 this is the rotation group SO(3). For chiral objects the rotation group is the same as the full symmetry group. Laws of physics are SO(3)-invariant if they do not distinguish different directions in space; because of Noether's theorem, rotational symmetry of a physical system is equivalent to the angular momentum conservation law. Note that 1-fold symmetry is no symmetry at all. The notation for n-fold symmetry is Cn or simply n; the actual symmetry group is specified by the point or axis of symmetry, and for each point or axis of symmetry, the abstract group type is the cyclic group of order n, Zn. The fundamental domain is a sector of 360°/n. If there is, e.g., rotational symmetry with respect to an angle of 100°, then there is also symmetry with respect to one of 20°, the greatest common divisor of 100° and 360°. A typical 3D object with rotational symmetry but no mirror symmetry is a propeller; the dihedral group Dn of order 2n is the rotation group of a regular prism, or regular bipyramid. With 4×3-fold and 3×2-fold axes we have the rotation group T of order 12 of a regular tetrahedron; the group is isomorphic to the alternating group A4. With 3×4-fold, 4×3-fold, and 6×2-fold axes we have the rotation group O of order 24 of a cube; the group is isomorphic to the symmetric group S4.
With 6×5-fold, 10×3-fold, and 15×2-fold axes we have the rotation group I of order 60 of a dodecahedron; the group is isomorphic to the alternating group A5 and contains 10 versions of D3 and 6 versions of D5. In the case of the Platonic solids, the 2-fold axes are through the midpoints of opposite edges, and the number of them is half the number of edges. Rotational symmetry with respect to any angle is, in two dimensions, circular symmetry; the fundamental domain is a half-line. In three dimensions we can distinguish cylindrical symmetry and spherical symmetry: no dependence on the angle using cylindrical coordinates, and no dependence on either angle using spherical coordinates. The fundamental domain is a half-plane through the axis, and a radial half-line, respectively. Axisymmetric or axisymmetrical are adjectives which refer to an object having cylindrical symmetry, or axisymmetry.
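The group orders quoted above follow directly from the axis counts: each n-fold axis contributes n − 1 nontrivial rotations, plus one identity element for the whole group. A quick sketch, with the axis lists taken from the text:

```python
# Order of a polyhedral rotation group from its axis counts:
# the identity plus (n - 1) nontrivial rotations for each n-fold axis.

def group_order(axes):
    """axes: list of (count, n) pairs, e.g. 4 three-fold axes -> (4, 3)."""
    return 1 + sum(count * (n - 1) for count, n in axes)

tetrahedron = [(4, 3), (3, 2)]             # rotation group T, isomorphic to A4
cube = [(3, 4), (4, 3), (6, 2)]            # rotation group O, isomorphic to S4
dodecahedron = [(6, 5), (10, 3), (15, 2)]  # rotation group I, isomorphic to A5

assert group_order(tetrahedron) == 12
assert group_order(cube) == 24
assert group_order(dodecahedron) == 60
```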
18.
Noether charge
–
Noether's theorem states that every differentiable symmetry of the action of a physical system has a corresponding conservation law. The theorem was proven by mathematician Emmy Noether in 1915 and published in 1918. The action of a physical system is the integral over time of a Lagrangian function, from which the system's behavior can be determined by the principle of least action. Noether's theorem is used in physics and the calculus of variations as a generalization of the formulations on constants of motion in Lagrangian and Hamiltonian mechanics; in particular, dissipative systems with continuous symmetries need not have a corresponding conservation law. The physical system itself need not be symmetric: a jagged asteroid tumbling in space conserves angular momentum despite its asymmetry, because it is the laws of its motion that are symmetric. Noether's theorem is important, both because of the insight it gives into conservation laws, and also as a practical calculational tool. It allows investigators to determine the conserved quantities from the observed symmetries of a physical system. Conversely, it allows researchers to consider whole classes of hypothetical Lagrangians with given invariants. As an illustration, suppose that a physical theory is proposed which conserves a quantity X; a researcher can calculate the types of Lagrangians that conserve X through a continuous symmetry, and, due to Noether's theorem, the properties of these Lagrangians provide further criteria to understand the implications and judge the fitness of the new theory. There are numerous versions of Noether's theorem, with varying degrees of generality: the original version applied only to ordinary differential equations and not partial differential equations, and the original versions also assume that the Lagrangian depends only upon the first derivative. There are natural quantum counterparts of this theorem, expressed in the Ward-Takahashi identities.
Generalizations of Noether's theorem to superspaces are also available. All fine technical points aside, Noether's theorem can be stated informally: if a system has a continuous symmetry property, then there are corresponding quantities whose values are conserved in time. A more sophisticated version of the theorem involving fields states that to every differentiable symmetry generated by local actions there corresponds a conserved current; the conservation law of a physical quantity is usually expressed as a continuity equation. The formal proof of the theorem utilizes the condition of invariance to derive an expression for a current associated with a conserved physical quantity. In modern terminology, the conserved quantity is called the Noether charge, and the flow carrying it the Noether current; the Noether current is defined up to a divergence-free vector field. A conservation law states that some quantity X in the mathematical description of a system's evolution remains constant throughout its motion: it is an invariant. Mathematically, the rate of change of X vanishes, dX/dt = 0. Such quantities are said to be conserved; they are often called constants of motion.
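The asteroid example above can be illustrated numerically: for any central force the dynamics are rotationally symmetric, so Noether's theorem guarantees conservation of angular momentum. A minimal sketch using a leapfrog integrator for planar motion in an attractive 1/r potential (units and initial conditions are arbitrary choices):

```python
# Rotational symmetry of a central potential implies conservation of angular
# momentum (Noether's theorem). Numerical check for planar motion with
# acceleration a = -r / |r|^3, integrated with a kick-drift-kick leapfrog.

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def ang_mom(x, y, vx, vy):
    return x * vy - y * vx  # z-component of L = r x v

x, y = 1.0, 0.0
vx, vy = 0.0, 0.8
dt = 1e-3
L0 = ang_mom(x, y, vx, vy)

for _ in range(20000):
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick (along r: dL = 0)
    x += dt * vx; y += dt * vy                 # drift (along v: dL = 0)
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick

# L is conserved to floating-point accuracy throughout the orbit.
assert abs(ang_mom(x, y, vx, vy) - L0) < 1e-6
```

Each leapfrog sub-step preserves L exactly: the kicks change v along r, and the drifts change r along v, so the cross product r × v is untouched up to rounding.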
19.
Anomaly (physics)
–
In quantum physics an anomaly or quantum anomaly is the failure of a symmetry of a theory's classical action to be a symmetry of any regularization of the full quantum theory. In classical physics, an anomaly is the failure of a symmetry to be restored in the limit in which the symmetry-breaking parameter goes to zero; perhaps the first known anomaly was the dissipative anomaly in turbulence. Technically, an anomalous symmetry in a quantum theory is a symmetry of the action, but not of the measure, and so not of the partition function as a whole. A global anomaly is associated with a global symmetry. The most prevalent global anomaly in physics is associated with the violation of scale invariance by quantum corrections: since regulators generally introduce a distance scale, classically scale-invariant theories are subject to renormalization group flow, i.e. changing behavior with energy scale. Anomalies in abelian global symmetries pose no problems in a field theory; in particular, the corresponding symmetries can be fixed by fixing the boundary conditions of the path integral. Global anomalies in symmetries that approach the identity sufficiently quickly at infinity do, however, pose problems. In known examples such symmetries correspond to disconnected components of gauge symmetries; as these symmetries vanish at infinity, they cannot be constrained by boundary conditions and so must be summed over in the path integral. The sum over the orbit of a state is a sum of phases which form a subgroup of U(1). As there is an anomaly, not all of these phases are the same, and therefore it is not the identity subgroup. The sum of the phases in every other subgroup of U(1) is equal to zero, and so all path integrals are equal to zero when there is such an anomaly, and the theory does not exist. An exception may occur when the space of configurations is itself disconnected; in this case the large gauge transformations do not act on the system and do not cause the path integral to vanish.
In SU(2) gauge theory in 4-dimensional Minkowski space, a gauge transformation corresponds to a choice of an element of the special unitary group SU(2) at each point in spacetime. Consider the group of gauge transformations that vanish at infinity. If the 3-sphere at infinity is identified with a point, our Minkowski space is identified with the 4-sphere; thus we see that the group of gauge transformations vanishing at infinity in Minkowski 4-space is isomorphic to the group of all gauge transformations on the 4-sphere. This group consists of a continuous choice of a gauge transformation in SU(2) for each point on the 4-sphere. Since the group manifold of SU(2) is the 3-sphere, these gauge symmetries are in one-to-one correspondence with maps from the 4-sphere to the 3-sphere. The space of such maps is not connected; instead the connected components are classified by the fourth homotopy group of the 3-sphere, which is the cyclic group of order two.
20.
Crossing (physics)
–
In quantum field theory, a branch of theoretical physics, crossing is the property of scattering amplitudes that allows antiparticles to be interpreted as particles going backwards in time; the only difference is that the value of the energy is negative for the antiparticle. The formal way to state this property is that the antiparticle scattering amplitudes are the analytic continuation of particle scattering amplitudes to negative energies. The interpretation of this statement is that the antiparticle is in every way a particle going backwards in time. Crossing was already implicit in the work of Feynman, but came into its own in the 1950s and 1960s as part of the analytic S-matrix program. Consider an amplitude and concentrate attention on one of the incoming particles with momentum p; the quantum field ϕ corresponding to the particle is allowed to be either bosonic or fermionic. Crossing symmetry states that we can relate the amplitude of this process to the amplitude of a similar process with an outgoing antiparticle ϕ̄ of momentum −p replacing the incoming particle ϕ: M(ϕ(p) + … → …) = M(… → … + ϕ̄(−p)). In the bosonic case, the idea behind crossing symmetry can be understood intuitively using Feynman diagrams. Consider any process involving an incoming particle with momentum p; for the particle to give a contribution to the amplitude, it must interact with the other particles, after which it emerges as momenta q₁, …, qₙ, and conservation of momentum implies Σₖ qₖ = p. In the fermionic case, one can apply the same argument, but now the relative phase convention for the external spinors must be taken into account. For example, the annihilation of an electron with a positron into two photons is related to a scattering of an electron with a photon by crossing symmetry. This relation allows one to calculate the scattering amplitude of one process from the amplitude of the other process if negative values of the energy of some particles are substituted. See also: Feynman-Stueckelberg interpretation, Feynman diagram, Regge theory, detailed balance. Reference: Peskin, M. and Schroeder, D., An Introduction to Quantum Field Theory.
21.
Effective field theory
–
In physics, an effective field theory is a type of approximation to an underlying physical theory, such as a quantum field theory or a statistical mechanics model. Intuitively, one averages over the behavior of the underlying theory at shorter length scales to derive what is hoped to be a simplified model at longer length scales. Effective field theories typically work best when there is a large separation between the length scale of interest and the length scale of the underlying dynamics. Effective field theories have found use in particle physics, statistical mechanics, condensed matter physics, and general relativity. They simplify calculations, and allow treatment of dissipation and radiation effects. Presently, effective field theories are discussed in the context of the renormalization group, where the process of integrating out short-distance degrees of freedom is made systematic. Although this method is not sufficiently concrete to allow the actual construction of effective field theories, it makes their usefulness clear, and it also lends credence to the technique of constructing effective field theories. If there is a single mass scale M in the microscopic theory, the effective field theory can be seen as an expansion in 1/M; the construction of an effective field theory accurate to some power of 1/M requires a new set of free parameters at each order of the expansion in 1/M. This technique is useful for scattering or other processes where the momentum scale k satisfies the condition k/M ≪ 1, since effective field theories are not valid at small length scales. The best-known example of an effective field theory is the Fermi theory of beta decay. This theory was developed during the early study of weak decays of nuclei, when only the hadrons and leptons undergoing weak decay were known. The typical reactions studied were n → p + e⁻ + ν̄ₑ and μ⁻ → e⁻ + ν̄ₑ + ν_μ. This theory posited a pointlike interaction between the four fermions involved in these reactions. The theory had great phenomenological success and was eventually understood to arise from the gauge theory of electroweak interactions.
In this more fundamental theory, the interactions are mediated by a gauge boson, the W. The immense success of the Fermi theory was because the W particle has a mass of about 80 GeV, far above the energy scales of the decays it describes; such a separation of scales, by over 3 orders of magnitude, has not been met in any other situation as yet. Another famous example is the BCS theory of superconductivity; here the underlying theory is of electrons in a metal interacting with lattice vibrations called phonons. The phonons cause attractive interactions between some electrons, causing them to form Cooper pairs.
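The scale separation behind the Fermi theory can be made quantitative: the pointlike four-fermion approximation is accurate up to corrections of order (k/M_W)². A sketch taking a representative beta-decay momentum transfer of about 1 MeV (an illustrative value, not from the text):

```python
# Scale separation in the Fermi theory of beta decay: typical momentum
# transfer k in nuclear beta decay (~MeV) versus the W-boson mass (~80 GeV).
# The effective pointlike interaction is accurate up to corrections ~ (k/M_W)^2.

k = 1e-3      # representative momentum transfer in GeV (~1 MeV); illustrative
M_W = 80.4    # W boson mass in GeV

expansion_parameter = k / M_W
correction = expansion_parameter ** 2

assert expansion_parameter < 1e-3   # k/M_W << 1: the EFT expansion converges fast
assert correction < 1e-9            # the pointlike approximation is excellent
```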
22.
Vacuum expectation value
–
In quantum field theory the vacuum expectation value of an operator is its average, or expected, value in the vacuum. The vacuum expectation value of an operator O is usually denoted by ⟨O⟩. One of the most widely used, but controversial, examples of an observable physical effect that results from the vacuum expectation value of an operator is the Casimir effect. This concept is important for working with correlation functions in quantum field theory. It is also important in spontaneous symmetry breaking. Examples are: the Higgs field has a vacuum expectation value of 246 GeV, and this nonzero value underlies the Higgs mechanism of the Standard Model; the chiral condensate in quantum chromodynamics, about a factor of a thousand smaller than the above, gives a large effective mass to quarks, and underlies the bulk of the mass of most hadrons; the gluon condensate in quantum chromodynamics may also be partly responsible for masses of hadrons. The observed Lorentz invariance of space-time allows only the formation of condensates which are Lorentz scalars and have vanishing charge. Thus fermion condensates must be of the form ⟨ψ̄ψ⟩, where ψ is the fermion field; similarly a tensor field, G_μν, can only have a scalar expectation value such as ⟨G_μν G^μν⟩. In some vacua of string theory, however, non-scalar condensates are found; if these describe our universe, then Lorentz symmetry violation may be observable. See also: Wightman axioms, correlation function, vacuum energy, dark energy, spontaneous symmetry breaking.
23.
Lattice gauge theory
–
In physics, lattice gauge theory is the study of gauge theories on a spacetime that has been discretized into a lattice. Gauge theories are important in particle physics, and include the prevailing theories of elementary particles: quantum electrodynamics and quantum chromodynamics. Non-perturbative gauge theory calculations in continuous spacetime formally involve evaluating an infinite-dimensional path integral; by working on a discrete spacetime, the path integral becomes finite-dimensional, and can be evaluated by stochastic simulation techniques such as the Monte Carlo method. When the size of the lattice is taken infinitely large and its sites infinitesimally close to each other, the continuum gauge theory is recovered. In lattice gauge theory, the spacetime is Wick rotated into Euclidean space and discretized into a lattice with sites separated by distance a and connected by links. In the most commonly considered cases, such as lattice QCD, fermion fields are defined at lattice sites, while the gauge fields are defined on the links; that is, an element U of the compact Lie group G is assigned to each link. Hence to simulate QCD, with Lie group SU(3), a 3×3 unitary matrix is defined on each link. The link is assigned an orientation, with the inverse element corresponding to the same link with the opposite orientation. The Yang-Mills action is written on the lattice using Wilson loops. Given a faithful irreducible representation ρ of G, the lattice Yang-Mills action is the sum over all lattice faces F of the real component of the character (trace) over the n links e₁, …, eₙ in the Wilson loop: S = Σ_F −ℜ χ(ρ(U_{e₁} ⋯ U_{eₙ})). If ρ is a real representation, taking the real component is redundant, because even if the orientation of a Wilson loop is flipped, its contribution to the action remains unchanged. There are many possible lattice Yang-Mills actions, depending on which Wilson loops are used in the action; the simplest Wilson action uses only the 1×1 Wilson loop, and differs from the continuum action by lattice artifacts proportional to the small lattice spacing a.
By using more complicated Wilson loops to construct improved actions, lattice artifacts can be reduced to be proportional to a². Quantities such as particle masses are stochastically calculated using techniques such as the Monte Carlo method: gauge field configurations are generated with probabilities proportional to e^(−βS), the quantity of interest is calculated for each configuration, and the results are averaged. Calculations are often repeated at different lattice spacings a so that the result can be extrapolated to the continuum. Such calculations are often extremely computationally intensive, and can require the use of the largest available supercomputers. To reduce the computational burden, the so-called quenched approximation can be used, in which the fermion fields are treated as non-dynamical. While this was common in early lattice QCD calculations, dynamical fermions are now standard. These simulations typically utilize algorithms based upon molecular dynamics or microcanonical ensemble algorithms. The results of lattice QCD computations show, for example, that in a meson not only the quarks but also the gluon fields are important. Lattice gauge theory is also important for the study of quantum triviality by the real-space renormalization group. The most important information in the RG flow are its fixed points; the possible macroscopic states of the system, at a large scale, are given by this set of fixed points.
24.
LSZ reduction formula
–
In quantum field theory, the LSZ reduction formula is a method to calculate S-matrix elements from the time-ordered correlation functions of a quantum field theory. It is a step of the path that starts from the Lagrangian of some quantum field theory and leads to predictions of measurable quantities. It is named after the three German physicists Harry Lehmann, Kurt Symanzik and Wolfhart Zimmermann. The method, or variants thereof, has also turned out to be fruitful in other fields of theoretical physics; for example, in statistical physics it can be used to obtain a general formulation of the fluctuation-dissipation theorem. S-matrix elements are amplitudes of transitions between in states and out states. The easy way to build in and out states is to seek appropriate field operators that provide the right creation and annihilation operators; these fields are called, respectively, the in and out fields. The assumption that interactions fade away as particles drift far apart is named the adiabatic hypothesis. However, self-interaction never fades away completely. The field φin is indeed the in field we were seeking, as it describes the asymptotic behaviour of the interacting field as x⁰ → −∞, though this statement will be made more precise later. Yet the current j contains also self-interactions like those producing the mass shift from m₀ to m; these interactions do not fade away as particles drift apart, so much care must be used in establishing asymptotic relations between the interacting field and the in field. The correct prescription was developed by Lehmann, Symanzik and Zimmermann; with appropriate changes, the same steps can be followed to construct an out field that builds out states. In particular, the definition of the out field is φ(x) = √Z φ_out(x) + ∫ d⁴y Δ_adv(x − y) j(y), where Δ_adv is the advanced Green's function of the Klein–Gordon operator. For future convenience we start with the matrix element M = ⟨β out| T[φ(y₁) ⋯ φ(yₙ)] |α p in⟩, which is slightly more general than an S-matrix element. 
Indeed, M is the value of the time-ordered product of a number of fields φ(y₁) ⋯ φ(yₙ) between an out state and an in state. The out state can contain anything from the vacuum to a number of particles, and the in state contains at least a particle of momentum p. If there are no fields in the time-ordered product, then M is obviously an S-matrix element. The Lorentz-invariant measure is written as d̃p = d³p / ((2π)³ 2ωₚ), with ωₚ = √(p⃗² + m²). The probability amplitude for this process is given by M = ⟨β out|α in⟩; the situation considered will be the scattering of n b-type particles to n′ b-type particles. Suppose that the in state consists of n particles with momenta and spins, while the out state contains n′ particles of momenta and spins; the in and out states are then given by |α in⟩ = |p₁s₁, …, pₙsₙ⟩ and |β out⟩ = |k₁σ₁, …, k_{n′}σ_{n′}⟩. Extracting an in particle from |α in⟩ yields a free-field creation operator b†_{p₁s₁,in} acting on the state with one less particle.
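For reference, the general shape of the scalar LSZ reduction formula can be written as follows. This is a sketch in one common convention; normalization factors such as powers of 2π vary between textbooks.

```latex
\langle k_1 \cdots k_{n'}\ \mathrm{out} \mid p_1 \cdots p_n\ \mathrm{in} \rangle
= \prod_{i=1}^{n'} \frac{i}{\sqrt{Z}} \int \mathrm{d}^4 x_i\, e^{i k_i \cdot x_i}
  \left( \Box_{x_i} + m^2 \right)
  \prod_{j=1}^{n} \frac{i}{\sqrt{Z}} \int \mathrm{d}^4 y_j\, e^{-i p_j \cdot y_j}
  \left( \Box_{y_j} + m^2 \right)
  \langle \Omega \mid T\, \varphi(x_1) \cdots \varphi(x_{n'})\,
  \varphi(y_1) \cdots \varphi(y_n) \mid \Omega \rangle
```

Each Klein–Gordon operator amputates one external leg by cancelling the corresponding single-particle pole of the correlation function, leaving the on-shell S-matrix element.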
25.
Partition function (quantum field theory)
–
In quantum field theory, the partition function Z[J] is the generating functional of correlation functions. It is usually expressed by a functional integral of the form Z[J] = ∫ Dφ exp(i S[φ] + i ∫ d⁴x J(x) φ(x)). The partition function in quantum field theory is a special case of the mathematical partition function. The Dφ on the right-hand side means: integrate over all possible field configurations φ, with a phase given by the classical action S evaluated in that field configuration. The generating functional Z can be used to calculate path integrals using an auxiliary function J, called the source. The generating functional is the holy grail of any particular field theory: if you have an exact closed-form expression for Z[J] for a particular theory, you can obtain all of its correlation functions by functional differentiation with respect to J. Unlike the partition function in statistical mechanics, the partition function in quantum field theory contains an extra factor of i in front of the action, making the integrand complex rather than real. This i points to a deep connection between quantum field theory and the statistical theory of fields; the connection can be seen by Wick rotating the integrand in the exponential of the path integral. The i arises from the fact that the partition function in QFT calculates quantum-mechanical probability amplitudes between states, which take on values in a complex projective space, whereas the fields in statistical mechanics are random variables that are real-valued, as opposed to operators on a Hilbert space. Kleinert, Hagen, Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, World Scientific, ISBN 981-238-107-4.
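The role of Z[J] as a generating functional can be made concrete in zero dimensions, where the "field" is a single integration variable and the Euclidean (Wick-rotated) weight is real. The sketch below is an illustrative toy with an arbitrary Gaussian action S = aφ²/2; it verifies numerically that the second derivative of ln Z with respect to the source J reproduces the propagator 1/a.

```python
import math

# One-mode Euclidean "field theory": S[phi] = a * phi^2 / 2, with a = 2.
a = 2.0

def Z(J, lo=-10.0, hi=10.0, n=4000):
    # Z[J] = integral dphi exp(-S[phi] + J*phi), by a midpoint Riemann sum.
    dx = (hi - lo) / n
    return sum(math.exp(-a * x * x / 2 + J * x)
               for x in (lo + (k + 0.5) * dx for k in range(n))) * dx

# The connected two-point function is d^2 ln Z / dJ^2 at J = 0;
# for a Gaussian weight it must equal the propagator 1/a.
h = 1e-3
two_point = (math.log(Z(h)) - 2 * math.log(Z(0.0)) + math.log(Z(-h))) / h**2
```

For this toy model ln Z(J) = const + J²/(2a) exactly, so the finite-difference estimate lands on 1/a = 0.5 up to tiny quadrature and step-size errors.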
26.
Propagator
–
In quantum mechanics and quantum field theory, the propagator is a function that specifies the probability amplitude for a particle to travel from one place to another in a given period of time. Propagators may also be viewed as the inverse of the wave operator appropriate to the particle, and are, therefore, often called Green's functions. In non-relativistic quantum mechanics, the propagator gives the probability amplitude for a particle to travel from one spatial point at one time to another spatial point at a later time. It is the Green's function for the Schrödinger equation, and can also be written as K(x, t; x′, t′) = ⟨x| Û(t, t′) |x′⟩, where Û is the unitary time-evolution operator for the system taking states at time t′ to states at time t. The quantum mechanical propagator may also be found by using a path integral, where L denotes the Lagrangian of the system; the paths that are summed over move only forwards in time. In non-relativistic quantum mechanics, the propagator lets you find the state of a system given an initial state and a time interval: the new state is given by ψ(x, t) = ∫_{−∞}^{∞} ψ(x′, t′) K(x, t; x′, t′) dx′. If K only depends on the difference x − x′, this is a convolution of the initial state and the propagator; K is the kernel of an integral transform. For a time-translationally invariant system, the propagator only depends on the time difference t − t′, so it may be rewritten as K(x, t; x′, t′) = K(x, x′; t − t′). For the N-dimensional case, the propagator can be obtained as the product K = ∏_{q=1}^{N} K_q of one-dimensional propagators. In relativistic quantum mechanics and quantum field theory the propagators are Lorentz invariant; they give the amplitude for a particle to travel between two spacetime points. In quantum field theory, the theory of a free scalar field is a useful and simple example which serves to illustrate the concepts needed for more complicated theories. There are a number of possible propagators for free scalar field theory, and we now describe the most common ones. The position-space propagators are Green's functions for the Klein–Gordon equation: they are functions G(x, y) which satisfy (◻ₓ + m²) G(x, y) = −δ(x − y), where x, y are two points in Minkowski spacetime. 
◻ₓ = ∂²/∂t² − ∇² is the d'Alembertian operator acting on the x coordinates, and we shall restrict attention to 4-dimensional Minkowski spacetime. We can perform a Fourier transform of the equation for the propagator, obtaining (−p² + m²) G(p) = −1. This equation can be inverted in the sense of distributions, noting that the equation x f(x) = 1 has the solutions f(x) = 1/(x ± iε) = 1/x ∓ iπδ(x); below, we discuss the right choice of the sign arising from causality requirements. The solution is G(x, y) = (2π)⁻⁴ ∫ d⁴p e^{−ip·(x−y)} / (p² − m² ± iε), where p·(x − y) = p⁰(x⁰ − y⁰) − p⃗·(x⃗ − y⃗) is the 4-vector inner product. The different choices for how to deform the integration contour in the above expression lead to different forms for the propagator. The choice of contour is usually phrased in terms of the p⁰ integral: the integrand then has two poles at p⁰ = ±√(p⃗² + m²), so different choices of how to avoid these lead to different propagators.
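The non-relativistic statement ψ(x, t) = ∫ ψ(x′, t′) K(x, t; x′, t′) dx′ can be checked directly for a free particle in one dimension, whose propagator is known in closed form. The sketch below is illustrative only (units ħ = m = 1; the grid sizes are arbitrary numerical choices): it evolves a Gaussian wave packet for one unit of time and compares |ψ(0, 1)|² against the analytic spreading-packet result.

```python
import cmath, math

# Free-particle Schrödinger propagator in 1D, with hbar = m = 1:
# K(x, t; x', 0) = sqrt(1 / (2*pi*i*t)) * exp(i*(x - x')^2 / (2*t))
def K(x, xp, t):
    return cmath.sqrt(1 / (2j * math.pi * t)) * cmath.exp(1j * (x - xp) ** 2 / (2 * t))

def psi0(x):
    # Normalized Gaussian packet, pi^(-1/4) * exp(-x^2 / 2).
    return math.pi ** -0.25 * math.exp(-x * x / 2)

t, lo, hi, n = 1.0, -12.0, 12.0, 2400
dx = (hi - lo) / n
grid = [lo + (k + 0.5) * dx for k in range(n)]

def psi(x):
    # psi(x, t) as the convolution of the initial state with the propagator.
    return sum(K(x, xp, t) * psi0(xp) for xp in grid) * dx

# Analytic result for this packet:
# |psi(x, t)|^2 = exp(-x^2 / (1 + t^2)) / sqrt(pi * (1 + t^2)),
# so at x = 0, t = 1 the density is 1 / sqrt(2*pi).
prob0 = abs(psi(0.0)) ** 2
```

The highly oscillatory phase of K is tamed here by the Gaussian envelope of the initial packet, which is why a plain Riemann sum converges; for broader packets or longer times, a finer grid and wider integration window would be needed.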
27.
Quantization (physics)
–
In physics, quantization is the process of transition from a classical understanding of physical phenomena to a newer understanding known as quantum mechanics. It is a procedure for constructing a quantum field theory starting from a classical field theory; this is a generalization of the procedure for building quantum mechanics from classical mechanics. One also speaks of field quantization, as in the quantization of the electromagnetic field, where one refers to photons as field quanta. This procedure is basic to theories of particle physics, nuclear physics, and condensed matter physics. Quantization converts classical fields into operators acting on quantum states of the field theory; the lowest energy state is called the vacuum state. The reason for quantizing a theory is to deduce properties of materials, objects or particles through the computation of quantum amplitudes, which may be very complicated. Such computations have to deal with certain subtleties called renormalization, which, if neglected, can lead to nonsense results; the full specification of a quantization procedure therefore requires methods of performing renormalization. The first method to be developed for quantization of field theories was canonical quantization, and its use has left its mark on the language and interpretation of quantum field theory. Canonical quantization of a field theory is analogous to the construction of quantum mechanics from classical mechanics: the classical field is treated as a dynamical variable called the canonical coordinate. One introduces a commutation relation between the coordinate and its conjugate momentum which is exactly the same as the relation between a particle's position and momentum in quantum mechanics. Technically, one converts the field to an operator, through combinations of creation and annihilation operators; the field operator acts on quantum states of the theory. The lowest energy state is called the vacuum state, and the procedure is also called second quantization. 
This procedure can be applied to the quantization of any field theory, whether of fermions or bosons. There is also a way to perform a canonical quantization without having to resort to the non-covariant approach of foliating spacetime; this method is based upon a classical action, but is different from the functional integral approach. The method does not apply to all possible actions. It starts with the classical algebra of all functionals over the configuration space; this algebra is quotiented by the ideal generated by the Euler–Lagrange equations. Then, this quotient algebra is converted into a Poisson algebra by introducing a Poisson bracket derivable from the action, called the Peierls bracket.
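The step "one converts the field to an operator, through combinations of creation and annihilation operators" can be illustrated for a single mode, where the operators become matrices in the number basis. The sketch below is a pure-Python illustration with an arbitrary truncation dimension N = 6; the truncation necessarily spoils the commutator on the highest state, which is why exact canonical commutation relations require an infinite-dimensional Hilbert space.

```python
# Truncated matrix representation of the annihilation operator a for a
# harmonic oscillator: a|n> = sqrt(n)|n-1> in an N-dimensional number basis.
N = 6
a = [[0.0] * N for _ in range(N)]
for n in range(1, N):
    a[n - 1][n] = n ** 0.5
adag = [[a[j][i] for j in range(N)] for i in range(N)]  # transpose (real entries)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# Commutator [a, a_dagger] = a*adag - adag*a.
comm = [[x - y for x, y in zip(r1, r2)]
        for r1, r2 in zip(matmul(a, adag), matmul(adag, a))]
# [a, a_dagger] = 1 on every state except the highest one, where the
# truncation necessarily fails.
diag = [comm[i][i] for i in range(N)]
```

The first N − 1 diagonal entries equal 1, while the last entry is 1 − N, the finite-dimensional artifact of cutting off the number basis.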
28.
Renormalization
–
For example, a theory of the electron may begin by postulating an electron with an initial mass and charge. However, in quantum field theory this electron is surrounded by a cloud of virtual particles, such as photons. Taking these interactions into account shows that the electron-system in fact behaves as if it had a different mass and charge; renormalization replaces the originally postulated mass and charge with new numbers such that the observed mass and charge match those originally postulated. Renormalization specifies relationships between parameters in the theory when the parameters describing large distance scales differ from the parameters describing small distances. Physically, the pileup of contributions from an infinity of scales involved in a problem may result in infinities. When describing space and time as a continuum, certain statistical and quantum mechanical constructions are not well-defined; to define them, this continuum limit (the removal of the construction scaffolding of lattices at various scales) has to be taken carefully. Renormalization procedures are based on the requirement that certain physical quantities be equal to the observed values. Renormalization was first developed in quantum electrodynamics to make sense of infinite integrals in perturbation theory. All scales are linked in a broadly systematic way, and the actual physics pertinent to each is extracted with the specific computational techniques appropriate for it. Wilson clarified which variables of a system are crucial and which are redundant. Renormalization is distinct from regularization, another technique to control infinities by assuming the existence of new unknown physics at new scales. The problem of infinities first arose in the classical electrodynamics of point particles in the 19th century. The mass of a charged particle should include the mass–energy in its electrostatic field; assume that the particle is a charged spherical shell of radius rₑ. 
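For the shell model just introduced, the divergence can be exhibited in one line. The electrostatic energy of a spherical shell of charge e and radius rₑ, and the resulting observed mass, are (a standard classical-electromagnetism result, stated here for illustration):

```latex
E_{\mathrm{em}} = \frac{e^2}{8\pi\varepsilon_0 r_e}, \qquad
m_{\mathrm{obs}} = m_{\mathrm{bare}} + \frac{E_{\mathrm{em}}}{c^2}
               = m_{\mathrm{bare}} + \frac{e^2}{8\pi\varepsilon_0 r_e c^2}
```

As rₑ → 0 the electromagnetic contribution diverges, so a finite observed mass would require an infinite negative bare mass; this is the classical prototype of mass renormalization discussed in what follows.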
This implies that the point particle would have infinite inertia, making it unable to be accelerated. Renormalization: the total effective mass of a spherical charged particle includes the actual bare mass of the spherical shell in addition to the mass associated with its electrostatic field. If the shell's bare mass is allowed to be negative, it might be possible to take a consistent point limit; this was called renormalization, and Lorentz and Abraham attempted to develop a classical theory of the electron this way. This early work was the inspiration for later attempts at regularization and renormalization in quantum field theory. When calculating the electromagnetic interactions of charged particles, it is tempting to ignore the back-reaction of a particle's own field on itself. But this back-reaction is necessary to explain the friction on charged particles when they emit radiation. If the electron is assumed to be a point, the value of the back-reaction diverges, for the same reason that the mass diverges, because the field is inverse-square. The Abraham–Lorentz theory had a noncausal pre-acceleration: sometimes an electron would start moving before the force is applied. This is a sign that the point limit is inconsistent. In quantum electrodynamics at small coupling, the electromagnetic mass only diverges as the logarithm of the radius of the particle. When developing quantum electrodynamics in the 1930s, Max Born, Werner Heisenberg, Pascual Jordan, and Paul Dirac discovered that in perturbative corrections many integrals were divergent. The divergences appear in radiative corrections involving Feynman diagrams with closed loops of virtual particles in them. A virtual particle whose energy and momentum do not satisfy the relativistic energy–momentum relation for its observed mass is called off-shell. When there is a loop, the momentum of the particles involved in the loop is not uniquely determined by the energies and momenta of incoming and outgoing particles.
29.
Vacuum state
–
In quantum field theory, the vacuum state is the quantum state with the lowest possible energy; generally, it contains no physical particles. The term zero-point field is sometimes used as a synonym for the vacuum state of an individual quantized field. According to the present-day understanding of what is called the vacuum state or the quantum vacuum, it is by no means a simple empty space: according to quantum mechanics, the vacuum state is not truly empty but instead contains fleeting electromagnetic waves and particles that pop into and out of existence. The QED vacuum of quantum electrodynamics was the first vacuum of quantum field theory to be developed. QED originated in the 1930s, and in the late 1940s and early 1950s it was reformulated by Feynman, Tomonaga and Schwinger. Today the electromagnetic interactions and the weak interactions are unified in the theory of the electroweak interaction. The Standard Model is a generalization of the QED work to all the known elementary particles. Quantum chromodynamics is the portion of the Standard Model that deals with strong interactions; it is the object of study in the Large Hadron Collider and the Relativistic Heavy Ion Collider, and is related to the so-called vacuum structure of strong interactions. If the theory can be accurately described through perturbation theory, the vacuum expectation value of any field operator vanishes. For quantum field theories in which perturbation theory breaks down at low energies, field operators may have non-vanishing vacuum expectation values called condensates. In the Standard Model, the non-zero vacuum expectation value of the Higgs field, arising from spontaneous symmetry breaking, is the mechanism by which the other fields in the theory acquire mass. In many situations, the vacuum state can be defined to have zero energy. The vacuum state is associated with a zero-point energy, and this zero-point energy has measurable effects: in the laboratory, it may be detected as the Casimir effect. In physical cosmology, the energy of the cosmological vacuum appears as the cosmological constant; in fact, the energy of a cubic centimeter of empty space has been calculated figuratively to be one trillionth of an erg. 
An outstanding requirement imposed on a potential Theory of Everything is that the energy of the vacuum state must explain the physically observed cosmological constant. For a relativistic field theory, the vacuum is Poincaré invariant. Poincaré invariance implies that only scalar combinations of field operators have non-vanishing VEVs. The VEV may break some of the internal symmetries of the Lagrangian of the field theory; in this case the vacuum has less symmetry than the theory allows, and one says that spontaneous symmetry breaking has occurred. In principle, quantum corrections to Maxwell's equations can cause the experimental electrical permittivity ε of the vacuum state to deviate from the defined scalar value ε₀ of the electric constant.
30.
Wick's theorem
–
Wick's theorem is a method of reducing high-order derivatives to a combinatorics problem. It is named after Gian-Carlo Wick and is used extensively in quantum field theory to reduce arbitrary products of creation and annihilation operators to sums of products of pairs of these operators. This allows for the use of Green's function methods, and consequently the use of Feynman diagrams in the field under study. A more general idea in probability theory is Isserlis' theorem. Alternatively, contractions can be denoted by a line joining the operators Â and B̂. We shall look in detail at four special cases where Â and B̂ are equal to creation and annihilation operators. For N particles we'll denote the creation operators by â†ᵢ; they satisfy the usual commutation relations [âᵢ, â†ⱼ] = δᵢⱼ, where δᵢⱼ denotes the Kronecker delta. These relationships hold true for bosonic operators or fermionic operators because of the way normal ordering is defined, and we can use contractions and normal ordering to express any product of creation and annihilation operators as a sum of normal-ordered terms. This is the basis of Wick's theorem. Before stating the theorem fully, we shall look at some examples. We can use these relations, and the definition of contraction, to express products of the âᵢ and â†ᵢ in other ways. It is an even lengthier calculation for more complicated products; luckily, Wick's theorem provides a shortcut, and applying the theorem to the above examples provides a quicker method to arrive at the final expressions. A warning: in terms on the right-hand side containing multiple contractions, care must be taken when the operators are fermionic. In this case an appropriate minus sign must be introduced for every interchange of fermionic operators needed to bring a contracted pair next to each other; the contraction can then be applied. The contraction of two fields inside a time-ordered product is the time-ordered product minus the normal-ordered product, T(AB) − :AB:. Note that the contraction of a string of two field operators is a c-number. 
Applying this theorem to S-matrix elements, we discover that normal-ordered terms acting on the vacuum state give a null contribution to the sum; we conclude that m is even and only completely contracted terms remain. For example, for a quartic interaction v = gφ⁴, the vertex contributes v_i = φ_i φ_i φ_i φ_i. Note that this discussion is in terms of the usual definition of normal ordering, which is appropriate for the vacuum expectation values of fields. There are other definitions of normal ordering, and Wick's theorem is valid irrespective of the choice.
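The combinatorial content of the theorem, namely that a vacuum expectation value reduces to a sum over complete contractions, can be checked in miniature. For 2n operators there are (2n − 1)!! complete pairings, and by Isserlis' theorem the 2n-th moment of a unit Gaussian equals that same count. The sketch below (pure Python; the integration grid is an arbitrary numerical choice) enumerates the pairings and verifies ⟨x⁴⟩ = 3.

```python
import math

def pairings(elems):
    # Enumerate all ways to partition elems into unordered pairs
    # (the "complete contractions" of Wick's theorem).
    if not elems:
        return [[]]
    first, rest = elems[0], elems[1:]
    result = []
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in pairings(remaining):
            result.append([(first, partner)] + tail)
    return result

count4 = len(pairings([0, 1, 2, 3]))          # (4 - 1)!! = 3 pairings
count6 = len(pairings([0, 1, 2, 3, 4, 5]))    # (6 - 1)!! = 15 pairings

# Check <x^4> = 3 for a standard normal by direct numerical integration.
lo, hi, n = -10.0, 10.0, 20000
dx = (hi - lo) / n
moment4 = sum(x ** 4 * math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
              for x in (lo + (k + 0.5) * dx for k in range(n))) * dx
```

The agreement between the pairing count and the Gaussian moment is exactly what Wick's theorem asserts for free (Gaussian) fields, with each pairing corresponding to one product of two-point functions.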
31.
Wightman axioms
–
In physics the Wightman axioms are an attempt at a mathematically rigorous formulation of quantum field theory. Arthur Wightman formulated the axioms in the early 1950s, but they were first published only in 1964. One of the Millennium Prize Problems is to realize the Wightman axioms in the case of Yang–Mills fields. One basic idea of the Wightman axioms is that there is a Hilbert space upon which the Poincaré group acts unitarily; in this way, the concepts of energy, momentum, angular momentum and center of mass are implemented. There is also a stability assumption which restricts the spectrum of the four-momentum to the forward light cone. However, this isn't enough to implement locality. For that, the Wightman axioms have position-dependent operators called quantum fields which form covariant representations of the Poincaré group. Since quantum field theory suffers from ultraviolet problems, the value of a field at a point is not well-defined. To get around this, the Wightman axioms introduce the idea of smearing over a test function to tame the UV divergences, which arise even in a free field theory. Because the axioms are dealing with unbounded operators, the domains of the operators have to be specified. The Wightman axioms restrict the causal structure of the theory by imposing either commutativity or anticommutativity between spacelike separated fields. They also postulate the existence of a Poincaré-invariant state called the vacuum; moreover, the axioms assume that the vacuum is cyclic, i.e. that the vectors obtained by acting on the vacuum with polynomials in the smeared field operators are dense in the Hilbert space. Quantum mechanics is described according to von Neumann; in particular, in the following, the scalar product of Hilbert space vectors Ψ and Φ will be denoted by ⟨Ψ, Φ⟩, and the norm of Ψ will be denoted by ∥Ψ∥. The theory of symmetry is described according to Wigner; this is to take advantage of the successful description of relativistic particles by Eugene Paul Wigner in his famous paper of 1939. 
Wigner postulated the transition probability between states to be the same to all observers related by a transformation of special relativity. More generally, he considered the statement that a theory be invariant under a group G to be expressed in terms of the invariance of the transition probability between any two rays. The statement postulates that the group acts on the set of rays. Let g be an element of the Poincaré group; this being done for each group element, we get a family of unitary or antiunitary operators U(g) on our Hilbert space, such that the ray containing Ψ transformed by g is the same as the ray containing U(g)Ψ. If we restrict attention to elements of the group connected to the identity, the antiunitary case does not occur. Let g₁ and g₂ be two Poincaré transformations, and let us denote their group product by g₁g₂; from the physical interpretation we see that the ray containing U(g₁)U(g₂)ψ (for any ψ) must be the ray containing U(g₁g₂)ψ, so the two operators can differ only by a phase. These phases can't always be cancelled by redefining each U(g): for example, for particles of spin ½, Wigner showed that the best one can get is U(g₁)U(g₂) = ±U(g₁g₂), i.e. the phase is a multiple of π.
32.
Dirac equation
–
In particle physics, the Dirac equation is a relativistic wave equation derived by British physicist Paul Dirac in 1928. In its free form, or including electromagnetic interactions, it describes all spin-½ massive particles such as electrons and quarks. It was validated by accounting for the fine details of the hydrogen spectrum in a completely rigorous way. The equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved; moreover, in the limit of zero mass, the Dirac equation reduces to the Weyl equation. This accomplishment has been described as fully on a par with the works of Newton and Maxwell. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-½ particles. The Dirac equation in the form originally proposed by Dirac is (βmc² + c(α₁p₁ + α₂p₂ + α₃p₃)) ψ(x, t) = iħ ∂ψ(x, t)/∂t, where ψ is the wave function for the electron of rest mass m. The p₁, p₂, p₃ are the components of the momentum; also, c is the speed of light, and ħ is the Planck constant divided by 2π. These fundamental physical constants reflect special relativity and quantum mechanics, respectively. Dirac's purpose in casting this equation was to explain the behavior of the relativistically moving electron, and so to allow the atom to be treated in a manner consistent with relativity. His rather modest hope was that the corrections introduced this way might have a bearing on the problem of atomic spectra. The new elements in this equation are the 4×4 matrices αₖ and β, and the four-component wave function ψ. There are four components in ψ because the evaluation of it at any point in configuration space is a bispinor; it is interpreted as a superposition of a spin-up electron, a spin-down electron, a spin-up positron, and a spin-down positron. These matrices and the form of the wave function have a deep mathematical significance. The algebraic structure represented by the matrices had been created some 50 years earlier by the English mathematician W. K. Clifford. 
In turn, Clifford's ideas had emerged from the work of the German mathematician Hermann Grassmann in his Lineale Ausdehnungslehre. The latter had been regarded as well-nigh incomprehensible by most of his contemporaries. The appearance of something so seemingly abstract, at such a late date, and in such a direct physical manner, is one of the most remarkable chapters in the history of physics. The Dirac equation is superficially similar to the Schrödinger equation for a massive free particle, −(ħ²/2m)∇²φ = iħ ∂φ/∂t. The left side represents the square of the momentum operator divided by twice the mass, which is the non-relativistic kinetic energy; in the relativistic Klein–Gordon equation, by contrast, space and time derivatives both enter to second order. This has a consequence for the interpretation of the equation: because it is second order in the time derivative, one must specify initial values both of the wave function itself and of its first time derivative in order to solve definite problems.
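The algebraic structure mentioned above amounts to concrete matrix identities that are easy to verify. The sketch below (pure Python, using the standard Dirac representation, which is one common convention among several) builds the αₖ and β matrices from the Pauli matrices and checks the defining anticommutation relations of the Clifford algebra.

```python
# Pauli matrices and 2x2 building blocks.
I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
sigma = [[[0, 1], [1, 0]],          # sigma_x
         [[0, -1j], [1j, 0]],       # sigma_y
         [[1, 0], [0, -1]]]         # sigma_z

def block(A, B, C, D):
    # Assemble a 4x4 matrix from 2x2 blocks [[A, B], [C, D]].
    return [rA + rB for rA, rB in zip(A, B)] + [rC + rD for rC, rD in zip(C, D)]

alphas = [block(Z2, s, s, Z2) for s in sigma]    # alpha_k: off-diagonal sigma_k
beta = block(I2, Z2, Z2, [[-1, 0], [0, -1]])     # beta = diag(1, 1, -1, -1)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def anti(A, B):
    # Anticommutator {A, B} = AB + BA.
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] + BA[i][j] for j in range(4)] for i in range(4)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12 for i in range(4) for j in range(4))

I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
Z4 = [[0] * 4 for _ in range(4)]
two_I4 = [[2 * I4[i][j] for j in range(4)] for i in range(4)]

# {alpha_i, alpha_j} = 2*delta_ij*I,  {alpha_k, beta} = 0,  beta^2 = I.
checks = (
    all(close(anti(alphas[i], alphas[j]), two_I4 if i == j else Z4)
        for i in range(3) for j in range(3))
    and all(close(anti(a, beta), Z4) for a in alphas)
    and close(mul(beta, beta), I4)
)
```

These relations are exactly what forces the matrices to be at least 4×4 and the wave function to have four components: no set of 2×2 matrices satisfies all of them simultaneously.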
33.
Proca action
–
In physics, specifically field theory and particle physics, the Proca action describes a massive spin-1 field of mass m in Minkowski spacetime; the corresponding equation is a relativistic wave equation called the Proca equation. The Proca action and equation are named after Romanian physicist Alexandru Proca. The Proca equation is involved in the Standard Model, where it describes the three massive vector bosons, i.e. the Z and W bosons. This article uses the (+−−−) metric signature and tensor index notation in the language of 4-vectors. The field involved is a complex 4-potential Bμ = (φ/c, 𝐀), where φ is a generalized electric potential and 𝐀 a generalized magnetic vector potential. The field Bμ transforms like a complex four-vector. When m = 0, the equations reduce to Maxwell's equations without charge or current. The Proca equation is closely related to the Klein–Gordon equation, because it is second order in space and time. In vector calculus notation, the equations are ◻φ − ∂/∂t (∂φ/∂t + ∇·𝐀) = −μ²φ and ◻𝐀 + ∇(∂φ/∂t + ∇·𝐀) = −μ²𝐀, where μ = mc/ħ and ◻ is the d'Alembert operator. The Proca action is the gauge-fixed version of the Stueckelberg action via the Higgs mechanism. Quantizing the Proca action requires the use of second class constraints. If m ≠ 0, the equations are not invariant under the gauge transformations of electromagnetism Bμ → Bμ − ∂μf, where f is an arbitrary function.
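For reference, the Proca Lagrangian density whose variation yields these equations can be written as follows (a sketch assuming the (+−−−) signature; sign and unit conventions vary between sources):

```latex
\mathcal{L} = -\tfrac{1}{2}
  \left( \partial_\mu B_\nu^{*} - \partial_\nu B_\mu^{*} \right)
  \left( \partial^\mu B^\nu - \partial^\nu B^\mu \right)
  + \frac{m^2 c^2}{\hbar^2}\, B_\nu^{*} B^\nu
```

The Euler–Lagrange equations then give the covariant form of the Proca equation, ∂_μ(∂^μ B^ν − ∂^ν B^μ) + (m²c²/ħ²) B^ν = 0, which reduces to the vector-calculus pair above when written out in components.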
34.
Standard Model
–
The Standard Model of particle physics is a theory concerning the electromagnetic, weak, and strong interactions, as well as classifying all the known elementary particles. It was developed throughout the latter half of the 20th century. The current formulation was finalized in the mid-1970s upon experimental confirmation of the existence of quarks; since then, discoveries of the top quark, the tau neutrino, and the Higgs boson have given further credence to the Standard Model. It has had great success in explaining a wide variety of experimental results. Even so, it does not incorporate the full theory of gravitation as described by general relativity, or account for the accelerating expansion of the Universe. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology, and it also does not incorporate neutrino oscillations. The development of the Standard Model was driven by theoretical and experimental particle physicists alike; for theorists, the Standard Model is a paradigm of a quantum field theory. The first step towards the Standard Model was Sheldon Glashow's discovery in 1961 of a way to combine the electromagnetic and weak interactions. In 1967 Steven Weinberg and Abdus Salam incorporated the Higgs mechanism into Glashow's electroweak interaction, giving it its modern form. The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model, including the masses of the W and Z bosons. The W± and Z0 bosons were discovered experimentally in 1983, and the ratio of their masses was found to be as the Standard Model predicted. The theory of the strong interaction, to which many contributed, acquired its modern form around 1973–74. At present, matter and energy are best understood in terms of the kinematics and interactions of elementary particles; to date, physics has reduced the laws governing the behavior and interaction of all known forms of matter and energy to a small set of fundamental laws and theories. 
The Standard Model includes members of several classes of elementary particles, which can be summarized as follows. The Standard Model includes 12 elementary particles of spin ½ known as fermions. According to the spin–statistics theorem, fermions respect the Pauli exclusion principle. Each fermion has a corresponding antiparticle. The fermions of the Standard Model are classified according to how they interact: there are six quarks and six leptons. Pairs from each classification are grouped together to form a generation, with corresponding particles exhibiting similar physical behavior. The defining property of the quarks is that they carry color charge. A phenomenon called color confinement results in quarks being very strongly bound to one another, forming color-neutral composite particles containing either a quark and an antiquark or three quarks.
35.
Quantum electrodynamics
–
In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact, and it is the first theory where full agreement between quantum mechanics and special relativity is achieved. In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. At higher orders in the perturbation series, infinities emerged, making such computations meaningless. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics. Difficulties with the theory increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, now known as the Lamb shift. These experiments exposed discrepancies which the theory was unable to explain. A first indication of a possible way out was given by Hans Bethe in 1947, after attending the Shelter Island Conference. While he was traveling by train from the conference to Schenectady, he made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb. Despite the limitations of the computation, agreement was excellent. The idea was simply to attach the infinities to corrections of mass and charge that were actually fixed to a finite value by experiments; in this way, the infinities get absorbed in those constants. Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman were jointly awarded the 1965 Nobel Prize in Physics for their work in this area. Even though renormalization works very well in practice, Feynman was never comfortable with its mathematical validity, even referring to renormalization as a "shell game". 
QED has served as the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1975 work by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Near the end of his life, Richard P. Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman, QED: The Strange Theory of Light and Matter. The key components of Feynman's presentation of QED are three basic actions: a photon goes from one place and time to another place and time; an electron goes from one place and time to another place and time; an electron emits or absorbs a photon at a certain place and time. These can all be seen in the adjacent diagram. It is important not to over-interpret these diagrams: nothing is implied about how a particle gets from one point to another.
36.
Electroweak interaction
–
In particle physics, the electroweak interaction is the unified description of two of the four known fundamental interactions of nature: electromagnetism and the weak interaction. Although these two forces appear very different at low energies, the theory models them as two different aspects of the same force. Above the unification energy, on the order of 246 GeV, they would merge into a single electroweak force; thus, if the universe is hot enough, the electromagnetic force and weak force merge into a combined electroweak force. During the electroweak epoch, the electroweak force separated from the strong force; during the quark epoch, the electroweak force split into the electromagnetic and weak forces. In 1999, Gerardus 't Hooft and Martinus Veltman were awarded the Nobel Prize for showing that the electroweak theory is renormalizable. Mathematically, the unification is accomplished under an SU(2) × U(1) gauge group. The corresponding gauge bosons are the three W bosons of weak isospin from SU(2) and the B boson of weak hypercharge from U(1), all of which are massless. In the Standard Model, the W± and Z0 bosons and the photon are produced through the spontaneous breaking of the electroweak symmetry SU(2) × U(1)Y to U(1)em, caused by the Higgs mechanism. U(1)Y and U(1)em are different copies of U(1); the generator of U(1)em is given by Q = Y/2 + I₃, where Y is the generator of U(1)Y and I₃ is one of the SU(2) generators. The spontaneous symmetry breaking makes the W3 and B bosons coalesce into two different physical bosons, the Z0 boson and the photon, via the rotation γ = B cos θW + W3 sin θW, Z0 = −B sin θW + W3 cos θW, where θW is the weak mixing angle. The axes representing the particles have essentially just been rotated, in the (W3, B) plane, by the angle θW. This also introduces a mismatch between the mass of the Z0 and the mass of the W± particles, M_Z = M_W / cos θW. The W1 and W2 bosons, in turn, combine to give massive charged bosons W± = (W1 ∓ iW2)/√2. The Lagrangian for the electroweak interactions is divided into four parts before electroweak symmetry breaking: L_EW = L_g + L_f + L_h + L_y. The L_g term describes the interaction between the three W particles and the B particle, and L_f is the kinetic term for the Standard Model fermions. 
The interaction of the bosons and the fermions is through the gauge covariant derivative. The L_h term describes the Higgs field h:

L_h = |D_\mu h|^2 - \lambda\left(|h|^2 - \frac{v^2}{2}\right)^2

The L_y term gives the Yukawa interaction that generates the fermion masses after the Higgs acquires a nonzero vacuum expectation value. The Lagrangian reorganizes itself after the Higgs boson acquires a vacuum expectation value; due to its complexity, this Lagrangian is best described by breaking it up into several parts as follows. The neutral current L_N and charged current L_C components of the Lagrangian contain the interactions between the fermions and gauge bosons. The charged current part of the Lagrangian is given by

L_C = -\frac{g}{\sqrt{2}}\left[\bar{u}_i \gamma^\mu \frac{1-\gamma^5}{2} M^{CKM}_{ij} d_j + \bar{\nu}_i \gamma^\mu \frac{1-\gamma^5}{2} e_i\right] W_\mu^+ + \mathrm{h.c.}

L_H contains the Higgs three-point and four-point self-interaction terms:

L_H = -\frac{g m_H^2}{4 m_W} H^3 - \frac{g^2 m_H^2}{32 m_W^2} H^4

L_HV contains the Higgs interactions with gauge vector bosons, and L_WWV contains the gauge three-point self-interactions. L_Y contains the Yukawa interactions of the fermions with the Higgs:

L_Y = -\sum_f \frac{g m_f}{2 m_W} \bar{f} f H

Note the (1 − γ5)/2 factors in the weak couplings; this is why electroweak theory is commonly said to be a chiral theory.
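The Yukawa term ties each fermion's Higgs coupling directly to its mass: since m_W = g v / 2, the factor g m_f / (2 m_W) is simply m_f / v, with v ≈ 246 GeV the Higgs vacuum expectation value. A quick numeric sketch (the fermion masses below are approximate, used only for illustration):

```python
# Yukawa coupling implied by L_Y: strength = g*m_f/(2*m_W) = m_f/v,
# because m_W = g*v/2. Heavier fermions couple more strongly to the Higgs.
v = 246.0                    # Higgs vacuum expectation value, GeV (approx.)

masses_gev = {               # approximate fermion masses, GeV
    "electron": 0.000511,
    "bottom":   4.18,
    "top":      173.0,
}
couplings = {f: m / v for f, m in masses_gev.items()}
for f, g_f in couplings.items():
    print(f"{f}: {g_f:.2e}")
```

The top quark's coupling comes out of order one while the electron's is about a million times smaller, which is why Higgs production and decay are dominated by the heaviest available particles.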
37.
Quantum chromodynamics
–
QCD is a type of quantum field theory called a non-abelian gauge theory, with symmetry group SU(3). The QCD analog of electric charge is a property called color. Gluons are the force carriers of the theory, as photons are for the electromagnetic force in quantum electrodynamics. The theory is an important part of the Standard Model of particle physics, and a large body of experimental evidence for QCD has been gathered over the years. QCD exhibits two peculiar properties. Confinement means that the force between quarks does not diminish as they are separated; although analytically unproven, confinement is widely believed to be true because it explains the consistent failure of free quark searches. Asymptotic freedom means that in very high-energy reactions, quarks and gluons interact very weakly, creating a quark–gluon plasma. This prediction of QCD was first discovered in the early 1970s by David Politzer and by Frank Wilczek and David Gross; for this work they were awarded the 2004 Nobel Prize in Physics. The phase transition temperature between these two properties has been measured by the ALICE experiment to be well above 160 MeV; below this temperature confinement is dominant, while above it asymptotic freedom becomes dominant. American physicist Murray Gell-Mann coined the word quark in its present sense. It originally comes from the phrase "Three quarks for Muster Mark" in Finnegans Wake by James Joyce. Gell-Mann, however, wanted to pronounce the word to rhyme with fork rather than with park, as Joyce seemed to indicate by rhyming words in the vicinity such as Mark. Gell-Mann got around that by supposing that one ingredient of the line "Three quarks for Muster Mark" was a cry of "Three quarts for Mister Mark" heard in Earwicker's pub, a plausible suggestion given the complex punning in Joyce's novel.
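Asymptotic freedom can be illustrated with the standard one-loop running of the strong coupling, α_s(Q) = 12π / ((33 − 2 n_f) ln(Q²/Λ²)): the coupling shrinks as the energy Q grows and blows up as Q approaches the QCD scale Λ, where confinement sets in. A rough sketch (Λ ≈ 0.2 GeV and n_f = 5 active quark flavors are illustrative choices, not fitted values):

```python
import math

def alpha_s(q_gev, n_f=5, lam=0.2):
    """One-loop running strong coupling; q_gev must exceed lam (GeV)."""
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(q_gev**2 / lam**2))

# Coupling weakens at high energy (asymptotic freedom) and grows
# toward low energy, where perturbation theory breaks down (confinement).
for q in (1.0, 10.0, 91.2, 1000.0):
    print(f"alpha_s({q} GeV) = {alpha_s(q):.3f}")
```

At the Z mass this crude one-loop estimate already lands near the measured α_s(M_Z) ≈ 0.118, while near 1 GeV the coupling approaches order one and the perturbative expansion stops being trustworthy.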
The three kinds of charge in QCD are usually referred to as color charge, by loose analogy to the three kinds of color perceived by humans. Other than this nomenclature, the quantum parameter color is completely unrelated to the everyday, familiar phenomenon of color. Since the theory of electric charge is dubbed electrodynamics, the Greek word χρῶμα chroma, "color", is applied to the theory of color charge: chromodynamics. With the invention of bubble chambers and spark chambers in the 1950s, experimental particle physics discovered a large number of particles, and it seemed that such a large number of particles could not all be fundamental. First, the particles were classified by charge and isospin by Eugene Wigner and Werner Heisenberg; then, in 1953, according to strangeness by Murray Gell-Mann and Kazuhiko Nishijima. To gain greater insight, the hadrons were sorted into groups having similar properties and masses using the eightfold way, invented in 1961 by Gell-Mann. In the beginning of 1965, Nikolay Bogolyubov, Boris Struminsky and Albert Tavkhelidze wrote a preprint with a more detailed discussion of the additional quark quantum degree of freedom; the problem considered in this preprint was suggested by Nikolay Bogolyubov. This work was presented by Albert Tavkhelidze, without obtaining the consent of his collaborators, at an international conference in Trieste. A similar mysterious situation arose with the Δ++ baryon: in the quark model it is composed of three up quarks with parallel spins, apparently violating the Pauli exclusion principle unless the quarks carry an additional quantum number. Han and Nambu noted that quarks might interact via an octet of vector gauge bosons: the gluons.