1.
Quantum mechanics
–
Quantum mechanics, including quantum field theory, is a branch of physics that provides the fundamental description of nature at the small scales and low energies of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, derives from quantum mechanics as an approximation valid only at large scales. The early quantum theory was profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms; in one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled On the nature of light. This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. Planck's hypothesis that energy is radiated and absorbed in discrete quanta precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation; Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, the law was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect. Among the first to study quantum phenomena in nature were Arthur Compton and C. V. Raman; Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits.
This phase is known as the old quantum theory. According to Planck, each energy element is proportional to its frequency: E = hν, where h is Planck's constant. Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. He won the 1918 Nobel Prize in Physics for this work. Since lower energy means lower frequency, a lower-frequency photon delivers its quantum of action over a longer time, and vice versa: photons of differing frequencies all deliver the same amount of action, but do so in varying time intervals. High-frequency waves are damaging to human tissue because they deliver their action packets concentrated in time. The Copenhagen interpretation of Niels Bohr became widely accepted. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons.
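Planck's relation E = hν can be made concrete with a short calculation. The sketch below is illustrative only and not part of the original text; the constant h is the exact value fixed by the 2019 SI redefinition, and the example frequencies are ballpark figures for visible light and X-rays.

```python
# Illustrative sketch of Planck's relation E = h * nu.
# h is Planck's constant (exact by the 2019 SI definition), in joule-seconds.
h = 6.62607015e-34

def photon_energy(frequency_hz):
    """Energy in joules of a single photon with the given frequency."""
    return h * frequency_hz

# A green-light photon (~5.6e14 Hz) versus a hard X-ray photon (~1e18 Hz):
green = photon_energy(5.6e14)
xray = photon_energy(1.0e18)
```

Each quantum of the higher-frequency radiation carries proportionally more energy, which is the quantitative content of the remark above that high-frequency waves deliver their action packets concentrated in time.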
2.
Three-dimensional space
–
Three-dimensional space is a geometric setting in which three values are required to determine the position of an element. This is the informal meaning of the term dimension. In physics and mathematics, a sequence of n numbers can be understood as a location in n-dimensional space; when n = 3, the set of all such locations is called three-dimensional Euclidean space. It is commonly represented by the symbol ℝ³, and it serves as a three-parameter model of the physical universe in which all known matter exists. However, this space is only one example of a large variety of spaces in three dimensions called 3-manifolds. Furthermore, in this case, the three values can be labeled by any combination of three chosen from the terms width, height, depth, and breadth. In mathematics, analytic geometry describes every point in three-dimensional space by means of three coordinates. Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross. They are usually labeled x, y, and z. Two distinct points always determine a line. Three distinct points are either collinear or determine a unique plane, and four distinct points can either be collinear, coplanar, or determine the entire space. Two distinct lines can intersect, be parallel, or be skew. Two parallel lines, or two intersecting lines, lie in a unique plane, so skew lines are lines that do not meet and do not lie in a common plane. Two distinct planes can either meet in a common line or are parallel. Three distinct planes, no pair of which are parallel, can either meet in a common line, meet in a unique common point, or have no point in common. In the last case, the three lines of intersection of each pair of planes are mutually parallel. A line can lie in a given plane, intersect that plane in a unique point, or be parallel to the plane. In the last case, there will be lines in the plane that are parallel to the given line. A hyperplane is a subspace of one dimension less than the dimension of the full space. The hyperplanes of a three-dimensional space are the two-dimensional subspaces, that is, the planes.
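The trichotomy for two lines described above (intersecting, parallel, or skew) can be tested numerically. The sketch below is an illustration, not from the article; it assumes each line is given by a point and a direction vector, and uses the standard facts that parallel lines have proportional directions and that intersecting lines are coplanar.

```python
# Classify two lines in 3-space, each given as (point, direction), as
# "parallel", "intersecting", or "skew". Illustrative sketch only.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def classify_lines(p1, d1, p2, d2, eps=1e-9):
    n = cross(d1, d2)                     # normal to both directions
    w = tuple(b - a for a, b in zip(p1, p2))
    if all(abs(c) < eps for c in n):      # directions proportional
        return "parallel"                 # (possibly the same line)
    if abs(dot(n, w)) < eps:              # coplanar and not parallel
        return "intersecting"
    return "skew"
```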
3.
Coordinate
–
The order of the coordinates is significant, and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in the x-coordinate. The coordinates are taken to be real numbers in elementary mathematics. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry. The simplest example of a coordinate system is the identification of points on a line with real numbers using the number line. In this system, an arbitrary point O (the origin) is chosen on a given line. The coordinate of a point P is defined as the signed distance from O to P. Each point is given a unique coordinate and each real number is the coordinate of a unique point. The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the lines. In three dimensions, three mutually perpendicular planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space. Depending on the direction and order of the coordinate axes, the system may be a right-handed or a left-handed system. This is one of many coordinate systems. Another common coordinate system for the plane is the polar coordinate system. A point is chosen as the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line through the pole whose angle with the polar axis is θ. Then there is a unique point on this line whose signed distance from the origin is r for a given number r. For a given pair of coordinates (r, θ) there is a single point, but any point is represented by many pairs of coordinates: for example, (r, θ), (r, θ + 2π) and (−r, θ + π) are all polar coordinates for the same point. The pole is represented by (0, θ) for any value of θ. There are two common methods for extending the polar coordinate system to three dimensions.
In the cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is added to the r and θ polar coordinates, giving a triple (r, θ, z). Spherical coordinates take this a step further by converting the pair of cylindrical coordinates (r, z) to polar coordinates (ρ, φ), giving a triple (ρ, θ, φ). A point in the plane may be represented in homogeneous coordinates by a triple (x, y, z), where x/z and y/z are the Cartesian coordinates of the point.
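The conversions described above can be sketched in a few lines. This is an illustrative sketch, not from the article; it uses the usual conventions r = √(x² + y²), θ = atan2(y, x), and carries z along unchanged for cylindrical coordinates.

```python
# Cartesian <-> polar in the plane, extended to cylindrical coordinates.
import math

def cartesian_to_polar(x, y):
    """Return (r, theta) for the point (x, y)."""
    return math.hypot(x, y), math.atan2(y, x)

def polar_to_cartesian(r, theta):
    """Return (x, y) for polar coordinates (r, theta)."""
    return r * math.cos(theta), r * math.sin(theta)

def cartesian_to_cylindrical(x, y, z):
    """The z-coordinate keeps its Cartesian meaning."""
    r, theta = cartesian_to_polar(x, y)
    return r, theta, z
```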
4.
Point reflection
–
Not to be confused with inversive geometry, in which inversion is through a circle instead of a point. In geometry, a point reflection or inversion in a point is a type of isometry of Euclidean space. Point reflection can be classified as an affine transformation: namely, it is an isometric involutive affine transformation which has exactly one fixed point. It is equivalent to a homothetic transformation with scale factor equal to −1; the point of inversion is called the homothetic center. The term reflection is loose, and considered by some an abuse of language, with inversion preferred. Such maps are involutions, meaning that they have order 2: they are their own inverse. In dimension 1 the two notions coincide, as a point is a hyperplane in the line. In terms of linear algebra, assuming the origin is fixed, reflection in a hyperplane has a single −1 eigenvalue (with the remaining eigenvalues equal to +1), while point reflection has only the −1 eigenvalue (with multiplicity n). The term inversion should not be confused with inversive geometry, where inversion is defined with respect to a circle. In two dimensions, a point reflection is the same as a rotation of 180 degrees. In three dimensions, a point reflection can be described as a 180-degree rotation composed with reflection across the plane perpendicular to the axis of rotation. In dimension n, point reflections are orientation-preserving if n is even, and orientation-reversing if n is odd. Given a vector a in the Euclidean space Rⁿ, the formula for the reflection of a across the point p is Ref_p(a) = 2p − a. In the case where p is the origin, point reflection is simply the negation of the vector a. In Euclidean geometry, the inversion of a point X with respect to a point P is a point X* such that P is the midpoint of the line segment with endpoints X and X*. In other words, the vector from X to P is the same as the vector from P to X*. The formula for the inversion in P is x* = 2a − x, where a, x and x* are the position vectors of P, X and X* respectively.
This mapping is an isometric involutive affine transformation which has exactly one fixed point, P. When the inversion point P coincides with the origin, point reflection is equivalent to a special case of uniform scaling: uniform scaling with scale factor −1. This is an example of a linear transformation. When P does not coincide with the origin, point reflection is equivalent to a special case of homothetic transformation: homothety with homothetic center coinciding with P, and scale factor −1. (This is an example of a non-linear affine transformation.) The composition of two point reflections is a translation.
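The formula Ref_p(a) = 2p − a, the involution property, and the fact that two point reflections compose to a translation can all be checked directly. The sketch below is illustrative, working componentwise on tuples in Rⁿ.

```python
# Point reflection of a across p: ref_p(a) = 2p - a, componentwise.

def point_reflect(p, a):
    return tuple(2 * pc - ac for pc, ac in zip(p, a))

p = (1.0, 2.0)
q = (4.0, 0.0)
a = (3.0, -1.0)

# Involution: reflecting twice through the same point returns a.
# Composition: reflecting through p then q translates a by 2(q - p).
twice = point_reflect(p, point_reflect(p, a))
composed = point_reflect(q, point_reflect(p, a))
```

Algebraically, reflecting through p then q gives 2q − (2p − a) = a + 2(q − p), a translation by twice the vector from p to q, which matches the last sentence above.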
5.
Chirality (physics)
–
A chiral phenomenon is one that is not identical to its mirror image. The spin of a particle may be used to define a handedness, or helicity, for that particle; a symmetry transformation between the two is called parity. Invariance under parity by a Dirac fermion is called chiral symmetry. An experiment on the weak decay of cobalt-60 nuclei carried out by Chien-Shiung Wu and collaborators in 1957 demonstrated that parity is not a symmetry of the universe. The helicity of a particle is right-handed if the direction of its spin is the same as the direction of its motion, and left-handed if the directions of spin and motion are opposite. By convention for rotation, a clock, with its spin vector defined by the rotation of its hands, has left-handed helicity if tossed with its face directed forwards. Mathematically, helicity is the sign of the projection of the spin vector onto the momentum vector: left is negative, right is positive. The chirality of a particle is more abstract: it is determined by whether the particle transforms in a right- or left-handed representation of the Poincaré group. For massive particles—such as electrons, quarks, and neutrinos—chirality and helicity must be distinguished. With the discovery of neutrino oscillation, which implies that neutrinos have mass, the only observed massless particle is the photon. The gluon is also expected to be massless, although the assumption that it is has not been conclusively tested. Hence, these are the only two particles now known for which helicity could be identical to chirality, and only one of them, the photon, has been confirmed by measurement. All other observed particles have mass and thus may have different helicities in different reference frames. It is still possible that as-yet unobserved particles, like the graviton, might be massless, and hence have invariant helicity like the photon. Only left-handed fermions and right-handed antifermions interact with the weak interaction. Chirality for a Dirac fermion ψ is defined through the operator γ5, which has eigenvalues ±1.
Any Dirac field can thus be projected into its left- or right-handed component by acting with the projection operators (1 − γ5)/2 or (1 + γ5)/2 on ψ. The coupling of the charged weak interaction to fermions is proportional to the first projection operator, which is responsible for this interaction's parity-symmetry violation. A common source of confusion is due to conflating this operator with the helicity operator: since the helicity of massive particles is frame-dependent, it might seem that the same particle would interact with the weak force according to one frame of reference, but not another. The resolution to this paradox is that the chirality operator is equivalent to helicity for massless fields only. A theory that is asymmetric with respect to chiralities is called a chiral theory, while a parity-symmetric theory is sometimes called a vector theory. Many pieces of the Standard Model of physics are non-chiral, which is traceable to anomaly cancellation in chiral theories. Quantum chromodynamics is an example of a vector theory, since both chiralities of all quarks appear in the theory and couple to gluons in the same way. The electroweak theory, developed in the mid 20th century, is an example of a chiral theory. Originally, it assumed that neutrinos were massless, and only assumed the existence of left-handed neutrinos. However, it is still a chiral theory, as it does not respect parity symmetry. Vector gauge theories with massless Dirac fermion fields ψ exhibit chiral symmetry, i.e. rotating the left-handed and the right-handed components independently makes no difference to the theory.
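The algebra of the chirality projectors can be verified numerically. The sketch below is illustrative and assumes the Dirac (standard) representation, in which γ5 is block off-diagonal; the essential facts checked are that γ5 squares to the identity and that P_L = (1 − γ5)/2 and P_R = (1 + γ5)/2 are complementary idempotent projectors.

```python
# Chirality projectors built from gamma5 in the Dirac representation.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# gamma5 in the Dirac basis: off-diagonal identity blocks.
g5 = [[0, 0, 1, 0],
      [0, 0, 0, 1],
      [1, 0, 0, 0],
      [0, 1, 0, 0]]

P_L = [[(I4[i][j] - g5[i][j]) / 2 for j in range(4)] for i in range(4)]
P_R = [[(I4[i][j] + g5[i][j]) / 2 for j in range(4)] for i in range(4)]
```

Acting with P_L on a Dirac spinor keeps only its left-handed component, the piece that the charged weak current couples to.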
6.
Elementary particle
–
In particle physics, an elementary particle or fundamental particle is a particle whose substructure is unknown; thus, it is unknown whether it is composed of other particles. A particle containing two or more elementary particles is a composite particle. In the early 20th century, subatomic constituents of the atom were identified. As the 1930s opened, the electron and the proton had been observed, along with the photon. Via quantum theory, protons and neutrons were found to contain quarks—up quarks and down quarks—now considered elementary particles. And within a molecule, an electron's three degrees of freedom can separate via the wavefunction into three quasiparticles. Yet a free electron—which, not orbiting a nucleus, lacks orbital motion—appears unsplittable. Meanwhile, an elementary boson mediating gravitation—the graviton—remains hypothetical. All elementary particles are—depending on their spin—either bosons or fermions. These are differentiated via the spin–statistics theorem of quantum statistics: particles of half-integer spin exhibit Fermi–Dirac statistics and are fermions; particles of integer (in other words, full-integer) spin exhibit Bose–Einstein statistics and are bosons. In the Standard Model, elementary particles are represented for predictive utility as point particles. Though extremely successful, the Standard Model is limited to the microcosm by its omission of gravitation, and has some parameters arbitrarily added but unexplained. According to the current models of big bang nucleosynthesis, the composition of visible matter of the universe should be about 75% hydrogen. Neutrons are made up of one up and two down quarks, while protons are made of two up and one down quark. Since the other elementary particles are so light or so rare when compared to atomic nuclei, one can conclude that most of the mass of the universe consists of protons and neutrons.
Some estimates imply that there are roughly 10⁸⁰ baryons in the observable universe; the number of protons in the observable universe is called the Eddington number. Other estimates imply that roughly 10⁹⁷ elementary particles exist in the universe, mostly photons and gravitons. However, the Standard Model is widely considered to be a provisional theory rather than a truly fundamental one. The 12 fundamental fermionic flavours are divided into three generations of four particles each. Six of the particles are quarks. The remaining six are leptons, three of which are neutrinos, and the other three of which have an electric charge of −1: the electron and its two cousins, the muon and the tau.
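The count just given (three generations of four fermions each, six quarks and six leptons) can be laid out explicitly. The listing below is standard textbook content rather than something stated in full in this passage.

```python
# The 12 fundamental fermion flavours, arranged in three generations of four.
generations = {
    1: ["up", "down", "electron", "electron neutrino"],
    2: ["charm", "strange", "muon", "muon neutrino"],
    3: ["top", "bottom", "tau", "tau neutrino"],
}

quarks = ["up", "down", "charm", "strange", "top", "bottom"]
leptons = ["electron", "muon", "tau",
           "electron neutrino", "muon neutrino", "tau neutrino"]
```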
7.
Weak interaction
–
In particle physics, the weak interaction is one of the four known fundamental interactions of nature, alongside the strong interaction, electromagnetism, and gravitation. The weak interaction is responsible for radioactive decay, which plays an essential role in nuclear fission. The theory of the weak interaction is sometimes called quantum flavourdynamics (QFD), in analogy with the term quantum chromodynamics (QCD) for the strong interaction. However, the term QFD is rarely used because the weak force is better understood in terms of electroweak theory. The Standard Model of particle physics, which does not address gravity, provides a uniform framework for understanding how the electromagnetic, weak, and strong interactions fit together. An interaction occurs when two particles, typically but not necessarily half-integer-spin fermions, exchange integer-spin, force-carrying bosons. The fermions involved in such exchanges can be either elementary or composite, although at the deepest levels, all weak interactions ultimately are between elementary particles. In the case of the weak interaction, fermions can exchange three distinct types of force carriers known as the W+, W−, and Z bosons. The mass of each of these bosons is far greater than the mass of a proton or neutron, which is consistent with the short range of the weak force. The force is in fact termed weak because its field strength over a given distance is typically several orders of magnitude less than that of the strong nuclear force or electromagnetic force. During the quark epoch of the early universe, the electroweak force separated into the electromagnetic and weak forces. Important examples of the weak interaction include beta decay, and the fusion of hydrogen into deuterium that powers the Sun's thermonuclear process. Most fermions will decay by a weak interaction over time. Such decay makes radiocarbon dating possible, as carbon-14 decays through the weak interaction to nitrogen-14.
It can also create radioluminescence, commonly used in tritium illumination. Quarks, which make up composite particles like neutrons and protons, come in six flavours – up, down, strange, charm, top and bottom – which give those composite particles their properties. The weak interaction is unique in that it allows quarks to swap their flavour for another; the swapping of those properties is mediated by the force-carrier bosons. Also, the weak interaction is the only fundamental interaction that breaks parity symmetry, and similarly the only one that breaks CP symmetry. In 1933, Enrico Fermi proposed the first theory of the weak interaction. He suggested that beta decay could be explained by a four-fermion interaction, involving a contact force with no range. However, the weak interaction is better described as a non-contact force field having a finite, but very short, range. The existence of the W and Z bosons was not directly confirmed until 1983. The weak interaction is unique in a number of respects: it is the only interaction capable of changing the flavour of quarks, and it is the only interaction that violates P (parity) symmetry.
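The radiocarbon-dating idea mentioned above reduces to a simple exponential decay law. The sketch below is illustrative, not from the article; it assumes the commonly quoted carbon-14 half-life of about 5,730 years.

```python
# Exponential decay of carbon-14, the basis of radiocarbon dating.
import math

C14_HALF_LIFE_YEARS = 5730.0  # approximate, commonly quoted value

def c14_fraction_remaining(age_years):
    """Fraction of the original carbon-14 left after age_years."""
    return 0.5 ** (age_years / C14_HALF_LIFE_YEARS)

def age_from_fraction(fraction):
    """Invert the decay law to estimate a sample's age in years."""
    return -C14_HALF_LIFE_YEARS * math.log2(fraction)
```

A sample retaining a quarter of its original carbon-14 has been through two half-lives, i.e. it is roughly 11,460 years old.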
8.
Determinant
–
In linear algebra, the determinant is a useful value that can be computed from the elements of a square matrix. The determinant of a matrix A is denoted det(A), det A, or |A|, and it can be viewed as the scaling factor of the linear transformation described by the matrix. In the case of a 2 × 2 matrix, the determinant is given by a simple explicit formula; the determinant of a 3 × 3 matrix can be computed by expanding along a row in terms of determinants of 2 × 2 matrices. Each determinant of a 2 × 2 matrix in this expansion is called a minor of the matrix A. The same sort of procedure can be used to find the determinant of a 4 × 4 matrix, the determinant of a 5 × 5 matrix, and so forth. The use of determinants in calculus includes the Jacobian determinant in the change of variables rule for integrals of functions of several variables. Determinants are also used to define the characteristic polynomial of a matrix. In analytic geometry, determinants express the signed n-dimensional volumes of n-dimensional parallelepipeds. Sometimes, determinants are used merely as a compact notation for expressions that would otherwise be unwieldy to write down. When the entries of the matrix are taken from a field, it can be proven that a matrix has an inverse if and only if its determinant is nonzero. There are various equivalent ways to define the determinant of a square matrix A, i.e. one with the same number of rows as columns. One way to define the determinant is in terms of the columns of the matrix: the determinant is an alternating multilinear function of the columns that maps the identity matrix to the underlying unit scalar. These properties suffice to uniquely calculate the determinant of any square matrix, provided the underlying scalars form a field; such a function can be shown to exist and to be unique. Assume A is a square matrix with n rows and n columns. The entries can be numbers or expressions; the definition of the determinant depends only on the fact that they can be added and multiplied together in a commutative manner. The determinant of the 2 × 2 matrix with rows (a, b) and (c, d) is defined by |a b; c d| = ad − bc.
If the matrix entries are numbers, the matrix A can be used to represent two linear maps: one that maps the standard basis vectors to the rows of A, and one that maps them to the columns of A. In either case, the images of the basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram defined by the rows of the matrix is the one with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d). The absolute value of ad − bc is the area of the parallelogram, and the absolute value of the determinant together with the sign becomes the oriented area of the parallelogram.
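The cofactor (Laplace) expansion described above, with the 2 × 2 formula ad − bc as the base case, can be written as a short recursive function. This is an illustrative sketch for matrices given as lists of rows.

```python
# Determinant via cofactor expansion along the first row.
# Base case: det of a 2x2 matrix with rows (a, b), (c, d) is a*d - b*c.

def det(m):
    n = len(m)
    if n == 1:
        return m[0][0]
    if n == 2:
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j; alternate signs across the row.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total
```

For the 2 × 2 case, abs(det(m)) is the area of the parallelogram spanned by the rows, as described above.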
9.
Rotation
–
A rotation is a circular movement of an object around a center of rotation. A three-dimensional object always rotates around an imaginary line called a rotation axis. If the axis passes through the body's center of mass, the body is said to rotate upon itself, or spin. A rotation about an external point, e.g. the Earth about the Sun, is called a revolution or orbital revolution. The axis is called a pole. Mathematically, a rotation is a rigid body movement which, unlike a translation, keeps a point fixed. This definition applies to rotations within both two and three dimensions. All rigid body movements are rotations, translations, or combinations of the two. A rotation is simply a progressive radial orientation to a common point. That common point lies within the axis of that motion, and the axis is perpendicular to the plane of the motion. If the axis of the rotation lies outside of the body in question, then the body is said to orbit. There is no fundamental difference between a "rotation" and an "orbit" or spin; the key distinction is simply where the axis of the rotation lies. This distinction can be demonstrated for both "rigid" and "non-rigid" bodies. If a rotation around a point or axis is followed by a second rotation around the same point/axis, the result is a third rotation. The reverse of a rotation is also a rotation; thus, the rotations around a point/axis form a group. However, a rotation around a point or axis and a rotation around a different point/axis may result in something other than a rotation, e.g. a translation. Rotations around the x, y and z axes are called principal rotations. Rotation around any axis can be performed by taking a rotation around the x axis, followed by a rotation around the y axis, followed by a rotation around the z axis; that is to say, any spatial rotation can be decomposed into a combination of principal rotations. In flight dynamics, the principal rotations are known as yaw, pitch, and roll. This terminology is also used in computer graphics. In astronomy, rotation is a commonly observed phenomenon.
Stars, planets and similar bodies all spin around on their axes. The rotation rate of planets in the solar system was first measured by tracking visual features. Stellar rotation is measured through Doppler shift or by tracking active surface features. The Earth's rotation induces a centrifugal acceleration in the reference frame of the Earth which slightly counteracts the effect of gravity the closer one is to the equator.
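The claim above that any spatial rotation decomposes into principal rotations about the coordinate axes can be illustrated by composing two such rotations and checking that the result is again a rotation (an orthogonal matrix with determinant +1). The sketch below uses the standard rotation-matrix formulas and is not part of the article.

```python
# Principal rotations about the x and z axes, and their composition.
import math

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Composing two principal rotations gives another rotation.
R = matmul(rot_z(math.pi / 3), rot_x(math.pi / 4))
```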
10.
Even and odd functions
–
In mathematics, even functions and odd functions are functions which satisfy particular symmetry relations with respect to taking additive inverses. They are important in many areas of mathematical analysis, especially the theory of power series. The concept of evenness or oddness is defined for functions whose domain and codomain both have a notion of additive inverse; this includes additive groups, all rings, all fields, and all vector spaces. Thus, for example, a function of a real variable could be even or odd, as could a complex-valued function of a vector variable. The examples below are real-valued functions of a real variable, chosen to illustrate the symmetry of their graphs. Let f be a real-valued function of a real variable. Then f is even if the following equation holds for all x and −x in the domain of f: f(x) = f(−x). Geometrically speaking, the graph of an even function is symmetric with respect to the y-axis. Examples of even functions are |x|, x², x⁴, cos(x), and cosh(x). Again, let f be a real-valued function of a real variable. Then f is odd if the following equation holds for all x and −x in the domain of f: −f(x) = f(−x). Geometrically, the graph of an odd function has rotational symmetry with respect to the origin. Examples of odd functions are x, x³, sin(x), sinh(x), erf(x), or any linear combination of these. A function's being odd or even does not imply differentiability, or even continuity; for example, the Dirichlet function is even, but is nowhere continuous. Properties involving Fourier series, Taylor series, derivatives and so on may only be used when they can be assumed to exist. There is only one function that is both odd and even: f(x) = 0. If a function is both even and odd, it is equal to 0 everywhere it is defined. If a function is odd, the absolute value of that function is an even function. The sum of two even functions is even, and any constant multiple of an even function is even. The sum of two odd functions is odd, and any constant multiple of an odd function is odd. The difference between two odd functions is odd, and the difference between two even functions is even.
The sum of an even and an odd function is neither even nor odd, unless one of the functions is equal to zero. The product of two even functions is an even function.
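The defining identities f(−x) = f(x) and f(−x) = −f(x) can be checked numerically on a set of sample points. The sketch below is illustrative only; sampling can confirm a symmetry on those points but cannot prove it over the whole domain.

```python
# Numeric check of evenness/oddness on a finite set of sample points.

def is_even(f, samples, tol=1e-12):
    """True when f(-x) == f(x) (within tol) at every sample point."""
    return all(abs(f(-x) - f(x)) <= tol for x in samples)

def is_odd(f, samples, tol=1e-12):
    """True when f(-x) == -f(x) (within tol) at every sample point."""
    return all(abs(f(-x) + f(x)) <= tol for x in samples)

xs = [0.0, 0.5, 1.0, 2.0, 3.7]
```

Note that the zero function passes both checks, matching the statement above that f = 0 is the only function that is both even and odd.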
11.
Euclidean vector
–
In mathematics, physics, and engineering, a Euclidean vector is a geometric object that has magnitude and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by A B →. A vector is what is needed to "carry" the point A to the point B; the term was first used by 18th century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. Vector operations and their associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors, defined simply as elements of a vector space. Vectors play an important role in physics: the velocity and acceleration of a moving object and the forces acting on it can all be described with vectors, and many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances, their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors. The concept of vector, as we know it today, evolved gradually over a period of more than 200 years; about a dozen people made significant contributions. Giusto Bellavitis abstracted the basic idea in 1835 when he established the concept of equipollence: working in a Euclidean plane, he made equipollent any pair of line segments of the same length and orientation. Essentially he realized an equivalence relation on the pairs of points in the plane. The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum q = s + v of a real number s and a 3-dimensional vector v.
Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. Hermann Grassmann's work was largely neglected until the 1870s. Peter Guthrie Tait carried the quaternion standard after Hamilton; his 1867 Elementary Treatise of Quaternions included extensive treatment of the nabla or del operator ∇. In 1878, Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product. This approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth. Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901, Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures. In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction; it is formally defined as a directed line segment, or arrow.
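The two operations Clifford isolated from the quaternion product, the dot (scalar) product and the cross (vector) product, can be sketched directly for 3-dimensional vectors. This is an illustrative sketch using the standard componentwise formulas.

```python
# Dot and cross products of 3-dimensional vectors, plus magnitude.

def dot(u, v):
    """Scalar product: sum of componentwise products."""
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    """Vector product: perpendicular to both u and v."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def magnitude(v):
    """Length of v, from the dot product of v with itself."""
    return dot(v, v) ** 0.5
```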
12.
Tensor
–
In mathematics, tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Geometric vectors, often used in physics and engineering applications, and scalars themselves are also tensors. Given a coordinate basis or fixed frame of reference, a tensor can be represented as an organized multidimensional array of numerical values. The order of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array. For example, a linear map is represented by a matrix (a 2-dimensional array) in a basis, and therefore is a 2nd-order tensor. A vector is represented as a 1-dimensional array in a basis, and is a 1st-order tensor; scalars are single numbers and are thus 0th-order tensors. Because they express a relationship between vectors, tensors themselves must be independent of a particular choice of coordinate system. The precise form of the transformation law determines the type of the tensor. The tensor type is a pair of natural numbers (n, m), where n is the number of contravariant indices and m is the number of covariant indices. The total order of a tensor is the sum of these two numbers. The concept enabled an alternative formulation of the differential geometry of a manifold in the form of the Riemann curvature tensor. There are several approaches to defining tensors; although seemingly different, the approaches just describe the same geometric concept using different languages and at different levels of abstraction. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the array are known as the scalar components of the tensor or simply its components. They are denoted by indices giving their position in the array, as subscripts and superscripts. For example, the components of an order-2 tensor T could be denoted Tij, where i and j are indices; whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below.
The total number of indices required to identify each component uniquely is equal to the dimension of the array, and is called the order of the tensor. However, the term "order" generally has another meaning in the context of matrices. Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis.
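The transformation law can be illustrated for the simplest 2nd-order case, a linear map (a type (1,1) tensor): under a change of basis with matrix B, its component matrix M becomes B⁻¹MB, and basis-independent quantities such as the trace are unchanged. The sketch below is illustrative, with B and its inverse written out by hand to keep it self-contained.

```python
# Change-of-basis transformation of a (1,1) tensor: M -> B^-1 M B.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = [[1.0, 2.0],
     [3.0, 4.0]]      # components of the linear map in the old basis
B = [[2.0, 0.0],
     [0.0, 1.0]]      # columns are the new basis vectors
B_inv = [[0.5, 0.0],
         [0.0, 1.0]]  # inverse of B, written out by hand for this sketch

M_new = matmul(B_inv, matmul(M, B))  # components in the new basis
```

The components change, but the geometric object does not: the trace (and the determinant) of M_new equals that of M.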
13.
Classical physics
–
Classical physics refers to theories of physics that predate modern, more complete, or more widely applicable theories. As such, the definition of a classical theory depends on context. Classical physical concepts are often used when modern theories are unnecessarily complex for a particular situation. "Classical theory" has at least two distinct meanings in physics. In the context of quantum mechanics, classical theory refers to theories of physics that do not use the quantisation paradigm. Likewise, classical field theories, such as general relativity and classical electromagnetism, are those that do not use quantum mechanics. In the context of general and special relativity, classical theories are those that obey Galilean relativity. Modern physics includes quantum theory and relativity, when applicable. A physical system can be described by classical physics when it satisfies conditions such that the laws of classical physics are approximately valid. In practice, physical objects ranging from those larger than atoms and molecules to objects in the macroscopic and astronomical realm can be described by classical physics. Beginning at the atomic level and lower, the laws of classical physics break down. Electromagnetic fields and forces can be described well by classical electrodynamics at length scales where quantum effects are negligible. Unlike quantum physics, classical physics is generally characterized by the principle of complete determinism, although deterministic interpretations of quantum mechanics do exist. Mathematically, classical physics equations are those in which Planck's constant does not appear. According to the correspondence principle and Ehrenfest's theorem, as a system becomes larger or more massive, classical dynamics tends to emerge, with some exceptions, such as superfluidity. This is why we can usually ignore quantum mechanics when dealing with everyday objects. However, one of the most vigorous ongoing fields of research in physics is classical-quantum correspondence.
This field of research is concerned with the discovery of how the laws of quantum physics give rise to the classical physics found in the limit of large scales. Computer modeling is essential for quantum and relativistic physics. Classical physics is considered the limit of quantum mechanics for large numbers of particles; on the other hand, classical mechanics is derived from relativistic mechanics. For example, in many formulations of special relativity, a correction factor (v/c)2 appears, where v is the velocity of the object and c is the speed of light. For velocities much smaller than that of light, one can neglect the terms containing v2/c2, and these formulas then reduce to the standard definitions of Newtonian kinetic energy and momentum. This is as it should be, for special relativity must agree with Newtonian mechanics at low velocities. Computer modeling has to be as real as possible; in cases such as superfluidity, classical physics would introduce an error, so in order to produce reliable models of the world we cannot use classical physics. It is true that quantum theories consume time and computer resources, and the equations of classical physics could be resorted to in order to provide a quick solution, but this would compromise the reliability of the result
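The low-velocity limit described above can be checked numerically; a minimal Python sketch (the mass and velocity are arbitrary illustrative values) comparing Newtonian and relativistic kinetic energy:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ke_newton(m, v):
    # Newtonian kinetic energy: (1/2) m v^2
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    # Relativistic kinetic energy: (gamma - 1) m c^2,
    # with gamma = 1 / sqrt(1 - v^2/c^2)
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

# At v = 0.01 c the relativistic correction is tiny:
m = 1.0          # kg (illustrative value)
v = 0.01 * C
newton = ke_newton(m, v)
rel = ke_relativistic(m, v)
print(rel / newton - 1.0)  # ~ (3/4)(v/c)^2, a very small relative correction
```

The leading correction term, (3/4)(v/c)2, is what the expansion of the Lorentz factor predicts, which is why the Newtonian formula is an excellent approximation at everyday speeds.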
14.
Hilbert space
–
The mathematical concept of a Hilbert space, named after David Hilbert, generalizes the notion of Euclidean space. It extends the methods of vector algebra and calculus from the two-dimensional Euclidean plane and three-dimensional space to spaces with any finite or infinite number of dimensions. A Hilbert space is a vector space possessing the structure of an inner product that allows length and angle to be measured. Furthermore, Hilbert spaces are complete: there are enough limits in the space to allow the techniques of calculus to be used. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as infinite-dimensional function spaces. The earliest Hilbert spaces were studied from this point of view in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis, and ergodic theory. John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis. Geometric intuition plays an important role in many aspects of Hilbert space theory. Exact analogs of the Pythagorean theorem and parallelogram law hold in a Hilbert space. At a deeper level, perpendicular projection onto a subspace plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be specified by its coordinates with respect to a set of coordinate axes. When that set of axes is countably infinite, the Hilbert space can also usefully be thought of in terms of the space of infinite sequences that are square-summable. The latter space is often in the literature referred to as the Hilbert space. One of the most familiar examples of a Hilbert space is the Euclidean space consisting of three-dimensional vectors, denoted by ℝ3. 
The dot product takes two vectors x and y, and produces a real number x·y. If x and y are represented in Cartesian coordinates, then the dot product is defined by x · y = x1y1 + x2y2 + x3y3. The dot product satisfies the following properties: It is symmetric in x and y: x · y = y · x. It is linear in its first argument: (ax1 + bx2) · y = a(x1 · y) + b(x2 · y) for any scalars a, b, and vectors x1, x2, and y. It is positive definite: for all x, x · x ≥ 0, with equality if and only if x = 0. An operation on pairs of vectors that, like the dot product, satisfies these three properties is known as an inner product, and a vector space equipped with such an inner product is known as an inner product space. Every finite-dimensional inner product space is also a Hilbert space. Multivariable calculus in Euclidean space relies on the ability to compute limits, and to have useful criteria for concluding that limits exist
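The three dot-product properties can be spot-checked numerically; a short Python sketch with arbitrary sample vectors:

```python
def dot(x, y):
    # Euclidean dot product: x.y = x1*y1 + x2*y2 + x3*y3
    return sum(xi * yi for xi, yi in zip(x, y))

x  = (1.0, 2.0, 3.0)
x2 = (0.5, 0.0, -1.0)
y  = (4.0, -5.0, 6.0)
a, b = 2.0, -3.0

# Symmetry: x.y = y.x
assert dot(x, y) == dot(y, x)
# Linearity in the first argument: (a x + b x2).y = a (x.y) + b (x2.y)
combo = tuple(a * u + b * w for u, w in zip(x, x2))
assert abs(dot(combo, y) - (a * dot(x, y) + b * dot(x2, y))) < 1e-12
# Positive definiteness: x.x >= 0, with equality only for the zero vector
assert dot(x, x) > 0
assert dot((0.0, 0.0, 0.0), (0.0, 0.0, 0.0)) == 0
```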
15.
Group (mathematics)
–
In mathematics, a group is an algebraic structure consisting of a set of elements equipped with an operation that combines any two elements to form a third element. The operation satisfies four conditions called the group axioms, namely closure, associativity, identity and invertibility. This abstraction allows entities with highly diverse mathematical origins in abstract algebra and beyond to be handled in a flexible way while retaining their essential structural aspects. The ubiquity of groups in areas within and outside mathematics makes them a central organizing principle of contemporary mathematics. Groups share a fundamental kinship with the notion of symmetry. The concept of a group arose from the study of polynomial equations; after contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory, an active mathematical discipline, studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. A rich theory has developed for finite groups, which culminated with the classification of finite simple groups. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become a particularly active area in group theory. One of the most familiar groups is the set of integers Z, which consists of the numbers ..., −4, −3, −2, −1, 0, 1, 2, 3, 4, .... The following properties of integer addition serve as a model for the group axioms given in the definition below. For any two integers a and b, the sum a + b is also an integer; that is, addition of integers always yields an integer. This property is known as closure under addition. For all integers a, b and c, (a + b) + c = a + (b + c). Expressed in words, adding a to b first, and then adding the result to c, gives the same final result as adding a to the sum of b and c. This property is known as associativity. 
If a is any integer, then 0 + a = a + 0 = a; zero is called the identity element of addition because adding it to any integer returns the same integer. For every integer a, there is an integer b such that a + b = b + a = 0. The integer b is called the inverse element of the integer a and is denoted −a. The integers, together with the operation +, form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the following abstract definition is developed
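The four axioms just modeled on integer addition can be spot-checked in a few lines of Python (on a finite sample, of course, not as a proof):

```python
import itertools

# Check the group axioms for (Z, +) on a finite sample of integers.
samples = range(-5, 6)

for a, b in itertools.product(samples, repeat=2):
    assert isinstance(a + b, int)          # closure: the sum is again an integer
for a, b, c in itertools.product(samples, repeat=3):
    assert (a + b) + c == a + (b + c)      # associativity
for a in samples:
    assert 0 + a == a + 0 == a             # 0 is the identity element
    assert a + (-a) == (-a) + a == 0       # -a is the inverse of a
```

Integer addition is also commutative, which makes (Z, +) an abelian group in the terminology introduced later in this document.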
16.
Projective representation
–
The interest for algebra is in the process in the other direction: given a projective representation, try to lift it to a conventional linear representation. However, one can always lift a projective representation of G to a linear representation of a different group C, which will be a central extension of G. To understand this, note that GL(V) → PGL(V) exhibits GL(V) as a central extension of PGL(V). The analysis of the lifting question involves group cohomology. Indeed, if one fixes for each g in G a lifted element L(g) in lifting from PGL(V) back to GL(V), then the lifts satisfy L(g)L(h) = c(g, h)L(gh) for some scalars c(g, h), and it follows that the 2-cocycle or Schur multiplier c satisfies the cocycle equation c(g, h) c(gh, k) = c(h, k) c(g, hk) for all g, h, k in G. This c depends on the choice of the lift L; a different choice of lift L′(g) = f(g)L(g) will result in a different cocycle c′(g, h) = f(g) f(h) f(gh)−1 c(g, h) cohomologous to c. Thus L defines a unique class in the second cohomology group H2. This class might not be trivial; in general, a nontrivial class leads to an extension problem for G. If G is suitably extended we obtain a linear representation of the extended group; the solution is always a central extension. From Schur's lemma, it follows that the irreducible representations of central extensions of G, and the irreducible projective representations of G, are essentially the same. Studying projective representations of Lie groups leads one to consider true representations of their central extensions; in particular, the group SO(3) is doubly covered by SU(2). This has important applications in quantum mechanics, as the study of representations of SU(2) leads to a theory of spin. The group SO+(3, 1), isomorphic to the Möbius group, is doubly covered by SL(2, C). Both are supergroups of the aforementioned SO(3) and SU(2) respectively and form a relativistic spin theory. The orthogonal group O(n) is double covered by the pin group Pin±(n). The symplectic group Sp(2n) is double covered by the metaplectic group Mp(2n)
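The lifting computation can be summarized compactly; assuming the convention (one of two common ones) that each lift satisfies L(g)L(h) = c(g, h)L(gh), associativity of the lifts forces the cocycle identity, and changing the lift changes c only within its cohomology class:

```latex
\begin{aligned}
L(g)\,L(h) &= c(g,h)\,L(gh), \\
c(g,h)\,c(gh,k) &= c(h,k)\,c(g,hk) \qquad \text{for all } g,h,k \in G, \\
L'(g) = f(g)\,L(g) \;&\Longrightarrow\; c'(g,h) = f(g)\,f(h)\,f(gh)^{-1}\,c(g,h).
\end{aligned}
```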
17.
Group extension
–
In mathematics, a group extension is a general means of describing a group in terms of a particular normal subgroup and quotient group. If Q and N are two groups, then G is an extension of Q by N if there is a short exact sequence 1 → N → G → Q → 1. If G is an extension of Q by N, then G is a group, N is a normal subgroup of G, and the quotient group G/N is isomorphic to Q. Group extensions arise in the context of the extension problem, where the groups Q and N are known and the properties of G are to be determined. Note that the phrasing G is an extension of N by Q is also used by some. Since any finite group G possesses a maximal normal subgroup N with simple factor group G/N, all finite groups may be constructed as a series of extensions with finite simple groups. This fact was a motivation for completing the classification of finite simple groups. An extension is called a central extension if the subgroup N lies in the center of G. One extension, the direct product, is immediately obvious. Several other general classes of extensions are known, but no theory exists which treats all the possible extensions at one time. Group extension is usually described as a hard problem; it is termed the extension problem. To consider some examples: if G = H × K, then G is an extension of both H and K. More generally, if G is a semidirect product of K and H, then G is an extension of H by K, so such products as the wreath product provide further examples of extensions. The question of what groups G are extensions of H by N is called the extension problem. As to its motivation, consider that the composition series of a finite group is a finite sequence of subgroups, where each Ai+1 is an extension of Ai by some simple group. In general, this problem is hard, and all the most useful results classify extensions that satisfy some additional condition. It is important to know when two extensions are equivalent or congruent; in fact, it is sufficient to have a group homomorphism making the relevant diagram commute, since due to the assumed commutativity of the diagram, the map T is forced to be an isomorphism by the short five lemma. A split extension is an extension equipped with a homomorphism s : Q → G such that going from Q to G by s and then back to Q by the quotient map gives the identity map on Q. 
In this situation, it is said that s splits the above exact sequence. For a full discussion of why this is true, see semidirect product. In general in mathematics, an extension of a structure K is usually regarded as a structure L of which K is a substructure. However, in group theory the opposite terminology has crept in, partly because of the notation Ext, which reads easily as extensions of Q by N. The paper of Brown and Porter on the Schreier theory of nonabelian extensions uses the terminology that an extension of K gives a larger structure. A central extension of a group G is a short exact sequence of groups 1 → A → E → G → 1 such that A is contained in Z(E), the center of E
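As a concrete toy example (a Python sketch, with the groups written additively as integers mod n), Z4 is a central extension of Z2 by Z2 that is not the direct product:

```python
# Z4 = {0,1,2,3} under addition mod 4; N = {0, 2} is a central subgroup
# isomorphic to Z2, and the quotient map g -> g mod 2 realizes
#     1 -> Z2 -> Z4 -> Z2 -> 1.
Z4 = range(4)
N = {0, 2}

# The kernel of the quotient map is exactly N:
assert {g for g in Z4 if g % 2 == 0} == N

# Yet Z4 is not Z2 x Z2: it has an element of order 4, while every
# element of the direct product Z2 x Z2 has order at most 2.
def order(g, n):
    k = 1
    while (g * k) % n != 0:
        k += 1
    return k

assert order(1, 4) == 4
```

This illustrates why knowing N and Q does not determine G: both Z4 and Z2 × Z2 are extensions of Z2 by Z2.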
18.
Special orthogonal group
–
Equivalently, it is the group of n×n orthogonal matrices, where the group operation is given by matrix multiplication; an orthogonal matrix is a real matrix whose inverse equals its transpose. An important subgroup of O(n) is the special orthogonal group, denoted SO(n). This group is also called the rotation group because, in dimensions 2 and 3, its elements are the usual rotations around a point or an axis. In low dimension, these groups have been widely studied; see SO(2), SO(3) and SO(4). The orthogonal group is a subgroup of the general linear group GL(n), given by O(n) = {Q : QTQ = QQT = I}, where QT is the transpose of Q and I is the identity matrix. This article mainly discusses the groups of quadratic forms that may be expressed over some bases as the dot product; over the reals, for any non-degenerate quadratic form, there is a basis on which the matrix of the form is diagonal with entries 1 and −1. Thus the orthogonal group depends only on the numbers of 1 and of −1, and is denoted O(p, q); for details, see indefinite orthogonal group. The derived subgroup Ω of O is an often studied object, and the Cartan–Dieudonné theorem describes the structure of the orthogonal group for a non-singular form. The determinant of any orthogonal matrix is either 1 or −1. The orthogonal n-by-n matrices with determinant 1 form a normal subgroup of O(n) known as the special orthogonal group SO(n), consisting of all proper rotations. By analogy with GL–SL, the full orthogonal group is sometimes called the general orthogonal group and denoted GO. The term rotation group can be used to describe either the special or general orthogonal group. When this distinction is to be emphasized, the groups may be denoted O(n) and GO(n), reserving n for the dimension of the space. The letters p or r are also used, indicating the rank of the corresponding Lie algebra. In two dimensions, O(2) is the group of all rotations about the origin and all reflections along a line through the origin; SO(2) is the group of all rotations about the origin. These groups are related: SO(2) is a subgroup of O(2) of index 2. 
More generally, in any number of dimensions an even number of reflections gives a rotation; therefore, the rotations form a subgroup of O(n), but the reflections do not form a subgroup. A reflection through the origin may be generated as a combination of one reflection along each of the axes; in even dimensions, the reflection through the origin is not a reflection in the usual sense, but rather a rotation
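These determinant facts are easy to verify in two dimensions; a quick Python check (the angle values are arbitrary):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def rotation(t):
    # proper rotation by angle t: determinant +1
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def reflection(t):
    # reflection across the line at angle t/2: determinant -1
    return [[math.cos(t), math.sin(t)], [math.sin(t), -math.cos(t)]]

assert abs(det(rotation(0.7)) - 1.0) < 1e-12
assert abs(det(reflection(0.3)) + 1.0) < 1e-12
# A product of two reflections (an even number) is a rotation:
assert abs(det(matmul(reflection(0.3), reflection(1.1))) - 1.0) < 1e-12
```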
19.
Special unitary group
–
In mathematics, the special unitary group of degree n, denoted SU(n), is the Lie group of n×n unitary matrices with determinant 1. The group operation is matrix multiplication. The special unitary group is a subgroup of the unitary group U(n), consisting of all n×n unitary matrices. As a compact group, U(n) is the group that preserves the standard inner product on Cn. It is itself a subgroup of the general linear group: SU(n) ⊂ U(n) ⊂ GL(n, C). The SU(n) groups find wide application in the Standard Model of particle physics, especially SU(2) in the electroweak interaction. The simplest case, SU(1), is the trivial group, having only a single element. The group SU(2) is isomorphic to the group of quaternions of norm 1; since unit quaternions can be used to represent rotations in 3-dimensional space, there is a surjective homomorphism from SU(2) to the rotation group SO(3) whose kernel is {+I, −I}. SU(2) is also identical to one of the symmetry groups of spinors, Spin(3). The special unitary group SU(n) is a real Lie group and its dimension as a real manifold is n2 − 1. Topologically, it is compact and simply connected; algebraically, it is a simple Lie group. The center of SU(n) is isomorphic to the cyclic group Zn, and its outer automorphism group, for n ≥ 3, is Z2, while the outer automorphism group of SU(2) is the trivial group. A maximal torus, of rank n − 1, is given by the set of diagonal matrices with determinant 1. The Weyl group is the symmetric group Sn, which is represented by signed permutation matrices. The Lie algebra of SU(n), denoted by su(n), can be identified with the set of traceless antihermitian n×n complex matrices, with the regular commutator as Lie bracket. Particle physicists often use a different, equivalent representation: the set of traceless hermitian n×n complex matrices with Lie bracket given by −i times the commutator. The Lie algebra su(n) can be generated by n2 operators Ôij, i, j = 1, 2, ..., n, which satisfy the commutator relationships [Ôij, Ôkℓ] = δjk Ôiℓ − δiℓ Ôkj for i, j, k, ℓ = 1, 2, ..., n, where δjk denotes the Kronecker delta. 
Additionally, the operator N̂ = Σi=1..n Ôii satisfies [N̂, Ôij] = 0, which implies that the number of independent generators of the Lie algebra is n2 − 1. We also take Σc,e=1..n2−1 dace dbce = ((n2 − 4)/n) δab as a normalization convention. In the (n2 − 1)-dimensional adjoint representation, the generators are represented by (n2 − 1)×(n2 − 1) matrices, whose elements are defined by the structure constants themselves. SU(2) is the following group: SU(2) = { [[α, −β̄], [β, ᾱ]] : α, β ∈ C, |α|2 + |β|2 = 1 }, where the overline denotes complex conjugation
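The 2×2 parametrization of SU(2) just given can be verified directly; a small pure-Python sketch checking that the set is closed under matrix multiplication and keeps determinant 1 (the sample entries are arbitrary):

```python
import math

def su2(alpha, beta):
    # [[alpha, -conj(beta)], [beta, conj(alpha)]]; lies in SU(2)
    # exactly when |alpha|^2 + |beta|^2 = 1
    return [[alpha, -beta.conjugate()], [beta, alpha.conjugate()]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

U = su2(complex(math.cos(0.3), math.sin(0.3)), 0j)  # |alpha| = 1, beta = 0
V = su2(0.6 + 0j, 0.8j)                             # 0.36 + 0.64 = 1
P = matmul(U, V)

assert abs(det(P) - 1) < 1e-12                      # determinant stays 1
assert abs(P[1][1] - P[0][0].conjugate()) < 1e-12   # the SU(2) shape is preserved
assert abs(P[0][1] + P[1][0].conjugate()) < 1e-12
```

Note that det(su2(α, β)) = |α|2 + |β|2, so the unit-norm condition is exactly the determinant-1 condition, mirroring the unit-quaternion description above.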
20.
Spinor
–
In geometry and physics, spinors are elements of a vector space that can be associated with Euclidean space. Like geometric vectors and more general tensors, spinors transform linearly when the Euclidean space is subjected to a slight rotation. It is also possible to associate a substantially similar notion of spinor to Minkowski space, in which case the Lorentz transformations of special relativity play the role of rotations. Spinors were introduced in geometry by Élie Cartan in 1913. In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or spin, of the electron and other subatomic particles. Spinors are characterized by the specific way in which they behave under rotations: they change in different ways depending not just on the overall final rotation, but on the details of how that rotation was achieved as a continuous path. There are two topologically distinguishable classes of paths through rotations that result in the same overall rotation, as famously illustrated by the belt trick puzzle. These two inequivalent classes yield spinor transformations of opposite sign. The spin group is the group of all rotations keeping track of the class; it doubly covers the rotation group, since each rotation can be obtained in two inequivalent ways as the endpoint of a path. The Clifford algebra is an algebra that can be constructed from Euclidean space. Both the spin group and its Lie algebra are embedded inside the Clifford algebra in a natural way. After choosing an orthonormal basis of Euclidean space, a representation of the Clifford algebra is generated by gamma matrices, matrices that satisfy a set of canonical anti-commutation relations. The spinors are the column vectors on which these matrices act. In three Euclidean dimensions, for instance, the Pauli spin matrices are such a set of matrices. However, the particular matrix representation of the Clifford algebra, and hence what precisely constitutes a column vector, involves the choice of basis. What characterizes spinors and distinguishes them from vectors and other tensors is subtle. 
Consider applying a rotation to the coordinates of a system. No object in the system itself has moved, only the coordinates have, so there will always be a compensating change in those coordinate values when applied to any object of the system. Geometrical vectors, for example, have components that undergo the same rotation as the coordinates. More broadly, any tensor associated with the system also has coordinate descriptions that adjust to compensate for changes to the coordinate system itself. Spinors do not appear at this level of the description of a physical system; rather, spinors appear when we imagine that instead of a single rotation, the coordinate system is gradually rotated between some initial and final configuration
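The sign difference between the two path classes can be made concrete with the standard spin-1/2 lift of rotations about the z-axis; a short Python sketch:

```python
import cmath, math

def spin_half_rotation(theta):
    # Diagonal of the SU(2) element covering a rotation by angle theta
    # about the z-axis: diag(exp(-i theta/2), exp(+i theta/2))
    return [cmath.exp(-1j * theta / 2), cmath.exp(1j * theta / 2)]

full_turn = spin_half_rotation(2 * math.pi)
double_turn = spin_half_rotation(4 * math.pi)

# A 2*pi rotation returns every geometric vector to itself, but the
# spinor transformation is -I: the two path classes differ by a sign.
assert abs(full_turn[0] + 1) < 1e-12 and abs(full_turn[1] + 1) < 1e-12
# Only after a 4*pi rotation does the spinor return to +I:
assert abs(double_turn[0] - 1) < 1e-12 and abs(double_turn[1] - 1) < 1e-12
```

This is exactly the double-cover behavior described above: the rotation angle is halved in the spinor transformation, so a full turn of the space is only half a turn for the spinor.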
21.
Pseudovector
–
In physics and mathematics, a pseudovector (or axial vector) is a quantity that transforms like a vector under a proper rotation, but gains an additional sign flip under an improper rotation such as a reflection. Geometrically, it is the opposite, of equal magnitude but in the opposite direction, of its mirror image. This is as opposed to a true vector, also known as a polar vector. In three dimensions the pseudovector p is associated with the cross product of two polar vectors a and b: p = a × b. The vector p calculated this way is a pseudovector. One example is the normal to an oriented plane. An oriented plane can be defined by two non-parallel vectors, a and b, which can be said to span the plane. The vector a × b is a normal to the plane, and is a pseudovector; this has consequences in computer graphics, where it has to be considered when transforming surface normals. A number of quantities in physics behave as pseudovectors rather than polar vectors, including the magnetic field. In mathematics pseudovectors are equivalent to three-dimensional bivectors, from which the transformation rules of pseudovectors can be derived. More generally, in n-dimensional geometric algebra, pseudovectors are the elements of the algebra with dimension n − 1. The label pseudo can be further generalized to pseudoscalars and pseudotensors. Physical examples of pseudovectors include magnetic field, torque, vorticity, and angular momentum. Consider the pseudovector angular momentum L = r × p. Driving in a car, and looking forward, each of the wheels has an angular momentum vector pointing to the left. The distinction between vectors and pseudovectors becomes important in understanding the effect of symmetry on the solution to physical systems. Consider an electric current loop in the z = 0 plane that inside the loop generates a magnetic field oriented in the z direction. This system is symmetric under mirror reflections through this plane, with the field unchanged by the reflection. The definition of a vector in physics is more specific than the mathematical definition of a vector; this transformation requirement is what distinguishes a vector from any other triplet of physical quantities. The discussion so far only relates to proper rotations; however, one can also consider improper rotations, i. e. 
a mirror-reflection possibly followed by a proper rotation. Suppose everything in the universe undergoes an improper rotation described by the rotation matrix R. If the vector v is a polar vector, it will be transformed to v′ = Rv; if it is a pseudovector, it will be transformed to v′ = −Rv. Both cases are captured by the rule v′ = (det R) Rv for a pseudovector, where det denotes determinant; this formula works because the determinants of proper and improper rotation matrices are +1 and −1, respectively. Suppose v1 and v2 are known pseudovectors, and v3 is defined to be their sum, v3 = v1 + v2. If the universe is transformed by an improper rotation matrix R, then v3 is transformed to v3′ = v1′ + v2′ = (−Rv1) + (−Rv2) = −R(v1 + v2) = −Rv3. So v3 is also a pseudovector. On the other hand, suppose v1 is known to be a polar vector, v2 is known to be a pseudovector, and v3 is defined to be their sum, v3 = v1 + v2
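The sign rule can be verified directly for the cross product; a Python sketch using a mirror reflection through the z = 0 plane (the sample vectors are arbitrary):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def reflect_z(v):
    # improper "rotation": mirror through the z = 0 plane (determinant -1)
    return (v[0], v[1], -v[2])

a = (1.0, 2.0, 3.0)
b = (-4.0, 0.5, 2.0)
p = cross(a, b)  # a pseudovector

# Polar vectors transform as v -> Rv, but the cross product picks up
# an extra sign: (Ra) x (Rb) = -R(a x b) for this reflection.
lhs = cross(reflect_z(a), reflect_z(b))
rhs = tuple(-c for c in reflect_z(p))
assert lhs == rhs
```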
22.
Abelian group
–
In abstract algebra, an abelian group, also called a commutative group, is a group in which the result of the group operation does not depend on the order of the elements; that is, these are the groups that obey the axiom of commutativity. Abelian groups generalize the arithmetic of addition of integers, and they are named after Niels Henrik Abel. The concept of an abelian group is one of the first concepts encountered in undergraduate abstract algebra, from which many other basic concepts, such as modules and vector spaces, are developed. The theory of abelian groups is generally simpler than that of their non-abelian counterparts; on the other hand, the theory of infinite abelian groups is an area of current research. An abelian group is a set, A, together with an operation • that combines any two elements a and b to form another element denoted a • b. The symbol • is a general placeholder for a concretely given operation. The set and operation must satisfy these requirements, known as the abelian group axioms. Closure: for all a, b in A, the result a • b is also in A. Associativity: for all a, b, c in A, (a • b) • c = a • (b • c). Identity element: there exists an element e in A such that for all elements a in A, the equation e • a = a • e = a holds. Inverse element: for each a in A, there exists an element b in A such that a • b = b • a = e. Commutativity: for all a, b in A, a • b = b • a. A group in which the operation is not commutative is called a non-abelian group or non-commutative group. There are two main notational conventions for abelian groups, additive and multiplicative. Generally, the multiplicative notation is the usual notation for groups, while the additive notation is the usual notation for modules. To verify that a finite group is abelian, a table, known as a Cayley table, can be constructed in a similar fashion to a multiplication table. If the group is G = {g1, g2, ..., gn} under the operation ⋅, the (i, j)th entry of this table contains the product gi ⋅ gj. The group is abelian if and only if this table is symmetric about the main diagonal. This is true since if the group is abelian, then gi ⋅ gj = gj ⋅ gi, which implies that the (i, j)th entry of the table equals the (j, i)th entry. Every cyclic group G is abelian, because if x, y are in G, then x = am and y = an for some generator a, so xy = aman = am+n = an+m = anam = yx. Thus the integers, Z, form an abelian group under addition, as do the integers modulo n. Every ring is an abelian group with respect to its addition operation. 
In a commutative ring the invertible elements, or units, form an abelian multiplicative group. In particular, the real numbers are an abelian group under addition, and the nonzero real numbers are an abelian group under multiplication
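The Cayley-table test described above is easy to carry out in Python for the integers modulo 5 under addition:

```python
n = 5
G = range(n)
# Cayley table of (Z_5, +): entry (i, j) holds gi + gj (mod 5)
table = [[(g + h) % n for h in G] for g in G]

# The group is abelian iff the table is symmetric about the main diagonal:
assert all(table[i][j] == table[j][i] for i in G for j in G)
```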
23.
Irreducible representations
–
Every finite-dimensional unitary representation on a Hermitian vector space V is the direct sum of irreducible representations. The structure analogous to an irreducible representation in the resulting theory is a simple module. Let ρ be a representation, i.e. a homomorphism ρ : G → GL(V) of a group G, where V is a vector space over a field F. If we pick a basis B for V, ρ can be thought of as a function from the group into a set of invertible matrices; however, it simplifies things greatly if we think of the space V without a basis. A linear subspace W ⊂ V is called G-invariant if ρ(g)w ∈ W for all g ∈ G and all w ∈ W. The restriction of ρ to a G-invariant subspace W ⊂ V is known as a subrepresentation. A representation ρ : G → GL(V) is said to be irreducible if its only subrepresentations are the trivial ones, given by the subspaces {0} and V itself. If there is a proper non-trivial invariant subspace, ρ is said to be reducible. Group elements can be represented by matrices, although the term represented has a specific and precise meaning in this context. A representation of a group is a mapping from the group elements to the general linear group of matrices. Two representations D and D′ are said to be equivalent representations if they are related by a similarity transformation. Irreducible blocks are often labelled by a superscript such as D(K), although some authors just write the numerical label without brackets. If a representation can be brought into block-diagonal form, the dimension of D is the sum of the dimensions of the blocks: dim D = dim D(1) + dim D(2) + … + dim D(K). If this is not possible, then the representation is indecomposable. Identifying the irreducible representations therefore allows one to label the states of a system and predict how they will split under perturbations; it also allows one to derive relativistic wave equations
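A minimal worked example of reducibility, as a Python sketch: the two-element group acting on R2 by swapping coordinates splits into two one-dimensional invariant subspaces:

```python
# The nontrivial element s of Z_2 acts on R^2 by swapping coordinates;
# D(s) = [[0, 1], [1, 0]] defines a 2-dimensional representation.
def D_s(v):
    return (v[1], v[0])

v_plus = (1.0, 1.0)    # spans an invariant line: D(s) acts as +1 (trivial irrep)
v_minus = (1.0, -1.0)  # spans an invariant line: D(s) acts as -1 (sign irrep)

assert D_s(v_plus) == v_plus
assert D_s(v_minus) == (-1.0, 1.0)  # equals -1 times v_minus

# So D is reducible: it decomposes into two 1-dimensional irreducibles,
# and dim D = 2 = 1 + 1, the sum of the block dimensions.
```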
24.
Phase (waves)
–
Phase is the position of a point in time on a waveform cycle. A complete cycle is defined as the interval required for the waveform to return to its initial value. The graphic to the right shows how one cycle constitutes 360° of phase; the graphic also shows how phase is sometimes expressed in radians, where one radian of phase equals approximately 57.3°. Phase can also be an expression of relative displacement between two corresponding features of two waveforms having the same frequency. In sinusoidal functions or in waves, phase has two different, but closely related, meanings. One is the initial angle of a sinusoidal function at its origin, sometimes called phase offset or phase difference. Another usage is the fraction of the wave cycle that has elapsed relative to the origin. Phase shift is any change that occurs in the phase of one quantity. The symbol φ is sometimes referred to as a phase shift or phase offset because it represents a shift from zero phase. For infinitely long sinusoids, a change in φ is the same as a shift in time. If x(t) = A ⋅ cos(2πft + φ) is delayed by 1/4 of its cycle, it becomes x(t) = A ⋅ cos(2πf(t − T/4) + φ) = A ⋅ cos(2πft + φ − π/2), whose phase is now φ − π/2: it has been shifted by π/2 radians. Phase difference is the difference, expressed in degrees or time, between two waves having the same frequency and referenced to the same point in time. Two oscillators that have the same frequency and no phase difference are said to be in phase. Two oscillators that have the same frequency and different phases have a phase difference. The amount by which such oscillators are out of phase with each other can be expressed in degrees from 0° to 360°. If the phase difference is 180 degrees, then the two oscillators are said to be in antiphase. If two interacting waves meet at a point where they are in antiphase, then destructive interference will occur. 
It is common for waves of electromagnetic, acoustic or other energy to become superposed in their transmission medium; when that happens, the phase difference determines whether they reinforce or weaken each other. Complete cancellation is possible for waves with equal amplitudes. Time is sometimes used instead of angle to express position within the cycle of an oscillation. A phase difference is analogous to two athletes running around a track at the same speed and direction but starting at different positions on the track. They pass a point at different instants in time, but the time difference between them is a constant, the same for every pass, since they are at the same speed and in the same direction. If they were at different speeds, the phase difference would be undefined and would only reflect their different starting positions
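Antiphase cancellation and in-phase reinforcement can be checked numerically; a Python sketch with an arbitrary frequency and sample times:

```python
import math

A, f = 1.0, 2.0  # amplitude and frequency (arbitrary sample values)

def wave(t, phase):
    return A * math.cos(2 * math.pi * f * t + phase)

for t in (0.0, 0.1, 0.37, 1.25):
    in_phase = wave(t, 0.0) + wave(t, 0.0)
    antiphase = wave(t, 0.0) + wave(t, math.pi)  # 180 degrees out of phase
    assert abs(antiphase) < 1e-12        # complete (destructive) cancellation
    assert in_phase == 2 * wave(t, 0.0)  # reinforcement: double amplitude
```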
25.
Electrodynamics
–
The theory provides an excellent description of electromagnetic phenomena whenever the relevant length scales and field strengths are large enough that quantum mechanical effects are negligible. For small distances and low field strengths, such interactions are better described by quantum electrodynamics. Fundamental physical aspects of classical electrodynamics are presented in many texts, such as those by Feynman, Leighton and Sands, Griffiths, and Panofsky and Phillips. The physical phenomena that electromagnetism describes have been studied as separate fields since antiquity; for example, there were many advances in the field of optics centuries before light was understood to be an electromagnetic wave. For a detailed historical account, consult Pauli, Whittaker, and Pais. The Lorentz force law, F = q(E + v × B), illustrates that the Lorentz force is the sum of two vectors. One is the cross product of the velocity and magnetic field vectors; based on the properties of the cross product, this produces a vector that is perpendicular to both the velocity and magnetic field vectors. The other vector is in the same direction as the electric field. The sum of these two vectors is the Lorentz force. In the absence of an electric field, the force is perpendicular to the velocity of the particle; if both electric and magnetic fields are present, the Lorentz force is the sum of both of these vectors. The electric field E is defined such that, on a stationary charge, F = q0 E, where q0 is what is known as a test charge. The size of the charge doesn't really matter, as long as it is small enough not to influence the field by its mere presence. What is plain from this definition, though, is that the unit of E is N/C; this unit is equal to V/m, see below. In electrostatics, where charges are not moving, around a distribution of point charges, both of the above equations are cumbersome, especially if one wants to determine E as a function of position. A scalar function called the electric potential can help. 
Electric potential, also called voltage, is defined by the line integral φ = − ∫C E ⋅ dl, where φ is the electric potential and C is the path over which the integral is taken. Unfortunately, this definition has a caveat. From Maxwell's equations, it is clear that ∇ × E is not always zero, and hence the scalar potential alone is insufficient to define the electric field exactly. As a result, one must add a correction factor, which is generally done by subtracting the time derivative of the vector potential A described below. Whenever the charges are quasistatic, however, this condition will be essentially met, and the scalar φ will add to other potentials as a scalar
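The two-term structure of the Lorentz force is easy to verify numerically; a Python sketch with illustrative field values:

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lorentz_force(q, E, v, B):
    # F = q (E + v x B): an electric term along E plus a magnetic term
    # perpendicular to both v and B
    vxB = cross(v, B)
    return tuple(q * (Ei + wi) for Ei, wi in zip(E, vxB))

q = 1.0
v = (1.0, 0.0, 0.0)
B = (0.0, 0.0, 1.0)
F = lorentz_force(q, (0.0, 0.0, 0.0), v, B)  # no electric field

# With E = 0 the force is perpendicular to the particle's velocity:
assert sum(Fi * vi for Fi, vi in zip(F, v)) == 0.0
```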
26.
Time
–
Time is the indefinite continued progress of existence and events that occur in apparently irreversible succession from the past through the present to the future. Time is often referred to as the fourth dimension, along with the three spatial dimensions. Time has long been an important subject of study in religion, philosophy, and science; moreover, diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems. Two contrasting viewpoints on time divide prominent philosophers. One view is that time is part of the fundamental structure of the universe, a dimension independent of events, in which events occur in sequence. Isaac Newton subscribed to this realist view, and hence it is referred to as Newtonian time. The second view, in the tradition of Gottfried Leibniz and Immanuel Kant, holds that time is neither an event nor a thing. Time in physics is unambiguously operationally defined as what a clock reads. Time is one of the seven fundamental physical quantities in both the International System of Units and the International System of Quantities. Time is used to define other quantities, such as velocity, so defining time in terms of such quantities would result in circularity of definition. The operational definition leaves aside the question of whether there is something called time, apart from the counting activity just mentioned, that flows. Investigations of a single continuum called spacetime bring questions about space into questions about time, questions that have their roots in the works of early students of natural philosophy. Furthermore, it may be that there is a subjective component to time. Temporal measurement has occupied scientists and technologists, and was a prime motivation in navigation. 
Periodic events and periodic motion have long served as standards for units of time; examples include the apparent motion of the sun across the sky, the phases of the moon, the swing of a pendulum, and the beat of a heart. Currently, the international unit of time, the second, is defined by measuring the electronic transition frequency of caesium atoms. Time is also of significant social importance, having economic value as well as personal value, due to an awareness of the limited time in each day. In day-to-day life, the clock is consulted for periods less than a day, whereas the calendar is consulted for periods longer than a day; increasingly, personal electronic devices display both calendars and clocks simultaneously. The number that marks the occurrence of an event as to hour or date is obtained by counting from a fiducial epoch—a central reference point. Artifacts from the Paleolithic suggest that the moon was used to reckon time as early as 6,000 years ago. Lunar calendars were among the first to appear, with years of either 12 or 13 lunar months; without intercalation to add days or months to some years, seasons quickly drift in a calendar based solely on twelve lunar months
27.
Mass
–
In physics, mass is a property of a physical body. It is the measure of a body's resistance to acceleration when a net force is applied, and it also determines the strength of its gravitational attraction to other bodies. The basic SI unit of mass is the kilogram. Mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale rather than comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity; this is because weight is a force, while mass is the property that determines the strength of this force. In Newtonian physics, mass can be generalized as the amount of matter in an object; however, at very high speeds, special relativity shows that energy is an additional source of mass. Thus, any body having mass has an equivalent amount of energy. In addition, matter is only a loosely defined term in science. There are several distinct phenomena which can be used to measure mass. Active gravitational mass measures the gravitational force exerted by an object. Passive gravitational mass measures the force exerted on an object in a known gravitational field. Inertial mass determines an object's acceleration in the presence of an applied force: according to Newtons second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by a = F/m. A body's mass also determines the degree to which it generates or is affected by a gravitational field; this is sometimes referred to as gravitational mass. The standard International System of Units unit of mass is the kilogram; the kilogram is 1000 grams, first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. Then in 1889, the kilogram was redefined as the mass of the international prototype kilogram. As of January 2013, there were proposals for redefining the kilogram yet again. 
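The distinction between mass and weight drawn above can be sketched in a few lines. The force, mass, and surface gravities below are assumed example values.

```python
# Mass vs. weight: the mass m is the same everywhere, while the weight
# W = m * g depends on the local gravitational field (values assumed).
def acceleration(force_n, mass_kg):
    """Newton's second law: a = F / m."""
    return force_n / mass_kg

g_earth, g_moon = 9.81, 1.62   # m/s^2, approximate surface gravities
m = 10.0                        # kg

print(acceleration(50.0, m))    # 5.0 m/s^2, independent of location
print(m * g_earth)              # weight on Earth, about 98.1 N
print(m * g_moon)               # weight on the Moon, about 16.2 N
```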
In particle physics, mass often has units of eV/c²; the electronvolt and its multiples, such as the MeV, are commonly used in that field. The atomic mass unit is 1/12 of the mass of a carbon-12 atom; the atomic mass unit is convenient for expressing the masses of atoms and molecules. Outside the SI system, other units of mass include the slug, an Imperial unit of mass, and the pound, a unit of both mass and force, used mainly in the United States
28.
Energy
–
In physics, energy is the property that must be transferred to an object in order to perform work on, or to heat, the object, and it can be converted in form but not created or destroyed. The SI unit of energy is the joule, which is the energy transferred to an object by the mechanical work of moving it a distance of 1 metre against a force of 1 newton. Mass and energy are closely related; for example, with a sensitive enough scale, one could measure an increase in mass after heating an object. Living organisms require available energy to stay alive, such as the energy humans get from food. Civilisation gets the energy it needs from energy resources such as fossil fuels and nuclear fuel. The processes of Earths climate and ecosystem are driven by the radiant energy Earth receives from the sun. The total energy of a system can be subdivided and classified in various ways. It may also be convenient to distinguish gravitational energy, thermal energy, electric energy, and several types of kinetic and potential energy. Many of these overlap; for instance, thermal energy usually consists partly of kinetic and partly of potential energy. Some types of energy are a mix of both potential and kinetic energy; an example is mechanical energy, which is the sum of kinetic and potential energy. Whenever physical scientists discover that a phenomenon appears to violate the law of energy conservation, new forms of energy are typically added that account for the discrepancy. Heat and work are special cases in that they are not properties of systems; in general we cannot measure how much heat or work is present in an object, but rather only how much energy is transferred among objects in certain ways during the occurrence of a given process. Heat and work are measured as positive or negative depending on which side of the transfer we view them from, and the distinctions between different kinds of energy are not always clear-cut. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness. 
The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. In 1807, Thomas Young was possibly the first to use the term energy instead of vis viva in its modern sense. Gustave-Gaspard Coriolis described kinetic energy in 1829 in its modern sense, and the law of conservation of energy was first postulated in the early 19th century; it applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat. These developments led to the theory of conservation of energy, formalized largely by William Thomson as the field of thermodynamics
29.
Power (physics)
–
In physics, power is the rate of doing work: the amount of energy consumed per unit time. Having no direction, it is a scalar quantity. In the SI system, the unit of power is the joule per second, known as the watt in honour of James Watt; another common and traditional measure is horsepower. Being the rate of work, the equation for power can be written as an integral; because this integral depends on the trajectory of the point of application of the force and torque, the calculation of work is said to be path dependent. As a physical concept, power requires both a change in the universe and a specified time in which the change occurs. This is distinct from the concept of work, which is measured only in terms of a net change in the state of the physical universe. The output power of a motor is the product of the torque that the motor generates and the angular velocity of its output shaft. The power involved in moving a vehicle is the product of the traction force of the wheels and the velocity of the vehicle. The dimension of power is energy divided by time. The SI unit of power is the watt, which is equal to one joule per second; other units of power include ergs per second, horsepower, metric horsepower, and foot-pounds per minute. One horsepower is equivalent to 33,000 foot-pounds per minute, or the power required to lift 550 pounds by one foot in one second. Other units include dBm, a logarithmic measure with 1 milliwatt as reference, food calories per hour, and Btu per hour. This shows how power is an amount of energy consumed per unit time. If ΔW is the amount of work performed during a period of time of duration Δt, then the average power P_avg = ΔW/Δt is the average amount of work done or energy converted per unit of time. The average power is simply called power when the context makes it clear. The instantaneous power is the limiting value of the average power as the time interval Δt approaches zero: P = lim_{Δt→0} P_avg = lim_{Δt→0} ΔW/Δt = dW/dt. In the case of constant power P, the amount of work performed during a period of duration T is given by W = PT
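The limit P = lim ΔW/Δt as Δt → 0 can be illustrated numerically. The work function W(t) = c·t² below is an assumed example, chosen so that the exact instantaneous power dW/dt = 2·c·t is known.

```python
# Average power dW/dt converging to instantaneous power, for the
# assumed work function W(t) = c * t**2 (so dW/dt = 2 * c * t).
c = 3.0  # J/s^2, example coefficient

def work(t):
    return c * t**2

def avg_power(t, dt):
    """Average power over [t, t + dt]: (W(t+dt) - W(t)) / dt."""
    return (work(t + dt) - work(t)) / dt

t = 2.0
for dt in (1.0, 0.1, 1e-6):
    print(dt, avg_power(t, dt))  # approaches dW/dt = 2*c*t = 12.0 W
```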
30.
Work (physics)
–
In physics, a force is said to do work if, when acting, there is a displacement of the point of application in the direction of the force. For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is equal to the weight of the ball multiplied by the distance to the ground. The SI unit of work is the joule, which is defined as the work expended by a force of one newton through a distance of one metre. The dimensionally equivalent newton-metre is sometimes used as the unit for work, but this can be confused with the newton-metre as a unit of torque. Usage of N⋅m is discouraged by the SI authority, since it can lead to confusion as to whether the quantity expressed in newton metres is a torque measurement or a measurement of energy. Non-SI units of work include the erg, the foot-pound, the foot-poundal, the kilowatt-hour, and the litre-atmosphere. Because work has the same physical dimension as heat, measurement units typically reserved for heat or energy content, such as the therm or the BTU, are occasionally used as well. The work done by a constant force of magnitude F on a point that moves a distance s in a straight line in the direction of the force is the product W = F s. For example, if a force of 10 newtons acts on a point that travels 2 metres, then W = (10 N)(2 m) = 20 J; this is approximately the work done lifting a 1 kg weight from ground level to over a persons head against the force of gravity. Notice that the work is doubled either by lifting twice the weight the same distance or by lifting the same weight twice the distance. Work is closely related to energy. The work-energy principle states that an increase in the kinetic energy of a rigid body is caused by an equal amount of positive work done on the body by the resultant force acting on that body. Conversely, a decrease in kinetic energy is caused by an equal amount of negative work done by the resultant force. From Newtons second law, it can be shown that work on a free, rigid body is equal to the change in kinetic energy corresponding to the velocity and rotation of that body. The work of forces generated by a potential function is known as potential energy. 
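The W = F s example and the work-energy principle above can be checked directly. The numbers mirror the 10 N over 2 m example, with an assumed 1 kg body accelerated from rest.

```python
# Work done by a constant force along its line of action: W = F * s.
F = 10.0          # N
s = 2.0           # m
W = F * s         # 20 J, as in the example above

# Work-energy check: the same work accelerating a 1 kg body from rest
# gives kinetic energy (1/2) m v^2 equal to W, so v = sqrt(2W/m).
m = 1.0
v = (2 * W / m) ** 0.5
print(W)                  # 20.0 J
print(0.5 * m * v**2)     # approximately 20 J, kinetic energy gained
```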
These formulas demonstrate that work is the energy associated with the action of a force, so work subsequently possesses the physical dimensions, and units, of energy. The work/energy principles discussed here are identical to electric work/energy principles. Constraint forces determine the movement of components in a system, constraining the object within a boundary. Constraint forces ensure the velocity in the direction of the constraint is zero, so ideal constraint forces do no work on the system; this reasoning applies straightforwardly only to a single-particle system. For example, in an Atwood machine the rope does work on each body; there are, however, cases where this is not true
31.
Charge density
–
In electromagnetism, charge density is a measure of electric charge per unit of space, in one, two, or three dimensions. More specifically, the linear, surface, or volume charge density is the amount of charge per unit length, surface area, or volume, respectively. The respective SI units are C·m⁻¹, C·m⁻², and C·m⁻³. Like any density, charge density can depend on position; unlike mass density, however, charge and thus charge density can be negative. Therefore, a lithium cation will carry a higher charge density than a sodium cation, due to the lithium cation's smaller ionic radius. Following are the definitions for continuous charge distributions. Within the context of electromagnetism, the subscripts are usually dropped for simplicity: λ, σ, ρ. Other notations may include ρℓ, ρs, ρv, ρL, ρS. In dielectric materials, the total charge of an object can be separated into free and bound charges. The bound charges are called bound because they cannot be removed; in the material they are the electrons bound to the nuclei. In terms of volume charge densities, the total charge density is ρ = ρ_f + ρ_b; likewise for surface charge densities, σ = σ_f + σ_b, where subscripts f and b denote free and bound. The bound charge densities are expressed in terms of P, the polarization density, i.e. the density of electric dipole moments within the material. The negative sign in these expressions arises from the opposite signs of the charges in the dipoles: one end of each dipole lies within the volume of the object, the other at its surface. A more rigorous derivation is given below. For the special case of a homogeneous charge density ρ0, independent of position, i.e. constant throughout the region of the material, the equation simplifies to Q = V ⋅ ρ0. The proof of this is immediate: start with the definition of the charge of any volume, Q = ∫_V ρ_q dV. The equivalent proofs for linear charge density and surface charge density follow the same arguments. As always, the integral of the charge density over a region of space is the charge contained in that region. 
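For the homogeneous case Q = V · ρ0 quoted above, a direct check with assumed density and volume is trivial, and the same answer falls out of a brute-force sum over equal volume elements.

```python
# Total charge of a uniform volume charge density: Q = rho0 * V.
rho0 = 2e-6       # C/m^3, assumed uniform density
volume = 0.005    # m^3, assumed region volume

Q = rho0 * volume
print(Q)          # approximately 1e-08 C

# Same result from Q = integral of rho dV, here a sum over n equal cells:
n = 1000
dV = volume / n
Q_num = sum(rho0 * dV for _ in range(n))
print(abs(Q - Q_num) < 1e-15)  # True
```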
The charge density of a system of discrete charges at a point r is a sum of the densities for each charge qi at position ri. Similar equations are used for the linear and surface charge densities. In special relativity, the length of a segment of wire depends on the velocity of the observer because of length contraction, so charge density will also depend on velocity. Anthony French has described how the magnetic force of a current-bearing wire arises from this relative charge density; he used a Minkowski diagram to show how a neutral current-bearing wire appears to carry a net charge density as observed in a moving frame. The charge density measured in the frame of reference co-moving with the charges is called the proper charge density. It turns out the charge density ρ and current density J transform together as a four-current vector under Lorentz transformations
32.
Electric potential
–
An electric potential is the amount of work needed to move a unit positive charge from a reference point to a specific point inside the field without producing any acceleration. Typically, the reference point is the Earth or a point at infinity. By dividing out the charge on the particle, a remainder is obtained that is a property of the electric field itself. This value can be calculated in either a static or a dynamic electric field at a specific time, in units of joules per coulomb. The electric potential at infinity is assumed to be zero. A generalized electric scalar potential is also used in electrodynamics when time-varying electromagnetic fields are present, but this cannot be so simply calculated. The electric potential and the vector potential together form a four-vector. Classical mechanics explores concepts such as force, energy, and potential; force and potential energy are directly related. A net force acting on any object will cause it to accelerate; as a ball rolls downhill its potential energy decreases, being translated to motion, that is, kinetic energy. It is possible to define the potential of certain force fields so that the potential energy of an object in that field depends only on the position of the object with respect to the field. Two such force fields are the gravitational field and the electric field (in the absence of time-varying magnetic fields). Such fields affect objects because of the intrinsic properties of the object, such as mass or charge. Objects may possess a property known as electric charge, and an electric field exerts a force on charged objects. If the charged object has a positive charge, the force will be in the direction of the electric field vector at that point, while if the charge is negative the force will be in the opposite direction. The magnitude of the force is given by the quantity of the charge multiplied by the magnitude of the electric field vector. The electric potential at a point r in an electric field E is given by the line integral, where C is an arbitrary path connecting the point with zero potential to r. 
When the curl ∇ × E is zero, the line integral above does not depend on the specific path C chosen but only on its endpoints. The concept of electric potential is closely linked with potential energy: a test charge q has an electric potential energy UE given by U E = q V
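The relation U_E = qV can be sketched in a couple of lines; the charge and potential below are assumed example values.

```python
# Potential energy of a test charge in a field: U_E = q * V.
q = 1.602176634e-19   # C, elementary charge
V = 12.0              # volts, assumed potential

U = q * V
print(U)                     # approximately 1.92e-18 J
print(U / 1.602176634e-19)   # the same energy expressed in eV: 12 eV
```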
33.
Volt
–
The volt is the derived unit for electric potential, electric potential difference, and electromotive force. One volt is defined as the difference in electric potential between two points of a conducting wire when an electric current of one ampere dissipates one watt of power between those points. It is also equal to the potential difference between two parallel, infinite planes spaced 1 metre apart that create an electric field of 1 newton per coulomb. Additionally, it is the potential difference between two points that will impart one joule of energy per coulomb of charge that passes through it. It can also be expressed as amperes times ohms, watts per ampere, or joules per coulomb. For the Josephson constant, KJ = 2e/h, the conventional value KJ-90 is used: KJ-90 = 0.4835979 GHz/μV. This standard is typically realized using an array of several thousand or tens of thousands of junctions. Empirically, several experiments have shown that the method is independent of device design, material, measurement setup, etc. In the water-flow analogy sometimes used to explain electric circuits by comparing them with water-filled pipes, voltage is likened to a difference in water pressure. Current is likened to the amount of water flowing at a given pressure through a pipe of a given diameter, and a resistor would be a reduced diameter somewhere in the piping. The relationship between voltage and current is defined by Ohms law. Ohms law is analogous to the Hagen–Poiseuille equation, as both are linear models relating flux and potential in their respective systems. The voltage produced by each electrochemical cell in a battery is determined by the chemistry of that cell; cells can be combined in series for multiples of that voltage. Mechanical generators can usually be constructed to any voltage in a range of feasibility. High-voltage electric power lines: 110 kV and up. Lightning: varies greatly. Volta had determined that the most effective pair of metals to produce electricity was zinc and silver. 
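The conventional Josephson standard mentioned above fixes a frequency-to-voltage ratio, so the voltage of a junction array follows from V = n·f/K_J per junction. The drive frequency, step number, and array size below are assumed for illustration.

```python
# Josephson voltage standard sketch: each junction driven at frequency f
# develops V = n * f / K_J on the n-th Shapiro step (conventional K_J-90).
KJ90 = 483597.9e9    # Hz/V, i.e. 0.4835979 GHz/uV
f = 75e9             # Hz, assumed microwave drive frequency
n = 1                # Shapiro step number (assumed)
junctions = 20000    # assumed array size

v_junction = n * f / KJ90
print(v_junction)              # about 1.55e-4 V per junction
print(junctions * v_junction)  # about 3.1 V for the whole array
```

This shows why arrays of thousands of junctions are needed to realize voltages at the 1 V to 10 V level.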
In 1861, Latimer Clark and Sir Charles Bright coined the name volt for the unit of resistance; by 1873, the British Association for the Advancement of Science had defined the volt, ohm, and farad. In 1881, the International Electrical Congress, now the International Electrotechnical Commission, approved the volt as the unit for electromotive force; it made the volt equal to 10⁸ cgs units of voltage, the cgs system at the time being the customary system of units in science. At that time, the volt was defined as the potential difference across a conductor when a current of one ampere dissipates one watt of power. The international volt was defined in 1893 as 1/1.434 of the emf of a Clark cell. This definition was abandoned in 1908 in favor of a definition based on the international ohm and international ampere, until the entire set of reproducible units was abandoned in 1948. Prior to the development of the Josephson junction voltage standard, the volt was maintained in laboratories using specially constructed batteries called standard cells
34.
Energy density
–
Energy density is the amount of energy stored in a given system or region of space per unit volume. Colloquially it may also be used for energy per unit mass, though specific energy is the accurate term for that quantity. Often only the useful or extractable energy is measured, which is to say that chemically inaccessible energy such as rest mass energy is ignored. In short, pressure is a measure of the enthalpy per unit volume of a system; a pressure gradient has the potential to perform work on the surroundings by converting enthalpy until equilibrium is reached. There are many different types of energy stored in materials. In order of the magnitude of the energy released, these types of reactions are: nuclear, chemical, and electrochemical. Chemical reactions are used by animals to derive energy from food, and electrochemical reactions are used by most mobile devices such as laptop computers and mobile phones to release the energy from batteries. The following is a list of the energy densities of commonly used or well-known energy storage materials. Note that this list does not consider the mass of reactants commonly available, such as the oxygen required for combustion, or the efficiency in use. The following unit conversions may be helpful when considering the data in the table: 1 MJ ≈ 0.28 kWh ≈ 0.37 HPh. In energy storage applications the energy density relates the energy in an energy store to the volume of the storage facility. The higher the energy density of the fuel, the more energy may be stored or transported for the same amount of volume. The energy of a fuel per unit mass is called the specific energy of that fuel. The greatest energy source by far is mass itself: this energy is E = mc², where m = ρV, ρ is the mass per unit volume, V is the volume of the mass itself, and c is the speed of light. This energy, however, can be released only by the processes of nuclear fission, nuclear fusion, or the annihilation of matter with antimatter; it cannot be released by chemical reactions such as combustion. 
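The rest-mass energy E = mc² with m = ρV can be evaluated for a familiar case; water's density and a one-litre volume are assumed here purely for scale.

```python
# Rest-mass energy of one litre of water: E = m * c**2 with m = rho * V.
c = 299792458.0   # m/s, speed of light
rho = 1000.0      # kg/m^3, density of water
V = 1e-3          # m^3, one litre

m = rho * V       # 1 kg
E = m * c**2
print(E)          # roughly 9e16 J locked up as rest-mass energy
```

The result, on the order of 10¹⁷ J for a single kilogram, illustrates why mass itself dwarfs chemical and even nuclear energy densities.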
Although greater matter densities can be achieved, the density of a neutron star would approximate the most dense system capable of matter-antimatter annihilation possible. A black hole, although denser than a neutron star, does not have an equivalent anti-particle form; in the case of small black holes the power output would nevertheless be tremendous. The highest-density sources of energy aside from antimatter are fusion and fission. Fusion includes energy from the sun, which will be available for billions of years, but so far sustained fusion power production continues to be elusive
35.
Angular momentum
–
In physics, angular momentum is the rotational analog of linear momentum. It is an important quantity in physics because it is a conserved quantity: the angular momentum of a system remains constant unless acted on by an external torque. The definition of angular momentum for a point particle is the pseudovector L = r × p, the cross product of the particle's position vector r (relative to some origin) and its momentum vector p. This definition can be applied to each point in continua like solids or fluids. Unlike momentum, angular momentum does depend on where the origin is chosen, since the particle's position is measured from it. The angular momentum of an object can also be connected to the angular velocity ω of the object via the moment of inertia I. However, while ω always points in the direction of the rotation axis, the angular momentum L may point in a different direction, depending on how the mass is distributed. Angular momentum is additive: the total angular momentum of a system is the vector sum of the angular momenta of its parts; for continua or fields one uses integration. Torque can be defined as the rate of change of angular momentum, analogous to force. Applications include the gyrocompass, control moment gyroscope, inertial guidance systems, reaction wheels, and flying discs or Frisbees. In general, conservation limits the possible motion of a system but does not uniquely determine it. In quantum mechanics, angular momentum is an operator with quantized eigenvalues. Angular momentum is subject to the Heisenberg uncertainty principle, meaning only one component can be measured with definite precision; the other two cannot. Also, the spin of elementary particles does not correspond to literal spinning motion. Angular momentum is a vector quantity that represents the product of a body's rotational inertia and rotational velocity about a particular axis, and it can be considered an analog of linear momentum. Thus, where linear momentum is proportional to mass m and linear speed v, p = m v, angular momentum is proportional to moment of inertia I and angular speed ω, L = I ω. Unlike mass, which depends only on the amount of matter, moment of inertia also depends on the position of the axis of rotation and the distribution of the matter. 
Unlike linear speed, which occurs in a straight line, angular speed occurs about a center of rotation. Therefore, strictly speaking, L should be referred to as the angular momentum relative to that center. This simple analysis can also apply to non-circular motion if only the component of the motion which is perpendicular to the radius vector is considered. In that case, L = r m v⊥, where v⊥ = v sin θ is the perpendicular component of the motion. It is this definition, (length of moment arm) × (linear momentum), to which the term moment of momentum refers
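The pseudovector definition L = r × p can be evaluated directly; the mass, position, and velocity below are assumed example values, chosen with v perpendicular to r so the scalar form L = r·m·v⊥ applies too.

```python
import numpy as np

# Angular momentum of a point particle: L = r x p, with p = m * v.
m = 2.0                          # kg
r = np.array([3.0, 0.0, 0.0])    # m, position relative to the origin
v = np.array([0.0, 4.0, 0.0])    # m/s, perpendicular to r here

L = np.cross(r, m * v)
print(L)                  # [ 0.  0. 24.], points along the rotation axis

# Since v is perpendicular to r, the magnitude matches the scalar form
# L = r * m * v_perp = 3 * 2 * 4 = 24.
print(np.linalg.norm(L))  # 24.0
```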
36.
Orbit
–
In physics, an orbit is the gravitationally curved path of an object around a point in space, for example the orbit of a planet about a star or of a natural satellite around a planet. Normally, orbit refers to a regularly repeating path around a body. To a close approximation, planets and satellites follow elliptical orbits, with the central mass being orbited at a focal point of the ellipse, as described by Keplers laws of planetary motion. For ease of calculation, in most situations orbital motion is adequately approximated by Newtonian mechanics. Historically, the apparent motions of the planets were described by European and Arabic philosophers using the idea of celestial spheres. This model posited the existence of perfect moving spheres or rings to which the stars and planets were attached; it assumed the heavens were fixed apart from the motion of the spheres, and was developed without any understanding of gravity. After the planets motions were accurately measured, theoretical mechanisms such as deferents and epicycles were added. Originally geocentric, the model was modified by Copernicus to place the sun at the centre to help simplify it; the model was further challenged during the 16th century, as comets were observed traversing the spheres. The basis for the modern understanding of orbits was first formulated by Johannes Kepler, whose results are summarised in his three laws of planetary motion. Second, he found that the orbital speed of each planet is not constant, as had previously been thought. Third, Kepler found a universal relationship between the orbital properties of all the planets orbiting the Sun: for the planets, the cubes of their distances from the Sun are proportional to the squares of their orbital periods. Jupiter and Venus, for example, are respectively about 5.2 and 0.723 AU distant from the Sun, and their orbital periods are respectively about 11.86 and 0.615 years. The proportionality is seen by the fact that the ratio for Jupiter, 5.2³/11.86², is practically equal to that for Venus, 0.723³/0.615². 
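The Jupiter/Venus comparison above is easy to verify: the ratio a³/T² should come out nearly the same for both planets. The distances and periods are the ones quoted in the text.

```python
# Kepler's third law check: a**3 / T**2 is (nearly) constant across planets.
jupiter = (5.2, 11.86)    # (distance in AU, period in years)
venus = (0.723, 0.615)

def kepler_ratio(a, T):
    return a**3 / T**2

print(kepler_ratio(*jupiter))  # approximately 0.9996
print(kepler_ratio(*venus))    # approximately 0.9992
```

In these units (AU and years) the ratio is close to 1 by construction, since Earth has a = 1 AU and T = 1 year.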
Idealised orbits meeting these rules are known as Kepler orbits. Isaac Newton demonstrated that Keplers laws were derivable from his theory of gravitation and that, in general, the orbits of bodies subject to gravity were conic sections. Newton showed that, for a pair of bodies, the sizes of the orbits are in inverse proportion to their masses. Where one body is much more massive than the other, it is a convenient approximation to take the center of mass as coinciding with the center of the more massive body. Lagrange developed a new approach to Newtonian mechanics emphasizing energy more than force. In a dramatic vindication of classical mechanics, in 1846 Le Verrier was able to predict the position of Neptune based on unexplained perturbations in the orbit of Uranus. However, the anomalous precession of Mercurys perihelion later led astronomers to recognize that Newtonian mechanics did not provide the highest accuracy in understanding orbits. In relativity theory, orbits follow geodesic trajectories which are usually approximated very well by the Newtonian predictions, but the differences are measurable. Essentially all the experimental evidence that can distinguish between the theories agrees with relativity theory to within experimental measurement accuracy
37.
Spin (physics)
–
In quantum mechanics and particle physics, spin is an intrinsic form of angular momentum carried by elementary particles, composite particles, and atomic nuclei. Spin is one of two types of angular momentum in quantum mechanics, the other being orbital angular momentum. In some ways, spin is like a vector quantity: it has a definite magnitude and a direction, though quantization makes this direction different from that of an ordinary vector. All elementary particles of a given kind have the same magnitude of spin angular momentum, which is indicated by assigning the particle a spin quantum number. The SI unit of spin is the same as for classical angular momentum, the N·m·s or kg·m²·s⁻¹. Very often, the spin quantum number is simply called spin, leaving its meaning as the unitless spin quantum number to be inferred from context. When combined with the spin-statistics theorem, the spin of electrons results in the Pauli exclusion principle. Wolfgang Pauli was the first to propose the concept of spin. In 1925, Ralph Kronig, George Uhlenbeck, and Samuel Goudsmit at Leiden University suggested a physical interpretation of particles spinning around their own axis. The mathematical theory was worked out in depth by Pauli in 1927, and when Paul Dirac derived his relativistic quantum mechanics in 1928, electron spin was an essential part of it. As the name suggests, spin was originally conceived as the rotation of a particle around some axis, and this picture is correct insofar as spin obeys the same mathematical laws as quantized angular momenta do. On the other hand, spin has some properties that distinguish it from orbital angular momenta: although the direction of its spin can be changed, a particle cannot be made to spin faster or slower. The spin of a particle is associated with a magnetic dipole moment with a g-factor differing from 1; this could only occur classically if the internal charge of the particle were distributed differently from its mass. The conventional definition of the spin quantum number, s, is s = n/2, where n can be any non-negative integer. 
Hence the allowed values of s are 0, 1/2, 1, 3/2, 2, etc. The value of s for an elementary particle depends only on the type of particle and cannot be altered in any known way. The spin angular momentum, S, of any physical system is quantized. The allowed values of S are S = ℏ√(s(s+1)) = (h/4π)√(n(n+2)). In contrast, orbital angular momentum can only take on integer values of s, i.e. even-numbered values of n. Those particles with half-integer spins, such as 1/2, 3/2, 5/2, are known as fermions, while those particles with integer spins, such as 0, 1, 2, are known as bosons. The two families of particles obey different rules and broadly have different roles in the world around us. A key distinction between the two families is that fermions obey the Pauli exclusion principle: that is, there cannot be two identical fermions simultaneously having the same quantum numbers
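The quantization rule S = ℏ√(s(s+1)) can be tabulated for the first few allowed s values; the s = 1/2 line corresponds to an electron.

```python
import math

# Magnitude of spin angular momentum: S = hbar * sqrt(s * (s + 1)),
# with the spin quantum number s = n/2 for non-negative integer n.
hbar = 1.054571817e-34   # J*s, reduced Planck constant

def spin_magnitude(s):
    return hbar * math.sqrt(s * (s + 1))

for s in (0, 0.5, 1, 1.5, 2):
    print(s, spin_magnitude(s))
# An electron (s = 1/2) has S = hbar * sqrt(3) / 2, about 9.13e-35 J*s.
```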
38.
Magnetic field
–
A magnetic field is the magnetic effect of electric currents and magnetic materials. The magnetic field at any point is specified by both a direction and a magnitude; as such it is represented by a vector field. The term is used for two distinct but closely related fields, denoted by the symbols B and H, where H is measured in units of amperes per metre and B is measured in teslas (newtons per metre per ampere) in the SI. B is most commonly defined in terms of the Lorentz force it exerts on moving electric charges. Magnetic fields can be produced by moving electric charges and by the intrinsic magnetic moments of elementary particles associated with a fundamental quantum property, their spin. In quantum physics, the electromagnetic field is quantized and electromagnetic interactions result from the exchange of photons. Magnetic fields are used throughout modern technology, particularly in electrical engineering. The Earth produces its own magnetic field, which is important in navigation. Rotating magnetic fields are used in electric motors and generators. Magnetic forces give information about the charge carriers in a material through the Hall effect. The interaction of magnetic fields in electric devices such as transformers is studied in the discipline of magnetic circuits. Petrus Peregrinus mapped out the magnetic field on the surface of a spherical magnet using iron needles; noting that the resulting field lines crossed at two points, he named those points poles in analogy to Earths poles. He also clearly articulated the principle that magnets always have both a north and south pole, no matter how finely one slices them. Almost three centuries later, William Gilbert of Colchester replicated Petrus Peregrinus work and was the first to state explicitly that Earth is a magnet. Published in 1600, Gilberts work, De Magnete, helped to establish magnetism as a science. In 1750, John Michell stated that magnetic poles attract and repel in accordance with an inverse square law. 
Charles-Augustin de Coulomb experimentally verified this in 1785 and stated explicitly that the north and south poles cannot be separated. Building on this force between poles, Siméon Denis Poisson created the first successful model of the magnetic field, which he presented in 1824; in this model, a magnetic H-field is produced by magnetic poles. Three discoveries challenged this foundation of magnetism, though. First, in 1819, Hans Christian Ørsted discovered that an electric current generates a magnetic field encircling it. Then in 1820, André-Marie Ampère showed that parallel wires having currents in the same direction attract one another. Finally, Jean-Baptiste Biot and Félix Savart discovered the Biot–Savart law in 1820. Extending these experiments, Ampère published his own successful model of magnetism in 1825, in which he replaced magnetic poles with perpetually circulating loops of current; this has the benefit of explaining why magnetic charge cannot be isolated. Also in this work, Ampère introduced the term electrodynamics to describe the relationship between electricity and magnetism. In 1831, Michael Faraday discovered electromagnetic induction when he found that a changing magnetic field generates an encircling electric field
39.
Magnetization
–
In classical electromagnetism, magnetization or magnetic polarization is the vector field that expresses the density of permanent or induced magnetic dipole moments in a magnetic material. Magnetization is not always uniform within a body, but rather varies between different points. It can be compared to electric polarization, which is the measure of the corresponding response of a material to an electric field in electrostatics. Physicists and engineers usually define magnetization as the quantity of magnetic moment per unit volume; it is represented by a pseudovector M. This is illustrated by the relation m = ∭ M dV, where m is the magnetic moment of the body. These definitions of P and M as moments per unit volume are widely adopted. The M-field is measured in amperes per meter in SI units. The magnetization is often not listed as a parameter for commercially available ferromagnets; instead, the parameter that is listed is the residual flux density, denoted Br. Physicists often need the magnetization to calculate the moment of a ferromagnet: m = Br V / μ0, where V is the volume of the magnet and μ0 = 4π × 10⁻⁷ H/m is the permeability of vacuum. The behavior of magnetic fields, electric fields, charge density, and current density is described by Maxwell's equations; the role of the magnetization is described below. The magnetization defines the auxiliary magnetic field H through H = B/μ0 − M (SI units), equivalently B = μ0(H + M), which is convenient for various calculations; in Gaussian units the relation reads B = H + 4πM. The vacuum permeability μ0 is, by definition, 4π × 10⁻⁷ V·s/(A·m). A relation between M and H exists in many materials. In diamagnets and paramagnets, the relation is linear: M = χm H, where χm is called the volume magnetic susceptibility. In ferromagnets there is no one-to-one correspondence between M and H because of magnetic hysteresis. The magnetization M makes a contribution to the current density J. It is important to note that there is no such thing as a magnetic charge, but that issue was still debated through the whole 19th century. 
Other concepts that went along with it, such as the auxiliary field H, are nevertheless convenient mathematical tools, and are therefore still used today for applications such as modeling the magnetic field of the Earth. The time-dependent behavior of magnetization becomes important when considering nanoscale and nanosecond-timescale magnetization dynamics; technologically, this is one of the most important processes in magnetism, as it is linked to the magnetic data storage process used in modern hard disk drives. Demagnetization is the reduction or elimination of magnetization. One way to do this is to heat the object above its Curie temperature; another way is to pull it out of an electric coil with alternating current running through it, giving rise to fields that oppose the magnetization. One application of demagnetization is to eliminate unwanted magnetic fields; for example, magnetic fields can interfere with electronic devices such as cell phones or computers, and with machining by making cuttings cling to their parent tool
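The linear relations described above, M = χm H for diamagnets and paramagnets together with B = μ0(H + M), can be sketched numerically. The susceptibility value below is an illustrative assumption, not a measured material constant:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m (SI)

def b_field(h, chi_m):
    """For a linear material: M = chi_m * H and B = mu0 * (H + M)."""
    m = chi_m * h              # magnetization, A/m
    b = MU0 * (h + m)          # flux density, T
    return m, b

# Illustrative (assumed) susceptibility for a paramagnet-like material
h = 1000.0                     # applied H-field, A/m
m, b = b_field(h, chi_m=1e-3)
print(f"M = {m:.3f} A/m, B = {b:.6e} T")
```

For χm > 0 (paramagnet) the material slightly reinforces the applied field; for χm < 0 (diamagnet) the same formula gives a slight reduction.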
40.
Maxwell stress tensor
–
The Maxwell stress tensor is a second-order tensor used in classical electromagnetism to represent the interaction between electromagnetic forces and mechanical momentum. In simple situations, such as a point charge moving freely in a homogeneous magnetic field, it is easy to calculate the forces on the charge from the Lorentz force law. When the situation becomes more complicated, this ordinary procedure can become impossibly difficult. It is therefore convenient to collect many of these terms in the Maxwell stress tensor, and to use tensor arithmetic to find the answer to the problem at hand. Note that the above derivation assumes complete knowledge of both ρ and J; for the case of nonlinear materials, the nonlinear Maxwell stress tensor must be used. In physics, the Maxwell stress tensor is the stress tensor of an electromagnetic field. In Gaussian cgs units, it is given by σ_ij = (1/4π)[E_i E_j + H_i H_j − ½ δ_ij (E² + H²)]. Indeed, the diagonal elements give the tension acting on a differential area element normal to the corresponding axis. Unlike forces due to the pressure of a gas, an area element in the electromagnetic field also feels a force in a direction that is not normal to the element. This shear is given by the off-diagonal elements of the stress tensor. If the field is only magnetic, some of the terms drop out. It is this force which spins the motor; Br is the flux density in the radial direction, and Bt is the flux density in the tangential direction
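The Gaussian-units formula above can be evaluated directly; the sketch below uses arbitrary illustrative field values and checks the tension/pressure reading of the diagonal elements:

```python
import numpy as np

def maxwell_stress_tensor(E, H):
    """Gaussian-units Maxwell stress tensor:
    sigma_ij = (1/4pi) * (E_i E_j + H_i H_j - 0.5 * delta_ij * (E^2 + H^2))."""
    E, H = np.asarray(E, float), np.asarray(H, float)
    delta = np.eye(3)
    return (np.outer(E, E) + np.outer(H, H)
            - 0.5 * delta * (E @ E + H @ H)) / (4 * np.pi)

# Purely magnetic field along z: tension along z, pressure in x and y
sigma = maxwell_stress_tensor(E=[0, 0, 0], H=[0, 0, 1])
print(np.round(sigma, 4))
```

With E = 0 and H = (0, 0, 1), the zz component is +1/8π (tension along the field lines) while xx and yy are −1/8π (pressure transverse to them), illustrating why parallel field lines repel sideways but pull along their length.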
41.
Magnetic flux
–
In physics, specifically electromagnetism, the magnetic flux through a surface is the surface integral of the normal component of the magnetic field B passing through that surface. The SI unit of magnetic flux is the weber (Wb), and the CGS unit is the maxwell. Magnetic flux is measured with a fluxmeter, which contains measuring coils and electronics. The magnetic interaction is described in terms of a vector field. Since a vector field is quite difficult to visualize at first, in elementary physics one may instead visualize this field with field lines. The magnetic flux through some surface, in this picture, is proportional to the number of field lines passing through that surface. In more advanced physics, the field line analogy is dropped. For a varying magnetic field, we first consider the magnetic flux through an infinitesimal area element dS, where we may consider the field to be constant: dΦB = B · dS. Gauss's law for magnetism states that the total magnetic flux through any closed surface is zero; this law is a consequence of the observation that magnetic monopoles have never been found. While the magnetic flux through a closed surface is always zero, the magnetic flux through an open surface need not be zero and is an important quantity in electromagnetism. For example, a change in the flux passing through a loop of conductive wire will cause an electromotive force. The electromotive force is induced along the boundary ∂Σ of the surface; dℓ is an infinitesimal vector element of the contour ∂Σ, v is the velocity of the boundary ∂Σ, E is the electric field, and B is the magnetic field. This equation is the principle behind an electrical generator. Note that the flux of E through a closed surface is not always zero; this indicates the presence of electric monopoles, that is, free positive or negative charges. Gauss's law gives the relation between the electric flux flowing out of a closed surface and the electric charge enclosed in the surface. 
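For a uniform field through a flat surface, the surface integral dΦB = B · dS reduces to Φ = B A cos θ, with θ the angle between the field and the surface normal. A minimal sketch, with all values illustrative:

```python
import math

def flux_uniform(B, area, tilt_deg):
    """Magnetic flux Phi = B * A * cos(theta) for a uniform field B (teslas)
    through a flat surface of the given area (m^2), whose normal is tilted
    by theta degrees away from the field direction."""
    return B * area * math.cos(math.radians(tilt_deg))

# Uniform 0.5 T field through a 0.01 m^2 loop
print(flux_uniform(0.5, 0.01, 0))    # field along the normal: maximum flux
print(flux_uniform(0.5, 0.01, 90))   # field in the plane of the loop: ~0 flux
```

At θ = 0 the flux is the full 0.005 Wb; at θ = 90° the field skims along the surface and essentially no field lines pass through it.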
Magnetic circuit analysis is a method using an analogy with electric circuits to calculate the flux of complex systems of magnetic components. A magnetic monopole is a hypothetical particle that may loosely be described as a magnet with only one pole. The magnetic flux quantum is the quantum of magnetic flux passing through a superconductor. Carl Friedrich Gauss developed a fruitful collaboration with the physics professor Wilhelm Weber; it led to new knowledge in the field of magnetism. James Clerk Maxwell demonstrated that electric and magnetic forces are two aspects of electromagnetism
42.
Position (vector)
–
In geometry, a position or position vector is a vector that represents the location of a point P in space in relation to an arbitrary reference origin O. Usually denoted x, r, or s, it corresponds to the straight-line displacement from O to P: r = OP. The term position vector is used mostly in the fields of geometry and mechanics. Frequently this is used in two-dimensional or three-dimensional space, but can be generalized to Euclidean spaces in any number of dimensions. Other coordinate systems, such as cylindrical or spherical coordinates, may also be used; these different coordinates and corresponding basis vectors represent the same position vector. More general curvilinear coordinates could be used instead, and are used in contexts like continuum mechanics. Linear algebra allows for the abstraction of an n-dimensional position vector. The notion of position space is intuitive, since each xi can be any value; the dimension of the position space is n. The coordinates of the vector r with respect to the basis vectors ei are xi. The vector of coordinates forms the coordinate vector or n-tuple. Each coordinate xi may be parameterized by a number of parameters t. One parameter xi would describe a curved 1D path, two parameters xi describe a curved 2D surface, three xi describe a curved 3D volume of space, and so on. The linear span of a basis set B = {e1, e2, …, en} equals the position space Rⁿ. Position vector fields are used to describe continuous and differentiable space curves, in which case the independent parameter need not be time, but can be arc length of the curve. In the case of one dimension, the position has only one component; it could be, say, a vector in the x-direction, or the radial r-direction. Equivalent notations include x ≡ x(t), r ≡ r(t), s ≡ s(t). For a position vector r that is a function of time t, its derivatives have common utility in the study of kinematics, control theory, engineering and other sciences. Velocity is v = dr/dt, where dr is an infinitesimally small displacement. By extension, the higher-order derivatives can be computed in a similar fashion. Study of these higher-order derivatives can improve approximations of the original displacement function. 
Such higher-order terms are required in order to represent the displacement function as a sum of an infinite sequence, enabling several analytical techniques in engineering. A displacement vector can be defined as the action of uniformly translating spatial points in a given direction over a given distance; the addition of displacement vectors then expresses the composition of these displacement actions, and scalar multiplication expresses scaling of the distance. With this in mind, we may define a position vector of a point in space as the displacement vector mapping a given origin to that point. Note that position vectors therefore depend on a choice of origin for the space
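The origin dependence noted above can be made concrete with a small sketch: the position vector of the same point differs between origins, while the displacement between two points does not (all coordinates are illustrative):

```python
import numpy as np

P = np.array([3.0, 4.0, 0.0])    # point P
Q = np.array([1.0, 1.0, 1.0])    # point Q
O1 = np.zeros(3)                 # origin choice 1
O2 = np.array([1.0, 0.0, 0.0])   # origin choice 2

# Position vectors depend on the origin...
r1, r2 = P - O1, P - O2
print(r1, r2)                    # different vectors for the same point P

# ...but the displacement from P to Q does not.
assert np.allclose((Q - O1) - (P - O1), (Q - O2) - (P - O2))
```

This is exactly why physical laws are stated in terms of displacements, velocities and accelerations (differences and derivatives of position), which are insensitive to where the origin is placed.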
43.
Velocity
–
The velocity of an object is the rate of change of its position with respect to a frame of reference, and is a function of time. Velocity is equivalent to a specification of the object's speed and direction of motion. Velocity is an important concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a vector quantity; both magnitude and direction are needed to define it. The scalar absolute value of velocity is called speed, a coherent derived quantity measured in the SI system in metres per second (m/s or m·s⁻¹). For example, 5 metres per second is a scalar, whereas 5 metres per second east is a vector. If there is a change in speed, direction or both, then the object has a changing velocity and is said to be undergoing an acceleration. To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion in a straight path; thus, a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes; hence, the car is considered to be undergoing an acceleration. Speed describes only how fast an object is moving, whereas velocity gives both how fast and in what direction the object is moving. If a car is said to travel at 60 km/h, its speed has been specified; however, if the car is said to move at 60 km/h to the north, its velocity has now been specified. The difference becomes apparent when we consider movement around a circle: the average velocity is calculated by considering only the displacement between the starting and end points, while the average speed considers the total distance traveled. Velocity is defined as the rate of change of position with respect to time. Average velocity can be calculated as v̄ = Δx/Δt. The average velocity is always less than or equal to the average speed of an object. 
This can be seen by realizing that while distance is always strictly increasing, displacement can increase or decrease in magnitude as well as change direction. In the one-dimensional case, the area under a velocity vs. time graph is the displacement, x. In calculus terms, the integral of the velocity function v(t) is the displacement function x(t). In the figure, this corresponds to the area under the curve labeled s (s being an alternative notation for displacement). Since the derivative of the position with respect to time gives the change in position divided by the change in time, velocity is measured in metres per second (m/s). Although velocity is defined as the rate of change of position, it is often common to start with an expression for an object's acceleration. As seen by the three green tangent lines in the figure, an object's instantaneous acceleration at a point in time is the slope of the tangent to the curve of a v(t) graph at that point. In other words, acceleration is defined as the derivative of velocity with respect to time. From there, we can obtain an expression for velocity as the area under an acceleration vs. time graph
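The circular-path distinction drawn above can be checked numerically: over half a revolution, the distance traveled is half the circumference, while the displacement is only the diameter (the radius and speed below are illustrative values):

```python
import math

r = 10.0          # radius of the circular path, m (illustrative)
speed = 5.0       # constant speed along the path, m/s (illustrative)

# Half a revolution around a circle of radius r
distance = math.pi * r            # arc length actually traveled
displacement = 2 * r              # straight line from start to end point
t = distance / speed              # elapsed time

avg_speed = distance / t          # equals the constant speed
avg_velocity = displacement / t   # magnitude of the average velocity

print(avg_speed, avg_velocity)
assert avg_velocity <= avg_speed  # average velocity never exceeds average speed
```

Here the average speed stays at 5 m/s while the magnitude of the average velocity is only 2·speed/π ≈ 3.18 m/s; after a full revolution the displacement, and hence the average velocity, would be zero.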
44.
Acceleration
–
Acceleration, in physics, is the rate of change of velocity of an object with respect to time. An object's acceleration is the net result of any and all forces acting on the object, as described by Newton's second law. The SI unit for acceleration is the metre per second squared (m/s²). Accelerations are vector quantities and add according to the parallelogram law; as a vector, the calculated net force is equal to the product of the object's mass and its acceleration. For example, when a car starts from a standstill and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. In this example, we can call the forward acceleration of the car a linear acceleration, which passengers in the car might experience as a force pushing them back into their seats. When changing direction, we call this non-linear acceleration, which passengers might experience as a sideways force. If the speed of the car decreases, this is an acceleration in the opposite direction from the direction of travel of the vehicle, sometimes called deceleration. Passengers may experience deceleration as a force lifting them forwards. Mathematically, there is no separate formula for deceleration; both are changes in velocity. Each of these accelerations might be felt by passengers until their velocity matches that of the car. An object's average acceleration over a period of time is its change in velocity divided by the duration of the period: ā = Δv/Δt. Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. The SI unit of acceleration is the metre per second squared, or metre per second per second, as the velocity in metres per second changes by the acceleration value every second. An object moving in a circular motion, such as a satellite orbiting the Earth, is accelerating due to the change of direction of motion; in this case it is said to be undergoing centripetal acceleration. 
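The centripetal case above can be illustrated with the standard relation a = v²/r; the orbital numbers below are rough figures for a low Earth orbit, used only for illustration:

```python
def centripetal_acceleration(v, r):
    """Magnitude of centripetal acceleration, a = v^2 / r,
    for speed v (m/s) on a circular path of radius r (m)."""
    return v ** 2 / r

# Rough low-Earth-orbit values (illustrative): ~7.8 km/s at ~6.77e6 m radius
a = centripetal_acceleration(7800.0, 6.77e6)
print(f"{a:.2f} m/s^2")
```

The result comes out close to g, as it should: for a satellite in a circular orbit, gravity is exactly the force supplying the centripetal acceleration.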
Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer. As speeds approach the speed of light, relativistic effects become increasingly large. The acceleration of a body moving along a curved path can be resolved into two components, called the tangential acceleration and the normal or radial acceleration. Geometrical analysis of space curves, which explains tangent, normal and binormal, is described by the Frenet–Serret formulas. Uniform or constant acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period. A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength g
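The constant-acceleration case can be sketched with the standard kinematic relations v = v0 + a·t and x = x0 + v0·t + ½·a·t²; the free-fall example below takes g ≈ 9.81 m/s² as an illustrative value:

```python
def uniform_acceleration(x0, v0, a, t):
    """Position and velocity after time t under constant acceleration a:
    v = v0 + a*t and x = x0 + v0*t + 0.5*a*t^2."""
    v = v0 + a * t
    x = x0 + v0 * t + 0.5 * a * t ** 2
    return x, v

# Object dropped from rest, falling for 2 s with g = 9.81 m/s^2 (downward positive)
x, v = uniform_acceleration(x0=0.0, v0=0.0, a=9.81, t=2.0)
print(x, v)   # 19.62 m fallen, moving at 19.62 m/s
```

Consistent with the uniform-acceleration definition above, the velocity grows by the same 9.81 m/s in each successive second, independent of the body's mass.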