Maxima and minima
In mathematical analysis, the maxima and minima of a function, known collectively as extrema, are the largest and smallest values of the function, either within a given range or on the entire domain of the function. Pierre de Fermat was one of the first mathematicians to propose a general technique for finding the maxima and minima of functions. As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set, respectively. Unbounded infinite sets, such as the set of real numbers, have no maximum or minimum. A real-valued function f defined on a domain X has a global maximum point at x∗ if f(x∗) ≥ f(x) for all x in X. Similarly, the function has a global minimum point at x∗ if f(x∗) ≤ f(x) for all x in X. The value of the function at a maximum point is called the maximum value of the function, and the value of the function at a minimum point is called the minimum value of the function. Symbolically, this can be written as follows: x0 ∈ X is a global maximum point of the function f: X → R if f(x0) ≥ f(x) for all x in X.
The definition of a global minimum point is similar, with the inequality reversed. If the domain X is a metric space, f is said to have a local maximum point at the point x∗ if there exists some ε > 0 such that f(x∗) ≥ f(x) for all x in X within distance ε of x∗. The function has a local minimum point at x∗ if f(x∗) ≤ f(x) for all x in X within distance ε of x∗. A similar definition can be used when X is a topological space, since the definition just given can be rephrased in terms of neighbourhoods. Mathematically, the definition is written as follows: let (X, dX) be a metric space and f: X → R a function. A point x0 ∈ X is a local maximum point of f if there exists ε > 0 such that dX(x, x0) < ε ⟹ f(x0) ≥ f(x). The definition of a local minimum point is analogous. In both the global and local cases, the concept of a strict extremum can be defined. For example, x∗ is a strict global maximum point if, for all x in X with x ≠ x∗, we have f(x∗) > f(x), and x∗ is a strict local maximum point if there exists some ε > 0 such that, for all x in X within distance ε of x∗ with x ≠ x∗, we have f(x∗) > f(x). Note that a point is a strict global maximum point if and only if it is the unique global maximum point, and similarly for minimum points.
A continuous real-valued function with a compact domain always has a maximum point and a minimum point. An important example is a continuous function whose domain is a closed and bounded interval of real numbers. Finding global maxima and minima is the goal of mathematical optimization. If a function is continuous on a closed interval, then by the extreme value theorem global maxima and minima exist. Furthermore, a global maximum must either be a local maximum in the interior of the domain or lie on the boundary of the domain. So a method of finding a global maximum is to look at all the local maxima in the interior, look at the maxima of the points on the boundary, and take the largest one. The most important, yet quite obvious, feature of continuous real-valued functions of a real variable is that they decrease before local minima and increase afterwards, and likewise for maxima. A direct consequence of this is Fermat's theorem, which states that local extrema must occur at critical points. One can distinguish whether a critical point is a local maximum or a local minimum by using the first derivative test, second derivative test, or higher-order derivative test, given sufficient differentiability.
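The interior-plus-boundary method above can be sketched in a few lines of Python. The function sin x on the interval [0, 3π/2] is an illustrative choice made here, not one taken from the text; its single interior critical point (where cos x = 0) is found by hand:

```python
import math

def f(x):
    return math.sin(x)

a, b = 0.0, 3 * math.pi / 2          # closed interval [a, b]

# Interior critical points solve f'(x) = cos(x) = 0 on (a, b): only x = pi/2.
interior_critical = [math.pi / 2]

# Candidates: interior critical points plus the boundary points.
candidates = interior_critical + [a, b]
x_max = max(candidates, key=f)       # global maximum point
x_min = min(candidates, key=f)       # global minimum point
```

Here the global maximum value sin(π/2) = 1 occurs at the interior critical point, while the global minimum sin(3π/2) = −1 lies on the boundary.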
For any function defined piecewise, one finds a maximum by finding the maximum of each piece separately and then seeing which one is largest. The function x² has a unique global minimum at x = 0. The function x³ has no global or local minima or maxima: although its first derivative (3x²) is 0 at x = 0, this is an inflection point. The function x⁻ˣ has a unique global maximum over the positive real numbers at x = 1/e. The function x³/3 − x has first derivative x² − 1 and second derivative 2x. Setting the first derivative to 0 and solving for x gives stationary points at −1 and +1. From the sign of the second derivative, the stationary point at −1 is a local maximum and the one at +1 is a local minimum.
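The second derivative test for x³/3 − x can be sketched as follows (the derivatives below are entered by hand from the text):

```python
def f(x):
    return x**3 / 3 - x

def f_second(x):
    return 2 * x                      # second derivative of x^3/3 - x

stationary = [-1.0, 1.0]              # roots of f'(x) = x^2 - 1

# Negative second derivative -> local maximum; positive -> local minimum.
kind = {x: ("local maximum" if f_second(x) < 0 else "local minimum")
        for x in stationary}
```

As expected, the point −1 is classified as a local maximum and +1 as a local minimum, and f(−1) = 2/3 exceeds f(1) = −2/3.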
Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light and water waves. The law of reflection says that for specular reflection the angle at which the wave is incident on the surface equals the angle at which it is reflected. Mirrors exhibit specular reflection. In acoustics, reflection is used in sonar. In geology, it is important in the study of seismic waves. Reflection is observed with surface waves in bodies of water, and with many types of electromagnetic wave besides visible light. Reflection of VHF and higher frequencies is important for radar. Hard X-rays and gamma rays can be reflected at shallow angles with special "grazing" mirrors. Reflection of light is either specular (mirror-like) or diffuse (retaining the energy but losing the image), depending on the nature of the interface. In specular reflection the phase of the reflected waves depends on the choice of the origin of coordinates, but the relative phase between s and p polarizations is fixed by the properties of the media and of the interface between them.
A mirror provides the most common model for specular light reflection; it typically consists of a glass sheet with a metallic coating where the significant reflection occurs. Reflection is enhanced in metals by suppression of wave propagation beyond their skin depths. Reflection also occurs at the surface of transparent media, such as water or glass. In the diagram, a light ray PO strikes a vertical mirror at point O, and the reflected ray is OQ. By projecting an imaginary line through point O perpendicular to the mirror, known as the normal, we can measure the angle of incidence, θi, and the angle of reflection, θr. The law of reflection states that θi = θr, or in other words, the angle of incidence equals the angle of reflection. In fact, reflection of light may occur whenever light travels from a medium of a given refractive index into a medium with a different refractive index. In the most general case, a certain fraction of the light is reflected from the interface and the remainder is refracted. Solving Maxwell's equations for a light ray striking a boundary allows the derivation of the Fresnel equations, which can be used to predict how much of the light is reflected and how much is refracted in a given situation.
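The law of reflection has a compact vector form: a ray with direction d reflecting off a surface with unit normal n leaves with direction r = d − 2(d·n)n. A minimal Python sketch, using an illustrative ray and a horizontal mirror chosen here:

```python
import math

def reflect(d, n):
    """Reflect direction d off a surface with unit normal n: r = d - 2 (d.n) n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray going down-and-right hits a horizontal mirror with normal (0, 1).
incident = (1.0, -1.0)
reflected = reflect(incident, (0.0, 1.0))   # -> (1.0, 1.0)

# Angle of incidence equals angle of reflection (both measured from the normal).
theta_i = math.atan2(abs(incident[0]), abs(incident[1]))
theta_r = math.atan2(abs(reflected[0]), abs(reflected[1]))
```

Both angles come out to π/4 here, as the law of reflection requires.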
This is analogous to the way impedance mismatch in an electric circuit causes reflection of signals. Total internal reflection of light from a denser medium occurs if the angle of incidence is greater than the critical angle. Total internal reflection is used as a means of focusing waves that cannot effectively be reflected by common means. X-ray telescopes are constructed by creating a converging "tunnel" for the waves; as the waves interact at a low angle with the surface of this tunnel they are reflected toward the focus point. A conventional reflector would be useless as the X-rays would simply pass through the intended reflector. When light reflects off a material denser (with a higher refractive index) than the external medium, it undergoes a phase inversion. In contrast, a less dense, lower refractive index material will reflect light in phase. This is an important principle in the field of thin-film optics. Specular reflection forms images. Reflection from a flat surface forms a mirror image, which appears to be reversed from left to right because we compare the image we see to what we would see if we were rotated into the position of the image.
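The phase behavior can be illustrated with the Fresnel amplitude reflection coefficient at normal incidence, r = (n1 − n2)/(n1 + n2): a negative r is a phase inversion. The indices below (air ≈ 1.0, glass ≈ 1.5) are typical illustrative values, not taken from the text:

```python
def r_normal(n1, n2):
    """Fresnel amplitude reflection coefficient at normal incidence."""
    return (n1 - n2) / (n1 + n2)

n_air, n_glass = 1.0, 1.5

r_into_denser = r_normal(n_air, n_glass)   # negative: phase inversion
r_into_rarer = r_normal(n_glass, n_air)    # positive: reflected in phase

reflectance = r_into_denser ** 2           # fraction of power reflected
```

For an air-glass interface this gives a reflectance of about 4% per surface, the familiar faint reflection from a window pane.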
Specular reflection at a curved surface forms an image which may be magnified or demagnified. Such mirrors may have surfaces that are spherical or parabolic. If the reflecting surface is smooth, the reflection of light that occurs is called specular or regular reflection. The laws of reflection are as follows: the incident ray, the reflected ray and the normal to the reflecting surface at the point of incidence lie in the same plane; the angle which the incident ray makes with the normal is equal to the angle which the reflected ray makes with the same normal; and the reflected ray and the incident ray are on opposite sides of the normal. These three laws can all be derived from the Fresnel equations. In classical electrodynamics, light is considered as an electromagnetic wave, described by Maxwell's equations. Light waves incident on a material induce small oscillations of polarisation in the individual atoms, causing each particle to radiate a small secondary wave in all directions, like a dipole antenna. All these waves add up to give specular reflection and refraction, according to the Huygens–Fresnel principle.
In the case of dielectrics such as glass, the electric field of the light acts on the electrons in the material, and the moving electrons generate fields and become new radiators. The refracted light in the glass is the combination of the forward radiation of the electrons and the incident light; the reflected light is the combination of the backward radiation of all of the electrons. In metals, electrons with no binding energy are called free electrons; when these electrons oscillate with the incident light, the phase difference between their radiation field and the incident field is π, so the forward radiation cancels the incident light and the backward radiation is just the reflected light. Light–matter interaction in terms of photons is a topic of quantum electrodynamics, and is described in detail by Richard Feynman in his popular book QED: The Strange Theory of Light and Matter.
Marin Cureau de la Chambre
Marin Cureau de la Chambre was a French physician and philosopher born in Saint-Jean-d'Assé, a village near Le Mans. Details of his youth and where he attended school are unknown. He was a physician in Le Mans and around 1630 moved to Paris, where he became a friend and physician to Pierre Séguier. Afterwards, he was a médecin ordinaire to Louis XIV; the monarch was impressed by Cureau de la Chambre's ability to judge human character based on physical appearance. Marin Cureau de la Chambre is best known for his work in physiognomy. Between 1640 and 1662 he published a five-volume study on man's character and "passions" called Caractères des passions. He wrote articles on many other topics, including palmistry, digestion, "reasoning" in animals, occult practices and optics. On the latter subject he investigated the nature of light and color and the possibility of primary and secondary colors. He was also the author of books on philosophy and published a translation of Aristotle's Physica. In 1634 he became an early member of the Académie française, and in 1666 was an original member of the French Academy of Sciences.
He was the father of clergyman Pierre Cureau de la Chambre. He died in Paris on December 29, 1669. In 1991 the astronomer Eric Walter Elst named the asteroid 7126 Cureau after Marin Cureau de la Chambre.
In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact, and it is the first theory where full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons, and represents the quantum counterpart of classical electromagnetism, giving a complete account of matter and light interaction. In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. Richard Feynman called it "the jewel of physics" for its accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen. The first formulation of a quantum theory describing radiation and matter interaction is attributed to British scientist Paul Dirac, who was able to compute the coefficient of spontaneous emission of an atom.
Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan and Werner Heisenberg, and with an elegant formulation of quantum electrodynamics due to Enrico Fermi, physicists came to believe that, in principle, it would be possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck, and by Victor Weisskopf, in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer. At higher orders in the series, infinities emerged, making such computations meaningless and casting serious doubts on the internal consistency of the theory itself. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics.
Difficulties with the theory increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, now known as the Lamb shift, and of the magnetic moment of the electron. These experiments exposed discrepancies which the theory was unable to explain. A first indication of a possible way out was given by Hans Bethe in 1947, after attending the Shelter Island Conference. While he was traveling by train from the conference to Schenectady he made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford. Despite the limitations of the computation, agreement was excellent. The idea was to attach infinities to corrections of mass and charge that were fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments. This procedure was named renormalization. Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga, Julian Schwinger, Richard Feynman and Freeman Dyson, it was finally possible to get covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics.
Shin'ichirō Tomonaga, Julian Schwinger and Richard Feynman were jointly awarded the 1965 Nobel Prize in Physics for their work in this area. Their contributions, and those of Freeman Dyson, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent. Renormalization, the need to attach a physical meaning to certain divergences appearing in the theory through integrals, has subsequently become one of the fundamental aspects of quantum field theory and has come to be seen as a criterion for a theory's general acceptability. Even though renormalization works very well in practice, Feynman was never comfortable with its mathematical validity, referring to renormalization as a "shell game" and "hocus pocus".
QED has served as the template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1970s through work by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on the pioneering work of Schwinger, Gerald Guralnik, Dick Hagen, Tom Kibble, Peter Higgs, Jeffrey Goldstone and others, Sheldon Lee Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force. Near the end of his life, Richard P. Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as QED: The Strange Theory of Light and Matter, a classic non-mathematical exposition of QED from the point of view articulated below. The key components of Feynman's presentation of QED are three basic actions. A photon goes from one place and time to another place and time. An electron goes from one place and time to another place and time.
An electron emits or absorbs a photon at a certain place and time. These actions are represented in the form of visual shorthand by the three basic elements of Feynman diagrams: a wavy line for the photon, a straight line for the electron, and a junction of two straight lines and a wavy one for a vertex representing emission or absorption of a photon by an electron.
In physics, the wavelength is the spatial period of a periodic wave—the distance over which the wave's shape repeats. It is thus the inverse of the spatial frequency. Wavelength is usually determined by considering the distance between consecutive corresponding points of the same phase, such as crests, troughs, or zero crossings, and is a characteristic of both traveling waves and standing waves, as well as other spatial wave patterns. Wavelength is commonly designated by the Greek letter lambda (λ). The term wavelength is also sometimes applied to modulated waves, and to the sinusoidal envelopes of modulated waves or waves formed by the interference of several sinusoids. Assuming a sinusoidal wave moving at a fixed wave speed, wavelength is inversely proportional to the frequency of the wave: waves with higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. Wavelength depends on the medium that a wave travels through. Examples of wave-like phenomena are sound waves, water waves and periodic electrical signals in a conductor.
A sound wave is a variation in air pressure, while in light and other electromagnetic radiation the strength of the electric and the magnetic field vary. Water waves are variations in the height of a body of water. In a crystal lattice vibration, atomic positions vary. Wavelength is a measure of the distance between repetitions of a shape feature such as peaks, valleys, or zero-crossings, not a measure of how far any given particle moves. For example, in sinusoidal waves over deep water a particle near the water's surface moves in a circle of the same diameter as the wave height, unrelated to wavelength. The range of wavelengths or frequencies for wave phenomena is called a spectrum. The name originated with the visible light spectrum but now can be applied to the entire electromagnetic spectrum as well as to a sound spectrum or vibration spectrum. In linear media, any wave pattern can be described in terms of the independent propagation of sinusoidal components. The wavelength λ of a sinusoidal waveform traveling at constant speed v is given by λ = v/f, where v is called the phase speed of the wave and f is the wave's frequency.
In a dispersive medium, the phase speed itself depends upon the frequency of the wave, making the relationship between wavelength and frequency nonlinear. In the case of electromagnetic radiation, such as light, in free space, the phase speed is the speed of light, about 3×10⁸ m/s; thus the wavelength of a 100 MHz electromagnetic wave is about 3×10⁸ m/s divided by 10⁸ Hz = 3 metres. The wavelength of visible light ranges from deep red, roughly 700 nm, to violet, roughly 400 nm. For sound waves in air, the speed of sound is 343 m/s; the wavelengths of sound frequencies audible to the human ear (20 Hz to 20 kHz) are thus between approximately 17 m and 17 mm, respectively. Note that the wavelengths in audible sound are much longer than those in visible light. A standing wave is an undulatory motion that stays in one place. A sinusoidal standing wave includes stationary points of no motion, called nodes, and the wavelength is twice the distance between nodes. The upper figure shows three standing waves in a box. The walls of the box are considered to require the wave to have nodes at the walls of the box, determining which wavelengths are allowed.
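The figures above follow directly from λ = v/f; a quick Python check (the speeds and frequencies are the ones quoted in the text):

```python
def wavelength(v, f):
    """Wavelength of a wave with phase speed v and frequency f."""
    return v / f

radio = wavelength(3e8, 100e6)        # 100 MHz radio wave: 3 m
sound_low = wavelength(343, 20)       # 20 Hz tone: ~17 m
sound_high = wavelength(343, 20_000)  # 20 kHz tone: ~17 mm
```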
For example, for an electromagnetic wave, if the box has ideal metal walls, the condition for nodes at the walls results because the metal walls cannot support a tangential electric field, forcing the wave to have zero amplitude at the wall. The stationary wave can be viewed as the sum of two traveling sinusoidal waves of oppositely directed velocities. Wavelength and wave velocity are related just as for a traveling wave. For example, the speed of light can be determined from observation of standing waves in a metal box containing an ideal vacuum. Traveling sinusoidal waves are represented mathematically in terms of their velocity v, frequency f and wavelength λ as: y(x, t) = A cos(2π(x/λ − ft)) = A cos((2π/λ)(x − vt)), where y is the value of the wave at any position x and time t, and A is the amplitude of the wave. They are also commonly expressed in terms of wavenumber k and angular frequency ω as: y(x, t) = A cos(kx − ωt) = A cos(k(x − vt)), in which wavelength and wavenumber are related to velocity and frequency as: k = 2π/λ = 2πf/v = ω/v.
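The claim that a standing wave is the sum of two oppositely travelling waves, with nodes half a wavelength apart, can be verified numerically. The amplitude, wavelength and frequency below are arbitrary illustrative values:

```python
import math

A = 1.0                               # amplitude (arbitrary)
lam = 0.5                             # wavelength (arbitrary)
freq = 3.0                            # frequency (arbitrary)
k = 2 * math.pi / lam                 # wavenumber
w = 2 * math.pi * freq                # angular frequency

def travelling_sum(x, t):
    """Two equal waves moving in opposite directions."""
    return A * math.cos(k * x - w * t) + A * math.cos(k * x + w * t)

def standing(x, t):
    """Standing wave: 2A cos(kx) cos(wt)."""
    return 2 * A * math.cos(k * x) * math.cos(w * t)

# The two expressions agree everywhere (the product-to-sum identity).
max_diff = max(abs(travelling_sum(x / 10, t / 10) - standing(x / 10, t / 10))
               for x in range(11) for t in range(11))
```

Nodes sit where cos(kx) = 0, i.e. at x = λ/4 + m·λ/2, so adjacent nodes are indeed λ/2 apart.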
Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, and of astronomical objects, such as spacecraft, planets and galaxies. If the present state of an object is known, it is possible to predict by the laws of classical mechanics how it will move in the future and how it has moved in the past. The earliest development of classical mechanics is often referred to as Newtonian mechanics. It consists of the physical concepts employed by and the mathematical methods invented by Isaac Newton, Gottfried Wilhelm Leibniz and others in the 17th century to describe the motion of bodies under the influence of a system of forces. Later, more abstract methods were developed, leading to the reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances, made predominantly in the 18th and 19th centuries, extend beyond Newton's work through their use of analytical mechanics. They are, with some modification, used in all areas of modern physics.
Classical mechanics provides accurate results when studying objects that are not extremely massive and have speeds not approaching the speed of light. When the objects being examined are about the size of an atom's diameter, it becomes necessary to introduce the other major sub-field of mechanics: quantum mechanics. To describe velocities that are not small compared to the speed of light, special relativity is needed. In cases where objects become extremely massive, general relativity becomes applicable. However, a number of modern sources do include relativistic mechanics in classical physics, which in their view represents classical mechanics in its most developed and accurate form. The following introduces the basic concepts of classical mechanics. For simplicity, it models real-world objects as point particles. The motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn. In reality, the kind of objects that classical mechanics can describe always have a non-zero size.
Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom, e.g. a baseball can spin while it is moving. However, the results for point particles can be used to study such objects by treating them as composite objects, made of a large number of collectively acting point particles; the center of mass of a composite object behaves like a point particle. Classical mechanics uses common-sense notions of how matter and forces interact, it assumes that matter and energy have definite, knowable attributes such as location in space and speed. Non-relativistic mechanics assumes that forces act instantaneously; the position of a point particle is defined in relation to a coordinate system centered on an arbitrary fixed reference point in space called the origin O. A simple coordinate system might describe the position of a particle P with a vector notated by an arrow labeled r that points from the origin O to point P. In general, the point particle does not need to be stationary relative to O.
In cases where P is moving relative to O, r is defined as a function of time. In pre-Einstein relativity, time is considered absolute, i.e., the time interval observed to elapse between any given pair of events is the same for all observers. In addition to relying on absolute time, classical mechanics assumes Euclidean geometry for the structure of space. The velocity, or the rate of change of position with time, is defined as the derivative of the position with respect to time: v = dr/dt. In classical mechanics, velocities are directly additive and subtractive. For example, if one car travels east at 60 km/h and passes another car traveling in the same direction at 50 km/h, the slower car perceives the faster car as traveling east at 60 − 50 = 10 km/h. However, from the perspective of the faster car, the slower car is moving 10 km/h to the west, denoted as −10 km/h, where the sign implies the opposite direction. Velocities are directly additive as vector quantities. Mathematically, if the velocity of the first object in the previous discussion is denoted by the vector u = ud and the velocity of the second object by the vector v = ve, where u is the speed of the first object, v is the speed of the second object, and d and e are unit vectors in the directions of motion of each object, then the velocity of the first object as seen by the second object is u′ = u − v. Similarly, the first object sees the velocity of the second object as v′ = v − u.
When both objects are moving in the same direction, this equation can be simplified to u′ = (u − v)d. Or, by ignoring direction, the difference can be given in terms of speed only: u′ = u − v. The acceleration, or rate of change of velocity, is the derivative of the velocity with respect to time: a = dv/dt.
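The car example generalizes to vectors: the velocity of object 1 as seen by object 2 is u′ = u − v. A small Python sketch (the 60 and 50 km/h values come from the text; taking east as the +x direction is a convention chosen here):

```python
def relative_velocity(u, v):
    """Velocity of an object with velocity u, as seen by an observer with velocity v."""
    return tuple(ui - vi for ui, vi in zip(u, v))

u = (60.0, 0.0)   # faster car: 60 km/h east
v = (50.0, 0.0)   # slower car: 50 km/h east

seen_by_slower = relative_velocity(u, v)   # (10.0, 0.0): 10 km/h east
seen_by_faster = relative_velocity(v, u)   # (-10.0, 0.0): 10 km/h west
```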
Hamiltonian optics and Lagrangian optics are two formulations of geometrical optics which share much of the mathematical formalism with Hamiltonian mechanics and Lagrangian mechanics. In physics, Hamilton's principle states that the evolution of a system described by N generalized coordinates q1, …, qN between two specified states at two specified parameters σA and σB is a stationary point of the action functional, that is, δS = δ ∫ from σA to σB of L(q1, …, qN, q̇1, …, q̇N, σ) dσ = 0, where q̇k = dqk/dσ. The condition δS = 0 is valid if and only if the Euler–Lagrange equations ∂L/∂qk − (d/dσ)(∂L/∂q̇k) = 0 are satisfied, with k = 1, …, N. The momentum is defined as pk = ∂L/∂q̇k, and the Euler–Lagrange equations can then be rewritten as ṗk = ∂L/∂qk, where ṗk = dpk/dσ. A different approach to solving this problem consists in defining a Hamiltonian H = Σk q̇k pk − L, for which a new set of differential equations can be derived by looking at how the total differential of the Lagrangian depends on the parameter σ, the positions qk and their derivatives q̇k relative to σ.
This derivation is the same as in Hamiltonian mechanics, only with the time t now replaced by a general parameter σ. Those differential equations are Hamilton's equations: ∂H/∂qk = −ṗk, ∂H/∂pk = q̇k, ∂H/∂σ = −∂L/∂σ, with k = 1, …, N. Hamilton's equations are first-order differential equations, while the Euler–Lagrange equations are second-order. The general results presented above for Hamilton's principle can be applied to optics. In 3D Euclidean space the generalized coordinates are now the coordinates of Euclidean space. Fermat's principle states that the optical length of the path followed by light between two fixed points, A and B, is a stationary point; it may be a minimum, a maximum, a constant or an inflection point. In general, as light travels, it moves in a medium of variable refractive index, which is a scalar field of position in space, that is, n = n(x1, x2, x3) in 3D Euclidean space. Assuming now that light travels along the x3 axis, the path of a light ray may be parametrized as s = (x1(x3), x2(x3), x3).
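Fermat's principle can be checked numerically in the simplest nontrivial case: a ray crossing a flat interface between two media of constant indices n1 and n2. Minimizing the optical path length over the crossing point recovers Snell's law, n1 sin θ1 = n2 sin θ2. The geometry and indices below are illustrative choices, and the grid-refinement minimizer is a deliberately crude sketch:

```python
import math

n1, n2 = 1.0, 1.5           # refractive indices (e.g. air into glass)
a, b, d = 1.0, 1.0, 2.0     # source height, target depth, horizontal separation

def optical_path(x):
    """Optical length of a two-segment path crossing the interface at x."""
    return n1 * math.hypot(x, a) + n2 * math.hypot(d - x, b)

# Crude 1-D minimization by repeated grid refinement (optical_path is convex).
lo, hi = 0.0, d
for _ in range(20):
    xs = [lo + i * (hi - lo) / 100 for i in range(101)]
    x_best = min(xs, key=optical_path)
    step = (hi - lo) / 100
    lo, hi = max(0.0, x_best - step), min(d, x_best + step)

sin1 = x_best / math.hypot(x_best, a)            # sine of angle of incidence
sin2 = (d - x_best) / math.hypot(d - x_best, b)  # sine of angle of refraction
```

At the minimizing crossing point, n1·sin1 and n2·sin2 agree to within numerical precision, which is exactly Snell's law.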