1.
Kingdom of France
–
The Kingdom of France was a medieval and early modern monarchy in Western Europe. It was one of the most powerful states in Europe and a great power since the Late Middle Ages; it was also an early colonial power, with possessions around the world. France originated as West Francia, the western half of the Carolingian Empire. A branch of the Carolingian dynasty continued to rule until 987, and the territory remained known as Francia and its ruler as rex Francorum ("king of the Franks") well into the High Middle Ages. The first king to call himself Roi de France ("King of France") was Philip II. France continued to be ruled by the Capetians and their cadet lines, the Valois and Bourbon, until the monarchy was overthrown in 1792 during the French Revolution. France in the Middle Ages was a decentralised, feudal monarchy; in Brittany and Catalonia the authority of the French king was barely felt, while Lorraine and Provence were states of the Holy Roman Empire and not yet part of France. During the Late Middle Ages, the Kings of England laid claim to the French throne, resulting in a series of conflicts known as the Hundred Years' War. Subsequently, France sought to extend its influence into Italy, but was defeated by Spain in the ensuing Italian Wars. Religiously, France became divided between the Catholic majority and a Protestant minority, the Huguenots, which led to a series of civil wars, the Wars of Religion. France also laid claim to large stretches of North America, known collectively as New France; wars with Great Britain led to the loss of much of this territory by 1763. French intervention in the American Revolutionary War helped secure the independence of the new United States of America. The Kingdom of France adopted a written constitution in 1791, but the Kingdom was abolished a year later and replaced with the First French Republic. The monarchy was restored by the great powers in 1814.
During the later years of the elderly Charlemagne's rule, the Vikings made advances along the northern and western perimeters of the Kingdom of the Franks. After Charlemagne's death in 814 his heirs were incapable of maintaining political unity and the empire began to crumble. The Treaty of Verdun of 843 divided the Carolingian Empire into three parts, with Charles the Bald ruling over West Francia, the nucleus of what would develop into the kingdom of France. Viking advances were allowed to increase, and their dreaded longboats sailed up the Loire, the Seine and other waterways, wreaking havoc. During the reign of Charles the Simple, Normans under Rollo, from Norway, were settled in an area on either side of the River Seine, downstream from Paris, that was to become Normandy. The Capetian dynasty, with its offshoots the houses of Valois and Bourbon, was to rule France for more than 800 years. Henry II of England inherited the Duchy of Normandy and the County of Anjou, and married France's newly divorced ex-queen, Eleanor of Aquitaine; after the French victory at the Battle of Bouvines in 1214, the English monarchs maintained power only in the southwestern Duchy of Guyenne. The death of Charles IV of France in 1328 without male heirs ended the main Capetian line. Under Salic law the crown could not pass through a woman, so the throne passed to Philip VI, son of Charles of Valois.
2.
Bourbon Restoration
–
The Bourbon Restoration was the period of French history following the fall of Napoleon in 1814 until the July Revolution of 1830. The brothers of the executed Louis XVI of France reigned in highly conservative fashion, yet they were nonetheless unable to reverse most of the changes made by the French Revolution and Napoleon. At the Congress of Vienna they were treated respectfully, but had to give up all the gains made since 1789. King Louis XVI of the House of Bourbon had been overthrown and executed during the French Revolution; a coalition of European powers defeated Napoleon in the War of the Sixth Coalition, ended the First Empire in 1814, and restored the monarchy to the brothers of Louis XVI. The Bourbon Restoration lasted from 6 April 1814 until the uprisings of the July Revolution of 1830. There was an interlude in spring 1815, the Hundred Days, when the return of Napoleon forced the Bourbons to flee France; when Napoleon was again defeated by the Seventh Coalition, they returned to power in July. During the Restoration, the new Bourbon regime was a constitutional monarchy, unlike the absolutist Ancien Régime. The period was characterized by a conservative reaction, and by consequent minor but consistent occurrences of civil unrest. It also saw the reestablishment of the Catholic Church as a power in French politics. The eras of the French Revolution and Napoleon had brought a series of changes to France which the Bourbon Restoration did not reverse. First of all, France had become highly centralized, with all decisions made in Paris; the political geography was completely reorganized and made uniform. France was divided into more than 80 departments, which have endured into the 21st century. Each department had an administrative structure, and was tightly controlled by a prefect appointed by Paris. The Catholic Church had lost all its lands and buildings during the Revolution; the bishop still ruled his diocese, and communicated with the pope through the government in Paris.
Bishops, priests, nuns and other people were paid salaries by the state. All the old rites and ceremonies were retained, and the government maintained the religious buildings. The Church was allowed to operate its own seminaries and, to some extent, local schools as well, though bishops were much less powerful than before and had no political voice. However, the Catholic Church reinvented itself and put a new emphasis on personal religiosity that gave it a hold on the psychology of the faithful. Education was centralized, with the Grand Master of the University of France controlling every element of the entire educational system from Paris.
3.
University of Caen
–
The University of Caen Normandy is a university in Caen in Normandy, France. The institution was founded in 1432 by John of Lancaster, 1st Duke of Bedford, and originally consisted of a faculty of Canon Law and a faculty of Law. By 1438 it already had five faculties, and the foundation was confirmed by the King of France, Charles VII the Victorious, in 1452. On July 7, 1944, during the Battle for Caen, the university buildings were destroyed; reconstruction began in 1948, and the new university was inaugurated on June 1 and 2, 1957. Its logo, the mythical phoenix, symbolises this revival. The mathematician Pierre Varignon, whose work would influence the young Leonhard Euler, earned his M.A. from Caen in 1682. Pierre-Simon Laplace was introduced to mathematics in Caen by Christophe Gadbled, and Henri Poincaré taught there between 1879 and 1881. The University contains a scale model of Rome. Those intending to become advocates or solicitors in Guernsey must complete three months' study of Norman law at Caen University prior to being called to the Guernsey or Jersey Bar. The Carré international is also located here; the center is a hub for students from around the world who wish to attend university in France, and it takes students from levels A1 to C2.
4.
Black holes
–
A black hole is a region of spacetime exhibiting such strong gravitational effects that nothing, not even particles and electromagnetic radiation such as light, can escape from inside it. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of the region from which no escape is possible is called the event horizon. Although the event horizon has an enormous effect on the fate and circumstances of an object crossing it, no locally detectable features appear to be observed at it. In many ways a black hole acts like a black body. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to the black hole's mass. This temperature is on the order of billionths of a kelvin for black holes of stellar mass. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. For a long time black holes were considered a mathematical curiosity; it was during the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle. After a black hole has formed, it can continue to grow by absorbing mass from its surroundings; by absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. There is general consensus that supermassive black holes exist in the centers of most galaxies. Despite its invisible interior, the presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter that falls onto a black hole can form an accretion disk heated by friction.
If there are other stars orbiting a black hole, their orbits can be used to determine the black hole's mass, and such observations can be used to exclude possible alternatives such as neutron stars. On 15 June 2016, a detection of a gravitational wave event from colliding black holes was announced. The idea of a body so massive that light could not escape was briefly proposed by astronomical pioneer John Michell in a letter published in 1783. Michell correctly noted that such supermassive but non-radiating bodies might be detectable through their effects on nearby visible bodies. In 1915, Albert Einstein developed his theory of general relativity; only a few months later, Karl Schwarzschild found a solution to the Einstein field equations which describes the gravitational field of a point mass and of a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass. This solution had a peculiar behaviour at what is now called the Schwarzschild radius, and the nature of this surface was not quite understood at the time.
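The Schwarzschild radius mentioned above has a simple closed form, r_s = 2GM/c², which a few lines of code can evaluate. This is a minimal sketch; the constants are standard CODATA/IAU values rounded to four figures.

```python
# Sketch: the Schwarzschild radius r_s = 2GM/c^2 marks the event horizon
# of a non-rotating black hole of mass M.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius below which a mass forms an event horizon (metres)."""
    return 2 * G * mass_kg / c**2

# The Sun, if compressed inside its Schwarzschild radius (~3 km):
r1 = schwarzschild_radius(M_sun)
# A stellar-mass black hole of 10 solar masses (~30 km):
r10 = schwarzschild_radius(10 * M_sun)
print(f"1 M_sun: {r1/1e3:.2f} km, 10 M_sun: {r10/1e3:.1f} km")
```

The linear scaling of r_s with mass is why a supermassive black hole of millions of solar masses has a horizon of millions of kilometres despite its modest average density.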
5.
Bayesian inference
–
Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics; Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine and sport. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability. Bayesian inference derives the posterior probability as a consequence of two antecedents: a prior probability and a likelihood function derived from a statistical model for the observed data. Bayesian inference computes the posterior probability according to Bayes' theorem:

P(H ∣ E) = P(E ∣ H) ⋅ P(H) / P(E)

where ∣ means "event conditional on", H stands for any hypothesis whose probability may be affected by data, and E is the evidence. Often there are competing hypotheses, and the task is to determine which is the most probable. The evidence E corresponds to new data that were not used in computing the prior probability. P(H), the prior probability, is the estimate of the probability of the hypothesis H before the data E, the current evidence, is observed. P(H ∣ E), the posterior probability, is the probability of H given E, i.e. after E is observed. This is what we want to know: the probability of a hypothesis given the observed evidence. P(E ∣ H) is the probability of observing E given H. As a function of E with H fixed, this is the likelihood; it indicates the compatibility of the evidence with the given hypothesis. The likelihood function is a function of the evidence, E, while the posterior probability is a function of the hypothesis, H. P(E) is the same factor for all possible hypotheses being considered. Bayes' rule can also be written as follows:

P(H ∣ E) = [P(E ∣ H) / P(E)] ⋅ P(H)

where the factor P(E ∣ H) / P(E) can be interpreted as the impact of E on the probability of H.
If the evidence does not match up with a hypothesis, one should reject the hypothesis; but if a hypothesis is extremely unlikely a priori, one should also reject it, even if the evidence does appear to match up. The critical point about Bayesian inference, then, is that it provides a principled way of combining new evidence with prior beliefs. This allows Bayesian principles to be applied to various kinds of evidence; this procedure is termed Bayesian updating. Bayesian updating is widely used and computationally convenient; however, it is not the only updating rule that might be considered rational. Ian Hacking noted that traditional Dutch book arguments did not specify Bayesian updating: "And neither the Dutch book argument, nor any other in the personalist arsenal of proofs of the probability axioms, entails the dynamic assumption."
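The updating loop described above can be sketched in a few lines. This is an illustrative example, not from the text: two hypotheses about a coin, with priors and the bias value chosen arbitrarily for the demonstration.

```python
# Sketch of Bayesian updating: a coin is either fair (P(heads)=0.5) or
# biased toward heads (P(heads)=0.8). Priors and the 0.8 bias are
# illustrative assumptions.
priors = {"fair": 0.5, "biased": 0.5}
likelihood = {"fair": 0.5, "biased": 0.8}   # P(heads | H)

def update(beliefs, heads):
    """One step of Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E)."""
    post = {}
    for h, p in beliefs.items():
        p_e_given_h = likelihood[h] if heads else 1 - likelihood[h]
        post[h] = p_e_given_h * p            # numerator P(E|H) P(H)
    p_e = sum(post.values())                 # normalizing factor P(E)
    return {h: v / p_e for h, v in post.items()}

# Observing 8 heads in a row shifts the posterior toward "biased".
beliefs = priors
for _ in range(8):
    beliefs = update(beliefs, heads=True)
print(beliefs)
```

Note that P(E) drops out of the comparison between hypotheses, exactly as the text says: it is the same factor for every hypothesis and only serves to normalize the posterior.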
6.
Laplace's equation
–
In mathematics, Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace, who first studied its properties. It is often written as

∇²φ = 0 or Δφ = 0

where Δ = ∇² is the Laplace operator. Laplace's equation and Poisson's equation are the simplest examples of elliptic partial differential equations. The general theory of solutions to Laplace's equation is known as potential theory; in the study of heat conduction, the Laplace equation is the steady-state heat equation. In curvilinear coordinates the equation takes the form

Δf = ∂/∂ξʲ (∂f/∂ξᵏ g^{kj}) + (∂f/∂ξʲ) g^{jm} Γ^n_{mn} = 0.

Hence the Laplacian Δf ≝ div grad f maps the scalar function f to a scalar function. If the right-hand side is specified as a given function h, i.e. if the whole equation is written as Δf = h, then it is called Poisson's equation. The Laplace equation is also a special case of the Helmholtz equation. The Dirichlet problem for Laplace's equation consists of finding a solution φ on some domain D such that φ on the boundary of D is equal to some given function. A physical analogy: hold the temperature fixed on the boundary of the domain and allow heat to flow until a stationary state is reached in which the temperature at each point of the domain doesn't change anymore. The temperature distribution in the interior will then be given by the solution to the corresponding Dirichlet problem. The Neumann boundary conditions for Laplace's equation specify not the function φ itself on the boundary of D, but its normal derivative. Physically, this corresponds to the construction of a potential for a vector field whose effect is known at the boundary of D alone. Solutions of Laplace's equation are called harmonic functions; they are all analytic within the domain where the equation is satisfied. If any two functions are solutions to Laplace's equation, their sum is also a solution. This property, called the principle of superposition, is very useful; e.g. solutions to complex problems can be constructed by summing simple solutions.
The Laplace equation in two independent variables has the form

∂²ψ/∂x² + ∂²ψ/∂y² ≡ ψ_xx + ψ_yy = 0.

The real and imaginary parts of a complex analytic function both satisfy the Laplace equation: if f = u + iv is analytic, the Cauchy–Riemann equations give u_x = v_y and u_y = −v_x. It follows that u_yy = (−v_x)_y = −(v_y)_x = −(u_x)_x, therefore u satisfies the Laplace equation. A similar calculation shows that v also satisfies the Laplace equation. Conversely, given a harmonic function, it is the real part of an analytic function f, at least locally. If a trial form is f = φ + iψ, the Cauchy–Riemann equations do not determine ψ itself, but only its increments: dψ = −φ_y dx + φ_x dy. The Laplace equation for φ implies that the integrability condition for ψ is satisfied, ψ_xy = ψ_yx, and the integrability condition together with Stokes' theorem implies that the value of the line integral connecting two points is independent of the path. The resulting pair of solutions of the Laplace equation are called conjugate harmonic functions.
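The steady-state heat interpretation of the Dirichlet problem can be demonstrated numerically. The sketch below, with an arbitrary grid size and boundary values, uses Jacobi relaxation: on a grid, a discrete harmonic function equals the average of its four neighbours, so repeatedly averaging converges to the solution.

```python
import numpy as np

# Sketch: Dirichlet problem for Laplace's equation on a square grid,
# solved by Jacobi relaxation. Grid size and boundary temperatures are
# illustrative assumptions (top edge at 100, other edges at 0).
n = 30
phi = np.zeros((n, n))
phi[0, :] = 100.0

for _ in range(5000):
    # Each interior value becomes the mean of its four neighbours,
    # the discrete analogue of the mean-value property of harmonic functions.
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:])

interior = phi[1:-1, 1:-1]
print(interior.min(), interior.max())
```

The converged field illustrates the maximum principle for harmonic functions: every interior temperature lies between the boundary extremes (0 and 100).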
7.
Laplace transform
–
In mathematics the Laplace transform is an integral transform named after its discoverer Pierre-Simon Laplace. It takes a function of a real variable t to a function of a complex variable s. The Laplace transform is similar to the Fourier transform: while the Fourier transform of a function is a complex function of a real variable (frequency), the Laplace transform of a function is a complex function of a complex variable. Laplace transforms are usually restricted to functions of t with t ≥ 0; a consequence of this restriction is that the Laplace transform of a function is a holomorphic function of the variable s. Unlike the Fourier transform, the Laplace transform of a distribution is generally a well-behaved function, and techniques of complex variables can also be used directly to study Laplace transforms. As a holomorphic function, the Laplace transform has a power series representation. This power series expresses a function as a superposition of moments of the function; this perspective has applications in probability theory. The Laplace transform is invertible on a large class of functions: the inverse Laplace transform takes a function of a complex variable s back to a function of a real variable t. So, for example, Laplace transformation from the time domain to the frequency domain transforms differential equations into algebraic equations and convolution into multiplication. It has many applications in the sciences and technology. The Laplace transform is named after mathematician and astronomer Pierre-Simon Laplace, who used a similar transform in his work on probability theory. The current widespread use of the transform came about during and soon after World War II, although it had been used in the 19th century by Abel, Lerch and Heaviside. The early history of methods having some similarity to the Laplace transform is as follows: from 1744, Leonhard Euler investigated integrals of the form

z = ∫ X(x) e^{ax} dx and z = ∫ X(x) x^A dx

as solutions of differential equations, but did not pursue the matter very far.
These types of integrals seem first to have attracted Laplace's attention in 1782, when he was following in the spirit of Euler in using the integrals themselves as solutions of equations. He used an integral of the form

∫ x^s φ(x) dx,

akin to a Mellin transform, to transform the whole of a difference equation, in order to look for solutions of the transformed equation. He then went on to apply the Laplace transform in the same way and started to derive some of its properties. Laplace also recognised that Joseph Fourier's method of Fourier series for solving the diffusion equation could only apply to a limited region of space, because those solutions were periodic.
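The defining integral F(s) = ∫₀^∞ f(t) e^{−st} dt can be checked numerically against a known closed form. The sketch below uses only the standard library and a crude trapezoid rule; the truncation point and step count are arbitrary choices that happen to give good accuracy for rapidly decaying integrands.

```python
import math

def laplace_transform(f, s, upper=50.0, n=200_000):
    """Crude numerical one-sided Laplace transform ∫_0^upper f(t) e^{-st} dt,
    via the composite trapezoid rule. Valid when the integrand is
    negligible beyond `upper`."""
    h = upper / n
    total = 0.5 * (f(0.0) + f(upper) * math.exp(-s * upper))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

# Known pair: f(t) = e^{-a t}  <->  F(s) = 1 / (s + a), for Re(s) > -a.
a = 1.0
F2 = laplace_transform(lambda t: math.exp(-a * t), s=2.0)
print(F2, 1.0 / (2.0 + a))   # both approximately 1/3
```

The same routine also illustrates why the restriction to t ≥ 0 matters: the integral starts at 0, and convergence relies on e^{−st} damping the integrand for s large enough.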
8.
Laplace distribution
–
In probability theory and statistics, the Laplace distribution is a continuous probability distribution named after Pierre-Simon Laplace, with location parameter μ and scale parameter b > 0. Increments of Laplace motion or of a variance gamma process evaluated over the time scale also have a Laplace distribution. If μ = 0 and b = 1, the density on the positive half-line is exactly an exponential distribution scaled by 1/2; consequently, the Laplace distribution has fatter tails than the normal distribution. The inverse cumulative distribution function is given by

F⁻¹(p) = μ − b sgn(p − 1/2) ln(1 − 2|p − 1/2|),

and sampling by inversion of a uniform variate follows from this formula. A Laplace variate can also be generated as the difference of two i.i.d. exponential variates; equivalently, a Laplace variate can be generated as the logarithm of the ratio of two i.i.d. uniform variates. Given N independent and identically distributed samples x₁, x₂, …, x_N, the maximum likelihood estimator μ̂ of μ is the sample median. The r-th absolute central moment is E|X − μ|ʳ = bʳ Γ(r + 1) = bʳ r!. Among the relations with other distributions:

- If X ~ Laplace(μ, b) then kX + c ~ Laplace(kμ + c, |k|b).
- If X ~ Laplace(0, b) then |X| ~ Exponential(b⁻¹).
- If X, Y ~ Exponential(λ) are independent then X − Y ~ Laplace(0, λ⁻¹).
- If X ~ Laplace(μ, b) then |X − μ| ~ Exponential(b⁻¹), and X ~ EPD(μ, b, 1), an exponential power distribution.
- If X₁, X₂, X₃, X₄ ~ N(0, 1) are independent then X₁X₂ − X₃X₄ ~ Laplace(0, 1).
- If Xᵢ ~ Laplace(μ, b) then (2/b) Σᵢ₌₁ⁿ |Xᵢ − μ| ~ χ²(2n).
- If X, Y ~ Laplace(μ, b) are independent then |X − μ| / |Y − μ| ~ F(2, 2).
- If X, Y ~ U(0, 1) are independent then log(X/Y) ~ Laplace(0, 1).
- If X ~ Exponential(λ) and Y ~ Exponential(ν) are independent, then λX − νY ~ Laplace(0, 1).
- If X has a Rademacher distribution and Y ~ Exponential(λ) independent of X, then XY ~ Laplace(0, 1/λ).
- If X ~ GeometricStable with stability parameter α = 2 and skewness 0, then X is Laplace distributed.

The Laplace distribution is a limiting case of the hyperbolic distribution. If X | Y ~ N(μ, Y²) with Y ~ Rayleigh(b) then X ~ Laplace(μ, b). As noted, a Laplace random variable can be represented as the difference of two i.i.d. exponential random variables. One way to show this is by using the characteristic function approach: consider two i.i.d. random variables X, Y ~ Exponential(λ).
The characteristic functions of X and −Y are λ/(λ − it) and λ/(λ + it) respectively. On multiplying these characteristic functions, equivalent to adding the independent random variables X and −Y, the result is

λ²/((λ − it)(λ + it)) = λ²/(t² + λ²),

which is the characteristic function of a Laplace(0, 1/λ) random variable.
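The difference-of-exponentials representation just derived is easy to verify by simulation. The sketch below uses an arbitrary rate λ = 1.5 and checks the sample mean and variance against the closed forms (mean 0, variance 2/λ² for Laplace(0, 1/λ)).

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: generate Laplace(0, 1/λ) variates as the difference of two
# i.i.d. Exponential(λ) variables. λ = 1.5 is an arbitrary choice.
lam = 1.5
n = 200_000
x = rng.exponential(1 / lam, n) - rng.exponential(1 / lam, n)

# For Laplace(0, b): mean 0, variance 2b². Here b = 1/λ.
print(x.mean(), x.var(), 2 / lam**2)
```

The empirical distribution is symmetric with heavier tails than a normal of the same variance, matching the "fatter tails" remark above.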
9.
Laplace's method
–
This technique was originally presented by Laplace. Assume that the function f(x) has a global maximum at x₀; then the value f(x₀) will be larger than the other values f(x). If we multiply this function by a large number M, the ratio between Mf(x₀) and Mf(x) will stay the same, but the ratio between e^{Mf(x₀)} and e^{Mf(x)} will grow exponentially in M. Thus, significant contributions to the integral of e^{Mf(x)} will come only from points x in a neighbourhood of x₀. To state and motivate the method, we need several assumptions: we will assume that x₀ is not an endpoint of the interval of integration, that the values f(x) cannot be very close to f(x₀) unless x is close to x₀, and that the second derivative f″(x₀) < 0. We can expand f(x) around x₀ by Taylor's theorem:

f(x) = f(x₀) + f′(x₀)(x − x₀) + (1/2) f″(x₀)(x − x₀)² + R, where R = O((x − x₀)³).

Since f has a global maximum at x₀, and since x₀ is not an endpoint, it is a stationary point, so f′(x₀) = 0. Therefore, the function f may be approximated to quadratic order,

f(x) ≈ f(x₀) − (1/2) |f″(x₀)| (x − x₀)²,

for x close to x₀. The assumptions made ensure the accuracy of the approximation

∫ₐᵇ e^{Mf(x)} dx ≈ e^{Mf(x₀)} ∫ₐᵇ e^{−M|f″(x₀)|(x − x₀)²/2} dx.

This latter integral is a Gaussian integral if the limits of integration go from −∞ to +∞, and we find

∫ₐᵇ e^{Mf(x)} dx ≈ √(2π / (M|f″(x₀)|)) e^{Mf(x₀)} as M → ∞.

A generalization of this method and an extension to arbitrary precision is provided by Fog. Formal statement and proof: assume that f is a twice continuously differentiable function on [a, b], with x₀ ∈ (a, b) the unique point such that f(x₀) = max f; assume additionally that f″(x₀) < 0. Laplace's approximation is sometimes written as

∫ₐᵇ h(x) e^{Mg(x)} dx ≈ √(2π / (M|g″(x₀)|)) h(x₀) e^{Mg(x₀)} as M → ∞,

where h is positive. Importantly, the accuracy of the approximation depends on the variable of integration. In the multivariate case, analogously to the univariate case, the Hessian is required to be negative definite.
(In the multivariate statement, x denotes a d-dimensional vector and dx the corresponding volume element.) In extensions of Laplace's method, complex analysis, and in particular Cauchy's integral formula, is used to find a contour of steepest descent for an asymptotically equivalent integral, expressed as a line integral. Again the main idea is to reduce the integral, at least asymptotically, to one that can be evaluated explicitly; see the book of Erdelyi for a simple discussion. The appropriate formulation for the complex z-plane is

∫ₐᵇ e^{Mf(z)} dz ≈ √(2π / (−Mf″(z₀))) e^{Mf(z₀)} as M → ∞

for a path passing through the saddle point at z₀. Note the explicit appearance of a minus sign to indicate the direction of the second derivative.
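The leading-order formula can be checked numerically. A convenient test case, chosen here for illustration, is f(x) = sin x on [0, π], which peaks at x₀ = π/2 with f(x₀) = 1 and f″(x₀) = −1, so the approximation predicts ∫₀^π e^{M sin x} dx ≈ √(2π/M) e^M.

```python
import math

def integral(M, n=100_000):
    """Trapezoid rule for ∫_0^π exp(M sin x) dx."""
    h = math.pi / n
    s = 1.0  # endpoint average: sin 0 = sin π = 0, so exp(...) = 1 at both ends
    for k in range(1, n):
        s += math.exp(M * math.sin(k * h))
    return s * h

def laplace_approx(M):
    # sqrt(2π / (M |f''(x0)|)) e^{M f(x0)} with f(x0) = 1, |f''(x0)| = 1
    return math.sqrt(2 * math.pi / M) * math.exp(M)

# The ratio exact/approx tends to 1 as M grows (relative error is O(1/M)).
ratios = {M: integral(M) / laplace_approx(M) for M in (10, 50, 100)}
print(ratios)
```

The decreasing error with M makes concrete the claim that only a shrinking neighbourhood of the maximum contributes to the integral.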
10.
Laplace force
–
In physics, the Lorentz force is the combination of electric and magnetic force on a point charge due to electromagnetic fields: a particle of charge q moving with velocity v in the presence of an electric field E and a magnetic field B experiences a force. The first derivation of the Lorentz force is commonly attributed to Oliver Heaviside in 1889, although other historians suggest an earlier origin in an 1865 paper by James Clerk Maxwell; Hendrik Lorentz derived it a few years after Heaviside. The force F acting on a particle of electric charge q with instantaneous velocity v, due to an external electric field E and magnetic field B, is given by

F = q(E + v × B),

where × is the vector cross product. More explicitly stated,

F(r, t) = q[E(r, t) + v(t) × B(r, t)],

in which r is the position vector of the charged particle and t is time. The term qE is called the electric force, while the term qv × B is called the magnetic force. According to some definitions, the term "Lorentz force" refers specifically to the formula for the magnetic force; this article will not follow this nomenclature, and in what follows the term "Lorentz force" will refer only to the expression for the total force. The magnetic force component of the Lorentz force manifests itself as the force that acts on a current-carrying wire in a magnetic field; in that context, it is also called the Laplace force. For a continuous charge distribution in motion, the Lorentz force equation applies to each infinitesimal piece of the distribution; if both sides of the equation are divided by the volume dV of this piece of the charge distribution, the force is expressed as a force density in terms of the charge density and current density, rather than the amount of charge and its velocity, in electric and magnetic fields; see Covariant formulation of classical electromagnetism for more details. The above-mentioned formulae use SI units, which are the most common among experimentalists and technicians; in cgs-Gaussian units, which are somewhat more common among theoretical physicists, one has instead

F = q_cgs (E + (v/c) × B),

where c is the speed of light, and where the conversion between unit systems involves the vacuum permittivity ε₀ and the vacuum permeability μ₀.
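The total-force formula F = q(E + v × B) is straightforward to evaluate with a vector cross product. The field and velocity values below are arbitrary illustrative numbers in SI units, chosen so the electric and magnetic contributions are comparable.

```python
import numpy as np

def lorentz_force(q, E, v, B):
    """Total electromagnetic force F = q (E + v × B) on a point charge,
    in SI units."""
    return q * (np.asarray(E, dtype=float) + np.cross(v, B))

q = 1.602e-19                 # elementary charge, C
E = [0.0, 0.0, 1.0e3]         # electric field, V/m
v = [1.0e5, 0.0, 0.0]         # velocity, m/s
B = [0.0, 1.0e-2, 0.0]        # magnetic field, T

F = lorentz_force(q, E, v, B)
# v × B = (1e5, 0, 0) × (0, 1e-2, 0) = (0, 0, 1e3) V/m-equivalent,
# so here the magnetic term exactly matches the electric term.
print(F)
```

Because the magnetic term is a cross product, it is always perpendicular to v and therefore does no work on the particle; only the qE term changes the particle's kinetic energy.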
In practice, the subscripts cgs and SI are always omitted. Early attempts to quantitatively describe the electromagnetic force were made in the mid-18th century. It was proposed that the force on magnetic poles, by Johann Tobias Mayer and others in 1760, and on electrically charged objects, obeyed an inverse-square law; however, in both cases the experimental proof was neither complete nor conclusive. It was not until 1784 that Charles-Augustin de Coulomb, using a torsion balance, was able to definitively show through experiment that this was true. Soon after the discovery in 1820 by H. C. Ørsted that magnetic needles are deflected by electric currents, further descriptions of the magnetic force followed; in all these descriptions, the force was always given in terms of the properties of the objects involved and the distances between them, rather than in terms of electric and magnetic fields. J. J. Thomson was the first to attempt to derive, from Maxwell's field equations, the electromagnetic forces on a charged object in terms of the object's properties.
11.
Laplacian matrix
–
In the mathematical field of graph theory, the Laplacian matrix, sometimes called admittance matrix, Kirchhoff matrix or discrete Laplacian, is a matrix representation of a graph. The Laplacian matrix can be used to find many useful properties of a graph: together with Kirchhoff's theorem, it can be used to calculate the number of spanning trees for a given graph, and the sparsest cut of a graph can be approximated through the second smallest eigenvalue of its Laplacian by Cheeger's inequality. Given a simple graph G with n vertices, its Laplacian matrix L (of size n × n) is defined as L = D − A, where D is the degree matrix and A is the adjacency matrix of the graph. Since G is a simple graph, A only contains 1s or 0s and its diagonal elements are all 0s. In the case of directed graphs, either the indegree or outdegree might be used, depending on the application. Here is a simple example of a labeled, undirected graph and its Laplacian matrix. For a graph G and its Laplacian matrix L with eigenvalues λ₀ ≤ λ₁ ≤ ⋯ ≤ λₙ₋₁: L is symmetric and positive semi-definite (the latter is verified in the incidence matrix section; it can also be seen from the fact that the Laplacian is symmetric and diagonally dominant). Every row sum and column sum of L is zero: indeed, in the sum, the degree of the vertex is summed with a −1 for each neighbor. In consequence, λ₀ = 0, because the all-ones vector v₀ = (1, 1, …, 1) satisfies L v₀ = 0. The number of connected components in the graph is the dimension of the nullspace of the Laplacian. The smallest non-zero eigenvalue of L is called the spectral gap, and the second smallest eigenvalue of L is the algebraic connectivity of G. The Laplacian is an operator on the vector space of functions f : V → R, where V is the vertex set of G. When G is k-regular, the normalized Laplacian is L̃ = (1/k) L = I − (1/k) A, where A is the adjacency matrix and I is an identity matrix. For a graph with multiple connected components, L is a block diagonal matrix. Define an oriented incidence matrix M, of size |E| × n, with M_ev = 1 if the edge e starts at vertex v, −1 if e ends at v, and 0 otherwise; then the Laplacian matrix L satisfies L = MᵀM, where Mᵀ is the matrix transpose of M.
Now consider an eigendecomposition of L, with unit-norm eigenvectors vᵢ and eigenvalues λᵢ. Because λᵢ can be written as the inner product of the vector Mvᵢ with itself, λᵢ = vᵢᵀ L vᵢ = (Mvᵢ)ᵀ(Mvᵢ), this shows that λᵢ ≥ 0, and so the eigenvalues of L are all non-negative. The deformed Laplacian is commonly defined as

Δ(s) = I − sA + s²(D − I),

where I is the identity matrix, A is the adjacency matrix, and D is the degree matrix. Note that the standard Laplacian is just Δ(1). The symmetric normalized Laplacian is a symmetric matrix; all of its eigenvalues are real and non-negative, and we can see this as follows.
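Several of the stated properties (zero row sums, the all-ones null vector, one zero eigenvalue per connected component, non-negative spectrum) can be verified directly on a small example. The graph below, a triangle plus a disconnected edge, is an arbitrary illustration.

```python
import numpy as np

# Sketch: build L = D − A for a small undirected graph and check the
# spectral properties stated above. The edge list is an illustrative example.
edges = [(0, 1), (1, 2), (2, 0), (3, 4)]   # a triangle plus one separate edge
n = 5
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
D = np.diag(A.sum(axis=1))                 # degree matrix
L = D - A

eigvals = np.sort(np.linalg.eigvalsh(L))   # real since L is symmetric
n_components = int(np.sum(np.isclose(eigvals, 0.0)))
print(eigvals, n_components)               # two zero eigenvalues: 2 components
```

Here the multiplicity of the eigenvalue 0 is 2, matching the two connected components, and the remaining eigenvalues are strictly positive.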
12.
Variance gamma process
–
In the theory of stochastic processes, a part of the mathematical theory of probability, the variance gamma (VG) process, also known as Laplace motion, is a Lévy process determined by a random time change. The process has finite moments, distinguishing it from many other Lévy processes. There is no diffusion component in the VG process; it is thus a pure jump process. The increments are independent and follow a variance-gamma distribution, which is a generalization of the Laplace distribution. There are several representations of the VG process that relate it to other processes. It can for example be written as a Brownian motion W(t) with drift θt subjected to a random time change which follows a gamma process Γ(t; 1, ν):

X_VG(t; σ, ν, θ) = θ Γ(t; 1, ν) + σ W(Γ(t; 1, ν)).

An alternative way of stating this is that the variance gamma process is a Brownian motion subordinated to a gamma subordinator. Alternatively, it can be approximated by a compound Poisson process, which leads to a representation with explicitly given jumps and their locations; this last characterization gives an understanding of the structure of the sample path. On the early history of the variance-gamma process see Seneta. As such, the variance gamma model allows one to price options with different strikes and maturities consistently, using a single set of parameters. Madan and Seneta present a symmetric version of the variance gamma process; Madan, Carr and Chang extend the model to allow for an asymmetric form, and present a formula to price European options under the variance gamma process. Hirsa and Madan show how to price American options under variance gamma; Fiorani presents numerical solutions for European and American barrier options under the variance gamma process, and also provides computer programming code to price vanilla and barrier European and American options under the variance gamma process. Lemmens et al. construct bounds for arithmetic Asian options for several Lévy models, including the variance gamma model.
The variance gamma process has been successfully applied in the modeling of credit risk in structural models. Fiorani, Luciano and Semeraro model credit default swaps under variance gamma; in an extensive empirical test they show the outperformance of the pricing under variance gamma, compared to alternative models presented in the literature. Monte Carlo methods for the variance gamma process are described by Fu; algorithms are presented by Korn et al. Simulation from the gamma time-change representation proceeds as follows.

Input: VG parameters θ, σ, ν and time increments Δt₁, …, Δt_N, where Σᵢ₌₁ᴺ Δtᵢ = T.
Initialization: Set X(0) = 0.
Loop: For i = 1 to N: generate independent gamma ΔGᵢ ~ Γ(Δtᵢ/ν, ν) and standard normal Zᵢ variates, independent of past variates; set X(tᵢ) = X(tᵢ₋₁) + θ ΔGᵢ + σ √(ΔGᵢ) Zᵢ.

A second approach is based on the difference-of-gamma representation X_VG = Γ(t; μ_p, ν) − Γ(t; μ_q, ν), where μ_p, μ_q and ν are defined as above.

Input: VG parameters θ, σ, ν, μ_p, μ_q and time increments Δt₁, …, Δt_N, where Σᵢ₌₁ᴺ Δtᵢ = T.
Initialization: Set X(0) = 0.
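The gamma time-change recipe translates directly into a few lines of numpy. The parameter values and grid below are illustrative assumptions, not from the text; the gamma shape Δt/ν and scale ν give a subordinator with unit mean rate and variance rate ν.

```python
import numpy as np

rng = np.random.default_rng(42)

# Sketch of the first algorithm above: simulate a variance gamma path
# X(t) = θ Γ(t) + σ W(Γ(t)) on an equally spaced grid.
# Parameter values are illustrative assumptions.
theta, sigma, nu = 0.1, 0.25, 0.2
T, N = 1.0, 252
dt = T / N

dG = rng.gamma(shape=dt / nu, scale=nu, size=N)   # gamma time increments
Z = rng.standard_normal(N)
increments = theta * dG + sigma * np.sqrt(dG) * Z  # Brownian motion run on gamma time
X = np.concatenate(([0.0], np.cumsum(increments)))

print(X[-1])   # terminal value X(T); E[X(T)] = θ T since E[Γ(T)] = T
```

Because the subordinator Γ is pure jump, the simulated path has no diffusion component, consistent with the pure-jump character of the VG process noted above.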
13.
Laplace pressure
–
The Laplace pressure is the pressure difference between the inside and the outside of a curved surface that forms the boundary between a gas region and a liquid region. The pressure difference is caused by the surface tension of the interface between liquid and gas. The Laplace pressure is determined from the Young–Laplace equation,

ΔP ≡ P_inside − P_outside = γ (1/R₁ + 1/R₂),

where R₁ and R₂ are the principal radii of curvature and γ is the surface tension. Although signs for these values vary, sign convention usually dictates positive curvature when convex and negative when concave. The Laplace pressure is commonly used to determine the pressure difference in spherical shapes such as bubbles or droplets; in this case R₁ = R₂ = R, so ΔP = 2γ/R for a gas bubble within a liquid. For a gas bubble with a liquid wall, beyond which is again gas, there are two surfaces, each contributing to the total pressure difference. A common example of use is finding the extra pressure inside an air bubble in pure water; the extra pressure is given here for three bubble sizes: a 1 mm bubble has negligible extra pressure; when the diameter is ~3 µm, the bubble has about one extra atmosphere of pressure inside compared to outside; and when the bubble is only several hundred nanometers across, the pressure inside can be several atmospheres. One should bear in mind that the surface tension in the numerator can be much smaller in the presence of surfactants or contaminants. Such nanoemulsions can be antibacterial because the Laplace pressure inside the oil droplets can cause them to attach to bacteria and simply merge with them, swelling them.
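The three bubble sizes quoted above can be checked with ΔP = 2γ/R, taking γ ≈ 0.072 N/m for a clean air-water interface at room temperature (an assumed textbook value).

```python
# Sketch: Laplace pressure ΔP = 2γ/R for a spherical air bubble in water.
gamma = 0.072        # N/m, clean air-water interface (assumed value)
atm = 101_325.0      # Pa per standard atmosphere

def laplace_pressure(radius_m):
    return 2 * gamma / radius_m

p_mm = laplace_pressure(0.5e-3)    # 1 mm diameter: negligible fraction of 1 atm
p_um = laplace_pressure(1.5e-6)    # ~3 µm diameter: roughly one extra atmosphere
p_nm = laplace_pressure(0.15e-6)   # ~300 nm diameter: several atmospheres
print(p_mm / atm, p_um / atm, p_nm / atm)
```

The inverse dependence on radius is the whole story here: shrinking the bubble by a factor of ten multiplies the excess pressure by ten, which is why the effect only becomes dramatic at micrometre and nanometre scales.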
14.
Laplace resonance
–
Orbital resonances greatly enhance the mutual gravitational influence of the bodies, i.e. their ability to alter or constrain each other's orbits. In most cases, this results in an unstable interaction, in which the bodies exchange momentum and shift orbits. Under some circumstances, a resonant system can be stable and self-correcting; examples are the 1:2:4 resonance of Jupiter's moons Ganymede, Europa and Io, and the 2:3 resonance between Pluto and Neptune. Unstable resonances with Saturn's inner moons give rise to gaps in the rings of Saturn. The 2:3 ratio above means Pluto completes two orbits in the time it takes Neptune to complete three. In the case of resonance relationships between three or more bodies, either type of ratio may be used and the type of ratio will be specified. Since the discovery of Newton's law of gravitation in the 17th century, the stability of the Solar System has preoccupied many mathematicians. The stable orbits that arise in a two-body approximation ignore the influence of other bodies, and it was Laplace who found the first answers explaining the remarkable dance of the Galilean moons. It is fair to say that this field of study has remained very active since then. Before Newton, there was also consideration of ratios and proportions in orbital motions, in what was called the music of the spheres. In general, an orbital resonance may involve one or any combination of the orbit parameters. It may act on any scale from short term, commensurable with the orbit periods, to secular. It may lead to either long-term stabilization of the orbits or be the cause of their destabilization. A mean-motion orbital resonance occurs when two bodies have periods of revolution that are a simple integer ratio of each other. Depending on the details, this can either stabilize or destabilize the orbit. Stabilization may occur when the two bodies move in such a synchronised fashion that they never closely approach.
For instance, the orbits of Pluto and the plutinos are stable, despite crossing that of the much larger Neptune, because the resonance ensures that, when they approach perihelion and Neptune's orbit, Neptune is consistently distant. Other Neptune-crossing bodies that were not in resonance were ejected from that region by strong perturbations due to Neptune. There are also smaller but significant groups of resonant trans-Neptunian objects occupying the 1:1, 3:5, 4:7, 1:2 and 2:5 resonances, among others, with respect to Neptune. In the asteroid belt beyond 3.5 AU from the Sun, the 3:2, 4:3 and 1:1 resonances with Jupiter are populated by clumps of asteroids. Orbital resonances can also destabilize one of the orbits; for small bodies, destabilization is actually far more likely. For instance, in the asteroid belt within 3.5 AU from the Sun, the major mean-motion resonances with Jupiter are locations of gaps in the asteroid distribution, the Kirkwood gaps
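A mean-motion resonance can be read off directly from the orbital periods. Using approximate sidereal periods (in days) for Io, Europa and Ganymede, snapping each period ratio to the nearest simple fraction recovers the 1:2:4 Laplace resonance; the period values are rounded published figures:

```python
from fractions import Fraction

# Approximate sidereal orbital periods, in days.
periods = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155}

# Express each period relative to Io's and snap it to the nearest simple
# fraction; the results expose the 1:2:4 chain of commensurabilities.
for name, p in periods.items():
    ratio = Fraction(p / periods["Io"]).limit_denominator(10)
    print(f"{name}: {ratio}")   # Io: 1, Europa: 2, Ganymede: 4
```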
15.
Spherical harmonics
–
In mathematics and physical science, spherical harmonics are special functions defined on the surface of a sphere. They are often employed in solving partial differential equations that commonly occur in science. Like the sines and cosines in Fourier series, the spherical harmonics may be organized by angular frequency. Further, spherical harmonics are basis functions for SO(3), the group of rotations in three dimensions, and thus play a central role in the group theoretic discussion of SO(3). Despite their name, spherical harmonics take their simplest form in Cartesian coordinates; functions that satisfy Laplace's equation are often said to be harmonic, hence the name spherical harmonics. In this setting, they may be viewed as the angular portion of a set of solutions to Laplace's equation in three dimensions, and this viewpoint is often taken as an alternative definition. A specific set of spherical harmonics, denoted Y_ℓ^m or Y_ℓm, are called Laplace's spherical harmonics. These functions form an orthogonal system, and are thus basic to the expansion of a general function on the sphere as alluded to above. Spherical harmonics are important in many theoretical and practical applications, e.g. in 3D computer graphics, where spherical harmonics play a role in a variety of topics including indirect lighting and modelling of 3D shapes. Spherical harmonics were first investigated in connection with the Newtonian potential of Newton's law of gravitation in three dimensions. Each term in the summation is an individual Newtonian potential for a point mass. Just prior to that time, Adrien-Marie Legendre had investigated the expansion of the Newtonian potential in powers of r = |x|. The functions P_i are the Legendre polynomials, and they are a special case of spherical harmonics. Subsequently, in his 1782 mémoire, Laplace investigated these coefficients using spherical coordinates to represent the angle γ between x_1 and x.
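The Legendre expansion of the Newtonian potential mentioned above can be checked numerically: for r < r_1, 1/|x_1 − x| = ∑_ℓ (r^ℓ / r_1^(ℓ+1)) P_ℓ(cos γ). A minimal sketch, evaluating the Legendre polynomials by the Bonnet recurrence (the numerical point chosen is arbitrary):

```python
import math

def legendre(l, x):
    """Evaluate the Legendre polynomial P_l(x) via the Bonnet recurrence
    (n + 1) P_{n+1} = (2n + 1) x P_n - n P_{n-1}."""
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x
    for n in range(1, l):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

# For r < r1, the potential of a unit mass at x1 expands in powers of r/r1.
r, r1, cos_gamma = 0.3, 1.0, 0.6
series = sum(r**l / r1**(l + 1) * legendre(l, cos_gamma) for l in range(40))
exact = 1.0 / math.sqrt(r1**2 + r**2 - 2.0 * r1 * r * cos_gamma)
print(series, exact)   # the partial sum matches the closed form
```

Since r/r_1 = 0.3, forty terms are far more than enough for the geometric-rate convergence of the series.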
The solid harmonics were homogeneous polynomial solutions of Laplace's equation ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = 0. By examining Laplace's equation in spherical coordinates, Thomson and Tait recovered Laplace's spherical harmonics. The 19th century development of Fourier series made possible the solution of a wide variety of physical problems in rectangular domains, such as the solution of the heat equation. This could be achieved by expansion of functions in series of trigonometric functions. Many aspects of the theory of Fourier series could be generalized by taking expansions in spherical harmonics rather than trigonometric functions
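The defining property, a homogeneous polynomial annihilated by the Laplacian, is easy to verify numerically. A central-difference Laplacian is exact for quadratics up to rounding, so it cleanly separates the degree-2 solid harmonic x² − y² from the non-harmonic x² + y²; the sample point and step size below are arbitrary:

```python
def laplacian(f, p, h=1e-3):
    """Central-difference estimate of the Laplacian of f at p = (x, y, z)."""
    x, y, z = p
    return (
        f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z)
        + f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z)
        + f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h)
    ) / h**2

harmonic = lambda x, y, z: x**2 - y**2      # solid harmonic: Laplacian is 0
not_harmonic = lambda x, y, z: x**2 + y**2  # Laplacian is 4 everywhere

print(laplacian(harmonic, (0.3, -0.7, 1.1)))      # ≈ 0
print(laplacian(not_harmonic, (0.3, -0.7, 1.1)))  # ≈ 4
```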
16.
Nebular hypothesis
–
The nebular hypothesis is the most widely accepted model in the field of cosmogony to explain the formation and evolution of the Solar System. It suggests that the Solar System formed from nebulous material. The theory was developed by Immanuel Kant and published in his Allgemeine Naturgeschichte und Theorie des Himmels of 1755. Originally applied to the Solar System, the process of planetary formation is now thought to be at work throughout the Universe. The widely accepted modern variant of the hypothesis is the solar nebular disk model or solar nebular model. It offered explanations for a variety of properties of the Solar System, including the nearly circular and coplanar orbits of the planets. Some elements of the original hypothesis are echoed in modern theories of planetary formation. According to the hypothesis, stars form in massive and dense clouds of molecular hydrogen, giant molecular clouds. These clouds are gravitationally unstable, and matter coalesces within them to smaller denser clumps, which then rotate, collapse, and form stars. Star formation is a complex process, which always produces a gaseous protoplanetary disk, or proplyd, around the young star. This may give birth to planets in certain circumstances, which are not well known; thus the formation of planetary systems is thought to be a natural result of star formation. A Sun-like star usually takes approximately 1 million years to form. The protoplanetary disk is an accretion disk that feeds the central star. Initially very hot, the disk later cools in what is known as the T Tauri star stage; here, formation of small dust grains made of rocks and ice is possible, and the grains eventually may coagulate into kilometer-sized planetesimals. If the disk is massive enough, runaway accretion begins, resulting in the formation of planetary embryos. Near the star, the planetary embryos go through a stage of violent mergers, producing a few terrestrial planets. The last stage takes approximately 100 million to a billion years. The formation of giant planets is a more complicated process.
It is thought to occur beyond the frost line, where planetary embryos are mainly made of various types of ice; as a result, they are several times more massive than in the inner part of the protoplanetary disk. What follows after the embryo formation is not completely clear. Some embryos appear to continue to grow and eventually reach 5–10 Earth masses, the threshold value necessary to begin accretion of hydrogen and helium gas from the disk. Jupiter- and Saturn-like planets are thought to accumulate the bulk of their mass during only 10,000 years. The accretion stops when the gas is exhausted. The formed planets can migrate over long distances during or after their formation
17.
Astronomer
–
An astronomer is a scientist in the field of astronomy who concentrates their studies on a specific question or field outside the scope of Earth. They look at stars, planets, moons, comets and galaxies, as well as other celestial objects, either in observational or theoretical astronomy. Examples of topics or fields astronomers work on include planetary science and solar astronomy; there are also related but distinct subjects like physical cosmology, which studies the Universe as a whole. Astronomers usually fall into two types: observational astronomers make direct observations of planets, stars and galaxies and analyze the data, while theoretical astronomers create and investigate models of things that cannot be observed. They use this data to create models or simulations to theorize how different celestial bodies work. There are further subcategories inside these two main branches of astronomy, such as planetary astronomy, galactic astronomy or physical cosmology. Today, the historical distinction between astronomy and astrophysics has largely disappeared and the terms astronomer and astrophysicist are interchangeable. Professional astronomers are highly educated individuals who typically have a Ph.D. in physics or astronomy and are employed by research institutions or universities. They spend the majority of their time working on research, although they quite often have other duties such as teaching and building instruments. The number of astronomers in the United States is actually quite small. The American Astronomical Society, which is the major organization of professional astronomers in North America, has approximately 7,000 members. This number includes scientists from other fields, such as physics and geology, whose research interests are closely related to astronomy. The International Astronomical Union comprises about 10,145 members from 70 different countries who are involved in astronomical research at the Ph.D. level. Before CCDs, photographic plates were a common method of observation.
Modern astronomers spend relatively little time at telescopes, usually just a few weeks per year. Analysis of observed phenomena, along with making predictions as to the causes of what they observe, takes up the majority of observational astronomers' time. Astronomers who serve as faculty spend much of their time teaching undergraduate and graduate classes. Most universities also have outreach programs, including public telescope time and sometimes planetariums, as a public service to encourage interest in the field. Those who become astronomers usually have a broad background in maths and the sciences. Taking courses that teach how to research, write and present papers is also invaluable. In college or university, most astronomers get a Ph.D. in astronomy or physics. Keeping in mind how few astronomers there are, it is understood that graduate schools in this field are very competitive
18.
Jean d'Alembert
–
Jean-Baptiste le Rond d'Alembert was a French mathematician, mechanician, physicist, philosopher, and music theorist. Until 1759 he was also co-editor with Denis Diderot of the Encyclopédie. D'Alembert's formula for obtaining solutions to the wave equation is named after him; the wave equation is sometimes referred to as d'Alembert's equation. Born in Paris, d'Alembert was the son of the writer Claudine Guérin de Tencin and the chevalier Louis-Camus Destouches. Destouches was abroad at the time of d'Alembert's birth, and days after birth his mother left him on the steps of the Saint-Jean-le-Rond de Paris church. According to custom, he was named after the patron saint of the church. D'Alembert was placed in an orphanage for foundling children, but his father found him and placed him with the wife of a glazier, Madame Rousseau. Destouches secretly paid for the education of Jean le Rond, but did not want his paternity officially recognized. D'Alembert first attended a private school; the chevalier Destouches left d'Alembert an annuity of 1200 livres on his death in 1726. Under the influence of the Destouches family, d'Alembert at the age of twelve entered the Jansenist Collège des Quatre-Nations. Here he studied philosophy, law, and the arts, graduating as baccalauréat en arts in 1735. In his later life, d'Alembert scorned the Cartesian principles he had been taught by the Jansenists: physical premotion, innate ideas and the vortices. The Jansenists steered d'Alembert toward an ecclesiastical career, attempting to deter him from pursuits such as poetry. Theology was, however, rather unsubstantial fodder for d'Alembert, and he entered law school for two years, being nominated avocat in 1738. He was also interested in medicine and mathematics. Jean was first registered under the name Daremberg, but later changed it to d'Alembert. The name d'Alembert was later proposed by Johann Heinrich Lambert for a moon of Venus.
In July 1739 he made his first contribution to the field of mathematics. At the time L'analyse démontrée was a standard work, which d'Alembert himself had used to study the foundations of mathematics. D'Alembert was also a Latin scholar of note and worked in the latter part of his life on a superb translation of Tacitus. In 1740, he submitted his second scientific work, from the field of fluid mechanics, Mémoire sur la réfraction des corps solides; in this work d'Alembert theoretically explained refraction. In 1741, after failed attempts, d'Alembert was elected into the Académie des Sciences
19.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to the exact scope and definition of mathematics. Mathematicians seek out patterns and use them to formulate new conjectures; they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, then mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a systematic study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω; in Greece, the word for mathematics came to have the narrower and more technical meaning of mathematical study even in Classical times
20.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, physics has as its main goal the understanding of how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs. The United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences. The stars and planets were often a target of worship, believed to represent the gods. While the explanations for these phenomena were often unscientific and lacking in evidence, these early observations laid a foundation for later astronomy. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. The most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision, but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera. Many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt.
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title. The translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build the same devices as Ibn al-Haytham did, and from this, such important things as eyeglasses, magnifying glasses and telescopes were developed. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity. Both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac. From this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities. In many ways, physics stems from ancient Greek philosophy
21.
Astronomy
–
Astronomy is a natural science that studies celestial objects and phenomena. It applies mathematics, physics, and chemistry in an effort to explain the origin of those objects and phenomena and their evolution. Objects of interest include planets, moons, stars, galaxies, and comets, while the phenomena include supernova explosions and gamma-ray bursts; more generally, all astronomical phenomena that originate outside Earth's atmosphere are within the purview of astronomy. A related but distinct subject, physical cosmology, is concerned with the study of the Universe as a whole. Astronomy is the oldest of the natural sciences. The early civilizations in recorded history, such as the Babylonians, Greeks, Indians, Egyptians, Nubians, Iranians and Chinese, performed methodical observations of the night sky. During the 20th century, the field of professional astronomy split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects; theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. The two fields complement each other, with theoretical astronomy seeking to explain the observational results and observations being used to confirm theoretical results. Astronomy is one of the few sciences where amateurs can play an active role, especially in the discovery and observation of transient phenomena. Amateur astronomers have made and contributed to many important astronomical discoveries. Astronomy means law of the stars. Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects. Although the two share a common origin, they are now entirely distinct. Generally, either the term astronomy or astrophysics may be used to refer to this subject; however, since most modern astronomical research deals with subjects related to physics, modern astronomy could actually be called astrophysics.
Few fields, such as astrometry, are purely astronomy rather than also astrophysics. Some titles of the leading scientific journals in this field include The Astronomical Journal, The Astrophysical Journal and Astronomy and Astrophysics. In early times, astronomy only comprised the observation and predictions of the motions of objects visible to the naked eye. In some locations, early cultures assembled massive artifacts that possibly had some astronomical purpose. Before tools such as the telescope were invented, early study of the stars was conducted using the naked eye. Most of early astronomy actually consisted of mapping the positions of the stars and planets, a science now referred to as astrometry. From these observations, early ideas about the motions of the planets were formed, and the nature of the Sun, Moon and the Earth was considered. The Earth was believed to be the center of the Universe with the Sun, the Moon and the stars rotating around it. This is known as the geocentric model of the Universe, or the Ptolemaic system. The Babylonians discovered that lunar eclipses recurred in a repeating cycle known as a saros
22.
Classical mechanics
–
In physics, classical mechanics is one of the two major sub-fields of mechanics, along with quantum mechanics. Classical mechanics is concerned with the set of physical laws describing the motion of bodies under the influence of a system of forces. The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering and technology. Classical mechanics describes the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets and stars. Within classical mechanics are fields of study that describe the behavior of solids, liquids and gases. Classical mechanics also provides extremely accurate results as long as the domain of study is restricted to large objects and the speeds involved do not approach the speed of light. When both quantum and classical mechanics cannot apply, such as at the quantum level with high speeds, quantum field theory becomes applicable. Since these aspects of physics were developed long before the emergence of quantum physics and relativity, some sources exclude relativity from classical mechanics; however, a number of modern sources do include relativistic mechanics, which in their view represents classical mechanics in its most developed and accurate form. Later, more abstract and general methods were developed, leading to reformulations of classical mechanics known as Lagrangian mechanics and Hamiltonian mechanics. These advances were largely made in the 18th and 19th centuries, and they extend substantially beyond Newton's work, particularly through their use of analytical mechanics. The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles. The motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn. In reality, the kind of objects that classical mechanics can describe always have a non-zero size.
Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom. However, the results for point particles can be used to study such objects by treating them as composite objects. The center of mass of a composite object behaves like a point particle. Classical mechanics uses common-sense notions of how matter and forces exist and interact. It assumes that matter and energy have definite, knowable attributes such as where an object is in space; non-relativistic mechanics also assumes that forces act instantaneously. The position of a point particle is defined with respect to a fixed reference point in space called the origin O. A simple coordinate system might describe the position of a point P by means of an arrow, designated r, pointing from the origin O to P. In general, the point particle need not be stationary relative to O, such that r is a function of t, the time
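The kinematics sketched above, a position function r(t) relative to an origin O, can be illustrated numerically. The trajectory below is a hypothetical example (uniform motion in x, uniform acceleration in z), with the velocity recovered as the numerical derivative dr/dt:

```python
import numpy as np

g = 9.81                          # gravitational acceleration, m/s^2
t = np.linspace(0.0, 2.0, 2001)   # times at which r(t) is sampled

# Position r(t) of a point particle relative to the origin O.
r = np.stack([
    3.0 * t,                      # x(t): uniform motion
    np.zeros_like(t),             # y(t): at rest
    10.0 * t - 0.5 * g * t**2,    # z(t): uniform acceleration
])

v = np.gradient(r, t, axis=1)     # velocity v(t) = dr/dt
print(v[:, 0])                    # initial velocity ≈ (3, 0, 10)
```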
23.
Calculus
–
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus and integral calculus; these two branches are related to each other by the fundamental theorem of calculus. Both branches make use of the fundamental notions of convergence of infinite sequences. Generally, modern calculus is considered to have been developed in the 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Today, calculus has widespread uses in science, engineering and economics. Calculus is a part of modern mathematics education; a course in calculus is a gateway to other, more advanced courses in mathematics devoted to the study of functions and limits. Calculus has historically been called the calculus of infinitesimals, or infinitesimal calculus. Calculus is also used for naming some methods of calculation or theories of computation, such as propositional calculus, calculus of variations and lambda calculus. The ancient period introduced some of the ideas that led to integral calculus. The method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD in order to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be called Cavalieri's principle to find the volume of a sphere, and Indian mathematicians gave a non-rigorous method of a sort of differentiation of some trigonometric functions. In the Middle East, Alhazen derived a formula for the sum of fourth powers. He used the results to carry out what would now be called an integration. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory. In other work, Newton developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable. These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz. He is now regarded as an independent inventor of and contributor to calculus; unlike Newton, Leibniz paid a lot of attention to the formalism, often spending days determining appropriate symbols for concepts. Leibniz and Newton are usually both credited with the invention of calculus. Newton was the first to apply calculus to general physics, and Leibniz developed much of the notation used in calculus today
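The fundamental theorem of calculus linking the two branches can be illustrated numerically: differentiating the accumulated integral F(x) = ∫₀ˣ f(t) dt recovers f(x). A minimal sketch using a midpoint-rule integral and a central difference; the test function cos is an arbitrary choice:

```python
import math

def integral(f, a, b, n=50_000):
    """Midpoint-rule approximation of the definite integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = math.cos
F = lambda x: integral(f, 0.0, x)   # F(x) = integral of f from 0 to x

# Fundamental theorem of calculus: F'(x) = f(x).
x, eps = 1.2, 1e-4
derivative = (F(x + eps) - F(x - eps)) / (2.0 * eps)
print(derivative, f(x))             # the two values agree closely
```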
24.
Mathematical physics
–
Mathematical physics refers to the development of mathematical methods for application to problems in physics. It is a branch of applied mathematics, but deals specifically with physical problems. There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods. The first is the rigorous, abstract and advanced re-formulation of Newtonian mechanics adopting the Lagrangian mechanics and the Hamiltonian mechanics; both formulations are embodied in analytical mechanics. These approaches and ideas can be, and in fact have been, extended to other areas of physics such as statistical mechanics, continuum mechanics and classical field theory. Moreover, they have provided several examples and basic ideas in differential geometry. The theory of partial differential equations is perhaps most closely associated with mathematical physics; these methods were developed intensively from the second half of the eighteenth century until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics. The theory of atomic spectra developed almost concurrently with the mathematical fields of linear algebra. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics; quantum information theory is another subspecialty. The special and general theories of relativity require a rather different type of mathematics. This was group theory, which played an important role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena. In this area both homological algebra and category theory are important nowadays. Statistical mechanics forms a separate field, which includes the theory of phase transitions. It relies upon Hamiltonian mechanics and it is related to the more mathematical ergodic theory.
There are increasing interactions between combinatorics and physics, in particular statistical physics. The usage of the term mathematical physics is sometimes idiosyncratic. Certain parts of mathematics that arose from the development of physics are not, in fact, considered parts of mathematical physics. The term mathematical physics is sometimes used to denote research aimed at studying and solving problems inspired by physics or thought experiments within a mathematically rigorous framework
25.
Origin of the Solar System
–
The formation of the Solar System began 4.6 billion years ago with the gravitational collapse of a small part of a giant molecular cloud. This model, known as the nebular hypothesis, was first developed in the 18th century by Emanuel Swedenborg, Immanuel Kant and Pierre-Simon Laplace. Its subsequent development has interwoven a variety of disciplines including astronomy, physics and geology. Since the dawn of the space age in the 1950s and the discovery of extrasolar planets in the 1990s, the model has been both challenged and refined to account for new observations. The Solar System has evolved considerably since its initial formation. Many moons have formed from circling discs of gas and dust around their parent planets, while other moons are thought to have formed independently and later been captured by their planets. Still others, such as Earth's Moon, may be the result of giant collisions. Collisions between bodies have occurred continually up to the present day and have been central to the evolution of the Solar System. The positions of the planets often shifted due to gravitational interactions, and this planetary migration is now thought to have been responsible for much of the Solar System's early evolution. In the far distant future, the gravity of passing stars will gradually reduce the Sun's retinue of planets; some planets will be destroyed, others ejected into interstellar space. Ultimately, over the course of tens of billions of years, it is likely that the Sun will be left with none of the original bodies in orbit around it. The first step toward a theory of Solar System formation and evolution was the general acceptance of heliocentrism, which placed the Sun at the centre of the system and the Earth in orbit around it. This concept had developed for millennia, but was not widely accepted until the end of the 17th century. The first recorded use of the term Solar System dates from 1704. The most significant criticism of the nebular hypothesis was its apparent inability to explain the Sun's relative lack of angular momentum when compared to the planets.
However, since the early 1980s studies of young stars have shown them to be surrounded by cool discs of dust and gas, exactly as the nebular hypothesis predicts. Understanding how the Sun is expected to continue to evolve required an understanding of the source of its power; in 1935, Eddington went further and suggested that other elements also might form within stars. Fred Hoyle elaborated on this premise by arguing that evolved stars called red giants created many elements heavier than hydrogen; when a red giant finally casts off its outer layers, these elements would then be recycled to form other star systems. The nebular hypothesis says that the Solar System formed from the collapse of a fragment of a giant molecular cloud. The cloud was about 20 parsecs across, while the fragments were roughly 1 parsec across; the further collapse of the fragments led to the formation of dense cores 0.01–0.1 pc in size. One of these fragments formed what became the Solar System. Hydrogen and helium made up most of its mass; the remaining 2% of the mass consisted of heavier elements that were created by nucleosynthesis in earlier generations of stars
26.
Black hole
–
A black hole is a region of spacetime exhibiting such strong gravitational effects that nothing—not even particles and electromagnetic radiation such as light—can escape from inside it. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of the region from which no escape is possible is called the event horizon; although the event horizon has an enormous effect on the fate and circumstances of an object crossing it, it appears to have no locally detectable features. In many ways a black hole acts like an ideal black body, as it reflects no light. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to the black hole's mass. This temperature is on the order of billionths of a kelvin for black holes of stellar mass. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. Black holes were long considered a mathematical curiosity; it was during the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle. After a black hole has formed, it can continue to grow by absorbing mass from its surroundings; by absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. There is general consensus that supermassive black holes exist in the centers of most galaxies. Despite its invisible interior, the presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter that falls onto a black hole can form an accretion disk heated by friction. 
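The quantities mentioned above are easy to compute. The following back-of-envelope sketch (my own illustration, not from the article; constants and helper names are assumptions) evaluates the Schwarzschild radius r_s = 2GM/c² and the Hawking temperature T_H = ħc³/(8πGMk_B) for a black hole of one solar mass, confirming that the temperature is indeed tens of billionths of a kelvin:

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
k_B = 1.381e-23      # Boltzmann constant, J/K
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius(m):
    """Radius of the event horizon for a non-rotating mass m (kg)."""
    return 2 * G * m / c**2

def hawking_temperature(m):
    """Black-body temperature of the Hawking radiation for mass m (kg)."""
    return hbar * c**3 / (8 * math.pi * G * m * k_B)

r_s = schwarzschild_radius(M_sun)   # roughly 3 km
T_H = hawking_temperature(M_sun)    # tens of billionths of a kelvin
```

Note that T_H scales as 1/M, so the supermassive black holes of millions of solar masses mentioned later are colder still.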
If there are other stars orbiting a black hole, their orbits can be used to determine the black hole's mass, and such observations can be used to exclude possible alternatives such as neutron stars; in this way the radio source at the core of the Milky Way has been established to contain a supermassive black hole of about 4.3 million solar masses. On 15 June 2016, a detection of a gravitational wave event from colliding black holes was announced. The idea of a body so massive that light could not escape was briefly proposed by astronomical pioneer John Michell in a letter published in 1784. Michell correctly noted that such supermassive but non-radiating bodies might be detectable through their effects on nearby visible bodies. In 1915, Albert Einstein developed his theory of general relativity; only a few months later, Karl Schwarzschild found a solution to the Einstein field equations, which describes the gravitational field of a point mass and of a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass. This solution had a peculiar behaviour at what is now called the Schwarzschild radius; the nature of this surface was not quite understood at the time
27.
Gravitational collapse
–
Gravitational collapse is the contraction of an astronomical object due to the influence of its own gravity, which tends to draw matter inward toward the center of mass. Gravitational collapse is a fundamental mechanism for structure formation in the universe. A star is born through the gradual gravitational collapse of a cloud of interstellar matter. The star then exists in a state of dynamic equilibrium; once all its energy sources are exhausted, a star will again collapse until it reaches a new equilibrium state. An interstellar cloud of gas will remain in equilibrium as long as the kinetic energy of the gas pressure is in balance with the potential energy of the internal gravitational force. Mathematically this is expressed using the virial theorem, which states that, to maintain equilibrium, the gravitational potential energy must equal twice the internal thermal energy. If a pocket of gas is massive enough that the gas pressure is insufficient to support it, the cloud will undergo gravitational collapse. The mass above which a cloud will undergo such collapse is called the Jeans mass; this mass depends on the temperature and density of the cloud, but is typically thousands to tens of thousands of solar masses. At what is called the death of the star, it will undergo a contraction that can be halted only if it reaches a new state of equilibrium. If it has a companion star, a white dwarf-sized object can accrete matter from the companion star until it reaches the Chandrasekhar limit, at which point gravitational collapse takes over again. While it might seem that the white dwarf might collapse to the next stage, they instead undergo runaway carbon fusion. Neutron stars are formed by gravitational collapse of the cores of larger stars. They are so compact that a Newtonian description is inadequate for an accurate treatment; for sufficiently massive remnants, the collapse continues with nothing to stop it. Once a body collapses to within its Schwarzschild radius it forms what is called a black hole, and it follows from a theorem of Roger Penrose that the subsequent formation of some kind of singularity is inevitable. 
On the other hand, the nature of the kind of singularity to be expected inside a black hole remains rather controversial. According to some theories, at a later stage the collapsing object will reach the maximum possible energy density for a certain volume of space, the Planck density. This is when the known laws of gravity cease to be valid. There are competing theories as to what occurs at this point. The radii of larger-mass neutron stars are estimated to be about 12 km, or approximately 2.0 times their equivalent Schwarzschild radius
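The Jeans mass mentioned above has a standard closed form. The sketch below (my own illustration under stated assumptions, not from the article) evaluates M_J = (5kT/(Gμm_H))^(3/2) · (3/(4πρ))^(1/2) for an idealised cloud of temperature T and number density n, with ρ = μ·m_H·n:

```python
import math

# Physical constants (SI units)
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23     # Boltzmann constant, J/K
m_H = 1.674e-27     # mass of a hydrogen atom, kg
M_sun = 1.989e30    # solar mass, kg

def jeans_mass(T, n, mu=2.33):
    """Jeans mass (kg) for temperature T (K), number density n (m^-3),
    and mean molecular weight mu (about 2.33 for molecular gas)."""
    rho = mu * m_H * n                      # mass density, kg/m^3
    return ((5 * k_B * T / (G * mu * m_H)) ** 1.5
            * (3 / (4 * math.pi * rho)) ** 0.5)

# A dense molecular-cloud core (T ~ 10 K, n ~ 1e10 m^-3, i.e. 1e4 cm^-3)
# gives a Jeans mass of a few solar masses; the much lower densities of a
# cloud as a whole yield the thousands of solar masses quoted in the text.
core_mass = jeans_mass(10, 1e10) / M_sun
```

Because M_J scales as ρ^(-1/2) at fixed temperature, a collapsing, densifying fragment can keep fragmenting into ever smaller Jeans-unstable pieces, which is how the dense cores described here arise.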
28.
Isaac Newton
–
Sir Isaac Newton was an English mathematician, astronomer, and physicist widely recognised as one of the most influential scientists of all time. His book Philosophiæ Naturalis Principia Mathematica, first published in 1687, laid the foundations of classical mechanics. Newton also made contributions to optics, and he shares credit with Gottfried Wilhelm Leibniz for developing the infinitesimal calculus. Newton's Principia formulated the laws of motion and universal gravitation that dominated scientists' view of the universe for the next three centuries. Newton's work on light was collected in his influential book Opticks. He also formulated an empirical law of cooling and made the first theoretical calculation of the speed of sound. Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge. Politically and personally tied to the Whig party, Newton served two brief terms as Member of Parliament for the University of Cambridge, in 1689–90 and 1701–02. He was knighted by Queen Anne in 1705, and he spent the last three decades of his life in London, serving as Warden and Master of the Royal Mint. Newton was born in Woolsthorpe, Lincolnshire; his father, also named Isaac Newton, had died three months before. Born prematurely, he was a small child; his mother Hannah Ayscough reportedly said that he could have fit inside a quart mug. When Newton was three, his mother remarried and went to live with her new husband, the Reverend Barnabas Smith, leaving her son in the care of his maternal grandmother. Newton's mother had three children from her second marriage. From the age of twelve until he was seventeen, Newton was educated at The King's School, Grantham, which taught Latin and Greek. He was removed from school, and by October 1659 he was to be found at Woolsthorpe-by-Colsterworth; Henry Stokes, master at the King's School, persuaded his mother to send him back to school so that he might complete his education. Motivated partly by a desire for revenge against a schoolyard bully, he became the top-ranked student. 
In June 1661, he was admitted to Trinity College, Cambridge. He started as a subsizar—paying his way by performing valet's duties—until he was awarded a scholarship in 1664, which guaranteed him four more years until he could get his M.A. He set down in his notebook a series of Quaestiones about mechanical philosophy as he found it. In 1665, he discovered the generalised binomial theorem and began to develop a mathematical theory that later became calculus. Soon after Newton had obtained his B.A. degree in August 1665, the university temporarily closed as a precaution against the Great Plague. In April 1667, he returned to Cambridge and in October was elected as a fellow of Trinity. Fellows were required to become ordained priests, although this was not enforced in the Restoration years; however, by 1675 the issue could not be avoided, and by then his unconventional views stood in the way. Nevertheless, Newton managed to avoid ordination by means of a special permission from Charles II. He was elected a Fellow of the Royal Society in 1672. Newton's work has been said to distinctly advance every branch of mathematics then studied, and his work on the subject usually referred to as fluxions or calculus, seen in a manuscript of October 1666, is now published among Newton's mathematical papers
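The generalised binomial theorem mentioned above extends (1+x)^a to non-integer exponents a via the series (1+x)^a = Σ C(a,k)·x^k for |x| < 1, where C(a,k) = a(a−1)⋯(a−k+1)/k!. A small numerical illustration (my own sketch, not from the article):

```python
def binom_series(a, x, terms=40):
    """Partial sum of the generalised binomial series for (1+x)**a, |x| < 1."""
    total, coeff = 0.0, 1.0          # coeff starts as C(a, 0) = 1
    for k in range(terms):
        total += coeff * x**k
        coeff *= (a - k) / (k + 1)   # recurrence: C(a, k+1) from C(a, k)
    return total

# Newton used such series to compute roots: sqrt(1.2) = (1 + 0.2)**0.5
approx = binom_series(0.5, 0.2)      # ~ 1.0954
```

With a = 1/2 the series gives square roots to high accuracy after a few dozen terms, which is precisely the kind of calculation the theorem made routine.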
29.
Napoleon
–
Napoleon Bonaparte was a French military and political leader who rose to prominence during the French Revolution and led several successful campaigns during the French Revolutionary Wars. As Napoleon I, he was Emperor of the French from 1804 until 1814. Napoleon dominated European and global affairs for more than a decade while leading France against a series of coalitions in the Napoleonic Wars. He won most of these wars and the vast majority of his battles; one of the greatest commanders in history, his wars and campaigns are studied at military schools worldwide. Napoleon's political and cultural legacy has ensured his status as one of the most celebrated and controversial leaders in human history. He was born Napoleone di Buonaparte in Corsica to a relatively modest family from the minor nobility. When the Revolution broke out in 1789, Napoleon was serving as an officer in the French army. Seizing the new opportunities presented by the Revolution, he rose rapidly through the ranks of the military. The Directory eventually gave him command of the Army of Italy after he suppressed a revolt against the government from royalist insurgents; in 1798, he led a military expedition to Egypt that served as a springboard to political power. He engineered a coup in November 1799 and became First Consul of the Republic. His ambition and public approval inspired him to go further, and in 1804 he became the first Emperor of the French. Intractable differences with the British meant that the French were facing a Third Coalition by 1805; in 1806, the Fourth Coalition took up arms against him because Prussia became worried about growing French influence on the continent. Napoleon quickly defeated Prussia at the battles of Jena and Auerstedt, then marched the Grand Army deep into Eastern Europe. France then forced the defeated nations of the Fourth Coalition to sign the Treaties of Tilsit in July 1807, bringing an uneasy peace to the continent. 
Tilsit signified the high-water mark of the French Empire. Hoping to extend the Continental System and choke off British trade with the European mainland, Napoleon invaded Iberia and declared his brother Joseph the King of Spain in 1808. The Spanish and the Portuguese revolted with British support; the Peninsular War lasted six years, featured extensive guerrilla warfare, and ended in victory for the Allies. The Continental System caused recurring diplomatic conflicts between France and its client states, especially Russia. Unwilling to bear the economic consequences of reduced trade, the Russians routinely violated the Continental System, enticing Napoleon into another war. The French launched an invasion of Russia in the summer of 1812. The resulting campaign witnessed the collapse of the Grand Army and the destruction of Russian cities. In 1813, Prussia and Austria joined Russian forces in a Sixth Coalition against France. A lengthy military campaign culminated in a large Allied army defeating Napoleon at the Battle of Leipzig in October 1813; the Allies then invaded France and captured Paris in the spring of 1814, forcing Napoleon to abdicate in April. He was exiled to the island of Elba, off the coast of Tuscany, and the Bourbons were restored to power. However, Napoleon escaped from Elba in February 1815 and took control of France once again. The Allies responded by forming a Seventh Coalition, which defeated Napoleon at the Battle of Waterloo in June. The British exiled him to the remote island of Saint Helena in the South Atlantic, where he died six years later at the age of 51
30.
First French Empire
–
The First French Empire was the empire of Napoleon Bonaparte of France and the dominant power in much of continental Europe at the beginning of the 19th century. Its name is something of a misnomer, as France already had colonies overseas, and the empire was short-lived compared to the French colonial empire. A series of wars, known collectively as the Napoleonic Wars, extended French influence over much of Western Europe and into Poland. The plot to seize power included Bonaparte's brother Lucien, then serving as speaker of the Council of Five Hundred, and Roger Ducos, another Director. On 9 November 1799 and the following day, troops led by Bonaparte seized control. They dispersed the legislative councils, leaving a rump legislature to name Bonaparte, Sieyès, and Ducos as provisional Consuls. Although Sieyès expected to dominate the new regime, the Consulate, he was outmaneuvered by Bonaparte, who drafted the Constitution of the Year VIII and secured his own election as First Consul. He thus became the most powerful person in France, a power that was increased by the Constitution of the Year X. The Battle of Marengo inaugurated the political idea that was to continue its development until Napoleon's Moscow campaign. Napoleon planned only to keep the Duchy of Milan for France, setting aside Austria; the Peace of Amiens, which cost him control of Egypt, was a temporary truce. He gradually extended his authority in Italy by annexing the Piedmont and by acquiring Genoa, Parma, Tuscany and Naples; he then laid siege to the Roman state and initiated the Concordat of 1801 to control the material claims of the pope. Napoleon would draw his ruling elites from a fusion of the new bourgeoisie and the old aristocracy. On 12 May 1802, the French Tribunat voted unanimously, with the exception of Carnot, in favour of the Life Consulship for the leader of France. This action was confirmed by the Corps Législatif; a general plebiscite followed, resulting in 3,653,600 votes aye and 8,272 votes nay. 
On 2 August 1802, Napoleon Bonaparte was proclaimed Consul for life. Pro-revolutionary sentiment swept through Germany aided by the Recess of 1803, which brought Bavaria, Württemberg and Baden to France's side. The memories of imperial Rome were invoked for a third time, after Julius Caesar and Charlemagne. The Treaty of Pressburg, signed on 26 December 1805, did little other than create a more unified Germany to threaten France. On the other hand stood Napoleon's creation of the Kingdom of Italy and the occupation of Ancona. To create satellite states, Napoleon installed his relatives as rulers of many European states. The Bonapartes began to marry into old European monarchies, gaining sovereignty over many nations. In addition to the vassal titles, Napoleon's closest relatives were also granted the title of French Prince and formed the Imperial House of France. Met with opposition, Napoleon would not tolerate any neutral power; Prussia had been offered the territory of Hanover to stay out of the Third Coalition. With the diplomatic situation changing, Napoleon offered Great Britain the province as part of a peace proposal. This, combined with growing tensions in Germany over French hegemony, led Prussia to respond by forming an alliance with Russia and sending troops into Bavaria on 1 October 1806. In this War of the Fourth Coalition, Napoleon destroyed the armies of Frederick William at Jena-Auerstedt; the battles of Eylau and Friedland against the Russians finally ruined Frederick the Great's formerly mighty kingdom, obliging Russia and Prussia to make peace with France at Tilsit. The Treaties of Tilsit ended the war between Russia and the French Empire and began an alliance between the two empires that held power over much of the rest of Europe; the two empires secretly agreed to aid each other in disputes
31.
Marquess
–
A marquess is a nobleman of hereditary rank in various European peerages and in those of some of their former colonies. The term is also used to translate equivalent Asian styles, as in imperial China. In Great Britain and Ireland, the spelling of the aristocratic title of this rank is marquess; in Scotland the French spelling marquis is sometimes used. In Great Britain and Ireland, the rank is below that of a duke and above that of an earl. A woman with the rank of a marquess, or the wife of a marquess, is called a marchioness /ˌmɑːrʃəˈnɛs/ in Great Britain; the dignity, rank or position of the title is referred to as a marquisate or marquessate. The theoretical distinction between a marquess and other titles has, since the Middle Ages, faded into obscurity. In times past, the distinction between a count and a marquess was that the land of a marquess, called a march, was on the border of the country, while the land of a count, called a county, often was not. As a result of this, a marquess was trusted to defend and fortify against potentially hostile neighbours and was thus more important and ranked higher than a count. The title is ranked below that of a duke, which was often restricted to the royal family. In the German lands, a Margrave was a ruler of an immediate Imperial territory; German rulers did not confer the title of marquis, and holders of marquisates in Central Europe were largely associated with the Italian and Spanish crowns. The word entered the English language from the Old French marchis in the late 13th or early 14th century. The French word was derived from marche, itself descended from the Middle Latin marca, from which the modern English words march and mark also descend. The word marquess is unusual in English, ending in -ess but referring to a male; a woman with the rank of a marquess, or the wife of a marquess, is called a marchioness in Great Britain and Ireland, or a marquise /mɑːrˈkiːz/ elsewhere in Europe. 
The honorific prefix The Most Honourable is a form of address that precedes the name of a marquess or marchioness in the United Kingdom. The rank of marquess was a late introduction to the British peerage; no marcher lords had the rank of marquess. The following list may still be incomplete; feminine forms follow after a slash. Many languages have two words, one for the modern marquess and one for the original margrave; even where neither title was used domestically, such duplication to describe foreign titles can exist
32.
W. W. Rouse Ball
–
Walter William Rouse Ball, known as W. W. Rouse Ball, was a British mathematician, lawyer, and fellow at Trinity College, Cambridge from 1878 to 1905. He was also an amateur magician, and the founding president of the Cambridge Pentacle Club in 1919. Ball was the son and heir of Walter Frederick Ball, of 3, St John's Park Villas, South Hampstead, London. Educated at University College School, he entered Trinity College, Cambridge in 1870, becoming a scholar and first Smith's Prizeman. He became a Fellow of Trinity in 1875, and remained one for the rest of his life. He is buried at the Parish of the Ascension Burial Ground in Cambridge, and he is commemorated in the naming of the small pavilion, now used as changing rooms and toilets, on Jesus Green in Cambridge. His works include A History of the Study of Mathematics at Cambridge (Cambridge University Press, 1889); A Short Account of the History of Mathematics (Dover 1960 republication of the fourth edition); Mathematical Recreations and Essays; A History of the First Trinity Boat Club; Cambridge Papers; and String Figures (Cambridge, W. Heffer & Sons). He is commemorated in the Rouse Ball Professorships of Mathematics and of English Law
33.
Jean le Rond d'Alembert
–
Jean-Baptiste le Rond d'Alembert was a French mathematician, mechanician, physicist, philosopher, and music theorist. Until 1759 he was also co-editor with Denis Diderot of the Encyclopédie. D'Alembert's formula for obtaining solutions to the wave equation is named after him; the wave equation is sometimes referred to as d'Alembert's equation. Born in Paris, d'Alembert was the son of the writer Claudine Guérin de Tencin and the chevalier Louis-Camus Destouches. Destouches was abroad at the time of d'Alembert's birth, and days after birth his mother left him on the steps of the Saint-Jean-le-Rond de Paris church. According to custom, he was named after the patron saint of the church. D'Alembert was placed in an orphanage for foundling children, but his father found him and placed him with the wife of a glazier, Madame Rousseau. Destouches secretly paid for the education of Jean le Rond, but did not want his paternity officially recognized. D'Alembert first attended a private school; the chevalier Destouches left d'Alembert an annuity of 1200 livres on his death in 1726. Under the influence of the Destouches family, at the age of twelve d'Alembert entered the Jansenist Collège des Quatre-Nations. Here he studied philosophy, law, and the arts, graduating as baccalauréat en arts in 1735. In his later life, d'Alembert scorned the Cartesian principles he had been taught by the Jansenists: physical promotion, innate ideas and the vortices. The Jansenists steered d'Alembert toward an ecclesiastical career, attempting to deter him from pursuits such as poetry. Theology was, however, rather unsubstantial fodder for d'Alembert; he entered law school for two years, and was nominated avocat in 1738. He was also interested in medicine and mathematics. Jean was first registered under the name Daremberg, but later changed it to d'Alembert. The name d'Alembert was proposed by Johann Heinrich Lambert for a suspected moon of Venus. 
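For reference, d'Alembert's formula gives the general solution of the one-dimensional wave equation in terms of the initial displacement and velocity; a standard statement (notation mine) is:

```latex
% One-dimensional wave equation with initial displacement f and velocity g
\[
  \frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2},
  \qquad u(x,0) = f(x), \quad u_t(x,0) = g(x).
\]
% d'Alembert's formula: a superposition of left- and right-travelling waves
\[
  u(x,t) = \frac{f(x+ct) + f(x-ct)}{2}
         + \frac{1}{2c} \int_{x-ct}^{x+ct} g(s)\, ds .
\]
```

The two terms f(x ± ct) are waves travelling left and right at speed c, which is why the formula is often introduced as the "travelling wave" solution.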
In July 1739 he made his first contribution to the field of mathematics, pointing out errors he had detected in L'analyse démontrée; at the time it was a standard work, which d'Alembert himself had used to study the foundations of mathematics. D'Alembert was also a Latin scholar of note and worked in the latter part of his life on a superb translation of Tacitus. In 1740, he submitted his second scientific work, from the field of fluid mechanics, Mémoire sur la réfraction des corps solides; in this work d'Alembert theoretically explained refraction. In 1741, after several failed attempts, d'Alembert was elected into the Académie des Sciences
34.
Karl Pearson
–
Karl Pearson FRS was an influential English mathematician and biostatistician. He has been credited with establishing the discipline of mathematical statistics. Pearson was also a protégé and biographer of Sir Francis Galton. Pearson was born in Islington, London to William and Fanny. He then travelled to Germany to study physics at the University of Heidelberg under G. H. Quincke and metaphysics under Kuno Fischer. He next visited the University of Berlin, where he attended the lectures of the famous physiologist Emil du Bois-Reymond on Darwinism. Pearson also studied Roman Law, taught by Bruns and Mommsen, medieval and 16th-century German literature, and socialism. He became a historian and Germanist and spent much of the 1880s in Berlin, Heidelberg, Vienna, and Saig bei Lenzkirch. He wrote on Passion plays, religion, Goethe, and Werther, as well as sex-related themes. Pearson was offered a Germanics post at King's College, Cambridge. Comparing Cambridge students to those he knew from Germany, Karl found German students unathletic; he wrote his mother, "I used to think athletics and sport was overestimated at Cambridge, but now I think it cannot be too highly valued. Have you ever attempted to conceive all there is in the world worth knowing—that not one subject in the universe is unworthy of study? Mankind seems on the verge of a new and glorious discovery. What Newton did to simplify the planetary motions must now be done to unite in one whole the various isolated theories of mathematical physics." Pearson then returned to London to study law, emulating his father. His next career move was to the Inner Temple, where he read law until 1881. After this, he returned to mathematics, deputising for the mathematics professor at King's College, London in 1881 and for the professor at University College, London in 1883. 
In 1884, he was appointed to the Goldsmid Chair of Applied Mathematics and Mechanics at University College, London. Pearson became the editor of The Common Sense of the Exact Sciences when William Kingdon Clifford died. His collaboration with the zoologist Walter Weldon, in biometry and evolutionary theory, was a fruitful one; Weldon introduced Pearson to Charles Darwin's cousin Francis Galton, who was interested in aspects of evolution such as heredity and eugenics. Pearson became Galton's protégé, his statistical heir as some have put it, at times to the verge of hero worship. In 1890 Pearson married Maria Sharpe. Maria died in 1928, and in 1929 Karl married Margaret Victoria Child. He and his family lived at 7 Well Road in Hampstead, now marked with a blue plaque. He predicted that Galton, rather than Charles Darwin, would be remembered as the most prodigious grandson of Erasmus Darwin. When Galton died, he left the residue of his estate to the University of London for a Chair in Eugenics. Pearson was the first holder of this chair, the Galton Chair of Eugenics, and he formed the Department of Applied Statistics, into which he incorporated the Biometric and Galton laboratories. He remained with the department until his retirement in 1933. Pearson was a zealous atheist and a freethinker. His book The Grammar of Science covered several themes that were later to become part of the theories of Einstein; Pearson asserted that the laws of nature are relative to the perceptive ability of the observer. 
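Among the tools of mathematical statistics that bear Pearson's name is the product-moment correlation coefficient, which measures linear association between two samples. A minimal sketch from its definition, r = Σ(xᵢ − x̄)(yᵢ − ȳ) / (‖x − x̄‖·‖y − ȳ‖) (my own illustration, not from the article):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # covariance-like numerator and the two sum-of-squares norms
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear data gives r = +1; a perfect inverse relation gives r = -1.
r_up = pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
r_down = pearson_r([1, 2, 3, 4, 5], [10, 8, 6, 4, 2])
```

Values between −1 and +1 quantify how well a straight line describes the joint variation, the central question of the biometric work described above.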
35.
Caen
–
Caen is a commune in northwestern France. It is the prefecture of the Calvados department. The city proper has 108,365 inhabitants, while its urban area has 420,000, making Caen the largest city in former Lower Normandy. It is also the second largest municipality in all of Normandy after Le Havre; the metropolitan area of Caen, in turn, is the second largest in Normandy after that of Rouen and the 21st largest in France. It is located 15 kilometres inland from the English Channel, two hours north-west of Paris, and connected to the south of England by the Caen–Portsmouth ferry route. Caen is located in the centre of its region, and it is a centre of political, economic and cultural life. Located a few miles from the D-Day landing beaches, the city has preserved their memory by erecting a memorial and a museum dedicated to peace, the Mémorial de Caen. Current arms: Gules, an open castle Or, windowed and masoned sable. Under the Ancien Régime: Per fess, gules and azure, 3 fleurs de lys Or. During the First French Empire: Gules, a single-towered castle Or, a chief of Good Imperial Cities. Today, Caen has no motto, but it used to have one; as a result of its age, its spelling is archaic and has not been updated: Un Dieu, un Roy, une Foy, une Loy. This motto is reflected in a notable old Chant royal. Caen's home port code is CN. In 1346, King Edward III of England led his army against the city, hoping to loot it. During the attack, English officials searched its archives and found a copy of the 1339 Franco-Norman plot to invade England, devised by Philip VI of France; this was subsequently used as propaganda to justify the supplying and financing of the conflict and its continuation. Only the castle of Caen held out, despite attempts to besiege it; a few days later, the English left, marching to the east and on to their victory at the Battle of Crécy. Caen was later captured by Henry V in 1417 and treated harshly for being the first town to put up any resistance to his invasion. 
During the Battle of Normandy in the Second World War, Caen was liberated from the Nazis in July 1944. British and Canadian troops had intended to capture the town on D-Day, but they were held up north of the city until 9 July, when the Allies seized the western quarters, a month later than Field Marshal Montgomery's original plan. During the battle, many of the inhabitants sought refuge in the Abbaye aux Hommes. Both the cathedral and the university were destroyed in the bombing. Post-war work included the reconstruction of complete districts of the city; it took 14 years and led to the current urbanization of Caen
36.
Joseph Louis Lagrange
–
Joseph-Louis Lagrange, born Giuseppe Lodovico Lagrangia or Giuseppe Ludovico De la Grange Tournier, was an Italian and French Enlightenment-era mathematician and astronomer. He made significant contributions to the fields of analysis, number theory, and both classical and celestial mechanics. In 1787, at age 51, he moved from Berlin to Paris and became a member of the French Academy of Sciences. He remained in France until the end of his life. Lagrange was one of the creators of the calculus of variations, deriving the Euler–Lagrange equations for extrema of functionals. He also extended the method to take into account possible constraints, arriving at the method of Lagrange multipliers, and he proved that every natural number is a sum of four squares. His treatise Théorie des fonctions analytiques laid some of the foundations of group theory. In calculus, Lagrange developed a novel approach to interpolation and Taylor series. Born as Giuseppe Lodovico Lagrangia, Lagrange was of Italian and French descent; his mother was from the countryside of Turin. He was raised as a Roman Catholic. A career as a lawyer was planned out for Lagrange by his father, and certainly Lagrange seems to have accepted this willingly. He studied at the University of Turin, and his favourite subject was classical Latin. At first he had no great enthusiasm for mathematics, finding Greek geometry rather dull. It was not until he was seventeen that he showed any taste for mathematics – his interest in the subject being first excited by a paper by Edmond Halley which he came across by accident. Alone and unaided he threw himself into mathematical studies; at the end of a year's incessant toil he was already an accomplished mathematician. In that capacity, Lagrange was the first to teach calculus in an engineering school. In this Academy one of his students was François Daviet de Foncenex. Lagrange is one of the founders of the calculus of variations. 
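Lagrange's four-square theorem mentioned above states that every natural number can be written as a² + b² + c² + d² for integers a, b, c, d. A small computational check (my own illustration, not from the article) finds such a decomposition by brute force:

```python
from itertools import product

def four_squares(n):
    """Return (a, b, c, d) with a^2 + b^2 + c^2 + d^2 == n, by brute force."""
    limit = int(n ** 0.5) + 1           # no summand can exceed sqrt(n)
    for a, b, c, d in product(range(limit), repeat=4):
        if a * a + b * b + c * c + d * d == n:
            return (a, b, c, d)
    return None  # never reached, by the theorem

# 7 is a classic case: it genuinely needs all four squares (7 = 1 + 1 + 1 + 4)
decomposition = four_squares(7)
```

Numbers of the form 4^k(8m+7), such as 7 itself, cannot be written as a sum of three squares, which is why four is the minimal count the theorem can guarantee.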
Starting in 1754, he worked on the problem of the tautochrone; Lagrange wrote several letters to Leonhard Euler between 1754 and 1756 describing his results. He outlined his δ-algorithm, leading to the Euler–Lagrange equations of variational calculus. Lagrange also applied his ideas to problems of classical mechanics, generalizing the results of Euler and Maupertuis. Euler was very impressed with Lagrange's results; Lagrange published his method in two memoirs of the Turin Society in 1762 and 1773. Many of these are elaborate papers; one article concludes with a masterly discussion of echoes, beats, and compound sounds. Other articles in the volume are on recurring series and probabilities. The next work he produced, in 1764, was on the libration of the Moon, with an explanation as to why the same face was always turned to the Earth
37.
Turin
–
Turin is a city and an important business and cultural centre in northern Italy, the capital of the Piedmont region, and was the first capital city of Italy. The city is located mainly on the western bank of the Po River, in front of the Susa Valley, and is surrounded by the western Alpine arch. The population of the city proper is 892,649, while the population of the urban area is estimated by Eurostat to be 1.7 million inhabitants. The Turin metropolitan area is estimated by the OECD to have a population of 2.2 million. In 1997 a part of the historical centre of Turin was inscribed in the World Heritage List under the name Residences of the Royal House of Savoy. Turin is well known for its Renaissance, Baroque, Rococo and Neo-classical architecture; many of Turin's public squares, castles, gardens and elegant palazzi, such as the Palazzo Madama, were built between the 16th and 18th centuries, after the capital of the Duchy of Savoy was moved to Turin from Chambéry as part of the urban expansion. The city used to be a major European political centre: Turin was Italy's first capital city in 1861 and home to the House of Savoy; from 1563 it was the capital of the Duchy of Savoy, then of the Kingdom of Sardinia ruled by the Royal House of Savoy, and finally the first capital of unified Italy. Turin is sometimes called the cradle of Italian liberty for having been the birthplace and home of notable politicians and people who contributed to the Risorgimento, such as Cavour. The city currently hosts some of Italy's best universities, colleges, academies, lycea and gymnasia, such as the University of Turin, founded in the 15th century. In addition, the city is home to museums such as the Museo Egizio and the Mole Antonelliana. Turin's attractions make it one of the world's top 250 tourist destinations, and Turin is ranked third in Italy, after Milan and Rome, for economic strength.
With a GDP of $58 billion, Turin is the world's 78th richest city by purchasing power, and as of 2010 the city has been ranked by GaWC as a Gamma World city. Turin is also home to much of the Italian automotive industry. The Taurini were an ancient Celto-Ligurian Alpine people who occupied the upper valley of the Po River, in the centre of modern Piedmont. In 218 BC they were attacked by Hannibal, who was allied with their long-standing enemies, and the Taurini chief town was captured by Hannibal's forces after a three-day siege. As a people they are seldom mentioned again in history. It is believed that a Roman colony was established in 27 BC under the name of Castra Taurinorum. Both Livy and Strabo mention the Taurini's country as including one of the passes of the Alps, which points to a wider use of the name in earlier times. In the 1st century BC the Romans created a military camp there, and the typical Roman street grid can still be seen in the modern city, especially in the neighbourhood known as the Quadrilatero Romano. Via Garibaldi traces the path of the Roman city's decumanus, which began at the Porta Decumani. The Porta Palatina, on the north side of the current city centre, is still preserved in a park near the Cathedral.
38.
Syndic
–
The meaning which underlies both applications is that of representative or delegate. As indicated above, in Italy and parts of Switzerland the term sindaco or sindaca is equivalent to the English term mayor. In areas where Catalan or Occitan are spoken, the term has been used since medieval times, and at present it is used in a variety of cases: the president of Andorra's parliament is known as the Síndic General or General Councillor. Until the 1993 Constitution, the Síndic was the head of government of Andorra. Similarly, the Sindic d'Aran / Síndic d'Aran is the head of the administration of this region in Catalonia. In Alguer, Sardinia, the síndic is the equivalent of a mayor. In Europe in the Middle Ages and Renaissance, nearly all companies, guilds, and the University of Paris had representative bodies whose members were termed syndici. The members or leaders of these organisations, however, are not now called síndics. One special use of the term applies to the Franciscan order of priests and brothers: the Order of Friars Minor, as opposed to the Order of Friars Minor Conventual, is forbidden by its constitutions from owning property, as part of its commitment to communal poverty, so a lay syndic holds and administers property on the friars' behalf. Within Syndicalist and Anarcho-syndicalist organisations, a syndic is a member of a union, also called a Syndicate. This article incorporates text from a publication now in the public domain: Chisholm, Hugh.
39.
Benedictine
–
Each community within the order maintains its own autonomy, while the order itself represents their mutual interests. Internationally, the order is governed by the Benedictine Confederation, a body established in 1883 by Pope Leo XIII's brief Summum semper; individuals whose communities are members of the order generally add the initials OSB after their names. The monastery at Subiaco in Italy, established by Saint Benedict of Nursia circa 529, was the first of the monasteries he founded; he later founded the Abbey of Monte Cassino. There is no evidence, however, that he intended to found an order, and the Rule of Saint Benedict presupposes the autonomy of each community. It was from the monastery of St. Andrew in Rome that Augustine, the prior, and his monks were sent on the mission to England. At various stopping places during the journey, the monks left behind them traditions concerning their rule and form of life, and probably also some copies of the Rule. Lérins Abbey, for instance, founded by Honoratus in 375, probably received its first knowledge of the Benedictine Rule from the visit of St. Augustine. In Gaul and Switzerland, the Benedictine Rule supplemented the much stricter Irish or Celtic Rule introduced by Columbanus and others, and in many monasteries it eventually displaced the earlier codes. Largely through the work of Benedict of Aniane, it became the rule of choice for monasteries throughout the Carolingian empire. Monastic scriptoria flourished from the ninth through the twelfth centuries; Sacred Scripture was always at the heart of every monastic scriptorium, and as a general rule those of the monks who possessed skill as writers made this their chief, if not their sole, active work. In the Middle Ages monasteries were often founded by the nobility. Cluny Abbey was founded by William I, Duke of Aquitaine, in 910; the abbey was noted for its strict adherence to the Rule of St. Benedict.
The abbot of Cluny was the superior of all the daughter houses. One of the earliest reforms of Benedictine practice was that initiated in 980 by Romuald, who founded the Camaldolese community. The English Benedictine Congregation is the oldest of the nineteen Benedictine congregations: Augustine of Canterbury and his monks established the first English Benedictine monastery at Canterbury soon after their arrival in 597. Many of the sees of England were founded and governed by the Benedictines. Monasteries served as hospitals and places of refuge for the weak, and the monks studied the healing properties of plants and minerals to alleviate the sufferings of the sick. Germany was evangelised by English Benedictines: Willibrord and Boniface preached there in the seventh and eighth centuries and founded several abbeys. In the English Reformation, all monasteries were dissolved and their lands confiscated by the Crown; during the 19th century the Benedictines were able to return to England, including to Selby Abbey in Yorkshire, one of the few great monastic churches to survive the Dissolution. St. Mildred's Priory, on the Isle of Thanet, Kent, was built in 1027 on the site of an abbey founded in 670 by the daughter of the first Christian King of Kent; currently the priory is home to a community of Benedictine nuns.
40.
Priory
–
A priory is a monastery of men or women under religious vows that is headed by a prior or prioress. Priories may be houses of mendicant friars or religious sisters, or monasteries of monks or nuns; houses of canons regular and canonesses regular also use this term, the alternative being canonry. In pre-Reformation England, if an abbey church was raised to cathedral status, the bishop, in effect, took the place of the abbot, and the monastery itself was headed by a prior. Priories first came into existence as subsidiaries to the Abbey of Cluny: many new houses were formed that were all subservient to the abbey of Cluny and were called priories. As such, the priory came to represent the Benedictine ideals espoused by the Cluniac reforms as smaller, lesser houses of Benedictines of Cluny. There were likewise many conventual priories in Germany and Italy during the Middle Ages. The Benedictines and their offshoots, the Premonstratensians, and the military orders distinguish between conventual and simple or obedientiary priories. Conventual priories are those autonomous houses which have no abbots, typically because the number of twelve monks required for an abbey has not yet been reached. Simple or obedientiary priories are dependencies of abbeys, and their superior, who is subject to the abbot in everything, is called a prior. These monasteries are satellites of the mother abbey; the Cluniac order is notable for being organised entirely on this obedientiary principle, with a single abbot at the Abbey of Cluny and all other houses dependent priories. Priory may also refer to schools operated or sponsored by the Benedictines, and priory is also used to refer to the geographic headquarters of several commanderies of knights. This article incorporates text from a publication now in the public domain: Herbermann, Charles.
41.
Marquis de Condorcet
–
Unlike many of his contemporaries, he advocated a liberal economy, free and equal public instruction, constitutionalism, and equal rights for women and people of all races. His ideas and writings were said to embody the ideals of the Age of Enlightenment and rationalism, and he died a mysterious death in prison after a period of flight from French Revolutionary authorities. Condorcet was born in Ribemont and descended from the ancient family of Caritat. Fatherless at a young age, he was raised by his devoutly religious mother. He was educated at the Jesuit College in Reims and at the Collège de Navarre in Paris, where he showed his intellectual ability. When he was sixteen, his analytical abilities gained the praise of Jean le Rond d'Alembert and Alexis Clairaut. Soon, from 1765 to 1774, he focused on science. In 1765, he published his first work on mathematics, entitled Essai sur le calcul intégral, and he would go on to publish more papers; on 25 February 1769, he was elected to the Académie royale des Sciences. In 1772, he published another paper on integral calculus. Soon after, he met Jacques Turgot, a French economist, and the two became friends; Turgot later became an administrator under King Louis XV. In 1772, Condorcet worked with Leonhard Euler and Benjamin Franklin. In 1774, Condorcet was appointed inspector general of the Paris mint by Turgot. From this point on, Condorcet shifted his focus from the purely mathematical to philosophy, and in the following years he took up the defense of human rights in general, and of women's and Blacks' rights in particular. He supported the ideals embodied by the newly formed United States. In 1776, Turgot was dismissed as Controller General; consequently, Condorcet submitted his resignation as Inspector General of the Monnaie, but the request was refused. Condorcet later wrote Vie de M. Turgot, a biography which spoke fondly of Turgot and advocated Turgot's economic theories.
In 1785, Condorcet wrote his Essay on the Application of Analysis to the Probability of Majority Decisions. The paper also outlines a generic Condorcet method, designed to simulate pair-wise elections between all candidates in an election, and he disagreed strongly with the method of aggregating preferences put forth by Jean-Charles de Borda. Condorcet was one of the first to apply mathematics in the social sciences. In 1781, Condorcet wrote a pamphlet, Reflections on Negro Slavery. In 1786, Condorcet worked on ideas for the differential and integral calculus, giving a new treatment of infinitesimals – a work which was never printed. In 1789, he published Vie de Voltaire, which agreed with Voltaire in his opposition to the Church. In 1791, Condorcet, along with Sophie de Grouchy, Thomas Paine, Étienne Dumont, Jacques-Pierre Brissot, and Achille Duchastellet, published a brief journal titled Le Républicain, its main goal being the promotion of republicanism and the rejection of establishing a constitutional monarchy. The theme was that any sort of monarchy is a threat to freedom no matter who is leading, which emphasised that liberty is freedom from domination.
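The pair-wise idea behind the Condorcet method can be sketched in a few lines. The ballot format and candidate names below are invented for illustration; this is a minimal sketch of the head-to-head rule, not Condorcet's own formulation.

```python
def condorcet_winner(ballots):
    """Return the candidate who beats every rival head-to-head, or None.

    Each ballot ranks all candidates from most to least preferred.
    """
    candidates = set(ballots[0])
    for c in candidates:
        if all(
            # count ballots ranking c above d; require a strict majority
            sum(b.index(c) < b.index(d) for b in ballots) > len(ballots) / 2
            for d in candidates - {c}
        ):
            return c
    return None  # no winner: a voting cycle, the Condorcet paradox

ballots = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2
print(condorcet_winner(ballots))  # A beats both B and C, 3 votes to 2
```

A cyclic profile such as `[["A","B","C"], ["B","C","A"], ["C","A","B"]]` returns `None`: each candidate loses one pairing, which is exactly the paradox Condorcet identified.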
42.
French Academy of Sciences
–
The French Academy of Sciences is a learned society, founded in 1666 by Louis XIV at the suggestion of Jean-Baptiste Colbert, to encourage and protect the spirit of French scientific research. It was at the forefront of scientific developments in Europe in the 17th and 18th centuries. Currently headed by Sébastien Candel, it is one of the five Academies of the Institut de France. The Academy of Sciences traces its origin to Colbert's plan to create a general academy; he chose a group of scholars who met on 22 December 1666 in the King's library. The first 30 years of the Academy's existence were relatively informal. In contrast to its British counterpart, the Academy was founded as an organ of government, and it was expected to remain apolitical and to avoid discussion of religious and political matters. On 20 January 1699, Louis XIV gave the Company its first rules: the Academy received the name of Royal Academy of Sciences and was installed in the Louvre in Paris. Following this reform, the Academy began publishing a volume each year with information on all the work done by its members and obituaries for members who had died. This reform also codified the method by which members of the Academy could receive pensions for their work. On 8 August 1793, the National Convention abolished all the academies. Almost all the old members of the previously abolished Académie were formally re-elected; among the exceptions was Dominique, comte de Cassini, who refused to take his seat. In 1816, the again renamed Royal Academy of Sciences became autonomous while forming part of the Institute of France; in the Second Republic, the name returned to Académie des sciences. During this period, the Academy was funded by and accountable to the Ministry of Public Instruction. The Academy came to control French patent laws in the course of the eighteenth century, acting as the liaison of artisans' knowledge to the public domain.
As a result, academicians dominated technological activities in France. The Academy proceedings were published under the name Comptes rendus de l'Académie des sciences; the Comptes rendus is now a series with seven titles, and the publications can be found on the site of the French National Library. In 1818 the French Academy of Sciences launched a competition to explain the properties of light. The civil engineer Augustin-Jean Fresnel entered this competition by submitting a new wave theory of light. Siméon Denis Poisson, one of the members of the judging committee and a supporter of the particle theory of light, looked for a way to disprove it, and showed that Fresnel's theory predicted a bright spot at the centre of the shadow of a circular obstacle. The Poisson spot is not easily observed in everyday situations, so it was natural for Poisson to interpret it as an absurd result. However, the head of the committee, Dominique-François-Jean Arago, decided to perform the experiment: he moulded a 2-mm metallic disk to a glass plate with wax.
44.
Saint-Sulpice, Paris
–
Saint-Sulpice is a Roman Catholic church in Paris, France, on the east side of the Place Saint-Sulpice, by the rue Bonaparte, in the Luxembourg Quarter of the 6th arrondissement. At 113 metres long, 58 metres wide and 34 metres tall, it is only slightly smaller than Notre-Dame. It is dedicated to Sulpitius the Pious. Construction of the present building, the second church on the site, began in 1646. During the 18th century a gnomon, the Gnomon of Saint-Sulpice, was constructed in the church. The present church was erected over a Romanesque church originally constructed during the 13th century, to which additions were made over the centuries, up to 1631. The new building was founded in 1646 by the parish priest Jean-Jacques Olier, who had established the Society of Saint-Sulpice, a clerical congregation, and a seminary attached to the church. Anne of Austria laid the first stone. Daniel Gittard completed the sanctuary, ambulatory, apsidal chapels, transept, and north portal, after which construction was halted for lack of funds. Gilles-Marie Oppenord and Giovanni Servandoni continued the work, adhering closely to Gittard's designs; the decoration was executed by the brothers Sébastien-Antoine Slodtz and Paul-Ambroise Slodtz. Oppenord also built a bell-tower on top of the transept crossing, and this miscalculation may account for the fact that he was then relieved of his duties as an architect and restricted to designing decoration. In 1732 a competition was held for the design of the west facade, won by Servandoni. The 1739 Turgot map of Paris shows the church without Oppenord's crossing bell-tower, but with Servandoni's pedimented facade mostly complete, still lacking however its two towers. Unfinished at the time of Servandoni's death in 1766, the work was continued by others, primarily the obscure Oudot de Maclaurin; Chalgrin also designed the decoration of the chapels under the towers. The principal facade now exists in altered form.
Large arched windows fill the vast interior with natural light; the result is a simple two-storey west front with three tiers of elegant columns. The overall harmony of the building is, some say, marred only by the two mismatched towers. One can still barely make out the printed words ''Le Peuple Français Reconnoit L'Etre Suprême Et L'Immortalité de L'Âme'' ("The French People Recognise the Supreme Being and the Immortality of the Soul"). Further questions of interest are the fate of the frieze that this must have replaced, and who was responsible for placing this manifesto. Inside the church, to either side of the entrance, are the two halves of a shell given to King Francis I by the Venetian Republic. They function as holy water fonts and rest on rock-like bases sculpted by Jean-Baptiste Pigalle. Pigalle also designed the large white marble statue of Mary in the Lady Chapel at the far end of the church; the stucco decoration surrounding it is by Louis-Philippe Mouchy. Pigalle's work replaced a solid-silver statue by Edmé Bouchardon, which vanished at the time of the Revolution.
45.
Differential equations
–
A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities and the derivatives represent their rates of change; because such relations are extremely common, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology. In pure mathematics, differential equations are studied from several different perspectives. Only the simplest differential equations are solvable by explicit formulas; however, if a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. Differential equations first came into existence with the invention of calculus by Newton and Leibniz. Jacob Bernoulli proposed the Bernoulli differential equation in 1695; this is a differential equation of the form y′ + P(x)y = Q(x)yⁿ, for which Leibniz obtained solutions the following year by simplifying it. Historically, the problem of a vibrating string, such as that of a musical instrument, was studied by Jean le Rond d'Alembert, Leonhard Euler, and Daniel Bernoulli. In 1746, d'Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem: the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler; both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. Fourier published his work on heat flow in Théorie analytique de la chaleur; contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat, and this partial differential equation is now taught to every student of mathematical physics. For example, in classical mechanics, the motion of a body is described by its position and velocity as the time value varies.
Newton's laws allow one to express these variables dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation may be solved explicitly. An example of modelling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance: the ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modelled as proportional to the ball's velocity; this means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity. Finding the velocity as a function of time involves solving a differential equation. Differential equations can be divided into several types.
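The falling-ball model above is dv/dt = g − kv, and can be approximated numerically. Forward Euler is just one simple way to integrate it, and the drag coefficient k below is an arbitrary illustrative value.

```python
# dv/dt = g - k*v : gravity minus air resistance proportional to velocity.
g, k = 9.81, 1.5      # m/s^2, and an assumed drag coefficient per unit mass (1/s)
dt, v = 0.001, 0.0    # time step (s) and initial velocity (m/s)

# forward-Euler integration over 20 simulated seconds
for _ in range(20000):
    v += (g - k * v) * dt

print(v)  # approaches the terminal velocity g/k ≈ 6.54 m/s
```

At terminal velocity the right-hand side vanishes (g = kv), which is why the iteration settles at g/k regardless of the starting velocity.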
46.
Philosophiae Naturalis Principia Mathematica
–
Philosophiæ Naturalis Principia Mathematica, often referred to as simply the Principia, is a work in three books by Isaac Newton, in Latin, first published 5 July 1687. After annotating and correcting his personal copy of the first edition, Newton published two further editions, in 1713 and 1726. The Principia states Newton's laws of motion, forming the foundation of classical mechanics, and Newton's law of universal gravitation. The Principia is regarded as one of the most important works in the history of science. The French mathematical physicist Alexis Clairaut assessed it in 1747: "The famous book of Mathematical Principles of Natural Philosophy marked the epoch of a great revolution in physics. The method followed by its illustrious author Sir Newton spread the light of mathematics on a science which up to then had remained in the darkness of conjectures and hypotheses." In formulating his theories, Newton developed and used mathematical methods now included in the field of calculus. In a revised conclusion to the Principia, Newton used his expression that became famous, Hypotheses non fingo ("I feign no hypotheses"). The work attempts to cover hypothetical or possible motions both of celestial bodies and of terrestrial projectiles, and it explores difficult problems of motions perturbed by multiple attractive forces; its third and final book deals with the interpretation of observations about the movements of planets and their satellites. The opening sections of the Principia contain, in revised and extended form, nearly all of the content of Newton's 1684 tract De motu corporum in gyrum. The Principia begins with Definitions and Axioms or Laws of Motion, and continues in three books. Book 1, subtitled De motu corporum, concerns motion in the absence of any resisting medium. It opens with an exposition of the method of first and last ratios, a geometrical form of the infinitesimal calculus. Book 1 contains some proofs with little connection to real-world dynamics, but there are also sections with far-reaching application to the solar system and universe. Propositions 57–69 deal with the motion of bodies drawn to one another by centripetal forces.
Propositions 70–84 deal with the attractive forces of spherical bodies. The section contains Newton's proof that a massive spherically symmetrical body attracts other bodies outside itself as if all its mass were concentrated at its centre. This fundamental result, called the shell theorem, enables the inverse square law of gravitation to be applied to the solar system to a very close degree of approximation. Part of the contents originally planned for the first book was divided out into a second book. Book 2 also discusses hydrostatics and the properties of compressible fluids; Newton compares the resistance offered by a medium against motions of globes with different properties. In Section 8, he derives rules to determine the speed of waves in fluids and relates them to the density and condensation of the medium. He assumes that these rules apply equally to light and sound, and estimates that the speed of sound is around 1088 feet per second and can increase depending on the amount of water in the air.
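Newton's sound-speed calculation amounts to the isothermal formula c = √(p/ρ). The sketch below uses modern sea-level values for air, which are illustrative rather than Newton's own figures; the γ = 1.4 factor is Laplace's later adiabatic correction, not part of the Principia.

```python
from math import sqrt

p = 101325.0   # air pressure at sea level, Pa (modern illustrative value)
rho = 1.225    # air density at sea level, kg/m^3 (modern illustrative value)

c_newton = sqrt(p / rho)          # isothermal estimate, ~288 m/s
c_laplace = sqrt(1.4 * p / rho)   # with the adiabatic factor gamma, ~340 m/s

print(round(c_newton), round(c_laplace))
```

The isothermal figure falls well short of the measured speed of sound; resolving that discrepancy had to wait for Laplace's adiabatic treatment.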
47.
Kepler's laws
–
In astronomy, Kepler's laws of planetary motion are three scientific laws describing the motion of planets around the Sun: the orbit of a planet is an ellipse with the Sun at one of the two foci; a line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time; and the square of the orbital period of a planet is proportional to the cube of the semi-major axis of its orbit. Most planetary orbits are nearly circular, and careful observation and calculation are required in order to establish that they are not perfectly circular. From calculations of the orbit of Mars, whose published values are somewhat suspect, Johannes Kepler inferred that other bodies in the Solar System, including those farther away from the Sun, also have elliptical orbits. Kepler's work improved the heliocentric theory of Nicolaus Copernicus, explaining how the planets' speeds varied. Isaac Newton showed in 1687 that relationships like Kepler's would apply in the Solar System to a good approximation, as a consequence of his own laws of motion and law of universal gravitation. Kepler's laws are part of the foundation of modern astronomy and physics. Kepler's laws improve on the model of Copernicus, and his corrections are not at all obvious: the planetary orbit is not a circle; the Sun is not at the center but at a focal point of the elliptical orbit; and neither the linear speed nor the angular speed of the planet in the orbit is constant, but the area speed is constant. The calculation is correct when perihelion, the date the Earth is closest to the Sun, falls on a solstice; the current perihelion, near January 4, is fairly close to the solstice of December 21 or 22. It took nearly two centuries for the current formulation of Kepler's work to take on its settled form. Voltaire's Eléments de la philosophie de Newton of 1738 was the first publication to use the terminology of laws. The Biographical Encyclopedia of Astronomers in its article on Kepler states that the terminology of laws for these discoveries was current at least from the time of Joseph de Lalande.
It was the exposition of Robert Small, in An account of the discoveries of Kepler, that made up the set of three laws, by adding in the third. Small also claimed, against history, that these were empirical laws. Further, the current usage of "Kepler's Second Law" is something of a misnomer: Kepler had two versions, related in a qualitative sense, the distance law and the area law. The area law is what became the Second Law in the set of three, but Kepler did not himself privilege it in that way. Johannes Kepler published his first two laws about planetary motion in 1609, having found them by analysing the astronomical observations of Tycho Brahe. Kepler's third law was published in 1619, and his first law reflected this discovery.
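The third law, T² ∝ a³, is easy to verify numerically. The periods (in years) and semi-major axes (in astronomical units) below are rounded textbook values used purely for illustration; in these units the constant of proportionality is 1.

```python
# Kepler's third law: T^2 / a^3 is the same for every planet.
planets = {
    "Mercury": (0.241, 0.387),
    "Venus":   (0.615, 0.723),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.203),
}

for name, (T, a) in planets.items():
    print(f"{name:8s} T^2/a^3 = {T**2 / a**3:.3f}")  # all values close to 1
```

Choosing Earth-based units (years and AU) is what normalises the ratio to 1; in SI units the same ratio equals 4π²/(GM☉) for every planet.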
48.
Newton's laws of motion
–
Newton's laws of motion are three physical laws that, together, laid the foundation for classical mechanics. They describe the relationship between a body and the forces acting upon it, and its motion in response to those forces. More precisely, the first law defines the force qualitatively, and the second law offers a quantitative measure of the force. These three laws have been expressed in several different ways over nearly three centuries, and can be summarised as follows. The three laws of motion were first compiled by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica, and Newton used them to explain and investigate the motion of many physical objects and systems. For example, in the third volume of the text, Newton showed that these laws of motion, combined with his law of universal gravitation, explained Kepler's laws of planetary motion. Newton's laws are applied to objects which are idealised as single point masses, in the sense that the size and shape of the object's body are neglected. This can be done when the object is small compared to the distances involved in its analysis, or when the deformation and rotation of the body are of no importance. In this way, even a planet can be idealised as a particle for analysis of its orbital motion around a star. In their original form, Newton's laws of motion are not adequate to characterise the motion of rigid bodies and deformable bodies. Leonhard Euler in 1750 introduced a generalisation of Newton's laws of motion for rigid bodies called Euler's laws of motion; if a body is represented as an assemblage of discrete particles, each governed by Newton's laws of motion, then Euler's laws can be derived from Newton's laws. Euler's laws can, however, be taken as axioms describing the laws of motion for extended bodies. Newton's laws hold only with respect to a certain set of frames of reference called Newtonian or inertial reference frames. Some authors interpret the first law as defining what an inertial reference frame is; other authors do treat the first law as a corollary of the second. The explicit concept of an inertial frame of reference was not developed until long after Newton's death.
In the given interpretation, mass, acceleration, momentum, and force are assumed to be externally defined quantities. This is the most common, but not the only, interpretation: one can alternatively consider the laws to be a definition of these quantities. Newtonian mechanics has been superseded by special relativity, but it is still useful as an approximation when the speeds involved are much slower than the speed of light. The first law states that if the net force on an object is zero, its velocity is constant. The first law can be stated mathematically, when the mass is a non-zero constant, as ∑F = 0 ⇔ dv/dt = 0. Consequently, an object that is at rest will stay at rest unless a force acts upon it, and an object that is in motion will not change its velocity unless a force acts upon it. This is known as uniform motion: an object continues to do whatever it happens to be doing unless a force is exerted upon it. If it is at rest, it continues in a state of rest; if it is moving, it continues to move without turning or changing its speed.
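The first two laws can be illustrated with a small forward-Euler sketch of dv/dt = F/m. The mass, force, and step size below are arbitrary illustrative values.

```python
m, dt = 2.0, 0.01   # mass (kg) and time step (s), both assumed
v = 3.0             # initial velocity, m/s

# First law: with zero net force, the velocity never changes.
for _ in range(1000):
    v += (0.0 / m) * dt
print(v)  # still exactly 3.0

# Second law: a constant force F gives acceleration a = F/m = 5 m/s^2.
F = 10.0
for _ in range(1000):   # integrate for 10 seconds
    v += (F / m) * dt
print(v)  # ≈ 53.0, i.e. 3.0 + (F/m) * 10
```

The zero-force loop is the first law in miniature, and the constant-force loop recovers the familiar v = v₀ + at.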
49.
Newton's law of universal gravitation
–
This is a general physical law derived from empirical observations by what Isaac Newton called inductive reasoning. It is a part of classical mechanics and was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica. In modern language, the law states: every point mass attracts every single other point mass by a force pointing along the line intersecting both points. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between them. The first test of Newton's theory of gravitation between masses in the laboratory was the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798; it took place 111 years after the publication of Newton's Principia. Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of the electrical force arising between two charged bodies. Both are inverse-square laws, in which force is inversely proportional to the square of the distance between the bodies. Coulomb's law has the product of two charges in place of the product of the masses, and the electrostatic constant in place of the gravitational constant. Newton's law has since been superseded by Albert Einstein's theory of general relativity, but it continues to be used as an excellent approximation in most applications. In the later priority dispute with Robert Hooke, Hooke agreed that the demonstration of the curves generated thereby was wholly Newton's. In this way the question arose as to what, if anything, Newton owed to Hooke; this is a subject extensively discussed since that time, and one on which some points, outlined below, continue to excite controversy. Hooke had announced in 1674 that he intended to explain a system of the world founded on attractive powers, and that these powers are so much the more powerful in operating, the nearer the body wrought upon is to their own centres. Thus Hooke clearly postulated mutual attractions between the Sun and planets, in a way that increased with nearness to the attracting body; Hooke's statements up to 1674 made no mention, however, that an inverse square law applies or might apply to these attractions.
Hooke's gravitation was also not yet universal, though it approached universality more closely than previous hypotheses, and he did not provide accompanying evidence or mathematical demonstration. It was later, in writing to Newton on 6 January 1679/80, that Hooke communicated his supposition that the attraction varies inversely as the square of the distance. Newton, faced in May 1686 with Hooke's claim on the inverse square law, denied that Hooke was to be credited as author of the idea. Among the reasons, Newton recalled that the idea had been discussed with Sir Christopher Wren previous to Hooke's 1679 letter; Newton also pointed out and acknowledged prior work of others, including Bullialdus and Borelli. D. T. Whiteside has described the contribution to Newton's thinking that came from Borelli's book, a copy of which was in Newton's library at his death. Newton further defended his work by saying that had he first heard of the inverse square proportion from Hooke, he would still have some rights to it in view of his own demonstrations of its accuracy. Hooke, without evidence in favor of the supposition, could only guess that the inverse square law was approximately valid at great distances from the center. Thus Newton gave a justification, otherwise lacking, for applying the inverse square law to large spherical planetary masses as if they were tiny particles. After his 1679–1680 correspondence with Hooke, Newton adopted the language of inward or centripetal force. Newton's own analyses also involved the combination of tangential and radial displacements, which he was making in the 1660s; the lesson offered by Hooke to Newton here, although significant, was one of perspective and did not change the analysis. This background shows there was a basis for Newton to deny deriving the inverse square law from Hooke. On the other hand, Newton did accept and acknowledge, in all editions of the Principia, that Hooke had separately appreciated the inverse square law in the solar system.
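The inverse-square relation described above can be sketched numerically; in this hedged example the gravitational constant and Earth's mass and radius are standard reference figures, not numbers taken from the article:

```python
# Sketch of the inverse-square law F = G*m1*m2 / r^2, using commonly
# quoted reference values for the constants (assumed, not from the article).

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Magnitude of the attraction between two point masses a distance r apart."""
    return G * m1 * m2 / r**2

# Acceleration of a 1 kg test mass at Earth's surface: F/m = G*M_earth / r^2.
M_earth = 5.972e24   # kg
r_earth = 6.371e6    # m
g = gravitational_force(M_earth, 1.0, r_earth)
print(round(g, 2))   # ≈ 9.82 m/s², the familiar surface gravity
```

Treating the Earth as a point mass at its centre is exactly the simplification Newton justified for large spherical bodies.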
50.
Miracle
–
A miracle is an event not explicable by natural or scientific laws. Such an event may be attributed to a supernatural being, magic, a miracle worker, a saint or a religious leader. Informally, the word is also used of beneficial events that are statistically unlikely, such as survival of an illness diagnosed as terminal, escaping a life-threatening situation or beating the odds; some coincidences may likewise be seen as miracles. A true miracle would, by definition, be a non-natural phenomenon, leading many rational and scientific thinkers to dismiss miracles as physically impossible or impossible to confirm by their nature. The former position is expressed for instance by Thomas Jefferson and the latter by David Hume. Theologians typically say that, with divine providence, God regularly works through nature yet, as a creator, is free to work without, above, or against it as well. The possibility and probability of miracles are then equal to the possibility and probability of the existence of God. A miracle, in this sense, is a phenomenon not explained by known laws of nature. Criteria for classifying an event as a miracle vary; often a religious text, such as the Bible or Quran, states that a miracle occurred, and believers may accept this as a fact. The British mathematician J. E. Littlewood suggested that individuals should statistically expect one-in-a-million events to happen to them at the rate of about one per month. By Littlewood's definition, seemingly miraculous events are actually commonplace. The Aristotelian view of God does not include direct intervention in the order of the natural world. Jewish neo-Aristotelian philosophers, who are still influential today, include Maimonides and Samuel ben Judah ibn Tibbon. Directly or indirectly, their views are still prevalent in much of the religious Jewish community. In his Tractatus Theologico-Politicus, Spinoza claims that miracles are merely lawlike events whose causes we are ignorant of.
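Littlewood's one-per-month figure follows from a short back-of-the-envelope calculation; the one-event-per-second and eight-alert-hours assumptions below are the ones conventionally used when stating the law, not figures from this article:

```python
# Littlewood's law sketch: assume a person registers roughly one "event"
# per second and is alert about eight hours per day. How long until a
# million events have occurred, so that a one-in-a-million event is expected?

events_per_second = 1
alert_seconds_per_day = 8 * 60 * 60          # 28,800 events per day

days_to_million = 1_000_000 / (events_per_second * alert_seconds_per_day)
print(round(days_to_million, 1))             # ≈ 34.7 days, i.e. roughly a month
```

Under these assumptions a "miraculous" one-in-a-million event is expected about once every 35 days, which is the point of Littlewood's argument.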
We should not treat them as having no cause or as having a cause immediately available; rather, the miracle is for combating the ignorance it entails, like a political project. According to the philosopher David Hume, a miracle is a transgression of a law of nature by a particular volition of the Deity, or by the interposition of some invisible agent. According to the Christian theologian Friedrich Schleiermacher, every event, even the most natural and usual, becomes a miracle as soon as the religious view of it can be the dominant one. James Keller states that the claim that God has worked a miracle implies that God has singled out certain persons for some benefit which many others do not receive, which implies that God is unfair: if God intervenes to save one life in a car crash, the question arises why God does not do so in other crashes. Thus, the argument runs, an all-powerful, all-knowing and just God, as predicated in Christianity, would not single out recipients of miracles in this way. The Haedong Kosung-jon of Korea records that King Beopheung of Silla had desired to promulgate Buddhism as the state religion; however, officials in his court opposed him. In the fourteenth year of his reign, Beopheung's Grand Secretary, Ichadon, devised a strategy to overcome court opposition. Ichadon schemed with the king, convincing him to make a proclamation granting Buddhism official state sanction using the royal seal. Ichadon told the king to deny having made such a proclamation when the opposing officials received it and demanded an explanation; instead, Ichadon would confess and accept the punishment of execution. Ichadon prophesied to the king that at his execution a wonderful miracle would convince the opposing court faction of Buddhism's power.
51.
Accuracy and precision
–
In the fields of science, engineering and statistics, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's true value. The precision of a measurement system, related to reproducibility and repeatability, is the degree to which repeated measurements under unchanged conditions show the same results. Precision is thus a description of random errors, a measure of statistical variability. The two concepts are independent of each other, so a particular set of data can be said to be accurate, or precise, or both, or neither. Although the two words precision and accuracy can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method. A measurement system can be accurate but not precise, precise but not accurate, both, or neither. For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy; the result would be a consistent yet inaccurate string of results from the flawed experiment. Eliminating the systematic error improves accuracy but does not change precision. A measurement system is considered valid if it is both accurate and precise. Related terms include bias and error. The terminology is also applied to indirect measurements, that is, values obtained by a computational procedure from observed data. Statistical literature prefers to use the terms bias and variability instead of accuracy and precision: bias is the amount of inaccuracy and variability is the amount of imprecision. In military terms, accuracy refers primarily to the accuracy of fire, the closeness of a grouping of shots around the centre of the target. Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the true value. The accuracy and precision of a measurement process is established by repeatedly measuring some traceable reference standard.
Such standards are defined in the International System of Units and maintained by national standards organizations such as the National Institute of Standards and Technology in the United States. This also applies when measurements are repeated and averaged; further, the central limit theorem shows that the probability distribution of the averaged measurements will be closer to a normal distribution than that of individual measurements. With regard to accuracy we can distinguish the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration; the combined effect of bias and precision determines accuracy. A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Here, when not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m, while a recording of 8,436 m would imply a margin of error of 0.5 m.
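The bias/variability distinction can be shown with a short computation on made-up data; the samples and "true value" below are invented purely for illustration:

```python
# Sketch: distinguishing accuracy (small bias) from precision (small
# variability) for repeated measurements of a quantity whose true value
# is taken to be 10.0. The sample data are fabricated for illustration.

from statistics import mean, stdev

true_value = 10.0
biased_but_precise = [10.9, 11.0, 11.1, 11.0, 10.9]    # tight cluster, off target
accurate_but_scattered = [9.0, 11.2, 10.1, 8.8, 10.9]  # centred, but spread out

bias_1 = mean(biased_but_precise) - true_value         # inaccuracy
spread_1 = stdev(biased_but_precise)                   # imprecision
bias_2 = mean(accurate_but_scattered) - true_value
spread_2 = stdev(accurate_but_scattered)

print(round(bias_1, 2), round(spread_1, 2))   # 0.98 0.08: precise, not accurate
print(round(bias_2, 2), round(spread_2, 2))   # 0.0 1.08: accurate, not precise
```

Shrinking the spread (taking more readings) would not remove the 0.98 bias of the first sample, which is why a larger sample improves precision but not accuracy.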
52.
Stability of the Solar System
–
The stability of the Solar System is a subject of much inquiry in astronomy. Though the planets have been stable when historically observed, and will be in the short term, the orbits of the planets are open to long-term variations, and modeling the Solar System is subject to the n-body problem. Resonance happens when any two periods have a simple numerical ratio. The most fundamental period for an object in the Solar System is its orbital period. In 1867, the American astronomer Daniel Kirkwood noticed that asteroids in the asteroid belt are not randomly distributed: there were distinct gaps in the belt at locations that corresponded to resonances with Jupiter. For example, there were no asteroids at the 3:1 resonance, a distance of 2.5 AU, or at the 2:1 resonance at 3.3 AU. Another common form of resonance in the Solar System is spin–orbit resonance; an example is our own Moon, which is in a 1:1 spin–orbit resonance that keeps the far side of the Moon away from the Earth. The planets' orbits are chaotic over longer timescales, such that the whole Solar System possesses a Lyapunov time in the range of 2–230 million years. In all cases this means that the position of a planet along its orbit ultimately becomes impossible to predict with any certainty. Such chaos manifests most strongly as changes in eccentricity, with some planets' orbits becoming significantly more, or less, elliptical. Furthermore, the equations of motion describe a process that is inherently serial, which limits the gains from parallel computation. The Neptune–Pluto system lies in a 3:2 orbital resonance; C. J. Cohen and E. C. Hubbard at the Naval Surface Warfare Center Dahlgren Division discovered this in 1965. Thus, on the scale of hundreds of millions of years, Pluto's orbital phase becomes impossible to determine. Jupiter's moon Io has an orbital period of 1.769 days, nearly half that of the next satellite, Europa. They are in a 2:1 orbit–orbit resonance, and this particular resonance has important consequences because Europa's gravity perturbs the orbit of Io.
As Io moves closer to Jupiter and then farther away in the course of an orbit, it experiences significant tidal stresses resulting in active volcanoes. Europa is also in a 2:1 resonance with the next satellite, Ganymede. Mercury's perihelion precesses at a rate of about 1.5 degrees every 1,000 years, and Jupiter's perihelion precesses only slightly more slowly; at one point, the two may fall into sync, at which time Jupiter's constant gravitational tugs could accumulate and pull Mercury off course. This could eject it from the Solar System altogether or send it on a collision course with Venus. Project LONGSTOP was a 1982 international consortium of Solar System dynamicists led by Archie Roy; it involved the creation of a model on a supercomputer, integrating the orbits of the outer planets.
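The 2:1 chain among Io, Europa and Ganymede can be checked directly from their orbital periods; the period values below are rounded textbook figures, not numbers taken from this article:

```python
# Sketch: checking the chain of 2:1 mean-motion resonances among Io,
# Europa and Ganymede from their orbital periods in days (rounded
# reference values, assumed for illustration).

periods = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155}

europa_io = periods["Europa"] / periods["Io"]
ganymede_europa = periods["Ganymede"] / periods["Europa"]

print(round(europa_io, 3))        # ≈ 2.007: Europa-Io near 2:1
print(round(ganymede_europa, 3))  # ≈ 2.015: Ganymede-Europa near 2:1
```

Each ratio sits within about 1% of exactly 2, which is why the repeated gravitational tugs line up at nearly the same orbital phase and accumulate.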
53.
Chaos theory
–
Chaos theory is a branch of mathematics focused on the behavior of dynamical systems that are highly sensitive to initial conditions. This happens even though these systems are deterministic, meaning that their behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: Chaos: when the present determines the future, but the approximate present does not approximately determine the future. Chaotic behavior exists in many natural systems, such as weather and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots. Chaos theory has applications in several disciplines, including meteorology, sociology, physics, environmental science, computer science, engineering, economics, biology and ecology. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory and self-assembly processes. Chaos theory concerns deterministic systems whose behavior can in principle be predicted. Chaotic systems are predictable for a while and then appear to become random; the time scale over which prediction remains possible depends on the dynamics of the system and is called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random. In common usage, chaos means a state of disorder; however, in chaos theory, the term is defined more precisely.
Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic it must be sensitive to initial conditions, be topologically transitive, and have dense periodic orbits. In some cases the last two properties have been shown to imply the first; in these cases, while it is often the most practically significant property, sensitivity to initial conditions need not be stated in the definition. If attention is restricted to intervals, the second property implies the other two. An alternative, and in general weaker, definition of chaos uses only the first two properties in the above list. Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points with significantly different future paths. Thus, an arbitrarily small change, or perturbation, of the current trajectory may lead to significantly different future behavior. The butterfly effect takes its name from a talk Lorenz gave in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas? The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale phenomena.
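A minimal sketch of this sensitivity, using the logistic map rather than a weather model (the map, parameter value and starting point are illustrative assumptions, not from the article):

```python
# Sketch of sensitive dependence on initial conditions using the logistic
# map x -> r*x*(1-x) with r = 4, a standard chaotic example. We track two
# trajectories started a tiny distance apart and report the iteration at
# which they have fully separated.

def diverge_step(x0, delta=1e-10, r=4.0, threshold=0.1, max_steps=100):
    """Step at which two trajectories started delta apart differ by threshold."""
    a, b = x0, x0 + delta
    for n in range(1, max_steps + 1):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
        if abs(a - b) > threshold:
            return n
    return None  # did not separate within max_steps

# A perturbation of one part in ten billion grows to order one within a
# few dozen iterations, after which the forecast is useless.
print(diverge_step(0.2))
```

The separation roughly doubles each step (the map's Lyapunov exponent is ln 2), which is the exponential growth of forecast uncertainty described above.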
54.
Observational astronomy
–
Observational astronomy is the practice of observing celestial objects by using telescopes and other astronomical apparatus. As a science, the study of astronomy is somewhat hindered in that direct experiments with the properties of the distant universe are not possible. However, this is partly compensated by the fact that astronomers have a vast number of visible examples of stellar phenomena that can be examined, which allows observational data to be plotted on graphs and general trends recorded. Nearby examples of specific phenomena, such as variable stars, can then be used to infer the behavior of more distant representatives, and those distant yardsticks can then be employed to measure other phenomena in that neighborhood. Galileo Galilei turned a telescope to the heavens and recorded what he saw; since that time, observational astronomy has made steady advances with each improvement in telescope technology. Observational astronomy is traditionally divided by the region of the electromagnetic spectrum observed; visible-light astronomy falls in the middle of this range. Infrared astronomy deals with the detection and analysis of infrared radiation; the most common tool is the reflecting telescope, but with a detector sensitive to infrared wavelengths. Space telescopes are used at certain wavelengths where the atmosphere is opaque. Radio astronomy detects radiation of millimetre to decametre wavelength; the receivers are similar to those used in radio broadcast transmission but much more sensitive. High-energy astronomy includes X-ray astronomy, gamma-ray astronomy and extreme UV astronomy. In addition to using electromagnetic radiation, modern astrophysicists can also make observations using neutrinos, cosmic rays or gravitational waves; observing a source using multiple methods is known as multi-messenger astronomy. Optical and radio astronomy can be performed with ground-based observatories, because the atmosphere is relatively transparent at the wavelengths being detected.
Observatories are usually located at high altitudes so as to minimise the absorption and distortion caused by the Earth's atmosphere. Some wavelengths of infrared light are heavily absorbed by water vapor, so many infrared observatories are located in dry places at high altitude, or in space. Powerful gamma rays can, however, be detected by the large air showers they produce. For much of the history of astronomy, almost all observation was performed in the visual spectrum with optical telescopes. The seeing conditions depend on the turbulence and thermal variations in the air; locations that are frequently cloudy or suffer from atmospheric turbulence limit the resolution of observations. Likewise, the presence of the full Moon can brighten the sky with scattered light, hindering observation of faint objects. For observation purposes, the optimal location for an optical telescope is undoubtedly outer space, where the telescope can make observations without being affected by the atmosphere. However, at present it remains costly to lift telescopes into orbit. Thus the next best locations are certain mountain peaks that have a large number of cloudless days.
55.
Leonhard Euler
–
Leonhard Euler was a Swiss mathematician, physicist, astronomer, logician and engineer. He introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. He is also known for his work in mechanics, fluid dynamics, optics and astronomy. Euler was one of the most eminent mathematicians of the 18th century, and is held to be one of the greatest in history. He is also considered to be the most prolific mathematician of all time; his collected works fill 60 to 80 quarto volumes, more than anybody else in the field. He spent most of his adult life in Saint Petersburg, Russia, and in Berlin, then the capital of Prussia. A statement attributed to Pierre-Simon Laplace expresses Euler's influence on mathematics: "Read Euler, read Euler, he is the master of us all." Leonhard Euler was born on 15 April 1707 in Basel, Switzerland, to Paul III Euler, a pastor of the Reformed Church, and Marguerite née Brucker, a pastor's daughter. He had two sisters, Anna Maria and Maria Magdalena, and a younger brother, Johann Heinrich. Soon after the birth of Leonhard, the Eulers moved from Basel to the town of Riehen. Paul Euler was a friend of the Bernoulli family; Johann Bernoulli was then regarded as Europe's foremost mathematician, and would eventually be the most important influence on young Leonhard. Euler's formal education started in Basel, where he was sent to live with his maternal grandmother. In 1720, aged thirteen, he enrolled at the University of Basel. During that time, he was receiving Saturday afternoon lessons from Johann Bernoulli, who quickly discovered his new pupil's incredible talent for mathematics. In 1726, Euler completed a dissertation on the propagation of sound with the title De Sono; at that time, he was unsuccessfully attempting to obtain a position at the University of Basel. In 1727, he first entered the Paris Academy Prize Problem competition; Pierre Bouguer, who became known as the father of naval architecture, won, and Euler took second place.
Euler later won this annual prize twelve times. Around this time Johann Bernoulli's two sons, Daniel and Nicolaus, were working at the Imperial Russian Academy of Sciences in Saint Petersburg, and they recommended a post there for their friend Euler. In November 1726 Euler eagerly accepted the offer, but delayed making the trip to Saint Petersburg while he unsuccessfully applied for a physics professorship at the University of Basel. Euler arrived in Saint Petersburg on 17 May 1727; he was promoted from his junior post in the medical department of the academy to a position in the mathematics department. He lodged with Daniel Bernoulli, with whom he worked in close collaboration. Euler mastered Russian and settled into life in Saint Petersburg. He also took on an additional job as a medic in the Russian Navy. The Academy at Saint Petersburg, established by Peter the Great, was intended to improve education in Russia; as a result, it was made especially attractive to foreign scholars like Euler.
56.
Luminiferous ether
–
In the late 19th century, luminiferous aether, aether, or ether, meaning light-bearing aether, was the postulated medium for the propagation of light. It was invoked to explain the ability of the apparently wave-based light to propagate through empty space; the assumption of a spatial plenum of luminiferous aether, rather than a spatial vacuum, provided the theoretical medium that was required by wave theories of light. The concept was the topic of considerable debate throughout its history, as it required the existence of an invisible and infinite material with no interaction with physical objects. As the nature of light was explored, especially in the 19th century, the physical qualities required of the aether became increasingly contradictory. By the late 1800s, the existence of the aether was being questioned, although there was no physical theory to replace it. The negative outcome of the Michelson–Morley experiment suggested that the aether was non-existent, and this led to considerable theoretical work to explain the propagation of light without an aether. A major breakthrough was the theory of relativity, which could explain why the experiment failed to see aether. Isaac Newton contended that light was made up of numerous small particles. This could explain such features as light's ability to travel in straight lines and to reflect off surfaces. This theory was known to have its problems: although it explained reflection well, its explanation of refraction and diffraction was less satisfactory. The modern understanding is that heat radiation is, like light, electromagnetic radiation. However, Newton viewed heat and light as two different phenomena; he believed heat vibrations to be excited when a Ray of Light falls upon the Surface of any pellucid Body. Before Newton, Christiaan Huygens had hypothesized that light was a wave propagating through an aether. Newton rejected this idea, mainly on the ground that both men apparently could only envision light as a longitudinal wave, like sound and other mechanical waves in fluids. However, longitudinal waves necessarily have only one form for a given propagation direction, rather than the two polarizations of a transverse wave.
Thus, longitudinal waves could not explain birefringence, in which two polarizations of light are refracted differently by a crystal; instead, Newton preferred to imagine non-spherical particles, or corpuscles, of light with different sides that give rise to birefringence. In 1720, James Bradley carried out a series of experiments attempting to measure stellar parallax by taking measurements of stars at different times of the year. He failed to detect any parallax, thereby placing a lower limit on the distance to stars, but during these experiments he discovered a related effect: the apparent positions of the stars did change over the year. This effect is now known as stellar aberration. Knowing the Earth's velocity and the aberration angle enabled him to estimate the speed of light. Explaining stellar aberration in the context of a wave theory of light was regarded as more problematic: it required that the Earth could travel through the aether, a physical medium, without disturbing it. Physicists assumed, moreover, that like mechanical waves, light waves required a medium for propagation, and thus required Huygens's idea of an aether gas permeating all space. However, a transverse wave apparently required the propagating medium to behave as a solid rather than a fluid; it was also suggested that the absence of longitudinal waves implied that the aether had negative compressibility.
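Bradley's estimate can be reproduced with the small-angle relation tan θ ≈ v/c; the orbital speed and the roughly 20.5-arcsecond aberration angle used below are standard modern values, not figures taken from the article:

```python
# Sketch of Bradley's aberration reasoning: the aberration angle theta
# satisfies tan(theta) = v_earth / c, so the measured angle together with
# Earth's orbital speed yields an estimate of the speed of light.
# (Both input values are standard modern figures, assumed for illustration.)

import math

v_earth = 29.78e3                   # Earth's orbital speed, m/s
theta = math.radians(20.5 / 3600)   # aberration angle: ~20.5 arcseconds

c_estimate = v_earth / math.tan(theta)
print(f"{c_estimate:.3g} m/s")      # ≈ 3.0e8 m/s
```

The estimate lands within a fraction of a percent of the modern value of the speed of light, which is essentially the calculation Bradley performed.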
57.
Integral
–
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. Informally, the definite integral of a function over an interval is the signed area between its graph and the x-axis: the area above the x-axis adds to the total and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation. For this reason, the term integral may also refer to the related notion of the antiderivative, a function whose derivative is the given function; in this case, it is called an indefinite integral. The integrals discussed in this article are those termed definite integrals. A rigorous mathematical definition of the integral was given by Bernhard Riemann; it is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, and the interval of integration is replaced by a curve connecting two points on the plane or in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. The first documented systematic technique capable of determining integrals is the method of exhaustion of the ancient Greek astronomer Eudoxus; this method was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. A similar method was developed independently in China around the 3rd century AD by Liu Hui, and was used in the 5th century by the Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng. The next significant advances in integral calculus did not begin to appear until the 17th century. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation; Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers.
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation; this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed.
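The connection between the two operations can be checked numerically; a hedged sketch, with an arbitrary choice of f(x) = x² on [0, 1]:

```python
# Sketch of the fundamental theorem of calculus: a Riemann sum over thin
# vertical slabs for the integral of x^2 from 0 to 1 approaches the value
# obtained from the antiderivative, [x^3 / 3] from 0 to 1 = 1/3.

def riemann_sum(f, a, b, n=100_000):
    """Left Riemann sum of f over [a, b] using n thin vertical slabs."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

numeric = riemann_sum(lambda x: x * x, 0.0, 1.0)
exact = 1.0 / 3.0   # from the antiderivative x^3 / 3, via the theorem

print(abs(numeric - exact) < 1e-4)   # True: the slab sum converges to 1/3
```

Evaluating the antiderivative at the endpoints takes two operations; the slab sum takes a hundred thousand, which is the "comparative ease" the theorem buys.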
58.
Cubic function
–
In algebra, a cubic function is a function of the form f(x) = ax^3 + bx^2 + cx + d, where a is nonzero. Setting f(x) = 0 produces a cubic equation of the form ax^3 + bx^2 + cx + d = 0. The solutions of this equation are called roots of the polynomial f. If all of the coefficients a, b, c, and d of the cubic equation are real numbers, then it has at least one real root. All of the roots of the cubic equation can be found algebraically, and the roots can also be found trigonometrically. Alternatively, numerical approximations of the roots can be found using root-finding algorithms such as Newton's method. The coefficients do not need to be complex numbers; much of what is covered below is valid for coefficients in any field with characteristic 0 or greater than 3. The solutions of the cubic equation do not necessarily belong to the same field as the coefficients; for example, some cubic equations with rational coefficients have roots that are complex numbers. Cubic equations were known to the ancient Babylonians, Greeks, Chinese and Indians. Babylonian cuneiform tablets have been found with tables for calculating cubes and cube roots, and the Babylonians could have used the tables to solve cubic equations. The problem of doubling the cube involves the simplest and oldest studied cubic equation, and one for which the ancient Egyptians did not believe a solution existed. Methods for solving cubic equations appear in The Nine Chapters on the Mathematical Art. In the 3rd century, the Greek mathematician Diophantus found integer or rational solutions for some bivariate cubic equations. In the 11th century, the Persian poet-mathematician Omar Khayyám discovered in an early paper that a cubic equation can have more than one solution and stated that it cannot be solved using compass and straightedge constructions; he also found a geometric solution. In the 12th century, the Indian mathematician Bhaskara II attempted the solution of cubic equations without general success.
However, he gave one example of a cubic equation: x^3 + 12x = 6x^2 + 35. The Persian mathematician Sharaf al-Din al-Tusi used what would later be known as the Ruffini–Horner method to numerically approximate the root of a cubic equation. He also developed the concept of a function and of the maxima and minima of curves in order to solve cubic equations which may not have positive solutions, and he understood the importance of the discriminant of the cubic equation for finding algebraic solutions to certain types of cubic equations. Leonardo de Pisa, also known as Fibonacci, was able to closely approximate the positive solution to the cubic equation x^3 + 2x^2 + 10x = 20.
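Fibonacci's result can be reproduced with Newton's method, one of the root-finding algorithms mentioned above (the implementation below is an illustrative sketch, not a historical reconstruction of Fibonacci's own technique):

```python
# Sketch: approximating the positive root of Fibonacci's cubic
# x^3 + 2x^2 + 10x = 20 with Newton's method, x_{n+1} = x_n - f(x_n)/f'(x_n).

def newton_root(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate Newton's method from x0 until the step is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x**3 + 2 * x**2 + 10 * x - 20   # the equation, moved to one side
df = lambda x: 3 * x**2 + 4 * x + 10          # its derivative

root = newton_root(f, df, x0=1.0)
print(round(root, 4))   # 1.3688, matching the value Fibonacci approximated
```

Since the derivative 3x² + 4x + 10 is always positive, the cubic is strictly increasing and this positive root is unique.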
59.
Gerald James Whitrow
–
Gerald James Whitrow was a British mathematician, cosmologist and historian of science. Whitrow was born on 9 June 1912 at Kimmeridge in Dorset. After completing school at Christ's Hospital, he obtained a scholarship at Christ Church, Oxford in 1930, earning his first degree in 1933, the MA in 1937, and the PhD in 1939. At Oxford he worked on a theory of relativity with Professor Edward Arthur Milne. During World War II, he worked as a Scientific Officer for the Ministry of Supply; his work was on defence research, including ballistics, and he worked at Fort Halstead and Cambridge. After the war, he taught at Imperial College, London, first as a Lecturer, then as Reader of Applied Mathematics. In 1955 Whitrow investigated the possibility of extra-dimensional space in "Why Physical Space Has Three Dimensions", arguing that stable dynamics would break down in higher-dimensional spaces and that these instabilities would worsen for dimensions larger than four. If spatial dimensions were reduced to two, the propagation and reflection of waves would be more difficult, which would reduce the coherent behavior of complex systems. He concluded that life would not be possible in other than three space dimensions. Following his 1979 retirement, he was Emeritus Professor and Senior Research Fellow of Imperial College. In 1971 he was among the founders of the British Society for the History of Science. Whitrow's interest in libraries and archives extended to the Athenaeum Club, of which he was elected a member in 1957. He served two terms on the Club's Library Committee and was its chairman between 1979 and 1981; he was responsible for founding some of the various discussion groups that exist in the Club, and in the early 1990s he served on its Executive Committee. His main contributions were in the fields of cosmology and astrophysics. Among his publications, The Natural Philosophy of Time received special attention, and his work placed him at the centre of the study of time. Whitrow published an important paper on the cosmic background radiation with B. D.
Yallop in 1964, titled "The background radiation in homogeneous isotropic world models, I". His books include The Structure and Evolution of the Universe: An Introduction to Cosmology, and Einstein, the Man and his Achievement. Articles include: 1967, "Reflections on the Natural Philosophy of Time", Annals of the New York Academy of Sciences 138, 422–32; 1973, "Time and Measurement", Dictionary of the History of Ideas; 1979, "Mathematical Time and Its Role in the Development of the Scientific World-View", in Greenway, Frank (ed.). See also James, F. A. J. L., 2001, "Gerald James Whitrow", Astronomy & Geophysics 42, 2.35–2.38.
60.
Theory of tides
–
In 1609 Johannes Kepler correctly suggested that the gravitation of the Moon causes the tides, basing his argument upon ancient observations and correlations. The influence of the Moon on tides was mentioned in Ptolemy's Tetrabiblos as having derived from ancient observation. In 1616, Galileo Galilei wrote Discourse on the Tides, in a letter to Cardinal Orsini. In this discourse, he tried to explain the occurrence of the tides as the result of the Earth's rotation and revolution around the Sun. Galileo believed that the oceans moved like water in a large basin: as the basin moves, so does the water. Therefore, as the Earth revolves, the force of the Earth's rotation causes the oceans to alternately accelerate and decelerate. In subsequent centuries, further analysis led to the current tidal physics. Galileo rejected Kepler's explanation of the tides. The dynamic theory of tides describes and predicts the actual behavior of ocean tides. Laplace's theory of ocean tides took into account friction and resonance; it predicted the large amphidromic systems in the world's ocean basins and explains the oceanic tides that are actually observed. The equilibrium tide theory, by contrast, calculates a tidal wave height of less than half a meter. Satellite observations confirm the accuracy of the dynamic theory, and tides worldwide are now measured to within a few centimeters. Measurements from the CHAMP satellite closely match the models based on the TOPEX data. Accurate models of tides worldwide are essential for research, since the variations due to tides must be removed from measurements when calculating gravity and changes in sea levels. In 1776, Pierre-Simon Laplace formulated a set of linear partial differential equations for tidal flow. Coriolis effects are introduced, as well as lateral forcing by gravity. Laplace obtained these equations by simplifying the fluid dynamic equations. 
But they can also be derived from energy integrals via Lagrange's equation. William Thomson rewrote Laplace's momentum terms using the curl to find an equation for vorticity; under certain conditions this can be rewritten as a conservation of vorticity. Laplace's improvements in theory were substantial, but they still left prediction in an approximate state. Thomson's work in this field was then further developed and extended by George Darwin, applying the lunar theory current in his time. Darwin's symbols for the tidal harmonic constituents are still used. Doodson's work was carried out and published in 1921: Doodson devised a system for specifying the different harmonic components of the tide-generating potential, the Doodson numbers. Since the mid-twentieth century further analysis has generated many more terms than Doodson's 388. About 62 constituents are of sufficient size to be considered for possible use in marine tide prediction, but sometimes many fewer can predict tides to useful accuracy. Amplitudes of tidal constituents are given below for six example locations: Eastport, Maine; Biloxi, Mississippi; San Juan, Puerto Rico; Kodiak, Alaska; San Francisco, California; and Hilo, Hawaii
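The harmonic method that Darwin and Doodson pioneered predicts the tide at a port as a sum of cosine terms, one per constituent, each with its own amplitude, angular speed, and phase lag. A minimal Python sketch of the idea follows; the amplitudes and phases are illustrative values invented for this example, not the published constants for any real port (the constituent speeds are the standard astronomical ones).

```python
from math import cos, pi

# Hypothetical harmonic constituents: (name, amplitude in metres,
# speed in degrees per hour, phase lag in degrees). The amplitudes
# and phases are made up for illustration.
constituents = [
    ("M2", 1.20, 28.984104, 100.0),  # principal lunar semidiurnal
    ("S2", 0.40, 30.000000, 130.0),  # principal solar semidiurnal
    ("K1", 0.15, 15.041069,  60.0),  # lunisolar diurnal
    ("O1", 0.10, 13.943035,  40.0),  # lunar diurnal
]

def tide_height(t_hours, mean_level=0.0):
    """Predicted tide height (m) as a sum of harmonic cosine terms."""
    h = mean_level
    for _name, amplitude, speed_deg_per_hr, phase_deg in constituents:
        angle = (speed_deg_per_hr * t_hours - phase_deg) * pi / 180.0
        h += amplitude * cos(angle)
    return h

# Sample the first day at hourly intervals.
heights = [tide_height(t) for t in range(25)]
print(f"min {min(heights):+.2f} m, max {max(heights):+.2f} m")
```

Real prediction uses many more constituents plus nodal corrections, but the structure, a fixed sum of cosines whose frequencies come from astronomy and whose amplitudes and phases come from fitting local observations, is exactly this.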
61.
Daniel Bernoulli
–
Daniel Bernoulli FRS was a Swiss mathematician and physicist and was one of the many prominent mathematicians in the Bernoulli family. He is particularly remembered for his applications of mathematics to mechanics, especially fluid mechanics. Daniel Bernoulli was born in Groningen, in the Netherlands, into a family of distinguished mathematicians. The Bernoulli family came originally from Antwerp, at that time in the Spanish Netherlands. After a brief period in Frankfurt the family moved to Basel. Daniel was the son of Johann Bernoulli and a nephew of Jacob Bernoulli. He had two brothers, Niklaus and Johann II. Daniel Bernoulli was described by W. W. Rouse Ball as by far the ablest of the younger Bernoullis. He is said to have had a bad relationship with his father: Johann Bernoulli plagiarized some key ideas from Daniel's book Hydrodynamica in his own book Hydraulica, which he backdated to before Hydrodynamica. Despite Daniel's attempts at reconciliation, his father carried the grudge until his death. Around schooling age, his father, Johann, encouraged him to study business, there being poor rewards awaiting a mathematician. However, Daniel refused, because he wanted to study mathematics; he later gave in to his father's wish and studied business. His father then asked him to study medicine, and Daniel agreed under the condition that his father would teach him mathematics privately. Daniel studied medicine at Basel, Heidelberg, and Strasbourg, and earned a PhD in anatomy and botany in 1721. He was a contemporary and close friend of Leonhard Euler. He went to St. Petersburg in 1724 as professor of mathematics, but was very unhappy there, and a temporary illness in 1733 gave him an excuse for leaving St. Petersburg. He returned to the University of Basel, where he held chairs in medicine, metaphysics, and natural philosophy. In May 1750 he was elected a Fellow of the Royal Society. His earliest mathematical work was the Exercitationes, published in 1724 with the help of Goldbach. 
Two years later he pointed out for the first time the frequent desirability of resolving a compound motion into motions of translation and motions of rotation. Together Bernoulli and Euler tried to discover more about the flow of fluids. In particular, they wanted to know about the relationship between the speed at which blood flows and its pressure. Soon physicians all over Europe were measuring patients' blood pressure by sticking point-ended glass tubes directly into their arteries. It was not until about 170 years later, in 1896, that an Italian doctor discovered a less painful method which is still in use today. However, Bernoulli's method of measuring pressure is used today in modern aircraft to measure the speed of the air passing the plane. Taking his discoveries further, Daniel Bernoulli returned to his work on the conservation of energy. It was known that a moving body exchanges its kinetic energy for potential energy when it gains height
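The aircraft airspeed measurement mentioned above rests on Bernoulli's principle for incompressible flow: total pressure equals static pressure plus the dynamic pressure ½ρv², so a pitot tube that senses both pressures yields the speed v = √(2Δp/ρ). A small Python sketch of this relationship; the pressure readings are made-up example values, and the default density is the standard sea-level value of 1.225 kg/m³.

```python
from math import sqrt, isclose

def airspeed_from_pitot(total_pressure_pa, static_pressure_pa, air_density=1.225):
    """Bernoulli's principle for incompressible flow:
    total = static + (1/2) * rho * v**2, hence
    v = sqrt(2 * (total - static) / rho).
    Default density is standard sea-level air, 1.225 kg/m^3."""
    dynamic_pressure = total_pressure_pa - static_pressure_pa
    return sqrt(2.0 * dynamic_pressure / air_density)

# Example: a pitot tube reading 613 Pa above ambient at sea level.
v = airspeed_from_pitot(101938.0, 101325.0)
print(f"indicated airspeed: {v:.1f} m/s")
```

Real airspeed indicators correct for compressibility at higher speeds, but at low speed this incompressible form is the working approximation.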
62.
Tidal force
–
The tidal force is a secondary effect of the force of gravity and is responsible for the phenomenon of tides. It arises because the gravitational force exerted by one body on another is not constant across it: the force is differential. Consider the gravitational attraction of the Moon on the oceans nearest to the Moon, the solid Earth, and the oceans farthest from the Moon. There is a mutual attraction between the Moon and the solid Earth, which can be considered to act on its centre of mass. However, the near oceans are more strongly attracted and, especially since they are fluid, they approach the Moon slightly; the far oceans are attracted less. Viewing the Earth as a whole, we see that all its mass experiences a mutual attraction with that of the Moon, but the near oceans more so than the far oceans, leading to a separation of the two. When a body is acted on by the gravity of another body, the field strength varies across it; Figure 2 shows the differential force of gravity on a spherical body exerted by another body. These so-called tidal forces cause strains on both bodies and may distort them or even, in extreme cases, break one or the other apart. These strains would not occur if the gravitational field were uniform, because a uniform field only causes the entire body to accelerate together in the same direction. In the case of a small elastic sphere, the effect of a tidal force is to distort the shape of the body without any change in volume: the sphere becomes an ellipsoid with two bulges, pointing towards and away from the other body. Larger objects distort into an ovoid and are slightly compressed, which is what happens to the Earth's oceans under the action of the Moon. The Earth and Moon rotate about their common center of mass, or barycenter. To an observer on the Earth, very close to this barycenter, all parts of the Earth are subject to the Moon's gravitational forces, causing the water in the oceans to redistribute, forming bulges on the sides near the Moon and far from the Moon. 
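The differential character of the tidal force can be made concrete by comparing the Moon's pull at the Earth's near side with its pull at the Earth's centre. For a body of mass M at distance d acting on a sphere of radius r, the leading-order difference is 2GMr/d³. A short Python check, using standard values for the constants:

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.342e22      # mass of the Moon, kg
D = 3.844e8            # mean Earth-Moon distance, m
R_EARTH = 6.371e6      # Earth's radius, m

def lunar_acceleration(distance_m):
    """Acceleration toward the Moon at the given distance (m/s^2)."""
    return G * M_MOON / distance_m ** 2

# Differential (tidal) acceleration: near side minus centre.
a_centre = lunar_acceleration(D)
a_near = lunar_acceleration(D - R_EARTH)
tidal_exact = a_near - a_centre

# Leading-order approximation: 2 G M r / d^3.
tidal_approx = 2.0 * G * M_MOON * R_EARTH / D ** 3

print(f"exact {tidal_exact:.3e} m/s^2, approx {tidal_approx:.3e} m/s^2")
```

Both come out near 1e-6 m/s², about ten million times smaller than Earth's own surface gravity, which is why the tidal distortion of the oceans is measured in metres rather than kilometres.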
When a body rotates while subject to tidal forces, internal friction results in the gradual dissipation of its rotational kinetic energy as heat. In the case of the Earth and Earth's Moon, the loss of rotational kinetic energy slows the Earth's spin, lengthening the day by about 2 milliseconds per century. If the body is close enough to its primary, this can result in a rotation which is tidally locked to the orbital motion. Tidal heating produces dramatic volcanic effects on Jupiter's moon Io, and stresses caused by tidal forces also cause a regular monthly pattern of moonquakes on Earth's Moon. Tidal forces contribute to ocean currents, which moderate global temperatures by transporting heat energy toward the poles, and it has been suggested that, in addition to other factors, harmonic beat variations in tidal forcing may contribute to climate changes
63.
Friction
–
Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other. There are several types of friction. Dry friction resists relative lateral motion of two solid surfaces in contact; it is subdivided into static friction between non-moving surfaces and kinetic friction between moving surfaces. Fluid friction describes the friction between layers of a viscous fluid that are moving relative to each other. Lubricated friction is a case of fluid friction where a lubricant fluid separates two solid surfaces. Skin friction is a component of drag, the force resisting the motion of a fluid across the surface of a body. Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation. When surfaces in contact move relative to each other, the friction between the two surfaces converts kinetic energy into thermal energy. This property can have dramatic consequences, as illustrated by the use of friction created by rubbing pieces of wood together to start a fire. Kinetic energy is converted to thermal energy whenever motion with friction occurs. Another important consequence of many types of friction can be wear, which may lead to performance degradation and/or damage to components. Friction is a component of the science of tribology. Friction is not itself a fundamental force. Dry friction arises from a combination of adhesion, surface roughness, and surface deformation. The complexity of these interactions makes the calculation of friction from first principles impractical and necessitates the use of empirical methods for analysis. Friction is a non-conservative force: work done against friction is path dependent, and in the presence of friction some energy is always lost in the form of heat, so mechanical energy is not conserved. The Greeks, including Aristotle, Vitruvius, and Pliny the Elder, were interested in the cause and mitigation of friction. 
They were aware of differences between static and kinetic friction, with Themistius stating in 350 AD that it is easier to further the motion of a moving body than to move a body at rest. The classic laws of sliding friction were discovered by Leonardo da Vinci in 1493, a pioneer in tribology, and these laws were rediscovered by Guillaume Amontons in 1699. Amontons presented the nature of friction in terms of surface irregularities. The understanding of friction was further developed by Charles-Augustin de Coulomb, who considered the influence of sliding velocity, temperature and humidity. The distinction between static and dynamic friction is made in Coulomb's friction law, although this distinction was already drawn by Johann Andreas von Segner in 1758. John Leslie was equally skeptical about the role of adhesion proposed by Desaguliers: in Leslie's view, friction should be seen as a time-dependent process of flattening, pressing down asperities, which creates new obstacles in what were cavities before
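The Amontons-Coulomb model with its static/kinetic distinction can be sketched in a few lines: friction balances the applied force up to μₛN, and once sliding begins it drops to μₖN opposing the motion. The coefficients and the 10 kg block below are illustrative values, not measured data.

```python
def friction_force(applied_force_n, normal_force_n, mu_static, mu_kinetic):
    """Amontons-Coulomb dry friction model.
    Returns (friction force in N, is_sliding).
    While static, friction exactly balances the applied force,
    up to the limit mu_static * N; beyond that the surface slides
    and friction is mu_kinetic * N opposing the motion."""
    max_static = mu_static * normal_force_n
    if abs(applied_force_n) <= max_static:
        return -applied_force_n, False
    direction = 1.0 if applied_force_n > 0 else -1.0
    return -direction * mu_kinetic * normal_force_n, True

# A 10 kg block on a level surface (N = m * g), with illustrative
# coefficients mu_s = 0.5, mu_k = 0.4.
normal = 10.0 * 9.81
still = friction_force(30.0, normal, 0.5, 0.4)    # below the static limit
sliding = friction_force(60.0, normal, 0.5, 0.4)  # above it
print(still, sliding)
```

Note the model's two characteristic features: below the static limit the block does not move at all (friction is whatever it must be to cancel the push), and because μₖ < μₛ the resisting force actually drops the instant sliding starts, which is why a stuck object lurches forward once it breaks free.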