1.
Peter Debye
–
Peter Joseph William Debye ForMemRS was a Dutch-American physicist and physical chemist, and Nobel laureate in Chemistry. Born Petrus Josephus Wilhelmus Debije in Maastricht, Netherlands, Debye enrolled in the Aachen University of Technology in 1901 and completed his first degree, in electrical engineering, in 1905. He published his first paper, an elegant solution of a problem involving eddy currents. At Aachen he studied under the theoretical physicist Arnold Sommerfeld; when Sommerfeld received an appointment at Munich, Bavaria, in 1906, he took Debye with him as his assistant. Debye received his Ph.D. with a dissertation on radiation pressure in 1908, and in 1910 he derived the Planck radiation formula using a method which Max Planck agreed was simpler than his own. In 1911, when Albert Einstein took an appointment as a professor at Prague, Bohemia, Debye took his old professorship at the University of Zurich. He was awarded the Lorentz Medal in 1935, and from 1937 to 1939 he was president of the Deutsche Physikalische Gesellschaft. In May 1914 he became a member of the Royal Netherlands Academy of Arts and Sciences, and in December of the same year he became a foreign member. In 1913, Debye married Mathilde Alberer; they had a son, Peter P. Debye, and a daughter, Mathilde Maria. Peter became a physicist and collaborated with Debye in some of his research. In 1912, Debye developed the use of molecular dipole moments to characterize the charge distribution in asymmetric molecules; in consequence, the units of molecular dipole moments are termed debyes in his honor. Also in 1912, he extended Albert Einstein's theory of specific heat to lower temperatures by including contributions from low-frequency phonons. In 1913, he extended Niels Bohr's theory of atomic structure, introducing elliptical orbits. In 1914–1915, Debye calculated the effect of temperature on the X-ray diffraction patterns of crystalline solids with Paul Scherrer. In 1923, together with his assistant Erich Hückel, he developed an improvement of Svante Arrhenius's theory of electrical conductivity in electrolyte solutions.
An improvement to the Debye–Hückel equation was made in 1926 by Lars Onsager. Also in 1923, Debye developed a theory to explain the Compton effect, the shifting of the frequency of X-rays when they interact with electrons. From 1934 to 1939 Debye was director of the physics section of the prestigious Kaiser Wilhelm Institute in Berlin, and from 1936 onwards he was professor of Theoretical Physics at the Frederick William University of Berlin. These positions were held during the years that Adolf Hitler ruled Nazi Germany. In 1939 Debye traveled to the United States to deliver the Baker Lectures at Cornell University in Ithaca, New York. After leaving Germany in early 1940, Debye became a professor at Cornell and chaired the chemistry department for 10 years. In 1946 he became an American citizen. Unlike the European phase of his life, in which he moved from city to city every few years, in the United States Debye remained at Cornell for the remainder of his career.
2.
Thermodynamics
–
Thermodynamics is a branch of science concerned with heat and temperature and their relation to energy and work. The behavior of these quantities is governed by the four laws of thermodynamics, which are explained in terms of microscopic constituents by statistical mechanics. Thermodynamics applies to a variety of topics in science and engineering, especially physical chemistry and chemical engineering. The initial application of thermodynamics to mechanical heat engines was extended early on to the study of chemical compounds; chemical thermodynamics studies the role of entropy in chemical reactions and has provided the bulk of the expansion and knowledge of the field. Other formulations of thermodynamics emerged in the following decades. Statistical thermodynamics, or statistical mechanics, concerns itself with statistical predictions of the collective motion of particles from their microscopic behavior. In 1909, Constantin Carathéodory presented a mathematical approach to the field in his axiomatic formulation of thermodynamics. A description of any thermodynamic system employs the four laws of thermodynamics, which form an axiomatic basis; the first law specifies that energy can be exchanged between physical systems as heat and work. In thermodynamics, interactions between large ensembles of objects are studied and categorized; central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles whose average motions define its properties. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. This article is focused mainly on classical thermodynamics, which primarily studies systems in thermodynamic equilibrium; non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that nature abhors a vacuum. Shortly after Guericke, the English physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with the English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, which states that pressure and volume are inversely proportional. Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine; he did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. Black and Watt performed experiments together, but it was Watt who conceived the idea of the separate condenser, which resulted in a large increase in steam engine efficiency. Drawing on all the earlier work led Sadi Carnot, the father of thermodynamics, to publish Reflections on the Motive Power of Fire.
3.
Kinetic theory of gases
–
Kinetic theory explains macroscopic properties of gases, such as pressure, temperature, viscosity, thermal conductivity, and volume, by considering their molecular composition and motion. The theory posits that gas pressure is due to the impacts, on the walls of a container, of molecules moving at different velocities. Kinetic theory defines temperature in its own way, not identical with the thermodynamic definition. Under a microscope, the molecules making up a liquid are too small to be visible, but the jittery motion of grains or particles suspended in it can be seen. Known as Brownian motion, it results directly from collisions between the grains or particles and the liquid molecules. As analyzed by Albert Einstein in 1905, this evidence for kinetic theory is generally seen as having confirmed the concrete material existence of atoms. The theory for ideal gases makes the following assumptions. The gas consists of very small particles known as molecules; this smallness of their size is such that the total volume of the individual gas molecules added up is negligible compared to the volume of the smallest open ball containing all the molecules. This is equivalent to stating that the average distance separating the gas particles is large compared to their size. These particles have the same mass. The number of molecules is so large that statistical treatment can be applied. These molecules are in constant, random, and rapid motion; the rapidly moving particles constantly collide among themselves and with the walls of the container. All these collisions are perfectly elastic, which means the molecules are considered to be perfectly spherical in shape and elastic in nature. Except during collisions, the interactions among molecules are negligible. The inter-particle distance is much larger than the thermal de Broglie wavelength, so the molecules are treated as classical objects; because of these two assumptions, their dynamics can be treated classically.
This means that the equations of motion of the molecules are time-reversible. The average kinetic energy of the gas particles depends only on the absolute temperature of the system; the kinetic theory has its own definition of temperature, not identical with the thermodynamic definition. The elapsed time of a collision between a molecule and the container's wall is negligible when compared to the time between successive collisions. Because they have mass, the gas molecules will be affected by gravity. More modern developments relax these assumptions and are based on the Boltzmann equation. These can accurately describe the properties of dense gases, because they include the volume of the molecules. The necessary assumptions are the absence of quantum effects and molecular chaos; expansions to higher orders in the density are known as virial expansions.
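The link between temperature and molecular motion described above can be made concrete with a short calculation. A minimal sketch, assuming nitrogen at an illustrative 300 K (the specific gas, temperature, and variable names are not from the text): the average translational kinetic energy per molecule is (3/2)kT, giving a root-mean-square speed of sqrt(3kT/m).

```python
import math

# Boltzmann constant (J/K) and the mass of one N2 molecule (kg);
# the 300 K temperature is an illustrative choice.
K_B = 1.380649e-23
M_N2 = 28.0134e-3 / 6.02214076e23  # molar mass / Avogadro's number
T = 300.0

# Kinetic theory: average translational kinetic energy per molecule
# is (3/2) k T, so the root-mean-square speed is sqrt(3 k T / m).
mean_kinetic_energy = 1.5 * K_B * T
v_rms = math.sqrt(3 * K_B * T / M_N2)

print(f"<E_k> = {mean_kinetic_energy:.3e} J")
print(f"v_rms = {v_rms:.0f} m/s")  # roughly 517 m/s for N2 at 300 K
```

Note that the mean kinetic energy depends only on T, not on the gas, while the speed depends on the molecular mass.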
4.
Particle statistics
–
Particle statistics is a particular description of multiple particles in statistical mechanics. Its core concept is an ensemble that emphasizes properties of a large system as a whole at the expense of knowledge about parameters of separate particles. When an ensemble consists of particles with similar properties, their number is called the particle number. In classical mechanics, all particles in the system are considered distinguishable. This means that individual particles in a system can be tracked. As a consequence, interchanging the positions of any two particles in the system leads to a completely different configuration of the entire system. Furthermore, there is no restriction on placing more than one particle in any given state accessible to the system. These characteristics of classical particles are described by Maxwell–Boltzmann statistics. The fundamental feature of quantum mechanics that distinguishes it from classical mechanics is that particles of a particular type are indistinguishable from one another. This means that in an assembly consisting of similar particles, interchanging any two particles does not lead to a new configuration of the system; in the case of a system consisting of particles of different kinds, only interchanges of like particles leave the configuration unchanged. All quantum particles, such as leptons and baryons, in the universe have three translational degrees of freedom and one discrete degree of freedom, known as spin. That is why quantum statistics is useful when one considers, say, liquid helium or ammonia gas. The spin–statistics theorem binds two particular kinds of combinatorial symmetry with two particular kinds of spin symmetry, namely bosons and fermions. In Bose–Einstein statistics, interchanging any two particles of the system leaves the resultant system in a symmetric state. That is, the wave function of the system before interchanging equals the wave function of the system after interchanging. It is important to emphasize that the wave function of the system itself has not changed.
This has very important consequences for the state of the system: it is found that the particles that obey Bose–Einstein statistics are the ones which have integer spins, which are therefore called bosons. Examples of bosons include photons and helium-4 atoms. One type of system obeying B–E statistics is the Bose–Einstein condensate, where all particles of the assembly exist in the same state. In Fermi–Dirac statistics, interchanging any two particles of the system leaves the resultant system in an antisymmetric state. That is, the wave function of the system before interchanging is the negative of the wave function of the system after interchanging. Again, the wave function of the system itself does not change.
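The three kinds of statistics lead to different mean occupation numbers for a single-particle state of energy ε. A minimal sketch, assuming units with k = 1 and illustrative values of ε and the chemical potential µ (the function names are mine; these are the standard textbook distributions, not derived in the text above):

```python
import math

def maxwell_boltzmann(eps, mu, kT):
    """Classical mean occupation of a single-particle state of energy eps."""
    return math.exp(-(eps - mu) / kT)

def bose_einstein(eps, mu, kT):
    """Mean occupation for bosons; requires eps > mu."""
    return 1.0 / (math.exp((eps - mu) / kT) - 1.0)

def fermi_dirac(eps, mu, kT):
    """Mean occupation for fermions; never exceeds 1."""
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

# Illustrative values: a level 5 kT above the chemical potential.
eps, mu, kT = 5.0, 0.0, 1.0
print(maxwell_boltzmann(eps, mu, kT))  # dilute classical limit
print(bose_einstein(eps, mu, kT))      # slightly larger (bunching)
print(fermi_dirac(eps, mu, kT))        # slightly smaller (exclusion)
```

Far above the chemical potential all three nearly coincide, which is why classical statistics works for dilute gases.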
5.
Spin-statistics theorem
–
In quantum mechanics, the spin–statistics theorem relates the intrinsic spin of a particle to the particle statistics it obeys. In units of the reduced Planck constant ħ, all particles have either integer spin or half-integer spin. In a quantum system, a physical state is described by a state vector, and a pair of distinct state vectors are physically equivalent if they differ only by an overall phase factor. A pair of indistinguishable particles such as this have only one state; this means that if the positions of the particles are exchanged, this does not identify a new physical state, but rather one matching the original physical state. In fact, one cannot tell which particle is in which position. While the physical state does not change under the exchange of the particles' positions, it is possible for the state vector to be negated as a result of an exchange, since this does not change the physical state the vector represents. The essential ingredient in proving the spin/statistics relation is relativity: that the physical laws do not change under Lorentz transformations. The field operators transform under Lorentz transformations according to the spin of the particle that they create. Additionally, the assumption that spacelike separated fields either commute or anticommute can be made only for relativistic theories with a time direction; otherwise, the notion of being spacelike is meaningless. However, the proof involves looking at a Euclidean version of spacetime, in which the time direction is treated as a spatial one, as will now be explained. Lorentz transformations include 3-dimensional rotations as well as boosts; a boost transfers to a frame of reference with a different velocity, and is mathematically like a rotation into time. By analytic continuation of the correlation functions of a quantum field theory, the time coordinate may become imaginary.
The new spacetime has only spatial directions and is termed Euclidean. Bosons are particles whose wavefunction is symmetric under such an exchange, so if we swap the particles the wavefunction does not change; fermions are particles whose wavefunction is antisymmetric, so under such a swap the wavefunction gets a minus sign, meaning that the amplitude for two identical fermions to occupy the same state must be zero. This is the Pauli exclusion principle: two identical fermions cannot occupy the same state. This rule does not hold for bosons. In quantum field theory, a state or a wavefunction is described by field operators operating on some basic state called the vacuum. In order for the operators to project out the symmetric or antisymmetric component of the creating wavefunction, they must have the appropriate commutation law. Let us assume that x ≠ y and the two operators act at the same time; more generally, they may have spacelike separation. If the fields commute, meaning that ϕ(x)ϕ(y) = ϕ(y)ϕ(x), then only the symmetric part of ψ contributes, so that ψ(x, y) = ψ(y, x), and the field will create bosonic particles. Naively, neither commutation law has anything to do with the spin, which determines the rotation properties of the particles.
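The symmetric and antisymmetric two-particle combinations discussed above can be written out explicitly. A minimal sketch, assuming a two-state single-particle basis (the `two_particle` helper and the basis vectors are illustrative constructions, not from the text):

```python
def two_particle(a, b, sign):
    """Amplitudes psi[i][j] of the (anti)symmetrized product state
    a(x1) b(x2) + sign * b(x1) a(x2), in a small discrete basis."""
    n = len(a)
    return [[a[i] * b[j] + sign * b[i] * a[j] for j in range(n)]
            for i in range(n)]

up = [1.0, 0.0]
down = [0.0, 1.0]

boson = two_particle(up, down, +1)    # symmetric: psi[i][j] = psi[j][i]
fermion = two_particle(up, down, -1)  # antisymmetric: psi[i][j] = -psi[j][i]

# Two identical fermions in the same single-particle state: the
# wavefunction vanishes identically (the Pauli exclusion principle).
same_state = two_particle(up, up, -1)
print(same_state)  # [[0.0, 0.0], [0.0, 0.0]]
```

The antisymmetric combination is the two-particle Slater determinant; its vanishing for equal states is exactly the exclusion rule stated above.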
6.
Identical particles
–
Identical particles, also called indistinguishable or indiscernible particles, are particles that cannot be distinguished from one another, even in principle. Species of identical particles include, but are not limited to, elementary particles such as electrons, composite subatomic particles such as atomic nuclei, as well as atoms. Quasiparticles also behave in this way. Although all known indistinguishable particles are tiny, there is no exhaustive list of all possible sorts of particles nor a clear-cut limit of applicability, as explored in quantum statistics. There are two main categories of identical particles: bosons, which can share quantum states, and fermions, which cannot. Examples of bosons are photons, gluons, phonons, and helium-4 nuclei; examples of fermions are electrons, neutrinos, quarks, protons, neutrons, and helium-3 nuclei. The fact that particles can be identical has important consequences in statistical mechanics. Calculations in statistical mechanics rely on probabilistic arguments, which are sensitive to whether or not the objects being studied are identical. As a result, identical particles exhibit markedly different statistical behaviour from distinguishable particles; for example, the indistinguishability of particles has been proposed as a solution to Gibbs' mixing paradox. There are two methods for distinguishing between particles. The first method relies on differences in the intrinsic physical properties of the particles, such as mass, electric charge, and spin. If differences exist, it is possible to distinguish between the particles by measuring the relevant properties. However, it is an empirical fact that microscopic particles of the same species have completely equivalent physical properties. For instance, every electron in the universe has exactly the same electric charge. Even if the particles have equivalent physical properties, there remains a second method for distinguishing between particles, which is to track the trajectory of each particle.
This is possible as long as the position of each particle can be measured with infinite precision. The problem with this approach is that it contradicts the principles of quantum mechanics. According to quantum theory, the particles do not possess definite positions during the periods between measurements; instead, they are governed by wavefunctions that give the probability of finding a particle at each position. As time passes, the wavefunctions tend to spread out and overlap. Once this happens, it becomes impossible to determine, in a subsequent measurement, which of the particle positions correspond to those measured earlier. The particles are then said to be indistinguishable. What follows is an example to make the above discussion concrete. Let n denote a complete set of quantum numbers for specifying single-particle states. For simplicity, consider a system composed of two identical particles. Suppose that one particle is in the state n1, and another is in the state n2, so that the state of the system might be written as |n1⟩|n2⟩. However, this expression implies the ability to identify the particle with n1 as particle 1 and the particle with n2 as particle 2.
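The counting consequences of indistinguishability can be illustrated directly. A minimal sketch, assuming three single-particle states and two particles (illustrative numbers, not from the text): distinguishable particles give ordered pairs of states, bosons give unordered pairs with repetition allowed, and fermions give unordered pairs without repetition.

```python
from itertools import product, combinations_with_replacement, combinations

levels = range(3)  # three single-particle states, an illustrative choice

distinguishable = list(product(levels, repeat=2))        # ordered pairs
bosons = list(combinations_with_replacement(levels, 2))  # unordered, repeats ok
fermions = list(combinations(levels, 2))                 # unordered, no repeats

print(len(distinguishable), len(bosons), len(fermions))  # 9 6 3
```

The three counts differ, which is why probabilistic arguments in statistical mechanics are sensitive to whether the particles are identical.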
7.
Parastatistics
–
In quantum mechanics and statistical mechanics, parastatistics is one of several alternatives to the better known particle statistics models. Other alternatives include anyonic statistics and braid statistics, both of these involving lower spacetime dimensions. Consider the operator algebra of a system of N identical particles. There is a group S_N acting upon the operator algebra with the interpretation of permuting the N particles. Quantum mechanics requires focus on observables having a physical meaning; in other words, the observable algebra has to be invariant under the action of S_N. Therefore we can have different superselection sectors, each parameterized by a Young diagram of S_N. In particular: If we have N identical parabosons of order p, then the permissible Young diagrams are all those with p or fewer rows. If we have N identical parafermions of order p, then the permissible Young diagrams are all those with p or fewer columns. If p is 1, we just have the ordinary cases of Bose–Einstein and Fermi–Dirac statistics respectively. If p is infinity, we have Maxwell–Boltzmann statistics. A paraboson field of order p is ϕ(x) = ∑_{i=1}^p ϕ^(i)(x), where if x and y are spacelike-separated points, [ϕ^(i)(x), ϕ^(i)(y)] = 0 and {ϕ^(i)(x), ϕ^(j)(y)} = 0 if i ≠ j, where [,] is the commutator and {,} is the anticommutator. Note that this disagrees with the spin–statistics theorem, which is for bosons and not parabosons. There might be a group such as the symmetric group S_p acting upon the ϕ^(i)s; observables would have to be operators which are invariant under the group in question. However, the existence of such a symmetry is not essential. A parafermion field of order p is ψ(x) = ∑_{i=1}^p ψ^(i)(x), where if x and y are spacelike-separated points, {ψ^(i)(x), ψ^(i)(y)} = 0 and [ψ^(i)(x), ψ^(j)(y)] = 0 if i ≠ j. The same comment about observables would apply, together with the requirement that they have even grading under the grading where the ψs have odd grading. The parafermionic and parabosonic algebras are generated by elements that obey the commutation and anticommutation relations.
They generalize the usual fermionic algebra and the bosonic algebra of quantum mechanics. The Dirac algebra and the Duffin–Kemmer–Petiau algebra appear as special cases of the parafermionic algebra for order p = 1 and p = 2 respectively. Note that if x and y are spacelike-separated points, ϕ(x) and ϕ(y) neither commute nor anticommute unless p = 1; the same comment applies to ψ(x) and ψ(y). So, if we have n spacelike-separated points x1, …, xn, then ϕ(x1) ⋯ ϕ(xn) |Ω⟩ corresponds to creating n identical parabosons at x1, …, xn; similarly, ψ(x1) ⋯ ψ(xn) |Ω⟩ corresponds to creating n identical parafermions. Because these fields neither commute nor anticommute, ϕ(x_π(1)) ⋯ ϕ(x_π(n)) |Ω⟩ and ψ(x_π(1)) ⋯ ψ(x_π(n)) |Ω⟩ give distinct states for each permutation π in S_n. We can define a permutation operator E(π) by E(π) ϕ(x1) ⋯ ϕ(xn) |Ω⟩ = ϕ(x_π(1)) ⋯ ϕ(x_π(n)) |Ω⟩ and E(π) ψ(x1) ⋯ ψ(xn) |Ω⟩ = ψ(x_π(1)) ⋯ ψ(x_π(n)) |Ω⟩ respectively; this can be shown to be well-defined as long as E(π) is restricted to states spanned by the vectors given above.
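The Young-diagram bookkeeping above can be checked by direct counting. A minimal sketch, assuming illustrative values N = 6 and p = 2 (the function names are mine): diagrams with at most p rows are counted, via conjugation of diagrams, by partitions whose parts are at most p, so parabosons and parafermions of the same order have equally many superselection sectors.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_partitions(n, max_part):
    """Number of partitions of n with every part <= max_part."""
    if n == 0:
        return 1
    if n < 0 or max_part == 0:
        return 0
    # Either use one part of size max_part, or use no part that large.
    return (count_partitions(n - max_part, max_part)
            + count_partitions(n, max_part - 1))

# Parafermion sectors of order p: Young diagrams of N boxes with at most
# p columns, i.e. partitions with parts <= p. Paraboson sectors (at most
# p rows) give the same count by conjugating each diagram.
N, p = 6, 2  # illustrative values
print(count_partitions(N, p))  # 4 permissible diagrams
print(count_partitions(N, 1))  # p = 1: a single sector (ordinary statistics)
```

For p = 1 there is exactly one sector, recovering ordinary Bose–Einstein or Fermi–Dirac statistics as stated above.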
8.
Anyon
–
Anyons are generally classified as abelian or non-abelian. Abelian anyons have been detected and play a role in the fractional quantum Hall effect. Non-abelian anyons have not been definitively detected, although this is an active area of research. In a space of three or more dimensions, elementary particles are either fermions or bosons, according to their statistical behaviour. Fermions obey Fermi–Dirac statistics, while bosons obey Bose–Einstein statistics. In the language of quantum mechanics this is formulated as the behavior of multiparticle states under the exchange of particles; in our example of two particles this looks as follows: | ψ1 ψ2 ⟩ = ± | ψ2 ψ1 ⟩, where the + corresponds to the particles being bosons, and the − to the particles being fermions. In two dimensions, by contrast, the exchange of two particles can multiply the state by any phase: | ψ1 ψ2 ⟩ = e^{iθ} | ψ2 ψ1 ⟩, with i the imaginary unit. This is an application of Euler's formula and can produce any unit complex number. It is important to note there is a slight abuse of notation in this shorthand expression, as in reality this wave function can be and usually is multi-valued. Conversely, a clockwise half-revolution results in multiplying the wave function by e^{−iθ}. Such a theory obviously only makes sense in two dimensions, where clockwise and counterclockwise are clearly defined directions. In the case θ = π we recover the Fermi–Dirac statistics, in the case θ = 0 the Bose–Einstein statistics, and in between we have something different. Frank Wilczek coined the term anyon to describe such particles, since they can have any phase when particles are interchanged. At an edge, fractional quantum Hall effect anyons are confined to move in one space dimension, and mathematical models of one-dimensional anyons provide a base for the commutation relations shown above. In a three-dimensional position space, the fermion and boson statistics operators are just 1-dimensional representations of the permutation group acting on the space of wave functions.
In the same way, in two-dimensional position space, the abelian anyonic statistics operators are just 1-dimensional representations of the braid group acting on the space of wave functions. Non-abelian anyonic statistics are higher-dimensional representations of the braid group. Anyonic statistics must not be confused with parastatistics, which describes statistics of particles whose wavefunctions are higher-dimensional representations of the permutation group. That the homotopy classes of paths are relevant hints at a more subtle insight: it arises from the Feynman path integral, in which all paths from an initial to a final point in spacetime contribute with an appropriate phase factor. Recall that the Feynman path integral can be motivated from expanding the propagator using a method called time-slicing. In non-homotopic paths, one cannot get from any point at one time slice to any other point at the next time slice. This means that we can consider homotopic equivalence classes of paths to have different weighting factors, so it can be seen that the topological notion of equivalence comes from a study of the Feynman path integral. For a more transparent way of seeing that the homotopic notion of equivalence is the right one to use, see the Aharonov–Bohm effect.
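The exchange phase e^{iθ} can be explored numerically. A minimal sketch, assuming illustrative values of θ: θ = 0 gives bosons, θ = π gives fermions, and anything in between gives an anyon; exchanging twice (one particle winding fully around the other) returns bosons and fermions to the original state, but not a generic anyon.

```python
import cmath

def exchange_phase(theta):
    """Phase factor picked up by the two-particle state under one
    counterclockwise exchange; theta parameterizes the statistics."""
    return cmath.exp(1j * theta)

boson = exchange_phase(0.0)           # +1: state unchanged
fermion = exchange_phase(cmath.pi)    # -1: the Fermi-Dirac sign
anyon = exchange_phase(cmath.pi / 2)  # i: neither boson nor fermion

# A double exchange is a full winding of one particle around the other.
# For bosons and fermions it acts trivially; for this anyon it does not.
print(boson**2, fermion**2, anyon**2)
```

That a full winding can act nontrivially is exactly why the braid group, rather than the permutation group, governs two-dimensional statistics.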
9.
Statistical ensemble (mathematical physics)
–
In mathematical physics, especially as introduced into statistical mechanics and thermodynamics by J. Willard Gibbs in 1902, an ensemble is an idealization consisting of a large number of virtual copies of a system, each of which represents a possible state that the real system might be in. In other words, an ensemble is a probability distribution for the state of the system. The concept of an equilibrium or stationary ensemble is crucial to many applications of statistical ensembles. Although a mechanical system certainly evolves over time, the ensemble does not necessarily have to evolve; in fact, the ensemble will not evolve if it contains all past and future phases of the system. Such a statistical ensemble, one that does not change over time, is called stationary and can be said to be in statistical equilibrium. Sometimes the word ensemble is used for a smaller set of possibilities sampled from the full set of possible states; for example, a collection of walkers in a Markov chain Monte Carlo iteration is called an ensemble in some of the literature. The term ensemble is often used in physics and the physics-influenced literature; in probability theory, the term probability space is more prevalent. The study of thermodynamics is concerned with systems which appear to human perception to be static, and which can be described by statistical ensembles that depend on a few observable parameters and are in statistical equilibrium. Gibbs noted that different macroscopic constraints lead to different types of ensembles. Microcanonical ensemble or NVE ensemble: a statistical ensemble where the total energy of the system and the number of particles are fixed; the system must remain totally isolated in order to stay in statistical equilibrium. Canonical ensemble or NVT ensemble: a statistical ensemble where the energy is not known exactly but the number of particles is fixed; in place of the energy, the temperature is specified. The canonical ensemble is appropriate for describing a system which is in, or has been in, weak thermal contact with a heat bath; in order to be in equilibrium the system must remain totally closed. Grand canonical ensemble or µVT ensemble: a statistical ensemble where neither the energy nor the particle number are fixed; in their place, the temperature and chemical potential are specified.
The grand canonical ensemble is appropriate for describing an open system: one which is in, or has been in, weak contact with a reservoir. The ensemble remains in statistical equilibrium if the system comes into weak contact with other systems that are described by ensembles with the same temperature and chemical potential. The calculations that can be made using each of these ensembles are explored further in their respective articles. Other thermodynamic ensembles can also be defined, corresponding to different physical requirements, for which analogous formulae can often similarly be derived. The precise mathematical expression for an ensemble has a distinct form depending on the type of mechanics under consideration. In the classical case, the ensemble is a probability distribution over the microstates; in quantum mechanics this notion, due to von Neumann, is a way of assigning a probability distribution over the results of each complete set of commuting observables.
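The mention of walkers in a Markov chain Monte Carlo iteration can be made concrete. A minimal sketch, assuming a toy one-dimensional harmonic potential E(x) = x²/2 and units where k = 1 (the potential, step size, and walker count are all illustrative choices): each walker is one virtual copy of the system, and the collection as a whole samples the equilibrium distribution.

```python
import math
import random

random.seed(0)

def metropolis_step(x, beta, step=0.5):
    """One Metropolis update for a 1-D harmonic potential E(x) = x^2 / 2."""
    proposal = x + random.uniform(-step, step)
    dE = 0.5 * proposal**2 - 0.5 * x**2
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        return proposal
    return x

# An "ensemble" of walkers in the Monte Carlo sense: many copies of the
# system evolved independently, all sampling the same equilibrium state.
beta = 1.0
walkers = [0.0] * 500
for _ in range(2000):
    walkers = [metropolis_step(x, beta) for x in walkers]

mean_E = sum(0.5 * x**2 for x in walkers) / len(walkers)
print(f"estimated <E> = {mean_E:.2f}")  # equipartition predicts 1/(2*beta)
```

Averaging over the walkers approximates the ensemble average, illustrating how a finite sample stands in for the full probability distribution.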
10.
Microcanonical ensemble
–
In statistical mechanics, a microcanonical ensemble is the statistical ensemble that is used to represent the possible states of a mechanical system which has an exactly specified total energy. The system is assumed to be isolated in the sense that it cannot exchange energy or particles with its environment, so that the system's energy, composition, volume, and shape are kept the same in all possible states of the system. This ensemble is sometimes called the NVE ensemble, as each of these three quantities is a constant of the ensemble. In simple terms, the microcanonical ensemble is defined by assigning an equal probability to every microstate whose energy falls within a range centered at E; all other microstates are given a probability of zero. In the limit of this process, as the energy range shrinks, the microcanonical ensemble is obtained. Also, in some systems the evolution is ergodic, in which case the microcanonical ensemble is equal to the time-ensemble obtained when starting from a single state of energy E. In practice, the microcanonical ensemble does not correspond to an experimentally realistic situation: with a real physical system there is at least some uncertainty in energy. Moreover, there are ambiguities regarding the appropriate definitions of quantities such as entropy and temperature in the microcanonical ensemble. Boltzmann did not elaborate too deeply on what constitutes the set of distinct states of a system. Gibbs investigated carefully the analogies between the microcanonical ensemble and thermodynamics, especially how they break down in the case of systems of few degrees of freedom. He introduced two further definitions of entropy that do not depend on ω: the volume and surface entropy described above. The volume entropy S_v and associated temperature T_v form a close analogy to thermodynamic entropy and temperature, and it is possible to show exactly that dE = T_v dS_v − ⟨P⟩ dV, as expected for the first law of thermodynamics. A similar equation can be found for the surface entropy and its associated T_s.
The microcanonical T_v and T_s are not entirely satisfactory in their analogy to temperature; outside of the thermodynamic limit, a number of artifacts occur. Unexpected behaviour occurs when two systems are combined: the heat flow between the two systems cannot be predicted based on the initial values of T_s. Even when the initial values of T_s are equal, there may be energy transferred; moreover, the T_s of the combination is different from the initial values. This contradicts the intuition that temperature should be an intensive quantity. There is also strange behaviour for few-particle systems: many results, such as the microcanonical equipartition theorem, acquire a one- or two-degree-of-freedom offset when written in terms of T_s.
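The equal-weighting rule that defines the ensemble is easy to demonstrate on a toy system. A minimal sketch, assuming an isolated collection of N two-level units each carrying 0 or 1 quantum of energy (an illustrative model, not from the text): every way of distributing the fixed total energy is one equally probable microstate, and the entropy is S = k ln Ω.

```python
from math import comb, log

# Toy isolated system: N two-level units, total energy fixed at E quanta.
# Every microstate with exactly E units excited is equally probable.
def multiplicity(N, E):
    """Number of microstates: choose which E of the N units are excited."""
    return comb(N, E)

def entropy(N, E, k=1.0):
    """Boltzmann's formula S = k ln(Omega), in units where k = 1."""
    return k * log(multiplicity(N, E))

N = 100
for E in (10, 50):
    print(E, multiplicity(N, E), round(entropy(N, E), 2))
```

The multiplicity, and hence the entropy, peaks at half filling; the probability of any single microstate is just 1/Ω.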
11.
Canonical ensemble
–
In statistical mechanics, a canonical ensemble is the statistical ensemble that represents the possible states of a mechanical system in thermal equilibrium with a heat bath at a fixed temperature. The system can exchange energy with the heat bath, so that the states of the system will differ in total energy. The principal thermodynamic variable of the canonical ensemble, determining the probability distribution of states, is the absolute temperature T. The ensemble typically also depends on mechanical variables such as the number of particles N in the system and the system's volume V. An ensemble with these three parameters is called the NVT ensemble. The canonical ensemble assigns a probability P to each distinct microstate of energy E given by the exponential P = e^{(F − E)/(kT)}. The number F is the free energy (specifically, the Helmholtz free energy) and is a constant for the ensemble. However, the probabilities and F will vary if different N, V, T are selected. An alternative but equivalent formulation for the same concept writes the probability as P = (1/Z) e^{−E/(kT)}, using the canonical partition function Z = e^{−F/(kT)} rather than the free energy; the equations below may be restated in terms of the canonical partition function by simple mathematical manipulations. Historically, the canonical ensemble was first described by Boltzmann in 1884 in a relatively unknown paper. It was later reformulated and extensively investigated by Gibbs in 1902. The canonical ensemble is the ensemble that describes the possible states of a system that is in thermal equilibrium with a heat bath. The canonical ensemble applies to systems of any size; while it is necessary to assume that the heat bath is very large, the system itself may be small or large. The condition that the system is mechanically isolated is necessary in order to ensure it does not exchange energy with any external object besides the heat bath. In general, it is desirable to apply the canonical ensemble to systems that are in direct thermal contact with the heat bath, since it is that contact that ensures the equilibrium; the canonical ensemble is also used in practice when the total energy is fixed but the internal state of the system is otherwise unknown. For systems where the particle number is variable, the correct description is the grand canonical ensemble. For large systems these other ensembles become essentially equivalent to the canonical ensemble. Moreover, if the system is made up of many similar parts, each part has the same distribution as the canonical ensemble of that part.
In this way, the canonical ensemble provides exactly the Boltzmann distribution for systems of any number of particles; in comparison, the justification of the Boltzmann distribution from the microcanonical ensemble only applies for systems with a large number of parts.
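The canonical probabilities can be computed directly for a small level scheme. A minimal sketch, assuming a three-level toy system with illustrative energies in units of kT (the function name is mine): the Boltzmann weights e^{−E/kT} are normalized by the partition function Z, and the free energy F = −kT ln Z reproduces the form P = e^{(F − E)/kT}.

```python
import math

def canonical_probabilities(energies, kT):
    """Boltzmann weights P_i = exp(-E_i/kT) / Z for a discrete level scheme.
    Energies and kT are in the same (arbitrary, illustrative) units."""
    weights = [math.exp(-e / kT) for e in energies]
    Z = sum(weights)  # canonical partition function
    return [w / Z for w in weights]

levels = [0.0, 1.0, 2.0]  # a three-level toy system
kT = 1.0
probs = canonical_probabilities(levels, kT)
print([round(p, 3) for p in probs])  # lower-energy states are more probable

# Helmholtz free energy F = -kT ln Z; check that P = e^{(F - E)/kT}.
Z = sum(math.exp(-e / kT) for e in levels)
F = -kT * math.log(Z)
assert all(abs(math.exp((F - e) / kT) - p) < 1e-12
           for e, p in zip(levels, probs))
```

The two formulations in the text, via F and via Z, are thus numerically identical, differing only in which normalizing constant is named.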
12.
Grand canonical ensemble
–
In statistical mechanics, the grand canonical ensemble is the statistical ensemble that is used to represent the possible states of a mechanical system of particles in thermal and chemical equilibrium with a reservoir; the system is open in the sense that it can exchange energy and particles with the reservoir. The system's volume, shape, and other external coordinates are kept the same in all possible states of the system. The thermodynamic variables of the grand canonical ensemble are the chemical potential µ and the absolute temperature T. The ensemble is also dependent on mechanical variables such as volume V, which influence the nature of the internal states. This ensemble is therefore sometimes called the µVT ensemble, as each of these three quantities are constants of the ensemble. The number Ω is known as the grand potential and is constant for the ensemble. However, the probabilities and Ω will vary if different µ, V, T are selected. The grand potential Ω serves two roles: it provides a normalization factor for the probability distribution, and many important ensemble averages can be directly calculated from the function Ω(µ, V, T). In the case where more than one kind of particle can vary in number, the particle numbers should be defined carefully. The grand canonical ensemble provides a natural setting for an exact derivation of the Fermi–Dirac statistics or Bose–Einstein statistics for a system of non-interacting quantum particles. Note on formulation: an alternative formulation for the same concept writes the probability as P = (1/Z) e^{(µN − E)/(kT)}, using the grand partition function Z rather than the grand potential; the equations in this article may be restated in terms of the grand partition function by simple mathematical manipulations. The grand canonical ensemble is the ensemble that describes the possible states of an isolated system that is in thermal and chemical equilibrium with a reservoir. The grand canonical ensemble applies to systems of any size, small or large; the condition that the system is isolated is necessary in order to ensure it has well-defined thermodynamic quantities and evolution. In practice, however, it is desirable to apply the grand canonical ensemble to describe systems that are in direct contact with the reservoir, since it is that contact that ensures the equilibrium. Alternatively, theoretical approaches can be used to model the influence of the connection. Another case in which the grand canonical ensemble appears is when considering a system that is large and thermodynamic.
The reason for this is that, for large systems, the various thermodynamic ensembles become equivalent in some aspects to the grand canonical ensemble. For small systems, of course, the different ensembles are no longer equivalent, even in their means; as a result, the canonical ensemble can be highly inaccurate when applied to small systems of fixed particle number. Fluctuations in particle number and energy are related by ⟨N₁E⟩ − ⟨N₁⟩⟨E⟩ = kT ∂⟨E⟩/∂μ₁. The usefulness of the grand canonical ensemble is illustrated in the examples below. In each case the grand potential is calculated on the basis of the relationship Ω = −kT ln 𝒵, which is required for the microstates' probabilities to add up to 1
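The exact derivation of Fermi–Dirac statistics mentioned above can be checked directly in a minimal sketch (my own illustration, not from the source): a single fermionic orbital has only two microstates, empty or singly occupied, and applying P = (1/𝒵)e^((Nµ−E)/kT) to them reproduces the Fermi–Dirac occupancy.

```python
import math

def grand_canonical_probs(states, mu, kT):
    """P = (1/Z) * exp((N*mu - E)/kT) over a list of (N, E) microstates."""
    weights = [math.exp((N * mu - E) / kT) for N, E in states]
    Z = sum(weights)  # grand partition function
    return [w / Z for w in weights]

# One fermionic orbital of energy eps: either empty (N=0, E=0) or
# singly occupied (N=1, E=eps); Pauli exclusion forbids N > 1.
eps, mu, kT = 0.3, 0.1, 1.0
p_empty, p_full = grand_canonical_probs([(0, 0.0), (1, eps)], mu, kT)
mean_N = p_full  # <N> = 0 * p_empty + 1 * p_full
fermi_dirac = 1.0 / (math.exp((eps - mu) / kT) + 1.0)
# mean_N coincides with the Fermi-Dirac occupancy 1/(e^((eps-mu)/kT) + 1).
```

The same two lines with occupations N = 0, 1, 2, … (bosons) would instead produce the Bose–Einstein occupancy.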
13.
Einstein solid
–
In the Einstein model, each atom oscillates independently. The original theory proposed by Einstein in 1907 has great historical relevance. The heat capacity of solids as predicted by the empirical Dulong–Petit law was required by classical mechanics: the specific heat of solids should be independent of temperature. But experiments at low temperatures showed that the heat capacity changes with temperature; as the temperature goes up, the specific heat rises until it approaches the Dulong–Petit prediction at high temperature. By employing Planck's quantization assumption, Einstein's theory accounted for the observed experimental trend for the first time. Together with the photoelectric effect, this became one of the most important pieces of evidence for the need of quantization. Einstein used the levels of the quantum mechanical oscillator many years before the advent of modern quantum mechanics. In Einstein's model, the specific heat approaches zero exponentially fast at low temperatures, because all the oscillations have one common frequency. The correct behavior is found by quantizing the normal modes of the solid in the way that Einstein suggested; then the frequencies of the waves are not all the same, and the specific heat goes to zero as a T³ power law. This modification is called the Debye model, which appeared in 1912. When Walther Nernst learned of Einstein's 1907 paper on specific heat, he was so excited that he traveled all the way from Berlin to Zürich to meet with him. The heat capacity of an object at constant volume V is defined through the internal energy U as C_V = (∂U/∂T)_V. To find the entropy, consider a solid made of N atoms, each of which has 3 degrees of freedom; so there are 3N quantum harmonic oscillators (SHOs). Next, we must compute the multiplicity of the system, that is, the number of ways to distribute q quanta of energy among N′ SHOs. The number of arrangements of n objects is n!. 
So the number of arrangements of q pebbles and N′ − 1 partitions is (q + N′ − 1)!. However, if partition #3 and partition #5 trade places, no one would notice; the same argument goes for quanta. To obtain the number of possible distinguishable arrangements one has to divide the total number of arrangements by the number of indistinguishable arrangements. There are q! identical quanta arrangements and (N′ − 1)! identical partition arrangements; therefore, the multiplicity of the system is given by Ω = (q + N′ − 1)! / (q! (N′ − 1)!), which, as mentioned before, is the number of ways to deposit q quanta of energy into N′ oscillators. The entropy of the system has the form S/k = ln Ω
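The counting argument above can be sketched in a few lines. This is a small illustration of the formula Ω = (q + N′ − 1)! / (q!(N′ − 1)!), using the binomial coefficient directly:

```python
import math

def multiplicity(q, N):
    """Omega = (q + N - 1)! / (q! * (N - 1)!): the number of ways to
    distribute q indistinguishable quanta among N oscillators."""
    return math.comb(q + N - 1, q)

def entropy_over_k(q, N):
    """S / k = ln(Omega) for the Einstein solid."""
    return math.log(multiplicity(q, N))

# Small check: 2 quanta among 3 oscillators -> C(4, 2) = 6 microstates.
```

Working with ln Ω rather than Ω itself is what keeps the numbers manageable for macroscopic N and q.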
14.
Ising model
–
The Ising model, named after the physicist Ernst Ising, is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic spins that can be in one of two states (+1 or −1). The spins are arranged in a graph, usually a lattice, allowing each spin to interact with its neighbors. The model allows the identification of phase transitions as a simplified model of reality. The two-dimensional square-lattice Ising model is one of the simplest statistical models to show a phase transition. The Ising model was invented by the physicist Wilhelm Lenz, who gave it as a problem to his student Ernst Ising. The one-dimensional Ising model has no phase transition and was solved by Ising himself in his 1924 thesis; the two-dimensional square-lattice Ising model is much harder, and was given an analytic description much later, by Lars Onsager. It is usually solved by a transfer-matrix method, although there exist different approaches. In dimensions greater than four, the phase transition of the Ising model is described by mean field theory. Consider a set of lattice sites Λ, each with a set of adjacent sites, forming a d-dimensional lattice. For each lattice site k ∈ Λ there is a discrete variable σk such that σk ∈ {+1, −1}, representing the site's spin. A spin configuration, σ = (σk)k∈Λ, is an assignment of a spin value to each lattice site. For any two adjacent sites i, j ∈ Λ there is an interaction Jij; also, a site j ∈ Λ has an external magnetic field hj interacting with it. The notation ⟨ij⟩ indicates that sites i and j are nearest neighbors, and the magnetic moment is given by µ. For a function f of the spins, one denotes by ⟨f⟩β = ∑σ f(σ) Pβ(σ) the expectation of f; the configuration probabilities Pβ(σ) represent the probability of being in a state with configuration σ in equilibrium. 
The minus sign on each term of the Hamiltonian function H is conventional. In a ferromagnetic Ising model, spins desire to be aligned: the configurations in which adjacent spins are of the same sign have higher probability. In an antiferromagnetic model, adjacent spins tend to have opposite signs. The sign convention of H also explains how a spin site j interacts with the external field: namely, the spin site wants to line up with the external field. Ising models are often examined without an external field interacting with the lattice, that is, hj = 0 for all j in the lattice Λ. Using this simplification, the Hamiltonian becomes H(σ) = − ∑⟨ij⟩ Jij σi σj. When the external field is zero everywhere, h = 0, the Ising model is symmetric under switching the value of the spin in all the lattice sites. Another common simplification is to assume that all nearest neighbors ⟨ij⟩ have the same interaction strength
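Under both simplifications above (h = 0 and uniform J), the model is commonly sampled by Metropolis Monte Carlo rather than solved analytically. The sketch below is my own minimal illustration, not a method from the source: the lattice size, temperature, and step count are arbitrary choices.

```python
import math
import random

def ising_metropolis(L=10, J=1.0, kT=2.0, steps=20000, seed=0):
    """Metropolis sampling of the 2D square-lattice Ising model with
    H = -J * sum over nearest-neighbour pairs of s_i * s_j
    (zero external field, periodic boundary conditions)."""
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        # Sum of the four nearest-neighbour spins (with wrap-around).
        nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])
        dE = 2.0 * J * s[i][j] * nn  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / kT):
            s[i][j] = -s[i][j]
    return s

spins = ising_metropolis()
L = len(spins)
m = abs(sum(sum(row) for row in spins)) / (L * L)  # |magnetisation| per spin
```

Running this well below the critical temperature (kT ≈ 2.27 J for the square lattice) drives m toward 1; well above it, m stays near 0.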
15.
Thermodynamic potential
–
A thermodynamic potential is a scalar quantity used to represent the thermodynamic state of a system. The concept of thermodynamic potentials was introduced by Pierre Duhem in 1886; Josiah Willard Gibbs in his papers used the term fundamental functions. One main thermodynamic potential that has a physical interpretation is the internal energy U. It is the energy of configuration of a given system of conservative forces. Expressions for all other thermodynamic potentials are derivable via Legendre transforms from an expression for U. In thermodynamics, certain forces, such as gravity, are typically disregarded when formulating expressions for potentials. Five common thermodynamic potentials are the internal energy, the Helmholtz free energy, the enthalpy, the Gibbs free energy, and the grand potential, where T = temperature, S = entropy, and p = pressure. The Helmholtz free energy is often denoted by the symbol F, but the use of A is preferred by IUPAC, ISO and IEC. Ni is the number of particles of type i in the system; for the sake of completeness, the set of all Ni are also included as natural variables, although they are sometimes ignored. These five common potentials are all energy potentials, but there are also entropy potentials. The thermodynamic square can be used as a tool to recall and derive some of the potentials. Gibbs energy is the capacity to do non-mechanical work; enthalpy is the capacity to do non-mechanical work plus the capacity to release heat; Helmholtz free energy is the capacity to do mechanical plus non-mechanical work. Thermodynamic potentials are very useful when calculating the equilibrium results of a chemical reaction, or when measuring the properties of materials in a chemical reaction. Just as in mechanics, the system will tend towards lower values of potential and, at equilibrium under these constraints, will attain a minimum. The thermodynamic potentials can also be used to estimate the total amount of energy available from a thermodynamic system under the appropriate constraint. In particular: when the entropy and external parameters of a system are held constant, the internal energy decreases and reaches a minimum value at equilibrium. 
This follows from the first and second laws of thermodynamics and is called the principle of minimum energy. The following three statements are directly derivable from this principle: when the temperature and external parameters of a system are held constant, the Helmholtz free energy decreases and reaches a minimum value at equilibrium; when the pressure and external parameters of a system are held constant, the enthalpy decreases and reaches a minimum value at equilibrium; and when the temperature, pressure and external parameters of a system are held constant, the Gibbs free energy decreases and reaches a minimum value at equilibrium. The variables that are held constant in this process are termed the natural variables of that potential
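The Legendre-transform relationships described above can be summarized compactly in the document's own symbols (T, S, p, V, µi, Ni):

```latex
\begin{aligned}
U &= U(S, V, \{N_i\})            && \text{internal energy} \\
F &= U - TS                      && \text{Helmholtz free energy} \\
H &= U + pV                      && \text{enthalpy} \\
G &= U + pV - TS                 && \text{Gibbs free energy} \\
\Omega &= U - TS - \textstyle\sum_i \mu_i N_i && \text{grand potential}
\end{aligned}
```

Each transform trades an extensive natural variable for its conjugate intensive one: S for T, V for p, and Ni for µi.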
16.
Internal energy
–
It keeps account of the gains and losses of energy of the system that are due to changes in its internal state. The internal energy of a system can be changed by transfers of matter or heat or by doing work. When matter transfer is prevented by impermeable containing walls, the system is said to be closed; then the first law of thermodynamics states that the increase in internal energy is equal to the total heat added plus the work done on the system by its surroundings. If the containing walls pass neither matter nor energy, the system is said to be isolated. The first law of thermodynamics may be regarded as establishing the existence of the internal energy. The internal energy is one of the two cardinal state functions of the state variables of a thermodynamic system. The internal energy of a given state of a system cannot be directly measured; it is determined through some convenient chain of thermodynamic operations and processes by which the given state can be prepared. Such a chain, or path, can be described by certain extensive state variables of the system, namely its entropy S and its volume V. The internal energy, U, is a function of those; sometimes, to that list are appended other extensive state variables, for example the electric dipole moment. Customarily, thermodynamic descriptions include only items relevant to the processes under study, and thermodynamics is chiefly concerned only with changes in the internal energy, not with its absolute value. The internal energy is a state function of a system, because its value depends only on the current state of the system. It is the one and only cardinal thermodynamic potential: through it, by use of Legendre transforms, the other thermodynamic potentials are mathematically constructed. These are functions of variable lists in which some extensive variables are replaced by their conjugate intensive variables. Legendre transformation is necessary because mere substitutive replacement of extensive variables by intensive variables does not lead to thermodynamic potentials. 
Mere substitution leads to a less informative formula, an equation of state. Though it is a macroscopic quantity, internal energy can be explained in microscopic terms by two theoretical virtual components. One is the kinetic energy due to the microscopic motion of the system's particles. The other is the potential energy associated with the microscopic forces, including the chemical bonds. If thermonuclear reactions are specified as a topic of concern, then the static rest-mass energy of the constituents of matter is also counted. There is no simple relation between these quantities of microscopic energy and the quantities of energy gained or lost by the system in work, heat, or matter transfer. The SI unit of energy is the joule. Sometimes it is convenient to use a corresponding density called specific internal energy, which is internal energy per unit of mass of the system in question
17.
Enthalpy
–
Enthalpy /ˈɛnθəlpi/ is a measurement of energy in a thermodynamic system. It is the thermodynamic quantity equivalent to the heat content of a system: it is equal to the internal energy of the system plus the product of pressure and volume. Enthalpy is defined as a state function that depends only on the prevailing equilibrium state, identified by the system's internal energy, pressure, and volume. The unit of measurement for enthalpy in the International System of Units is the joule, but other historical, conventional units are still in use, such as the British thermal unit and the calorie. At constant pressure, the enthalpy change equals the energy transferred from the environment through heating or work other than expansion work. The total enthalpy, H, of a system cannot be measured directly; the same situation exists in classical mechanics, where only a change or difference in energy carries physical meaning. Enthalpy itself is a thermodynamic potential, so in order to measure the enthalpy of a system, we must refer to a defined reference point; therefore what we measure is the change in enthalpy, ΔH. ΔH is positive in endothermic reactions, and negative in heat-releasing exothermic processes. For processes under constant pressure, ΔH is equal to the change in the internal energy of the system plus the work that the system has done on its surroundings. This means that the change in enthalpy under such conditions is the heat absorbed by the material through a chemical reaction or by external heat transfer. Enthalpies for chemical substances at constant pressure assume standard state, most commonly 1 bar pressure. Standard state does not, strictly speaking, specify a temperature, but expressions for enthalpy generally reference the standard heat of formation at 25 °C. Enthalpy of ideal gases and of incompressible solids and liquids does not depend on pressure, unlike entropy. Real materials at common temperatures and pressures usually closely approximate this behavior, which greatly simplifies enthalpy calculation and use in practical designs and analyses. 
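The statement that the constant-pressure enthalpy change equals the heat transferred follows in one line from the definition H = U + pV, assuming expansion work is the only work (so dU = δQ − p dV):

```latex
dH = dU + p\,dV + V\,dp
   = (\delta Q - p\,dV) + p\,dV + V\,dp
   = \delta Q + V\,dp
\quad\Longrightarrow\quad
dH = \delta Q \ \text{ at constant } p .
```

This is why enthalpy, rather than internal energy, is the natural bookkeeping quantity for heats of reaction measured at constant pressure.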
The word enthalpy stems from the Ancient Greek verb enthalpein, which means "to warm in"; it combines the Classical Greek prefix ἐν- en-, meaning "to put into", and the verb θάλπειν thalpein, meaning "to heat". The word enthalpy is often incorrectly attributed to Benoît Paul Émile Clapeyron; this misconception was popularized by the 1927 publication of The Mollier Steam Tables. However, neither the concept, the word, nor the symbol for enthalpy existed until well after Clapeyron's death. The earliest writings to contain the concept of enthalpy did not appear until 1875, in the work of Josiah Willard Gibbs; however, Gibbs did not use the word enthalpy in his writings. The actual word first appears in the scientific literature in a 1909 publication by J. P. Dalton
18.
Helmholtz free energy
–
In thermodynamics, the Helmholtz free energy is a thermodynamic potential that measures the "useful" work obtainable from a closed thermodynamic system at a constant temperature and volume. The negative of the difference in the Helmholtz energy is equal to the maximum amount of work that the system can perform in a thermodynamic process in which temperature is held constant. If the volume is not held constant, part of this work will be performed as boundary work. The Helmholtz energy is commonly used for systems held at constant volume, since in this case no expansion work is performed on the environment. For a system at constant temperature and volume, the Helmholtz energy is minimized at equilibrium. The Helmholtz free energy was developed by Hermann von Helmholtz, a German physician and physicist. The IUPAC recommends the letter A as well as the use of the name Helmholtz energy; in physics, the letter F can also be used to denote the Helmholtz energy, which is then referred to as the Helmholtz function or Helmholtz free energy. For example, in explosives research, Helmholtz free energy is often used, since explosive reactions by their nature induce pressure changes. It is also used to define fundamental equations of state of pure substances. The Helmholtz energy is the Legendre transform of the internal energy, U. Suppose the system is kept at fixed volume and is in contact with a heat bath at some constant temperature. Conservation of energy implies ΔU_bath + ΔU + W = 0. The volume of the system is kept constant; this means that the volume of the heat bath does not change either, and we can conclude that the heat bath does not perform any work. This implies that the amount of heat that flows into the bath is given by Q_bath = ΔU_bath = −(ΔU + W). This result seems to contradict the equation dA = −S dT − P dV, as keeping T and V constant seems to imply dA = 0; to allow for spontaneous processes at constant T and V, one needs to enlarge the thermodynamical state space of the system. 
In case of a chemical reaction, one must allow for changes in the numbers Nj of particles of each type j. The differential of the free energy then generalizes to dA = −S dT − P dV + ∑j μj dNj, where the μj are the chemical potentials; this equation is then valid for both reversible and non-reversible changes. In case of a change at constant T and V without electrical work, the free energy can only decrease: dA ≤ 0. A system kept at constant volume, temperature, and particle number is described by the canonical ensemble; the fact that the system does not have a unique energy means that the various thermodynamical quantities must be defined as expectation values
19.
Gibbs free energy
–
Just as in mechanics, where the decrease in potential energy is defined as the maximum useful work that can be performed, different potentials have different meanings. The Gibbs energy is the potential that is minimized when a system reaches chemical equilibrium at constant pressure and temperature: its derivative with respect to the reaction coordinate of the system vanishes at the equilibrium point. As such, a reduction in G is a necessary condition for the spontaneity of processes at constant pressure and temperature. The Gibbs free energy, originally called available energy, was developed in the 1870s by the American scientist Josiah Willard Gibbs. The initial state of the body, according to Gibbs, is supposed to be such that the body can be made to pass from it to states of dissipated energy by reversible processes. He developed the concept fully in his 1876 magnum opus On the Equilibrium of Heterogeneous Substances. According to the second law of thermodynamics, for systems reacting at STP, there is a general natural tendency to achieve a minimum of the Gibbs free energy. A quantitative measure of the favorability of a given reaction at constant temperature and pressure is the change ΔG in Gibbs free energy that is caused by the reaction. As a necessary condition for the reaction to occur at constant temperature and pressure, ΔG must be smaller than the non-PV work; ΔG equals the maximum amount of non-PV work that can be performed as a result of the chemical reaction in the case of a reversible process. The equation can also be seen from the perspective of the system taken together with its surroundings. First, assume that the reaction at constant temperature and pressure is the only one that is occurring. Then the entropy released or absorbed by the system equals the entropy that the environment must absorb or release, respectively. The reaction will only be allowed if the total entropy change of the universe is zero or positive. 
This is reflected in a negative ΔG, and the reaction is called exergonic. If reactions are coupled, then an otherwise endergonic chemical reaction can be made to happen. In traditional use, the term "free" was included in "Gibbs free energy" to mean available in the form of useful work; the characterization becomes more precise if we add the qualification that it is the energy available for non-volume work. However, a number of books and journal articles do not include the attachment "free". This is the result of a 1988 IUPAC meeting to set unified terminologies for the scientific community; this standard, however, has not yet been universally adopted. Further, in this description, as used by Gibbs, ε refers to the energy of the body and η refers to the entropy of the body
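The exergonic/endergonic distinction above reduces to the sign of ΔG = ΔH − TΔS at constant temperature and pressure. A tiny illustration with invented numbers (the values below are hypothetical, chosen only to show the sign change with temperature):

```python
def delta_G(delta_H, delta_S, T):
    """Gibbs free-energy change at constant T and p: dG = dH - T * dS."""
    return delta_H - T * delta_S

# Hypothetical reaction with dH = -50 kJ/mol and dS = -0.1 kJ/(mol K):
dG_cold = delta_G(-50.0, -0.1, 300.0)  # negative: exergonic at 300 K
dG_hot = delta_G(-50.0, -0.1, 700.0)   # positive: endergonic at 700 K
```

When both ΔH and ΔS are negative, as here, the reaction is favorable only below the crossover temperature T = ΔH/ΔS (500 K in this made-up case).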
20.
Grand potential
–
The grand potential is a quantity used in statistical mechanics, especially for irreversible processes in open systems. The grand potential is the characteristic state function for the grand canonical ensemble. For homogeneous systems, one obtains Ω = −PV; this can be seen by considering that dΦG is zero if the volume is fixed. In the homogeneous case the pressure must be constant with respect to changes in volume, (∂⟨P⟩/∂V)μ,T = 0, and all extensive quantities, such as the particle number, must grow linearly with volume, e.g. (∂⟨N⟩/∂V)μ,T = ⟨N⟩/V. In this case we have simply ΦG = −⟨P⟩V. The value of ΦG can be understood as the work we can extract from the system by shrinking it down to nothing; the fact that ΦG = −⟨P⟩V is negative implies that it takes energy to perform this extraction. Such homogeneous scaling does not exist in many systems, however: although particles and energy may be exchanged with a reservoir, the volume of the system cannot always be rescaled, and generally, in small systems or in systems with long-range interactions, ΦG ≠ −⟨P⟩V. The factor e^βμ appearing in the grand partition function is the fugacity (absolute activity)
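The role of Ω as the characteristic function of the grand canonical ensemble can be checked in a minimal sketch (my own illustration, not from the source): for a single fermionic orbital, Ω = −kT ln 𝒵, and differentiating −∂Ω/∂μ recovers the mean particle number.

```python
import math

def grand_potential(eps, mu, kT):
    """Omega = -kT * ln(Z_gr) for one fermionic orbital of energy eps,
    with Z_gr = 1 + exp((mu - eps) / kT)  (occupation N = 0 or 1)."""
    return -kT * math.log(1.0 + math.exp((mu - eps) / kT))

eps, mu, kT = 0.5, 0.2, 1.0
Omega = grand_potential(eps, mu, kT)
# <N> = -dOmega/dmu, here estimated with a central finite difference:
h = 1e-6
mean_N = -(grand_potential(eps, mu + h, kT)
           - grand_potential(eps, mu - h, kT)) / (2.0 * h)
fermi = 1.0 / (math.exp((eps - mu) / kT) + 1.0)
# mean_N should match the Fermi-Dirac occupancy.
```

Since 𝒵 > 1 always holds here, Ω is negative, consistent with the interpretation given above.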
21.
James Clerk Maxwell
–
James Clerk Maxwell FRS FRSE was a Scottish scientist in the field of mathematical physics. Maxwell's equations for electromagnetism have been called the "second great unification in physics" after the first one realised by Isaac Newton. With the publication of A Dynamical Theory of the Electromagnetic Field in 1865, Maxwell proposed that light is an undulation in the same medium that is the cause of electric and magnetic phenomena. The unification of light and electrical phenomena led to the prediction of the existence of radio waves. Maxwell also helped develop the Maxwell–Boltzmann distribution, a statistical means of describing aspects of the kinetic theory of gases. He is also known for presenting the first durable colour photograph in 1861. His discoveries helped usher in the era of modern physics, laying the foundation for such fields as special relativity and quantum mechanics. Many physicists regard Maxwell as the 19th-century scientist having the greatest influence on 20th-century physics, and his contributions to the science are considered by many to be of the same magnitude as those of Isaac Newton and Albert Einstein. In the millennium poll, a survey of the 100 most prominent physicists, Maxwell was voted the third greatest physicist of all time, behind only Newton and Einstein. On the centenary of Maxwell's birthday, Einstein described Maxwell's work as the "most profound and the most fruitful that physics has experienced since the time of Newton". James Clerk Maxwell was born on 13 June 1831 at 14 India Street, Edinburgh, to John Clerk Maxwell of Middlebie, an advocate. His father was a man of comfortable means of the Clerk family of Penicuik, holders of the baronetcy of Clerk of Penicuik; his father's brother was the 6th Baronet. James was a first cousin of the artist Jemima Blackburn and a cousin of the civil engineer William Dyce Cay. 
They were close friends, and Cay acted as his best man when Maxwell married. Maxwell's parents met and married when they were well into their thirties; his mother was nearly 40 when he was born. They had had one earlier child, a daughter named Elizabeth. When Maxwell was young his family moved to Glenlair House, which his parents had built on the 1,500-acre Middlebie estate. All indications suggest that Maxwell had maintained an unquenchable curiosity from an early age. By the age of three, everything that moved, shone, or made a noise drew the question "what's the go o' that?", and "show me how it doos" was never out of his mouth. He also investigated the hidden course of streams and bell-wires, and the way the water gets from the pond through the wall. Recognising the potential of the boy, Maxwell's mother Frances took responsibility for James's early education. At eight he could recite long passages of Milton and the whole of the 119th Psalm; indeed, his knowledge of scripture was already very detailed, and he could give chapter and verse for almost any quotation from the Psalms. His mother was ill with abdominal cancer and, after an unsuccessful operation, died when Maxwell was eight years old
22.
Ludwig Boltzmann
–
Boltzmann was born in Vienna, the capital of the Austrian Empire. His father, Ludwig Georg Boltzmann, was a revenue official; his grandfather, who had moved to Vienna from Berlin, was a clock manufacturer; and Boltzmann's mother, Katharina Pauernfeind, was originally from Salzburg. He received his early education from a private tutor at the home of his parents. Boltzmann attended high school in Linz, Upper Austria; when Boltzmann was 15, his father died. Boltzmann studied physics at the University of Vienna, starting in 1863. Among his teachers were Josef Loschmidt, Joseph Stefan, Andreas von Ettingshausen and Jozef Petzval. Boltzmann received his PhD degree in 1866, working under the supervision of Stefan, and in 1867 he became a Privatdozent. After obtaining his degree, Boltzmann worked two more years as Stefan's assistant; it was Stefan who introduced Boltzmann to Maxwell's work. In 1869, at age 25, thanks to a letter of recommendation written by Stefan, he was appointed full Professor of Mathematical Physics at the University of Graz in the province of Styria. In 1869 he spent several months in Heidelberg working with Robert Bunsen and Leo Königsberger, and then in 1871 he was with Gustav Kirchhoff. In 1873 Boltzmann joined the University of Vienna as Professor of Mathematics, and there he stayed until 1876. In 1872, long before women were admitted to Austrian universities, he met Henriette von Aigentler. She was refused permission to audit lectures unofficially; Boltzmann advised her to appeal, which she did, successfully. On July 17, 1876 Ludwig Boltzmann married Henriette; they had three daughters and two sons. Boltzmann went back to Graz to take up the chair of Experimental Physics; among his students in Graz were Svante Arrhenius and Walther Nernst. He spent 14 happy years in Graz, and it was there that he developed his statistical concept of nature. 
Boltzmann was appointed to the Chair of Theoretical Physics at the University of Munich in Bavaria in 1890. In 1893, Boltzmann succeeded his teacher Joseph Stefan as Professor of Theoretical Physics at the University of Vienna. Boltzmann spent a great deal of effort in his final years defending his theories. He did not get along with some of his colleagues in Vienna, particularly Ernst Mach, who became professor of philosophy and history of sciences in 1895. That same year Georg Helm and Wilhelm Ostwald presented their position on energetics at a meeting in Lübeck. They saw energy, and not matter, as the chief component of the universe. Boltzmann's position carried the day among the other physicists who supported his theories in the debate
23.
Josiah Willard Gibbs
–
Josiah Willard Gibbs was an American scientist who made important theoretical contributions to physics, chemistry, and mathematics. His work on the applications of thermodynamics was instrumental in transforming physical chemistry into a rigorous deductive science. Gibbs also worked on the application of Maxwell's equations to problems in physical optics, and as a mathematician, he invented modern vector calculus. In 1863, Yale awarded Gibbs the first American doctorate in engineering. After a three-year sojourn in Europe, Gibbs spent the rest of his career at Yale. Commentators and biographers have remarked on the contrast between Gibbs's quiet, solitary life in turn-of-the-century New England and the great international impact of his ideas. Though his work was almost entirely theoretical, the practical value of Gibbs's contributions became evident with the development of industrial chemistry during the first half of the 20th century. Gibbs was born in New Haven and belonged to an old Yankee family that had produced distinguished American clergymen and academics since the 17th century. He was the fourth of five children and the only son of Josiah Willard Gibbs and his wife Mary Anna. On his father's side, he was descended from Samuel Willard; on his mother's side, one of his ancestors was the Rev. Jonathan Dickinson, the first president of the College of New Jersey. The elder Gibbs was generally known to his family and colleagues as Josiah, while the son was called Willard. Josiah Gibbs was a linguist and theologian who served as professor of sacred literature at Yale Divinity School from 1824 until his death in 1861. Willard Gibbs was educated at the Hopkins School and entered Yale College in 1854, aged 15. At Yale, Gibbs received prizes for excellence in mathematics and Latin, and he remained at Yale as a graduate student at the Sheffield Scientific School. 
At age 19, soon after his graduation from college, Gibbs was inducted into the Connecticut Academy of Arts and Sciences. Relatively few documents from the period survive, and it is difficult to reconstruct the details of Gibbs's early career with precision. After the death of his father in 1861, Gibbs inherited enough money to make him financially independent. Recurrent pulmonary trouble ailed the young Gibbs, and his physicians were concerned that he might be susceptible to tuberculosis; he also suffered from astigmatism, whose treatment was then still largely unfamiliar to oculists, so that Gibbs had to diagnose himself and grind his own lenses. He was not conscripted, and he remained at Yale for the duration of the war. In 1861, Yale had become the first US university to offer a Ph.D. degree, and Gibbs's was only the fifth Ph.D. granted in the US in any subject. After graduation, Gibbs was appointed as a tutor at the College for a term of three years; during the first two years he taught Latin, and during the third, natural philosophy. After his term as tutor ended, Gibbs traveled to Europe with his sisters. Moving to Berlin, Gibbs attended the lectures taught by the mathematicians Karl Weierstrass and Leopold Kronecker, as well as by the chemist Heinrich Gustav Magnus. In August 1867, Gibbs's sister Julia was married in Berlin to Addison Van Name; the newly married couple returned to New Haven, leaving Gibbs and his sister Anna in Germany
24.
Albert Einstein
–
Albert Einstein was a German-born theoretical physicist. He developed the theory of relativity, one of the two pillars of modern physics. Einstein's work is also known for its influence on the philosophy of science. Einstein is best known in popular culture for his mass–energy equivalence formula E = mc². Near the beginning of his career, Einstein thought that Newtonian mechanics was no longer enough to reconcile the laws of classical mechanics with the laws of the electromagnetic field. This led him to develop his special theory of relativity during his time at the Swiss Patent Office in Bern. Shortly before, in 1901, he had acquired Swiss citizenship, which he kept for his whole life. He continued to deal with problems of statistical mechanics and quantum theory, which led to his explanations of particle theory and the motion of molecules. He also investigated the thermal properties of light, which laid the foundation of the photon theory of light. In 1917, Einstein applied the general theory of relativity to model the large-scale structure of the universe. He was visiting the United States when Adolf Hitler came to power in 1933 and, being Jewish, did not go back to Germany; he settled in the United States, becoming an American citizen in 1940. On the eve of World War II, he endorsed a letter to President Franklin D. Roosevelt alerting him to the potential development of extremely powerful bombs of a new type; this eventually led to what would become the Manhattan Project. Einstein supported defending the Allied forces, but generally denounced the idea of using the newly discovered nuclear fission as a weapon. Later, with the British philosopher Bertrand Russell, Einstein signed the Russell–Einstein Manifesto. Einstein was affiliated with the Institute for Advanced Study in Princeton, New Jersey, until his death in 1955. Einstein published more than 300 scientific papers along with over 150 non-scientific works. On 5 December 2014, universities and archives announced the release of Einstein's papers, comprising more than 30,000 unique documents. 
Einstein's intellectual achievements and originality have made the word "Einstein" synonymous with "genius". Albert Einstein was born in Ulm, in the Kingdom of Württemberg in the German Empire, on 14 March 1879. His parents were Hermann Einstein, a salesman and engineer, and Pauline Koch. The Einsteins were non-observant Ashkenazi Jews, and Albert attended a Catholic elementary school in Munich from the age of 5 for three years. At the age of 8, he was transferred to the Luitpold Gymnasium. When his father's business failed in 1894, the loss forced the sale of the Munich factory. In search of business, the Einstein family moved to Italy, first to Milan. When the family moved on to Pavia, Einstein stayed in Munich to finish his studies at the Luitpold Gymnasium. His father intended for him to pursue electrical engineering, but Einstein clashed with the authorities and resented the school's regimen. He later wrote that the spirit of learning and creative thought was lost in strict rote learning. At the end of December 1894, he travelled to Italy to join his family in Pavia, convincing the school to let him go by using a doctor's note. During his time in Italy he wrote an essay with the title "On the Investigation of the State of the Ether in a Magnetic Field"
25.
Paul Ehrenfest
–
Paul Ehrenfest was born and grew up in Vienna in a Jewish family from Loštice in Moravia. His parents, Sigmund Ehrenfest and Johanna Jellinek, ran a grocery store. Although the family was not overly religious, Paul studied Hebrew and the history of the Jewish people, and later he always emphasized his Jewish roots. Ehrenfest excelled in grade school but did not do well at the Akademisches Gymnasium, his best subject being mathematics. After transferring to the Franz Josef Gymnasium, his marks improved, and in 1899 he passed the final exams. He majored in chemistry at the Institute of Technology, but took courses at the University of Vienna. There he met his future wife Tatyana Afanasyeva, a young mathematician born in Kiev, then capital of the Kiev Governorate, Russian Empire, and educated in St Petersburg. In the spring of 1903 he met H. A. Lorentz during a short trip to Leiden. In the meantime he prepared a dissertation, Die Bewegung starrer Körper in Flüssigkeiten und die Mechanik von Hertz, and obtained his Ph.D. degree on June 23, 1904 in Vienna, where he stayed from 1904 to 1905. On December 21, 1904 he married the Russian mathematician Tatyana Alexeyevna Afanasyeva, and they had two daughters and two sons: Tatyana, who also became a mathematician; Galinka, who became an author and illustrator of children's books; Paul Jr., who also became a physicist; and Vassily. The Ehrenfests returned to Göttingen in September 1906; they would not see Boltzmann again, for on September 6 Boltzmann took his own life in Duino near Trieste. Ehrenfest published an obituary in which Boltzmann's accomplishments are described. Felix Klein, dean of the Göttingen mathematicians and chief editor of the Enzyklopädie der mathematischen Wissenschaften, had counted on Boltzmann for a review about statistical mechanics; now he asked Ehrenfest to take on this task. Together with his wife, Ehrenfest worked on it for several years. In 1907 the couple moved to St Petersburg.
Ehrenfest found good friends there, in particular A. F. Joffe. However, as an Austrian citizen and of Jewish origin, he had no prospect of a permanent position. Early in 1912 Ehrenfest set out on a tour of German-speaking universities in the hope of finding a position. He visited Berlin, where he saw Max Planck; Leipzig, where he saw his old friend Herglotz; Munich, where he met Arnold Sommerfeld; then Zürich and Vienna. While in Prague he met Albert Einstein for the first time. Einstein recommended Ehrenfest to succeed him in his position in Prague, but that did not work out, due to the fact that Ehrenfest had declared himself to be an atheist. Sommerfeld offered him a position in Munich, but Ehrenfest received a better offer, for at the same time there was an unexpected turn of events: H. A. Lorentz resigned his position as professor at the University of Leiden. In October 1912 Ehrenfest arrived in Leiden, and on December 4 he gave his inaugural lecture, Zur Krise der Lichtaether-Hypothese. He remained in Leiden for the rest of his career. In order to stimulate interaction and exchange among physics students he organized a discussion group and a fraternity called De Leidsche Flesch.
26.
John von Neumann
–
John von Neumann was a Hungarian-American mathematician, physicist, inventor, computer scientist, and polymath. He made major contributions to a number of fields, including mathematics, physics, economics, computing, and statistics. He published over 150 papers in his life: about 60 in pure mathematics, 20 in physics, and 60 in applied mathematics. His last work, an unfinished manuscript written while in the hospital, was later published in book form as The Computer and the Brain. His analysis of the structure of self-replication preceded the discovery of the structure of DNA. In a short list of facts about his life submitted to the National Academy of Sciences, he wrote, "Also, my work on various forms of operator theory, Berlin 1930 and Princeton 1935–1939; on the ergodic theorem, Princeton, 1931–1932." During World War II he worked on the Manhattan Project, developing the mathematical models behind the explosive lenses used in the implosion-type nuclear weapon. After the war, he served on the General Advisory Committee of the United States Atomic Energy Commission. Along with theoretical physicist Edward Teller, mathematician Stanislaw Ulam, and others, he worked out key steps in the nuclear physics involved in thermonuclear reactions and the hydrogen bomb. Von Neumann was born Neumann János Lajos to a wealthy, acculturated, and non-observant Jewish family. His place of birth was Budapest in the Kingdom of Hungary, which was then part of the Austro-Hungarian Empire. He was the eldest of three children; he had two younger brothers, Michael, born in 1907, and Nicholas, who was born in 1911. His father, Neumann Miksa, was a banker who held a doctorate in law; he had moved to Budapest from Pécs at the end of the 1880s. Miksa's father and grandfather were both born in Ond, Zemplén County, northern Hungary. John's mother was Kann Margit; her parents were Jakab Kann and Katalin Meisels. Three generations of the Kann family lived in apartments above the Kann-Heller offices in Budapest.
In 1913, his father was elevated to the nobility for his service to the Austro-Hungarian Empire by Emperor Franz Joseph; the Neumann family thus acquired the hereditary appellation Margittai, meaning "of Marghita". The family had no connection with the town; the appellation was chosen in reference to Margaret. Neumann János became Margittai Neumann János, which he later changed to the German Johann von Neumann. Von Neumann was a child prodigy: as a 6-year-old, he could multiply and divide two 8-digit numbers in his head, and could converse in Ancient Greek. When he once caught his mother staring aimlessly, the 6-year-old von Neumann asked her, "What are you calculating?" Formal schooling did not start in Hungary until the age of ten. Instead, governesses taught von Neumann, his brothers and his cousins. Max believed that knowledge of languages other than Hungarian was essential, so the children were tutored in English, French, German and Italian. One of the rooms in the apartment was converted into a library and reading room, with bookshelves from ceiling to floor, holding a private library Max had purchased. Von Neumann entered the Lutheran Fasori Evangelikus Gimnázium in 1911. This was one of the best schools in Budapest, part of a brilliant education system designed for the elite.
27.
Richard C. Tolman
–
Richard Chace Tolman was an American mathematical physicist and physical chemist who was an authority on statistical mechanics. He also made important contributions to cosmology in the years soon after Einstein's discovery of general relativity. He was a professor of chemistry and mathematical physics at the California Institute of Technology. Tolman was born in West Newton, Massachusetts and studied engineering at the Massachusetts Institute of Technology, receiving his bachelor's degree in 1903. He married Ruth Sherman Tolman in 1924. In 1912, he conceived of the concept of relativistic mass, writing that the expression m₀/√(1 − v²/c²) "is best suited for the mass of a moving body". In a 1916 experiment, Tolman demonstrated that electricity consists of electrons flowing through a metallic conductor. A by-product of this experiment was a measured value of the mass of the electron. Overall, however, he was known as a theorist. Tolman was elected a Fellow of the American Academy of Arts and Sciences; the same year, he joined the faculty of the California Institute of Technology, where he became professor of physical chemistry and mathematical physics and later dean of the graduate school. One of Tolman's early students at Caltech was the theoretical chemist Linus Pauling. In 1927, Tolman published a text on statistical mechanics whose background was the old quantum theory of Max Planck, Niels Bohr and Arnold Sommerfeld. In 1938, he published a new detailed work that covered the application of statistical mechanics to classical and quantum systems. It was the standard work on the subject for many years. In the later years of his career, Tolman became increasingly interested in the application of thermodynamics to relativistic systems. In this monograph, Tolman was the first person to document and explain how a closed universe could have zero total energy: all mass-energy is positive and all gravitational energy is negative, and they cancel each other out.
During World War II, Tolman served as an advisor to General Leslie Groves on the Manhattan Project. At the time of his death in Pasadena, he was an advisor to Bernard Baruch. Each year, the southern California section of the American Chemical Society honors Tolman by awarding its Tolman Medal in recognition of outstanding contributions to chemistry. Tolman's brother was the behavioral psychologist Edward Chace Tolman.
28.
Enrico Fermi
–
Enrico Fermi was an Italian physicist who created the world's first nuclear reactor, the Chicago Pile-1. He has been called the "architect of the nuclear age" and the "architect of the atomic bomb". He was one of the few physicists to excel both theoretically and experimentally, and he made significant contributions to the development of quantum theory, nuclear and particle physics, and statistical mechanics. Fermi's first major contribution was to statistical mechanics; today, particles that obey the exclusion principle are called fermions. Later, Pauli postulated the existence of an invisible particle emitted along with an electron during beta decay. Fermi took up this idea, developing a model that incorporated the postulated particle. His theory, later referred to as Fermi's interaction and still later as the weak interaction, described one of the four fundamental forces of nature. Fermi left Italy in 1938 to escape the new Italian racial laws that affected his Jewish wife, Laura Capon. He emigrated to the United States, where he worked on the Manhattan Project during World War II. Fermi led the team that designed and built Chicago Pile-1, which went critical on 2 December 1942. He was on hand when the X-10 Graphite Reactor at Oak Ridge, Tennessee, went critical in 1943. At Los Alamos he headed F Division, part of which worked on Edward Teller's thermonuclear "Super" bomb. He was present at the Trinity test on 16 July 1945. After the war, Fermi served under J. Robert Oppenheimer on the General Advisory Committee, which advised the Atomic Energy Commission on nuclear matters and policy. Following the detonation of the first Soviet fission bomb in August 1949, he opposed the development of a hydrogen bomb. He was among the scientists who testified on Oppenheimer's behalf at the 1954 hearing that resulted in the denial of the latter's security clearance.
Enrico Fermi was born in Rome, Italy, on 29 September 1901. He was the third child of Alberto Fermi, a division head in the Ministry of Railways, and Ida de Gattis, an elementary school teacher. His only sister, Maria, was two years older than he was, and his brother Giulio was a year older. After the two boys were sent to a rural community to be wet nursed, Enrico rejoined his family in Rome when he was two and a half. Although he was baptised a Roman Catholic in accordance with his grandparents' wishes, his family was not particularly religious. As a young boy he shared the same interests as his brother Giulio, building electric motors and playing with electrical and mechanical toys. Giulio died during the administration of an anesthetic for an operation on a throat abscess in 1915. One of Fermi's first sources for his study of physics was a book he found at the local market at Campo de' Fiori in Rome. Published in 1840, the 900-page Elementorum physicae mathematicae was written in Latin by the Jesuit Father Andrea Caraffa; it covered mathematics, classical mechanics, astronomy, optics, and acoustics, insofar as these disciplines were understood when the book was written. Fermi's interest in physics was encouraged by his father's colleague Adolfo Amidei, who gave him several books on physics and mathematics. Fermi graduated from school in July 1918 and, at Amidei's urging, applied to the Scuola Normale Superiore in Pisa.
29.
Satyendra Nath Bose
–
Satyendra Nath Bose, FRS, was an Indian physicist from Bengal specialising in theoretical physics. He is best known for his work on quantum mechanics in the early 1920s, providing the foundation for Bose–Einstein statistics. A Fellow of the Royal Society, he was awarded the Padma Vibhushan, India's second highest civilian award. The class of particles that obey Bose–Einstein statistics, bosons, was named after Bose by Paul Dirac. A self-taught scholar and a polymath, he had a range of interests in varied fields including physics, mathematics, chemistry, biology, mineralogy, philosophy, arts, and literature. He served on many research and development committees in sovereign India. Bose was born in Calcutta, the eldest of seven children. He was the only son, with six sisters after him. His ancestral home was in the village of Bara Jagulia, in the district of Nadia. His schooling began at the age of five, near his home. When his family moved to Goabagan, he was admitted to the New Indian School, and in the final year of school he was admitted to the Hindu School. He passed his examination in 1909 and stood fifth in the order of merit. Naman Sharma and Meghnad Saha, from Dacca, joined the college two years later. Prasanta Chandra Mahalanobis and Sisir Kumar Mitra were a few years senior to Bose. Bose chose mixed mathematics for his BSc and passed the examinations standing first in 1913, and again stood first in the MSc mixed mathematics exam in 1915. It is said that his marks in the MSc examination created a new record in the annals of the University of Calcutta. After completing his MSc, Bose joined the University of Calcutta as a research scholar in 1916 and started his studies in the theory of relativity. It was an exciting era in the history of scientific progress: quantum theory had just appeared on the horizon and important results had started pouring in. His father, Surendranath Bose, worked in the Engineering Department of the East Indian Railway Company.
In 1914, at age 20, Satyendra Nath Bose married Ushabati Ghosh, and they had nine children, two of whom died in early childhood. When he died in 1974, he left behind his wife. A polyglot, Bose was well versed in several languages such as Bengali, English, French, German and Sanskrit, as well as the poetry of Lord Tennyson, Rabindranath Tagore and Kalidasa. He could play the esraj, an instrument similar to a violin. He was actively involved in running night schools that came to be known as the Working Men's Institute. He came in contact with teachers such as Jagadish Chandra Bose, Prafulla Chandra Ray and Naman Sharma, who provided inspiration to aim high in life.
30.
Solid-state physics
–
Solid-state physics is the study of rigid matter, or solids, through methods such as quantum mechanics, crystallography, electromagnetism, and metallurgy. It is the largest branch of condensed matter physics. Solid-state physics studies how the large-scale properties of solid materials result from their atomic-scale properties; thus, solid-state physics forms a theoretical basis of materials science. It also has direct applications, for example in the technology of transistors and semiconductors. Solid materials are formed from densely packed atoms, which interact intensely; these interactions produce the mechanical, thermal, electrical, magnetic and optical properties of solids. Depending on the material involved and the conditions in which it was formed, the atoms may be arranged in a regular, geometric pattern or irregularly. The bulk of solid-state physics, as a general theory, is focused on crystals. Primarily, this is because the periodicity of atoms in a crystal, its defining characteristic, facilitates mathematical modeling. Likewise, crystalline materials often have electrical, magnetic, optical, or mechanical properties that can be exploited for engineering purposes. The forces between the atoms in a crystal can take a variety of forms. For example, a crystal of sodium chloride is made up of ionic sodium and chlorine, held together with ionic bonds. In others, the atoms share electrons and form covalent bonds; in metals, electrons are shared amongst the whole crystal in metallic bonding. Finally, the noble gases do not undergo any of these types of bonding; in solid form, the noble gases are held together with van der Waals forces resulting from the polarisation of the electronic charge cloud on each atom. The differences between the types of solid result from the differences between their bonding. The Division of Solid State Physics (DSSP) of the American Physical Society catered to industrial physicists, and solid-state physics became associated with the technological applications made possible by research on solids.
By the early 1960s, the DSSP was the largest division of the American Physical Society. Large communities of solid state physicists also emerged in Europe after World War II, in particular in England, Germany, and the Soviet Union. In the United States and Europe, solid state became a prominent field through its investigations into semiconductors, superconductivity, and nuclear magnetic resonance. Today, solid-state physics is broadly considered to be the subfield of condensed matter physics that focuses on the properties of solids with regular crystal lattices. Many properties of materials are affected by their crystal structure, and this structure can be investigated using a range of crystallographic techniques, including X-ray crystallography, neutron diffraction and electron diffraction. The sizes of the individual crystals in a crystalline solid material vary depending on the material involved. Real crystals feature defects or irregularities in the ideal arrangements. Properties of materials such as electrical conduction and heat capacity are investigated by solid-state physics. An early model of electrical conduction was the Drude model, which applied kinetic theory to the electrons in a solid.
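The Drude picture mentioned above predicts a DC conductivity σ = ne²τ/m, where n is the carrier density, τ the mean time between collisions, e the electron charge and m its mass. A minimal sketch in Python; the carrier density and relaxation time used for copper are typical textbook values assumed here for illustration, not figures from this article:

```python
# Drude model of electrical conduction: sigma = n * e^2 * tau / m
E_CHARGE = 1.602e-19    # elementary charge, C
M_ELECTRON = 9.109e-31  # electron rest mass, kg

def drude_conductivity(n, tau):
    """DC conductivity (S/m) for carrier density n (m^-3) and relaxation time tau (s)."""
    return n * E_CHARGE**2 * tau / M_ELECTRON

# Copper, with assumed textbook values n ~ 8.5e28 m^-3 and tau ~ 2.5e-14 s:
sigma_cu = drude_conductivity(8.5e28, 2.5e-14)
print(f"sigma(Cu) ~ {sigma_cu:.2e} S/m")  # on the order of 1e7-1e8 S/m
```

The result lands near the measured conductivity of copper (about 6 × 10⁷ S/m), which is part of why the Drude model was an influential early success despite its purely classical treatment of the electrons.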
31.
Phonon
–
In physics, a phonon is a collective excitation in a periodic, elastic arrangement of atoms or molecules in condensed matter, such as solids and some liquids. Often designated a quasiparticle, it represents an excited state in the quantum mechanical quantization of the modes of vibration of elastic structures of interacting particles. Phonons play a major role in many of the physical properties of condensed matter, such as thermal conductivity. The study of phonons is an important part of condensed matter physics. The concept of phonons was introduced in 1932 by the Soviet physicist Igor Tamm. The name phonon comes from the Greek word φωνή, which translates to "sound" or "voice", because long-wavelength phonons give rise to sound; shorter-wavelength, higher-frequency phonons are responsible for the majority of the thermal capacity of solids. A phonon is a quantum description of an elementary vibrational motion in which a lattice of atoms or molecules uniformly oscillates at a single frequency. In classical mechanics this designates a normal mode of vibration. Normal modes are important because any arbitrary lattice vibration can be considered to be a superposition of these elementary vibration modes. While normal modes are wave-like phenomena in classical mechanics, phonons have particle-like properties too. The equations in this section do not use the axioms of quantum mechanics but instead use relations for which there exists a direct correspondence in classical mechanics. For example, consider a rigid, regular, crystalline lattice composed of N particles. These particles may be atoms or molecules; N is a large number, say of the order of 10²³, on the order of Avogadro's number, for a typical sample of a solid. Since the lattice is rigid, the atoms must be exerting forces on one another to keep each atom near its equilibrium position. These forces may be van der Waals forces, covalent bonds, or electrostatic attractions; magnetic and gravitational forces are generally negligible.
The forces between each pair of atoms may be characterized by a potential energy function V that depends on the distance of separation of the atoms. The potential energy of the lattice is the sum of all pairwise potential energies, Σ_{i≠j} V(rᵢ − rⱼ), where rᵢ is the position of the ith atom. It is difficult to solve this many-body problem explicitly in either classical or quantum mechanics, so in order to simplify the task, two important approximations are usually imposed. First, the sum is only performed over neighboring atoms; although the electric forces in real solids extend to infinity, this approximation is still valid because the fields produced by distant atoms are effectively screened. Secondly, the potentials V are treated as harmonic potentials. This is permissible as long as the atoms remain close to their equilibrium positions. Formally, this is accomplished by Taylor expanding V about its equilibrium value to quadratic order, giving V proportional to x², the square of the displacement.
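Under these two approximations (nearest neighbours only, harmonic springs), the one-dimensional monatomic chain becomes exactly solvable, and its normal-mode frequencies follow the standard dispersion relation ω(k) = 2√(K/m)·|sin(ka/2)|. A short sketch; the spring constant K, mass m and lattice spacing a are arbitrary illustrative values, not parameters from the text:

```python
from math import sqrt, sin, pi

def chain_dispersion(k, K=1.0, m=1.0, a=1.0):
    """Normal-mode frequency omega(k) of a 1D monatomic chain with
    nearest-neighbour harmonic springs: omega = 2*sqrt(K/m)*|sin(k*a/2)|."""
    return 2.0 * sqrt(K / m) * abs(sin(k * a / 2.0))

# Long-wavelength modes (k -> 0) have omega -> 0: these are the sound waves.
print(chain_dispersion(1e-6))
# At the Brillouin-zone boundary k = pi/a the frequency is maximal, 2*sqrt(K/m).
print(chain_dispersion(pi))
```

The linear small-k behaviour (ω ≈ √(K/m)·a·k) is exactly the statement in the text that long-wavelength phonons give rise to sound.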
32.
Heat capacity
–
Heat capacity or thermal capacity is a measurable physical quantity equal to the ratio of the heat added to an object to the resulting temperature change. The unit of heat capacity is the joule per kelvin (J/K). Specific heat is the amount of heat needed to raise the temperature of one kilogram of mass by 1 kelvin. Heat capacity is an extensive property of matter, meaning it is proportional to the size of the system. The molar heat capacity is the heat capacity per unit amount of a pure substance. In some engineering contexts, the volumetric heat capacity is used. Other contributions can come from magnetic and electronic degrees of freedom in solids. For quantum mechanical reasons, at any given temperature, some of these degrees of freedom may be unavailable, or only partially available, to store thermal energy; in such cases, the heat capacity is a fraction of the maximum. As the temperature approaches absolute zero, the heat capacity of a system approaches zero. Quantum theory can be used to predict the heat capacity of simple systems. In an earlier theory, common in the early modern period, heat was thought to be a measurement of an invisible fluid. Bodies were held to be capable of containing a certain amount of this fluid, hence the term heat capacity. Heat is no longer considered a fluid, but rather a transfer of disordered energy; nevertheless, at least in English, the term heat capacity survives. In some other languages, the term thermal capacity is preferred. In the International System of Units, heat capacity has the unit joules per kelvin. If the temperature change is sufficiently small, the heat capacity may be assumed to be constant: C = Q/ΔT. Heat capacity is an extensive property, meaning it depends on the extent or size of the physical system studied. A sample containing twice the amount of substance as another sample requires the transfer of twice the amount of heat to achieve the same change in temperature. For many purposes it is convenient to report heat capacity as an intensive property.
In practice, this is most often an expression of the property in relation to a unit of mass, in science and engineering, International standards now recommend that specific heat capacity always refer to division by mass
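The defining relation C = Q/ΔT, combined with division by mass, gives the familiar Q = m·c·ΔT. A minimal sketch; the specific heat of water, about 4186 J/(kg·K), is an assumed textbook value used here only as an example:

```python
def heat_required(mass_kg, specific_heat, delta_T):
    """Heat Q (J) needed to change the temperature of `mass_kg` kilograms of a
    substance by `delta_T` kelvin, assuming the heat capacity is constant over
    the interval: Q = m * c * dT (the small-interval form of C = Q / dT)."""
    return mass_kg * specific_heat * delta_T

C_WATER = 4186.0  # J/(kg*K), approximate specific heat of liquid water

# Heating 2 kg of water by 10 K:
print(heat_required(2.0, C_WATER, 10.0))  # 83720.0 J
```

The doubling behaviour stated above (twice the substance needs twice the heat for the same ΔT) is exactly the linearity of Q in the mass argument.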
33.
Solid
–
Solid is one of the four fundamental states of matter. It is characterized by structural rigidity and resistance to changes of shape or volume. Unlike a liquid, a solid object does not flow to take on the shape of its container, nor does it expand to fill the entire volume available to it as a gas does. The atoms in a solid are tightly bound to each other, either in a regular geometric lattice or irregularly. The branch of physics that deals with solids is called solid-state physics. Materials science is concerned with the physical and chemical properties of solids. Solid-state chemistry is concerned with the synthesis of novel materials, as well as the science of identification. The atoms, molecules or ions which make up solids may be arranged in an orderly repeating pattern, or irregularly. Materials whose constituents are arranged in a regular pattern are known as crystals. In some cases, the regular ordering can continue unbroken over a large scale, for example diamonds, where each diamond is a single crystal. Almost all common metals, and many ceramics, are polycrystalline. In other materials, there is no long-range order in the position of the atoms; these solids are known as amorphous solids. Examples include polystyrene and glass. Whether a solid is crystalline or amorphous depends on the material involved, and the conditions in which it was formed. Solids which are formed by slow cooling will tend to be crystalline, while solids which are frozen rapidly are more likely to be amorphous. Likewise, the specific crystal structure adopted by a crystalline solid depends on the material involved and on how it was formed. While many common objects, such as an ice cube or a coin, are chemically identical throughout, many other common materials comprise a number of different substances packed together. For example, a typical rock is an aggregate of several different minerals and mineraloids, with no specific chemical composition. Wood is an organic material consisting primarily of cellulose fibers embedded in a matrix of organic lignin.
In materials science, composites of more than one constituent material can be designed to have desired properties. The forces between the atoms in a solid can take a variety of forms. For example, a crystal of sodium chloride is made up of ionic sodium and chlorine, held together by ionic bonds. In diamond or silicon, the atoms share electrons and form covalent bonds. In metals, electrons are shared in metallic bonding. Some solids, particularly most organic compounds, are held together with van der Waals forces resulting from the polarization of the electronic charge cloud on each molecule. The dissimilarities between the types of solid result from the differences between their bonding. Metals typically are strong, dense, and good conductors of both electricity and heat.
34.
Oscillation
–
Oscillation is the repetitive variation, typically in time, of some measure about a central value or between two or more different states. The term vibration is used to describe mechanical oscillation. Familiar examples of oscillation include a swinging pendulum and alternating current power. The simplest mechanical oscillating system is a weight attached to a linear spring, subject only to weight and tension. Such a system may be approximated on an air table or ice surface. The system is in an equilibrium state when the spring is static. If the system is displaced from the equilibrium, there is a net restoring force on the mass, tending to bring it back to equilibrium. However, in moving the mass back to the equilibrium position, it acquires momentum which keeps it moving beyond that position, so the mass oscillates about the equilibrium. If a constant force such as gravity is added to the system, the point of equilibrium is shifted. The time taken for one oscillation to occur is often referred to as the oscillatory period. All real-world oscillator systems are thermodynamically irreversible; this means there are dissipative processes, such as friction or electrical resistance, which continually convert some of the energy stored in the oscillator into heat in the environment. Thus, oscillations tend to decay with time unless there is some net source of energy into the system. The simplest description of this decay process can be illustrated by the oscillation decay of the harmonic oscillator. In addition, an oscillating system may be subject to some external force; in this case the oscillation is said to be driven. Some systems can be excited by energy transfer from the environment. This transfer typically occurs where systems are embedded in some fluid flow, as in the flutter of an aircraft wing: at sufficiently large displacements, the stiffness of the wing dominates to provide the restoring force that enables an oscillation. The harmonic oscillator and the systems it models have a single degree of freedom. More complicated systems have more degrees of freedom, for example two masses and three springs.
In such cases, the behavior of each variable influences that of the others, and this leads to a coupling of the oscillations of the individual degrees of freedom. For example, two pendulum clocks mounted on a common wall will tend to synchronise. This phenomenon was first observed by Christiaan Huygens in 1665. More special cases are the coupled oscillators, in which energy alternates between two forms of oscillation.
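The decay of oscillations by dissipation, described above for the harmonic oscillator, can be illustrated with a small numerical integration of x″ = −ω²x − 2γx′ (restoring force plus a velocity-proportional damping term). A sketch using semi-implicit Euler; the frequency, damping coefficient and step count are arbitrary illustrative choices:

```python
def simulate_damped_oscillator(x0=1.0, v0=0.0, omega=2.0, gamma=0.1,
                               dt=1e-3, steps=20000):
    """Integrate x'' = -omega^2 * x - 2*gamma * x' with semi-implicit Euler.
    Returns the list of positions along the trajectory."""
    x, v = x0, v0
    xs = []
    for _ in range(steps):
        v += (-omega**2 * x - 2.0 * gamma * v) * dt  # update velocity first
        x += v * dt                                  # then position (keeps the scheme stable)
        xs.append(x)
    return xs

xs = simulate_damped_oscillator()
# Dissipation at work: the late-time swings are much smaller than the
# initial displacement x0 = 1, decaying roughly as exp(-gamma * t).
print(max(abs(x) for x in xs[-5000:]))
```

Setting gamma to zero in the sketch recovers an (approximately) constant-amplitude oscillation, matching the statement that decay requires a dissipative process.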
35.
Crystal structure
–
In crystallography, crystal structure is a description of the ordered arrangement of atoms, ions or molecules in a crystalline material. Ordered structures occur from the intrinsic nature of the constituent particles, which form symmetric patterns that repeat along the principal directions of three-dimensional space in matter. The smallest group of particles in the material that constitutes the repeating pattern is the unit cell of the structure. The unit cell completely defines the symmetry and structure of the crystal lattice. The repeating patterns are said to be located at the points of the Bravais lattice. The lengths of the principal axes, or edges, of the unit cell and the angles between them are the lattice constants, also called lattice parameters. The symmetry properties of the crystal are described by the concept of space groups; all possible symmetric arrangements of particles in three-dimensional space may be described by the 230 space groups. The crystal structure and symmetry play a role in determining many physical properties, such as cleavage and electronic band structure. The crystal structure of a material can be described in terms of its unit cell, a box containing one or more atoms arranged in three dimensions. The unit cells, stacked in three-dimensional space, describe the arrangement of atoms of the crystal. Commonly, atomic positions are represented in terms of fractional coordinates. The atom positions within the unit cell can be calculated through application of symmetry operations to the asymmetric unit. The asymmetric unit refers to the smallest possible occupation of space within the unit cell; this does not, however, imply that the entirety of the asymmetric unit must lie within the boundaries of the unit cell. Symmetric transformations of atom positions are calculated from the space group of the crystal structure. Vectors and planes in a crystal lattice are described by the three-value Miller index notation.
It uses the indices ℓ, m, and n as directional parameters. By definition, the syntax (ℓmn) denotes a plane that intercepts the three points a₁/ℓ, a₂/m, and a₃/n, or some multiple thereof. That is, the Miller indices are proportional to the inverses of the intercepts of the plane with the unit cell. If one or more of the indices is zero, it means that the plane does not intersect that axis. A plane containing a coordinate axis is translated so that it no longer contains that axis before its Miller indices are determined. The Miller indices for a plane are integers with no common factors. Negative indices are indicated with horizontal bars. In an orthogonal coordinate system for a cubic cell, the Miller indices of a plane are the Cartesian components of a vector normal to the plane. Likewise, the lattice planes are geometric planes linking nodes.
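The recipe above (take the reciprocals of the intercepts, then clear fractions down to integers with no common factor) can be sketched directly. This is a hypothetical helper, assuming intercepts are given in units of the lattice vectors and that `None` marks an axis the plane never intersects:

```python
from fractions import Fraction
from math import lcm, gcd

def miller_indices(intercepts):
    """Miller indices (l, m, n) of a plane from its axis intercepts, given in
    units of the lattice constants. None means the plane is parallel to that
    axis, so its reciprocal intercept is 0."""
    recips = [Fraction(0) if x is None else 1 / Fraction(x) for x in intercepts]
    common = lcm(*(r.denominator for r in recips))   # clear the fractions
    ints = [int(r * common) for r in recips]
    # Reduce to integers with no common factor, as the convention requires.
    g = gcd(*(abs(i) for i in ints if i)) if any(ints) else 1
    return tuple(i // g for i in ints)

# A plane cutting the axes at 1 and 2, parallel to the third axis:
print(miller_indices([1, 2, None]))  # (2, 1, 0)
```

Negative intercepts come out as negative integers, corresponding to the bar notation used in print (the multi-argument `lcm`/`gcd` calls require Python 3.9+).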
36.
Quantum harmonic oscillator
–
The quantum harmonic oscillator is the quantum-mechanical analog of the classical harmonic oscillator. Because an arbitrary potential can usually be approximated as a harmonic potential in the vicinity of a stable equilibrium point, it is one of the most important model systems in quantum mechanics. Furthermore, it is one of the few quantum-mechanical systems for which an exact, analytical solution is known. The first term in the Hamiltonian represents the possible kinetic energy states of the particle, and the second term represents its corresponding possible potential energy states. One may solve the differential equation representing this eigenvalue problem in the position basis, for the wave function ⟨x|ψ⟩ = ψ(x). It turns out that there is a family of solutions; in this basis, they amount to ψₙ(x) = (1/√(2ⁿ n!)) · (mω/(πℏ))^(1/4) · e^(−mωx²/(2ℏ)) · Hₙ(√(mω/ℏ)·x), with n = 0, 1, 2, …. The functions Hₙ are the physicists' Hermite polynomials, Hₙ(z) = (−1)ⁿ e^(z²) (dⁿ/dzⁿ) e^(−z²). The corresponding energy levels are Eₙ = ℏω(n + 1/2). This energy spectrum is noteworthy for three reasons. First, the energies are quantized, meaning that only discrete energy values are possible; this is a general feature of quantum-mechanical systems when a particle is confined. Second, these energy levels are equally spaced, unlike in the Bohr model of the atom. Third, the lowest achievable energy is not equal to the minimum of the potential well, but ℏω/2 above it. This zero-point energy further has important implications in quantum field theory. As the energy increases, the probability density becomes concentrated at the classical turning points, where the state's energy coincides with the potential energy. This is consistent with the classical harmonic oscillator, in which the particle spends most of its time near the turning points. The correspondence principle is thus satisfied. The ladder operator method, developed by Paul Dirac, allows extraction of the energy eigenvalues without directly solving the differential equation.
It is generalizable to more complicated problems, notably in quantum field theory. The operator a is not Hermitian, since it and its adjoint a† are not equal. The energy eigenstates |n⟩, when operated on by these ladder operators, yield states proportional to |n−1⟩ and |n+1⟩, and it is then evident that a†, in essence, appends a single quantum of energy to the oscillator, while a removes a quantum. For this reason, they are referred to as creation and annihilation operators. From the relations above, we can define a number operator N = a†a, satisfying N|n⟩ = n|n⟩. The commutation property yields N a†|n⟩ = (a†N + [N, a†])|n⟩ = (a†N + a†)|n⟩ = (n + 1) a†|n⟩, and similarly N a|n⟩ = (n − 1) a|n⟩. This means that a acts on |n⟩ to produce, up to a multiplicative constant, |n−1⟩, and a† acts on |n⟩ to produce |n+1⟩.
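The ladder-operator relations can be verified concretely in a truncated Fock basis. The matrix representation below is a standard construction (not the article's own derivation); the dimension `d` and variable names are illustrative:

```python
import numpy as np

d = 6  # truncated Fock space spanned by |0>, ..., |d-1>
# Annihilation operator: a|n> = sqrt(n)|n-1>, i.e. sqrt(n) on the superdiagonal.
a = np.diag(np.sqrt(np.arange(1, d)), k=1)
adag = a.T.conj()          # creation operator a† (a is not Hermitian: a != a†)
N = adag @ a               # number operator

# N is diagonal with eigenvalues 0, 1, 2, ...
print(np.diag(N))          # [0. 1. 2. 3. 4. 5.]

# Commutator [N, a†] = a†, so N(a†|n>) = (n+1)(a†|n>)
comm = N @ adag - adag @ N
print(np.allclose(comm, adag))  # True

# a† raises the state: a†|2> = sqrt(3)|3>
ket2 = np.zeros(d); ket2[2] = 1.0
raised = adag @ ket2       # nonzero only in the |3> slot, amplitude sqrt(3)
print(np.argmax(np.abs(raised)))  # 3
```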
37.
Planck's law
–
Planck's law describes the spectral density of electromagnetic radiation emitted by a black body in thermal equilibrium at a given temperature T. The law is named after Max Planck, who proposed it in 1900; it is a pioneering result of modern physics and quantum theory. The spectral radiance of a body, Bν, describes the amount of energy it gives off as radiation at different frequencies. It is measured in terms of the power emitted per unit area of the body, per unit solid angle, per unit frequency. The spectral radiance can also be measured per unit wavelength λ instead of per unit frequency. In this case, it is given by B_λ(λ, T) = (2hc²/λ⁵) · 1/(e^(hc/λk_BT) − 1). The law may also be expressed in other terms, such as the number of photons emitted at a certain wavelength. The SI units of Bν are W·sr⁻¹·m⁻²·Hz⁻¹, while those of Bλ are W·sr⁻¹·m⁻³. In the limit of low frequencies, Planck's law tends to the Rayleigh–Jeans law, while in the limit of high frequencies it tends to the Wien approximation. Every physical body spontaneously and continuously emits electromagnetic radiation; near thermodynamic equilibrium, the emitted radiation is closely described by Planck's law. Because of its dependence on temperature, Planck radiation is said to be thermal radiation: the higher the temperature of a body, the more radiation it emits at every wavelength. Planck radiation has a maximum intensity at a wavelength that depends on the temperature. For example, at room temperature a body emits thermal radiation that is mostly infrared; at higher temperatures the amount of infrared radiation increases and can be felt as heat, and the body glows visibly red. At even higher temperatures, a body is bright yellow or blue-white and emits significant amounts of short-wavelength radiation, including ultraviolet. The surface of the sun emits large amounts of infrared and ultraviolet radiation; its emission is peaked in the visible spectrum.
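The wavelength form B_λ can be evaluated directly, and its peak compared against Wien's displacement law λ_max = b/T with b ≈ 2.898×10⁻³ m·K. This is a minimal sketch using SI constants; the function name `planck_lambda` and the solar-like temperature are illustrative:

```python
import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck_lambda(lam, T):
    """Spectral radiance B_lambda(lam, T) in W·sr^-1·m^-3."""
    # expm1 computes e^x - 1 accurately for small exponents.
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

T = 5778.0                                   # approximate solar surface temperature, K
lam = np.linspace(100e-9, 3000e-9, 20000)    # 100 nm to 3000 nm grid
peak = lam[np.argmax(planck_lambda(lam, T))]
print(f"numerical peak: {peak * 1e9:.1f} nm")          # in the visible, near 500 nm
print(f"Wien prediction: {2.898e-3 / T * 1e9:.1f} nm")
```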
Planck radiation is the greatest amount of radiation that any body at thermal equilibrium can emit from its surface. The passage of radiation across an interface between media can be characterized by the emissivity of the interface, usually denoted by the symbol ε. It is in general dependent on chemical composition and physical structure, on temperature, on the wavelength, and on the angle of passage. The emissivity of a natural interface is always between ε = 0 and 1. A body that interfaces with another medium which both has ε = 1 and absorbs all the radiation incident upon it is said to be a black body. Consider a cavity with opaque walls held at a fixed temperature: at equilibrium, the radiation inside this enclosure follows Planck's law. Just as the Maxwell–Boltzmann distribution is the unique maximum-entropy energy distribution for a gas of material particles at thermal equilibrium, so is Planck's distribution for a gas of photons. If the photon gas is not Planckian, the second law of thermodynamics guarantees that interactions will cause the photon energy distribution to change toward the Planck distribution.
38.
Electromagnetic radiation
–
In physics, electromagnetic radiation (EMR) refers to the waves of the electromagnetic field, propagating through space and carrying electromagnetic radiant energy. It includes radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. Classically, electromagnetic radiation consists of electromagnetic waves, which are synchronized oscillations of electric and magnetic fields that propagate at the speed of light through a vacuum. The oscillations of the two fields are perpendicular to each other and perpendicular to the direction of energy and wave propagation. The wavefront of electromagnetic waves emitted from a point source is a sphere. The position of an electromagnetic wave within the electromagnetic spectrum can be characterized by either its frequency of oscillation or its wavelength. Electromagnetic waves are produced whenever charged particles are accelerated, and these waves can subsequently interact with other charged particles. EM waves carry energy, momentum, and angular momentum away from their source particle. Quanta of EM waves are called photons, whose rest mass is zero, but whose energy, or equivalent total mass, is not zero, so they are still affected by gravity. Thus, EMR is sometimes referred to as the far field; in this language, the near field refers to EM fields near the charges and currents that directly produced them, specifically electromagnetic induction and electrostatic induction phenomena. In the quantum theory of electromagnetism, EMR consists of photons; quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation. The energy of a photon is quantized and is greater for photons of higher frequency. This relationship is given by Planck's equation E = hν, where E is the energy per photon and ν is the frequency of the photon. A single gamma-ray photon, for example, might carry ~100,000 times the energy of a single photon of visible light.
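The Planck relation E = hν (equivalently E = hc/λ) and the "~100,000 times" comparison can be checked with a few lines. The wavelengths below are illustrative choices: 550 nm for green visible light and 5.5 pm for a hard gamma ray, picked so the energy ratio comes out to exactly 10⁵:

```python
h = 6.62607015e-34    # Planck constant, J·s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

def photon_energy_eV(wavelength_m):
    """Photon energy E = h*nu = h*c/lambda, converted to electronvolts."""
    return h * c / wavelength_m / eV

visible = photon_energy_eV(550e-9)   # green light: about 2.25 eV
gamma = photon_energy_eV(5.5e-12)    # hard gamma ray: about 225 keV
print(f"visible: {visible:.2f} eV, gamma: {gamma:.3e} eV, "
      f"ratio: {gamma / visible:.0f}")  # ratio: 100000
```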
The effects of EMR upon chemical compounds and biological organisms depend both upon the radiation's power and its frequency. EMR of visible or lower frequencies is called non-ionizing radiation, because its photons do not individually have enough energy to ionize atoms or molecules; the effects of these radiations on chemical systems and living tissue are caused primarily by heating effects from the combined energy transfer of many photons. In contrast, high-frequency ultraviolet, X-rays, and gamma rays are called ionizing radiation, since individual photons of such high frequency have enough energy to ionize molecules or break chemical bonds. These radiations have the ability to cause chemical reactions and damage living cells beyond that resulting from simple heating. Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave. Maxwell's equations were confirmed by Heinrich Hertz through experiments with radio waves. According to Maxwell's equations, a spatially varying electric field is always associated with a magnetic field that changes over time. Likewise, a spatially varying magnetic field is associated with specific changes over time in the electric field. In an electromagnetic wave, the changes in the electric field are always accompanied by a wave in the magnetic field in one direction, and vice versa.
39.
Bose gas
–
An ideal Bose gas is a quantum-mechanical phase of matter, analogous to a classical ideal gas. It is composed of bosons, which have an integer value of spin and obey Bose–Einstein statistics. At sufficiently low temperature, a macroscopic fraction of the particles accumulates in the lowest-energy single-particle state; this condensate is known as a Bose–Einstein condensate. The thermodynamics of an ideal Bose gas is best calculated using the grand partition function, and all thermodynamic quantities may be derived from it. All partial derivatives are taken with respect to one of three variables while the other two are held constant. It is more convenient to deal with the grand potential, defined as Ω = −k_BT ln 𝒵, where 𝒵 is the grand partition function. For example, for a massive Bose gas in a box, α = 3/2 and the critical energy E_c is a function of the volume only; for a massive Bose gas in a harmonic trap we have α = 3, with the critical energy set by the trap frequency ω, where V(r) = mω²r²/2 is the harmonic potential. In the continuum approximation the solution is Ω ≈ −k_BT (k_BT/E_c)^α Li_{α+1}(z), where Li_s is the polylogarithm and z is the fugacity. The problem with this continuum approximation for a Bose gas is that the ground state has been effectively ignored. This inaccuracy becomes serious when dealing with the Bose–Einstein condensate and will be dealt with in the next section: the Thomas–Fermi approximation has set the degeneracy of the ground state to zero, which is wrong. There is then no ground state to accept the condensate, and so the equation breaks down. Figure 1 shows the results of the solution to this equation for α = 3/2. The solid black line is the fraction of excited states 1 − N0/N for N = 10,000 and the dotted black line is the solution for N = 1,000. The blue lines are the fraction of condensed particles N0/N, and the red lines plot values of the negative of the chemical potential μ. As the number of particles increases, the condensed and excited fractions tend towards a discontinuity at the critical temperature. From these expansions we can find the behavior of the gas near T = 0; in particular, we are interested in the limit as N approaches infinity, which can be easily determined from these expansions.
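In the thermodynamic limit, the condensed fraction below the critical temperature follows N0/N = 1 − (T/T_c)^α, which can be tabulated directly. This is a sketch of that limiting formula only; the finite-N curves shown in Figure 1 require solving the full equation instead, and `condensate_fraction` is a hypothetical helper name:

```python
def condensate_fraction(T, Tc, alpha):
    """Thermodynamic-limit condensate fraction N0/N = 1 - (T/Tc)^alpha below Tc;
    zero above Tc. alpha = 3/2 for a gas in a box, alpha = 3 for a 3D harmonic trap."""
    return max(0.0, 1.0 - (T / Tc) ** alpha)

# The trapped gas (alpha = 3) condenses more sharply below Tc than the box (alpha = 3/2):
for t in (0.25, 0.5, 0.75, 1.0):
    print(f"T/Tc = {t:.2f}: box {condensate_fraction(t, 1, 1.5):.3f}, "
          f"trap {condensate_fraction(t, 1, 3):.3f}")
```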
The following table lists various thermodynamic quantities calculated in the limits of low temperature and high temperature. An equal sign indicates an exact result, while an approximation symbol indicates that only the first few terms of a series in τ^α are shown. It is seen that all quantities approach the values for a classical ideal gas in the limit of large temperature. The above values can be used to calculate other thermodynamic quantities. In one dimension, bosons with delta-function interaction behave as fermions: they obey the Pauli exclusion principle. In one dimension, a Bose gas with delta-function interaction can be solved exactly by the Bethe ansatz; the bulk free energy and thermodynamic potentials were calculated by Chen-Ning Yang.