Pressure is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure is the pressure relative to the ambient pressure. Various units are used to express pressure; some of these derive from a unit of force divided by a unit of area. Pressure may also be expressed in terms of standard atmospheric pressure, and manometric units such as the centimetre of water, the millimetre of mercury, and the inch of mercury express pressures in terms of the height of a column of a particular fluid in a manometer. The symbol for pressure is p or P. The IUPAC recommendation for pressure is a lower-case p; however, upper-case P is also used, and the choice of P vs p depends upon the field in which one is working, on the nearby presence of other symbols for quantities such as power and momentum, and on writing style. Mathematically:

p = F/A,

where p is the pressure, F is the magnitude of the normal force, and A is the area of the surface in contact.
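The defining relation is easy to check numerically. The following is a minimal Python sketch; the force and area values are illustrative assumptions, not figures from the text.

```python
# Pressure as normal force per unit area: p = F / A.
force_n = 50.0   # normal force in newtons (illustrative value)
area_m2 = 0.01   # contact area in square metres (a 10 cm x 10 cm patch)

pressure_pa = force_n / area_m2
print(f"p = {pressure_pa:.0f} Pa")  # 50 N over 0.01 m^2 -> 5000 Pa (5 kPa)
```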
Pressure is a scalar quantity. It relates the vector surface element to the normal force acting on it; the pressure is the scalar proportionality constant that relates the two vectors:

dF_n = −p dA = −p n dA,

where dF_n is the normal force, dA = n dA is the vector surface element, and n is the outward unit normal. The minus sign comes from the fact that the force is considered to act towards the surface element, while the normal vector points outward. The equation has meaning in that, for any surface S in contact with the fluid, the total force exerted by the fluid on that surface is the surface integral over S of the right-hand side of the above equation. It is incorrect to say "the pressure is directed in such or such direction"; the pressure, as a scalar, has no direction. The force given by the previous relationship does have a direction, but the pressure does not. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point.
Pressure is a fundamental parameter in thermodynamics, and it is conjugate to volume. The SI unit for pressure is the pascal (Pa), equal to one newton per square metre; this name for the unit was added in 1971. Other units of pressure, such as pounds per square inch (psi) and bar, are in common use; the CGS unit of pressure is the barye, equal to 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre and the like without properly identifying the force units, but using the names kilogram, kilogram-force, or gram-force as units of force is expressly forbidden in SI. The technical atmosphere is 1 kgf/cm2. Since a system under pressure has the potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume; it is therefore related to energy density and may be expressed in units such as joules per cubic metre. Mathematically:

p = F/A = (F · d)/(A · d) = W/V = E/V,

where d is a distance, W is work, and E is energy. Some meteorologists prefer the hectopascal (hPa) for atmospheric air pressure, equivalent to the older unit millibar. Similar pressures are given in kilopascals in most other fields, where the hecto- prefix is rarely used.
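Since all of these units are fixed multiples of the pascal, conversion is a single multiplication. Below is a small Python sketch; the `to_pascal` helper and its unit table are hypothetical conveniences for illustration, using the standard factors 1 bar = 10^5 Pa, 1 atm = 101325 Pa, 1 psi ≈ 6894.757 Pa, and 1 hPa = 100 Pa = 1 mbar.

```python
# Common pressure units expressed as fixed multiples of the pascal.
PA_PER_BAR = 1.0e5      # 1 bar = 100 000 Pa
PA_PER_ATM = 101_325.0  # 1 atm = 101 325 Pa (standard atmosphere)
PA_PER_PSI = 6_894.757  # 1 psi is approximately 6 894.757 Pa
PA_PER_HPA = 100.0      # 1 hPa = 100 Pa = 1 mbar

def to_pascal(value: float, unit: str) -> float:
    """Convert a pressure reading in the given unit to pascals (hypothetical helper)."""
    factors = {"Pa": 1.0, "bar": PA_PER_BAR, "atm": PA_PER_ATM,
               "psi": PA_PER_PSI, "hPa": PA_PER_HPA}
    return value * factors[unit]

print(to_pascal(1013.25, "hPa"))  # 101325.0 Pa, i.e. one standard atmosphere
```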
The inch of mercury is still used in the United States. Oceanographers usually measure underwater pressure in decibars because pressure in the ocean increases by approximately one decibar per metre of depth. The standard atmosphere (atm) is an established constant; it is equal to typical air pressure at Earth's mean sea level and is defined as 101325 Pa. Because pressure is commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid; the most common choices are mercury and water. The pressure exerted by a column of liquid of height h and density ρ is given by the hydrostatic pressure equation p = ρgh, where g is the gravitational acceleration. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely.
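A short Python sketch of the hydrostatic equation p = ρgh; the fluid densities are typical textbook values and the column heights are illustrative. Note how 0.76 m of mercury reproduces roughly one standard atmosphere, which is why 760 mmHg is a familiar reference.

```python
# Hydrostatic pressure of a fluid column: p = rho * g * h.
RHO_WATER = 1000.0      # density of water, kg/m^3 (approximate)
RHO_MERCURY = 13_595.1  # density of mercury, kg/m^3 (approximate)
G = 9.80665             # standard gravitational acceleration, m/s^2

def hydrostatic_pressure(rho: float, h: float, g: float = G) -> float:
    """Gauge pressure in Pa at the bottom of a column of height h metres."""
    return rho * g * h

print(hydrostatic_pressure(RHO_WATER, 10.0))    # ~98 kPa: ~1 dbar per metre of depth
print(hydrostatic_pressure(RHO_MERCURY, 0.76))  # ~101.3 kPa for a 760 mm mercury column
```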
The density, or more precisely the volumetric mass density, of a substance is its mass per unit volume. The symbol most often used for density is ρ, although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume:

ρ = m/V,

where ρ is the density, m is the mass, and V is the volume. In some cases, density is loosely defined as weight per unit volume, although this is scientifically inaccurate; that quantity is more properly called specific weight. For a pure substance the density has the same numerical value as its mass concentration. Different materials have different densities, and density may be relevant to buoyancy and packaging. Osmium and iridium are the densest known elements at standard conditions for temperature and pressure, but certain chemical compounds may be denser. To simplify comparisons of density across different systems of units, it is sometimes replaced by the dimensionless quantity "relative density" or "specific gravity", i.e. the ratio of the density of the material to that of a standard material, usually water.
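The definition, together with relative density, can be illustrated in a few lines of Python; the sample mass and volume below are invented for the example.

```python
# Density as mass per unit volume (rho = m / V) and relative density.
def density(mass_kg: float, volume_m3: float) -> float:
    return mass_kg / volume_m3

RHO_WATER = 1000.0  # reference density of water in kg/m^3 (approximate)

sample_rho = density(mass_kg=0.250, volume_m3=1.0e-4)  # 250 g in 100 cm^3
relative_density = sample_rho / RHO_WATER              # dimensionless ratio
print(sample_rho, relative_density)  # 2500.0 kg/m^3 and 2.5: denser than water
```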
Thus a relative density less than one means that the substance floats in water. The density of a material varies with temperature and pressure; this variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density. Increasing the temperature of a substance generally decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, due to the decrease in the density of the heated fluid, which causes it to rise relative to denser unheated material. The reciprocal of the density of a substance is called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property in that increasing the amount of a substance does not increase its density. In a well-known but apocryphal tale, Archimedes was given the task of determining whether King Hiero's goldsmith was embezzling gold during the manufacture of a golden wreath dedicated to the gods and replacing it with another, cheaper alloy.
Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass, but the king did not approve of this. Baffled, Archimedes is said to have taken an immersion bath and observed, from the rise of the water upon entering, that he could calculate the volume of the gold wreath through the displacement of the water. Upon this discovery, he leapt from his bath and ran naked through the streets shouting, "Eureka! Eureka!" As a result, the term "eureka" entered common parlance and is used today to indicate a moment of enlightenment. The story first appeared in written form in Vitruvius' books of architecture, two centuries after it supposedly took place. Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time. From the equation for density, mass density has units of mass divided by volume; as there are many units of mass and volume covering many different magnitudes, there are a large number of units for mass density in use.
The SI unit of kilogram per cubic metre (kg/m3) and the cgs unit of gram per cubic centimetre (g/cm3) are the most commonly used units for density. One g/cm3 is equal to one thousand kg/m3, and one cubic centimetre is equal to one millilitre. In industry, other larger or smaller units of mass and/or volume are often more practical, and US customary units may be used. See below for a list of some of the most common units of density. A number of techniques as well as standards exist for the measurement of density of materials; such techniques include the use of a hydrometer, hydrostatic balance, immersed-body method, air comparison pycnometer, oscillating densitometer, as well as pour and tap. However, each individual method or technique measures a different type of density, so it is necessary to understand the type of density being measured as well as the type of material in question. The density at all points of a homogeneous object equals its total mass divided by its total volume. The mass is normally measured with a scale or balance.
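The g/cm3 to kg/m3 relationship follows from (10^-3 kg)/(10^-6 m3) = 10^3 kg/m3, as this small Python sketch shows; the gold figure of 19.3 g/cm3 is a standard handbook value used here for illustration.

```python
# 1 g/cm^3 = (1e-3 kg) / (1e-6 m^3) = 1000 kg/m^3.
def g_per_cm3_to_kg_per_m3(rho_g_cm3: float) -> float:
    return rho_g_cm3 * 1000.0

print(g_per_cm3_to_kg_per_m3(1.0))   # water:  1000.0 kg/m^3
print(g_per_cm3_to_kg_per_m3(19.3))  # gold:  19300.0 kg/m^3
```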
To determine the density of a liquid or a gas, a hydrometer, a dasymeter or a Coriolis flow meter may be used. Hydrostatic weighing uses the displacement of water due to a submerged object to determine the density of the object. If the body is not homogeneous, its density varies between different regions of the object; in that case the density around any given location is determined by calculating the density of a small volume around that location. In the limit of an infinitesimal volume, the density of an inhomogeneous object at a point becomes ρ(r) = dm/dV, where dV is an elementary volume at position r. The mass of the body can then be expressed as the integral of ρ(r) over its volume.
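The point-density definition implies that the total mass is recovered by integrating ρ over the body's volume. The Python sketch below approximates m = ∫ ρ dV by summing over thin slices; the linear density profile and the column geometry are purely hypothetical.

```python
# Mass of an inhomogeneous body from m = integral of rho(r) dV,
# approximated by summing rho over thin horizontal slices of a column.
def rho(z: float) -> float:
    """Hypothetical density (kg/m^3) varying linearly with height z (m)."""
    return 1200.0 - 400.0 * z

n = 100_000                 # number of slices
height, area = 1.0, 0.01    # column 1 m tall with 0.01 m^2 cross-section
dz = height / n
mass = sum(rho((i + 0.5) * dz) * area * dz for i in range(n))
print(mass)  # ~10.0 kg: the integral of (1200 - 400 z) * 0.01 dz from 0 to 1
```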
In statistical mechanics, entropy is an extensive property of a thermodynamic system. It is related to the number Ω of microscopic configurations (microstates) that are consistent with the macroscopic quantities that characterize the system. Under the assumption that each microstate is equally probable, the entropy S is the natural logarithm of the number of microstates, multiplied by the Boltzmann constant kB. Formally,

S = kB ln Ω.

Macroscopic systems typically have a very large number Ω of possible microscopic configurations. For example, the entropy of an ideal gas is proportional to the number of gas molecules N; twenty litres of gas at room temperature and atmospheric pressure has N ≈ 6×10^23. At equilibrium, each of the Ω ≈ e^N configurations can be regarded as random and equally likely. The second law of thermodynamics states that the entropy of an isolated system never decreases; such systems spontaneously evolve towards the state with maximum entropy. Non-isolated systems may lose entropy, provided their environment's entropy increases by at least that amount, so that the total entropy increases.
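Plugging the quoted numbers into S = kB ln Ω shows how an astronomically large Ω still yields a modest entropy, because the logarithm collapses e^N to N. A Python sketch, using the rough molecule count from the text:

```python
# Boltzmann entropy S = k_B * ln(Omega). With Omega ~ e^N at equilibrium,
# ln(Omega) ~ N, so S ~ k_B * N.
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

N = 6e23                   # rough molecule count in ~20 L of gas (from the text)
ln_omega = N               # ln(e^N) = N; e^N itself would overflow a float
S = K_B * ln_omega
print(f"S = {S:.2f} J/K")  # ~8.28 J/K
```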
Entropy is a function of the state of the system, so the change in entropy of a system is determined by its initial and final states. In the idealization that a process is reversible, the entropy does not change, while irreversible processes always increase the total entropy. Because it is determined by the number of random microstates, entropy is related to the amount of additional information needed to specify the exact physical state of a system, given its macroscopic specification. For this reason, it is often said that entropy is an expression of the disorder, or randomness, of a system, or of the lack of information about it; the concept of entropy plays a central role in information theory. Boltzmann's constant, and therefore entropy, have dimensions of energy divided by temperature, with the unit of joules per kelvin (J/K) in the International System of Units. The entropy of a substance is usually given as an intensive property: either entropy per unit mass or entropy per unit amount of substance. The French mathematician Lazare Carnot proposed in his 1803 paper Fundamental Principles of Equilibrium and Movement that in any machine the accelerations and shocks of the moving parts represent losses of moment of activity.
In other words, in any natural process there exists an inherent tendency towards the dissipation of useful energy. Building on this work, in 1824 Lazare's son Sadi Carnot published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric" falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to a cold body; he made the analogy with the way water falls in a water wheel. This was an early insight into the second law of thermodynamics. Carnot based his views of heat partially on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford, who showed that heat could be created by friction, as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, then "no change occurs in the condition of the working body".
The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes. In the 1850s and 1860s, the German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave this "change" a mathematical interpretation by questioning the nature of the inherent loss of usable heat when work is done, e.g. heat produced by friction. Clausius described entropy as the transformation-content, i.e. dissipative energy use, of a thermodynamic system or working body of chemical species during a change of state. This was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs and James Clerk Maxwell gave entropy a statistical basis. In 1877 Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy to be proportional to the natural logarithm of the number of microstates such a gas could occupy.
Henceforth, the essential problem in statistical thermodynamics has been, according to Erwin Schrödinger, to determine the distribution of a given amount of energy E over N identical systems. Carathéodory linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability. There are two related definitions of entropy: the thermodynamic definition and the statistical-mechanics definition; the classical thermodynamic definition was developed first. In the classical thermodynamics viewpoint, the system is composed of very large numbers of constituents, and the state of the system is described by the average thermodynamic properties of those constituents.
Ludwig Eduard Boltzmann was an Austrian physicist and philosopher whose greatest achievement was the development of statistical mechanics, which explains and predicts how the properties of atoms determine the physical properties of matter. Boltzmann coined the word ergodic. Boltzmann was born in Vienna, the capital of the Austrian Empire. His father, Ludwig Georg Boltzmann, was a revenue official; his grandfather, who had moved to Vienna from Berlin, was a clock manufacturer; and his mother, Katharina Pauernfeind, was from Salzburg. He received his primary education from a private tutor at the home of his parents. Boltzmann attended high school in Upper Austria; when Boltzmann was 15, his father died. Boltzmann studied physics at the University of Vienna, starting in 1863. Among his teachers were Josef Loschmidt, Joseph Stefan, Andreas von Ettingshausen and Jozef Petzval. Boltzmann received his PhD degree in 1866, working under the supervision of Stefan, and in 1867 he became a Privatdozent. After obtaining his doctorate, Boltzmann worked two more years as Stefan's assistant.
It was Stefan who introduced Boltzmann to Maxwell's work. In 1869, at age 25, thanks to a letter of recommendation written by Stefan, Boltzmann was appointed full Professor of Mathematical Physics at the University of Graz in the province of Styria. In 1869 he spent several months in Heidelberg working with Robert Bunsen and Leo Königsberger, and in 1871 he worked with Gustav Kirchhoff and Hermann von Helmholtz in Berlin. In 1873 Boltzmann joined the University of Vienna as Professor of Mathematics, and he stayed there until 1876. In 1872, long before women were admitted to Austrian universities, he met Henriette von Aigentler, an aspiring teacher of mathematics and physics in Graz; she had been refused permission to audit lectures unofficially, and Boltzmann advised her to appeal. On July 17, 1876 Ludwig Boltzmann married Henriette. Boltzmann then went back to Graz to take up the chair of Experimental Physics. Among his students in Graz were Svante Arrhenius and Walther Nernst. He spent 14 happy years in Graz, and it was there that he developed his statistical concept of nature.
Boltzmann was appointed to the Chair of Theoretical Physics at the University of Munich in Bavaria, Germany, in 1890. In 1894, Boltzmann succeeded his teacher Joseph Stefan as Professor of Theoretical Physics at the University of Vienna. Boltzmann spent a great deal of effort in his final years defending his theories. He did not get along with some of his colleagues in Vienna, particularly Ernst Mach, who became a professor of philosophy and history of sciences in 1895. That same year Georg Helm and Wilhelm Ostwald presented their position on energetics at a meeting in Lübeck; they saw energy, not matter, as the chief component of the universe. Boltzmann's position carried the day among the other physicists who supported his atomic theories in the debate. In 1900, Boltzmann went to the University of Leipzig on the invitation of Wilhelm Ostwald, who offered Boltzmann the professorial chair in physics that became vacant when Gustav Heinrich Wiedemann died. After Mach retired due to bad health, Boltzmann returned to Vienna in 1902. In 1903, together with Gustav von Escherich and Emil Müller, he founded the Austrian Mathematical Society.
His students included Paul Ehrenfest and Lise Meitner. In Vienna, Boltzmann taught physics and also lectured on philosophy. Boltzmann's lectures on natural philosophy were popular and received considerable attention; his first lecture was an enormous success. Though the largest lecture hall had been chosen for it, people stood all the way down the staircase. Because of the great success of Boltzmann's philosophical lectures, the Emperor invited him for a reception at the Palace. In 1906, Boltzmann's deteriorating mental condition forced him to resign his position, and he committed suicide on September 5, 1906, by hanging himself while on vacation with his wife and daughter in Duino, near Trieste. He is buried in the Viennese Zentralfriedhof; his tombstone bears the inscription of Boltzmann's entropy formula: S = k · log W. Boltzmann's kinetic theory of gases seemed to presuppose the reality of atoms and molecules, but almost all German philosophers and many scientists, like Ernst Mach and the physical chemist Wilhelm Ostwald, disbelieved their existence.
During the 1890s Boltzmann attempted to formulate a compromise position which would allow both atomists and anti-atomists to do physics without arguing over atoms. His solution was to use Hertz's theory that atoms were Bilder, that is, pictures. Atomists could think of the pictures as the real atoms, while anti-atomists could think of the pictures as representing a useful but unreal model; but this did not fully satisfy either group. Furthermore, many defenders of "pure thermodynamics" were trying hard to refute the kinetic theory of gases and statistical mechanics because of Boltzmann's assumptions about atoms and molecules and his statistical interpretation of the second law of thermodynamics. Around the turn of the century, Boltzmann's science was also being threatened by another philosophical objection: some physicists, including Mach's student Gustav Jaumann, interpreted Hertz to mean that all electromagnetic behavior is continuous, as if there were no atoms and molecules, and likewise as if all physical behavior were ultimately electromagnetic.
Probability is the measure of the likelihood that an event will occur (see the glossary of probability and statistics). Probability is quantified as a number between 0 and 1, where, loosely speaking, 0 indicates impossibility and 1 indicates certainty; the higher the probability of an event, the more likely it is that the event will occur. A simple example is the tossing of a fair coin: since the coin is fair, the two outcomes ("heads" and "tails") are equally probable. These concepts have been given an axiomatic mathematical formalization in probability theory, which is used in such areas of study as mathematics, finance, science, artificial intelligence/machine learning, computer science, game theory, and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems. When dealing with experiments that are random and well-defined in a purely theoretical setting, probabilities can be numerically described by the number of desired outcomes divided by the total number of all outcomes.
For example, tossing a fair coin twice will yield the outcomes "head-head", "head-tail", "tail-head", and "tail-tail". The probability of getting an outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents hold different views about the fundamental nature of probability. Objectivists assign numbers to describe some objective or physical state of affairs; the most popular version of objective probability is frequentist probability, which claims that the probability of a random event denotes the relative frequency of occurrence of an experiment's outcome when the experiment is repeated. This interpretation considers probability to be the relative frequency "in the long run" of outcomes. A modification of this is propensity probability, which interprets probability as the tendency of some experiment to yield a certain outcome, even if it is performed only once.
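The 1/4 figure can be checked against the frequentist interpretation by simulation. A Python sketch; the trial count and random seed are arbitrary choices for the example:

```python
# Frequentist check: the relative frequency of "head-head" over many
# double tosses of a fair coin should approach the theoretical 1/4.
import random

random.seed(0)        # arbitrary seed, for reproducibility
trials = 100_000
hits = 0
for _ in range(trials):
    toss1 = random.choice("HT")
    toss2 = random.choice("HT")
    if toss1 == "H" and toss2 == "H":
        hits += 1
print(hits / trials)  # close to 0.25
```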
Subjectivists assign numbers per subjective probability, that is, as a degree of belief. The degree of belief has been interpreted as "the price at which you would buy or sell a bet that pays 1 unit of utility if E, 0 if not E." The most popular version of subjective probability is Bayesian probability, which includes expert knowledge as well as experimental data to produce probabilities. The expert knowledge is represented by a prior probability distribution, and the data are incorporated in a likelihood function. The product of the prior and the likelihood results in a posterior probability distribution that incorporates all the information known to date. By Aumann's agreement theorem, Bayesian agents whose prior beliefs are similar will end up with similar posterior beliefs. However, sufficiently different priors can lead to different conclusions regardless of how much information the agents share. The word probability derives from the Latin probabilitas, which can also mean "probity", a measure of the authority of a witness in a legal case in Europe, often correlated with the witness's nobility.
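The prior-times-likelihood update can be made concrete with a toy coin-bias example. The Python sketch below uses a discrete grid of candidate biases and invented data (7 heads, 3 tails); the grid, the data, and the variable names are all illustrative assumptions.

```python
# Bayesian updating sketch: posterior is proportional to prior * likelihood,
# here for the unknown bias of a coin over a discrete grid of candidates.
biases = [i / 10 for i in range(11)]     # candidate P(heads): 0.0, 0.1, ..., 1.0
prior = [1 / len(biases)] * len(biases)  # uniform prior belief

heads, tails = 7, 3                      # invented observations
likelihood = [p**heads * (1 - p)**tails for p in biases]

unnorm = [pr * lk for pr, lk in zip(prior, likelihood)]
posterior = [u / sum(unnorm) for u in unnorm]

best_prob, best_bias = max(zip(posterior, biases))
print(f"most probable bias: {best_bias}")  # 0.7, matching 7 heads in 10 tosses
```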
In a sense, this differs much from the modern meaning of probability, which, in contrast, is a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and statistical inference. The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability: whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues are still obscured by the superstitions of gamblers. According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' meant approvable, and was applied in that sense, unequivocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances." However, in legal contexts especially, 'probable' could apply to propositions for which there was good evidence.
The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes. Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal. Christiaan Huygens gave the earliest known scientific treatment of the subject, and Jakob Bernoulli's Ars Conjectandi and Abraham de Moivre's Doctrine of Chances treated the subject as a branch of mathematics. See Ian Hacking's The Emergence of Probability and James Franklin's The Science of Conjecture for histories of the early development of the concept of mathematical probability. The theory of errors may be traced back to Roger Cotes's Opera Miscellanea, but a memoir prepared by Thomas Simpson in 1755 first applied the theory to the discussion of errors of observation. The reprint of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors.
Simpson also discusses continuous errors and describes a probability curve.
Second law of thermodynamics
The second law of thermodynamics states that the total entropy of an isolated system can never decrease over time. The total entropy of a system and its surroundings can remain constant in ideal cases where the system is in thermodynamic equilibrium, or is undergoing a reversible process. In all processes that actually occur, including spontaneous processes, the total entropy of the system and its surroundings increases, and the process is irreversible in the thermodynamic sense. The increase in entropy accounts for the irreversibility of natural processes, and for the asymmetry between future and past. The second law was originally an empirical finding that was accepted as an axiom of thermodynamic theory; statistical mechanics, classical or quantum, explains the microscopic origin of the law. The second law has been expressed in many ways. Its first formulation is credited to the French scientist Sadi Carnot, who in 1824 showed that there is an upper limit to the efficiency of the conversion of heat to work in a heat engine. The first law of thermodynamics provides the basic definition of internal energy, associated with all thermodynamic systems, and states the rule of conservation of energy.
The second law is concerned with the direction of natural processes. It asserts that a natural process runs only in one sense and is not reversible. For example, heat always flows spontaneously from hotter to colder bodies, and never the reverse, unless external work is performed on the system; the explanation of such phenomena is given in terms of entropy. Total entropy can never decrease over time for an isolated system because the entropy of an isolated system spontaneously evolves toward thermodynamic equilibrium: the entropy must stay the same or increase. In a fictive reversible process, an infinitesimal increment in the entropy of a system is defined to result from an infinitesimal transfer of heat to a closed system divided by the common temperature of the system in equilibrium and the surroundings which supply the heat:

dS = δQ/T.

Different notations are used for infinitesimal amounts of heat (δ) and infinitesimal amounts of entropy (d) because entropy is a function of state, while heat, like work, is not.
For an actually possible infinitesimal process without exchange of mass with the surroundings, the second law requires that the increment in system entropy fulfills the inequality

dS > δQ/T_surr.

This is because a general process for this case may include work being done on the system by its surroundings, which can have frictional or viscous effects inside the system, because a chemical reaction may be in progress, or because heat transfer actually occurs only irreversibly, driven by a finite difference between the system temperature and the temperature of the surroundings. Note that the equality still applies for pure heat flow, dS = δQ/T, which is the basis of the accurate determination of the absolute entropy of pure substances from measured heat capacity curves and entropy changes at phase transitions, i.e. by calorimetry. Introducing a set of internal variables ξ to describe the deviation of a thermodynamic system in physical equilibrium from the chemical equilibrium state, one can record the equality

dS = δQ/T − (1/T) Σ_j Ξ_j δξ_j.
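A short worked example of these relations, with illustrative numbers: let Q = 1000 J flow from surroundings at 400 K into a system at 300 K. The system's entropy gain exceeds δQ/T_surr, as the inequality requires, and the total entropy rises:

```latex
% Worked entropy balance for irreversible heat flow (illustrative numbers):
% Q = 1000 J flows from surroundings at T_surr = 400 K into a system at T = 300 K.
\Delta S_{\mathrm{sys}}  = \frac{+Q}{T} = \frac{1000\ \mathrm{J}}{300\ \mathrm{K}} \approx +3.33\ \mathrm{J/K}
\qquad
\Delta S_{\mathrm{surr}} = \frac{-Q}{T_{\mathrm{surr}}} = \frac{-1000\ \mathrm{J}}{400\ \mathrm{K}} = -2.50\ \mathrm{J/K}
% The system gain (3.33 J/K) exceeds Q / T_surr (2.50 J/K), satisfying dS > dQ/T_surr,
% and the total entropy increases, as the second law demands:
\Delta S_{\mathrm{total}} = \Delta S_{\mathrm{sys}} + \Delta S_{\mathrm{surr}} \approx +0.83\ \mathrm{J/K} > 0
```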
The second term represents work of internal variables that can be perturbed by external influences, but the system cannot perform any positive work via internal variables. This statement introduces the impossibility of the reversion of the evolution of the thermodynamic system in time, and can be considered as a formulation of the second principle of thermodynamics, a formulation which is, of course, equivalent to the formulation of the principle in terms of entropy. The zeroth law of thermodynamics in its usual short statement allows recognition that two bodies in a relation of thermal equilibrium have the same temperature, especially that a test body has the same temperature as a reference thermometric body. For a body in thermal equilibrium with another, there are indefinitely many empirical temperature scales, in general depending on the properties of a particular reference thermometric body. The second law allows a distinguished temperature scale, which defines an absolute, thermodynamic temperature, independent of the properties of any particular reference thermometric body.
Temperature is a physical quantity expressing hot and cold. It is measured with a thermometer calibrated in one or more temperature scales; the most commonly used scales are the Celsius scale, the Fahrenheit scale, and the Kelvin scale. The kelvin is the unit of temperature in the International System of Units (SI), in which temperature is one of the seven fundamental base quantities; the Kelvin scale is widely used in science and technology. Theoretically, the coldest a system can be is when its temperature is absolute zero, at which point the thermal motion in matter would be zero. However, an actual physical system or object can never attain a temperature of absolute zero. Absolute zero is denoted as 0 K on the Kelvin scale, −273.15 °C on the Celsius scale, and −459.67 °F on the Fahrenheit scale. For an ideal gas, temperature is proportional to the average kinetic energy of the random microscopic motions of the constituent microscopic particles. Temperature is important in all fields of natural science, including physics, Earth science and biology, as well as most aspects of daily life.
Many physical processes are affected by temperature, such as:

- physical properties of materials, including the phase, solubility, vapor pressure, and electrical conductivity;
- the rate and extent to which chemical reactions occur;
- the amount and properties of thermal radiation emitted from the surface of an object;
- the speed of sound, which is a function of the square root of the absolute temperature.

Temperature scales differ in two ways: the point chosen as zero degrees, and the magnitudes of incremental units or degrees on the scale. The Celsius scale is used for common temperature measurements in most of the world. It is an empirical scale, developed by historical progress, which led to its zero point 0 °C being defined by the freezing point of water, with additional degrees defined so that 100 °C was the boiling point of water, both at sea-level atmospheric pressure. Because of the 100-degree interval, it was called a centigrade scale. Since the standardization of the kelvin in the International System of Units, the Celsius scale has been redefined in terms of the equivalent fixing points on the Kelvin scale, so that a temperature increment of one degree Celsius is the same as an increment of one kelvin, though the two scales differ by an additive offset of 273.15.
The United States uses the Fahrenheit scale, on which water freezes at 32 °F and boils at 212 °F at sea-level atmospheric pressure. Many scientific measurements use the Kelvin temperature scale, named in honor of the Scots-Irish physicist who first defined it (William Thomson, Lord Kelvin). It is an absolute temperature scale: its zero point, 0 K, is defined to coincide with the coldest physically possible temperature, and its degrees are defined through thermodynamics. The temperature of absolute zero occurs at 0 K = −273.15 °C, and the freezing point of water at sea-level atmospheric pressure occurs at 273.15 K = 0 °C. The International System of Units defines a scale and unit for the kelvin, or thermodynamic temperature, by using the reliably reproducible temperature of the triple point of water as a second reference point. The triple point is a singular state with its own unique and invariant temperature and pressure, along with, for a fixed mass of water in a vessel of fixed volume, an autonomously and stably self-determining partition into three mutually contacting phases (liquid, vapour and solid), depending dynamically only on the total internal energy of the mass of water.
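The fixed offsets and ratios between the scales make conversion mechanical, as this Python sketch shows; the sample temperatures are the reference points mentioned above.

```python
# Temperature-scale conversions: K = degC + 273.15, degF = degC * 9/5 + 32.
def celsius_to_kelvin(c: float) -> float:
    return c + 273.15

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9.0 / 5.0 + 32.0

for c in (-273.15, 0.0, 100.0):  # absolute zero, freezing and boiling of water
    print(c, celsius_to_kelvin(c), celsius_to_fahrenheit(c))
# -273.15 degC -> 0 K, -459.67 degF
#    0.00 degC -> 273.15 K, 32 degF
#  100.00 degC -> 373.15 K, 212 degF
```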
For historical reasons, the triple-point temperature of water is fixed at 273.16 units of the measurement increment. There is a variety of kinds of temperature scale; it may be convenient to classify them as empirically or theoretically based. Empirical temperature scales are historically older, while theoretically based scales arose in the middle of the nineteenth century. Empirically based temperature scales rely directly on measurements of simple physical properties of materials. For example, the length of a column of mercury, confined in a glass-walled capillary tube, is dependent on temperature and is the basis of the useful mercury-in-glass thermometer; such scales are valid only within convenient ranges of temperature. For example, above the boiling point of mercury, a mercury-in-glass thermometer is impracticable. Most materials expand with temperature increase, but some materials, such as water, contract with temperature increase over some specific range, and then they are hardly useful as thermometric materials. A material is of no use as a thermometer near one of its phase-change temperatures, for example its boiling point.
In spite of these restrictions, most commonly used practical thermometers are of the empirically based kind. Empirical thermometry was especially used for calorimetry, which contributed greatly to the discovery of thermodynamics. Nevertheless, empirical thermometry has serious drawbacks when judged as a basis for theoretical physics. Empirically based thermometers, beyond their base as simple direct measurements of ordinary physical properties of thermometric materials, can be re-calibrated, by use of theoretical physical reasoning, and this can extend their range of adequacy. Theoretically based temperature scales are based directly on theoretical arguments, especially those of thermodynamics, kinetic theory and quantum mechanics; they rely on theoretical properties of idealized materials. They are more or less comparable with feasible physical devices and materials. Theoretically based temperature scales are used to provide calibrating standards for practical empirically based thermometers.