Thermal expansion

Thermal expansion is the tendency of matter to change its shape and volume in response to a change in temperature. Temperature is a monotonic function of the average molecular kinetic energy of a substance; when a substance is heated, the kinetic energy of its molecules increases, so the molecules begin vibrating and moving more and usually maintain a greater average separation. Materials which contract with increasing temperature are unusual. The relative expansion divided by the change in temperature is called the material's coefficient of thermal expansion and generally varies with temperature. If an equation of state is available, it can be used to predict the values of the thermal expansion at all the required temperatures and pressures, along with many other state functions. A number of materials contract on heating within certain temperature ranges. For example, the coefficient of thermal expansion of water drops to zero as it is cooled to 3.983 °C and becomes negative below this temperature. Pure silicon has a negative coefficient of thermal expansion for temperatures between about 18 and 120 kelvins.

Unlike gases or liquids, solid materials tend to keep their shape. Thermal expansion generally decreases with increasing bond energy, which also raises the melting point of solids, so high-melting-point materials are more likely to have lower thermal expansion. In general, liquids expand more than solids, and the thermal expansion of glasses is higher than that of crystals. At the glass transition temperature, rearrangements that occur in an amorphous material lead to characteristic discontinuities of the coefficient of thermal expansion and the specific heat; these discontinuities allow detection of the glass transition temperature, where a supercooled liquid transforms to a glass. Absorption or desorption of water can also change the size of many common materials; common plastics exposed to water can, in the long term, expand by many percent. The coefficient of thermal expansion describes how the size of an object changes with a change in temperature. It measures the fractional change in size per degree change in temperature at constant pressure.

Several types of coefficients have been developed: volumetric, area, and linear. The choice of coefficient depends on the particular application and which dimensions are considered important. For solids, one might only be concerned with the change along a length or over some area. The volumetric thermal expansion coefficient is the most basic thermal expansion coefficient, and the most relevant for fluids. In general, substances expand or contract when their temperature changes, with expansion or contraction occurring in all directions. Substances that expand at the same rate in every direction are called isotropic. For isotropic materials, the area and volumetric thermal expansion coefficients are, respectively, approximately two and three times the linear thermal expansion coefficient. Mathematical definitions of these coefficients are given below for solids and gases. In the general case of a gas, liquid, or solid, the volumetric coefficient of thermal expansion is given by

α_V = (1/V) (∂V/∂T)_p

The subscript p indicates that the pressure is held constant during the expansion, and the subscript V stresses that it is the volumetric expansion that enters this general definition.
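As a sketch of this definition, the coefficient can be estimated numerically from any volume-temperature relation. For an ideal gas, V = nRT/p, so the volumetric coefficient should come out to exactly 1/T; the finite-difference helper below is illustrative, not a standard library routine.

```python
# Volumetric expansion coefficient alpha_V = (1/V) * (dV/dT) at constant p,
# estimated by a central finite difference. For an ideal gas, V = nRT/p,
# so alpha_V equals 1/T exactly.

R = 8.314462618  # J/(mol K), molar gas constant

def ideal_gas_volume(T, p=101325.0, n=1.0):
    """Volume in m^3 of n moles of ideal gas at temperature T (K), pressure p (Pa)."""
    return n * R * T / p

def volumetric_expansion_coefficient(volume_of_T, T, dT=0.01):
    """Central-difference estimate of (1/V)(dV/dT)_p, in 1/K."""
    V = volume_of_T(T)
    dV_dT = (volume_of_T(T + dT) - volume_of_T(T - dT)) / (2 * dT)
    return dV_dT / V

T = 300.0
alpha = volumetric_expansion_coefficient(ideal_gas_volume, T)
print(alpha, 1.0 / T)  # the two values agree closely
```

The same finite-difference approach works on tabulated V(T) data for real substances.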

In the case of a gas, the fact that the pressure is held constant is important, because the volume of a gas will vary appreciably with pressure as well as temperature. For a gas of low density this can be seen from the ideal gas law. When calculating thermal expansion it is necessary to consider whether the body is free to expand or is constrained. If the body is free to expand, the expansion or strain resulting from an increase in temperature can be calculated by using the applicable coefficient of thermal expansion. If the body is constrained so that it cannot expand, internal stress will be caused by a change in temperature; this stress can be calculated by considering the strain that would occur if the body were free to expand and the stress required to reduce that strain to zero, through the stress/strain relationship characterised by the elastic or Young's modulus. In the special case of solid materials, external ambient pressure does not usually appreciably affect the size of an object, so it is not necessary to consider the effect of pressure changes.
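The constrained case described above can be sketched numerically: the suppressed strain is α·ΔT, and multiplying by Young's modulus gives the resulting stress. The material constants below are typical handbook values for structural steel, used here only as assumptions.

```python
# Thermal stress in a fully constrained bar: the strain that free expansion
# would produce is alpha * dT; forcing it back to zero requires a stress
# sigma = E * alpha * dT (Young's modulus times the suppressed strain).

E_steel = 200e9        # Pa, Young's modulus (assumed typical value)
alpha_steel = 12e-6    # 1/K, linear expansion coefficient (assumed typical value)

def thermal_stress(E, alpha, dT):
    """Stress (Pa) in a rigidly constrained bar whose temperature changes by dT kelvin."""
    return E * alpha * dT

sigma = thermal_stress(E_steel, alpha_steel, 50.0)  # bar heated by 50 K
print(f"{sigma / 1e6:.0f} MPa")  # 120 MPa
```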

Common engineering solids have coefficients of thermal expansion that do not vary significantly over the range of temperatures where they are designed to be used, so where high accuracy is not required, practical calculations can be based on a constant value of the coefficient of expansion. Linear expansion means change in one dimension (length) as opposed to change in volume. To a first approximation, the change in length measurements of an object due to thermal expansion is related to temperature change by a coefficient of linear thermal expansion.
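A minimal sketch of such a first-approximation calculation, using an assumed room-temperature coefficient for aluminium:

```python
# First-order linear expansion: dL = alpha_L * L0 * dT. The coefficient for
# aluminium below is a typical room-temperature handbook value (assumed).

alpha_aluminium = 23e-6  # 1/K

def length_change(L0, alpha, dT):
    """Change in length (same units as L0) for a temperature change dT (K)."""
    return alpha * L0 * dT

dL = length_change(1.0, alpha_aluminium, 100.0)  # 1 m rod heated by 100 K
print(f"{dL * 1000:.2f} mm")  # 2.30 mm
```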

Heat

In thermodynamics, heat is energy in transfer to or from a thermodynamic system by mechanisms other than thermodynamic work or transfer of matter. The mechanisms include conduction, through direct contact of immobile bodies, or through a wall or barrier impermeable to matter. When there is a suitable path between two systems with different temperatures, heat transfer occurs immediately and spontaneously from the hotter to the colder system. Thermal conduction occurs by the stochastic motion of microscopic particles. In contrast, thermodynamic work is defined by mechanisms that act macroscopically and directly on the system's whole-body state variables; the definition of heat transfer does not require that the process be in any sense smooth. For example, a bolt of lightning may transfer heat to a body. Convective circulation allows one body to heat another, through an intermediate circulating fluid that carries energy from a boundary of one to a boundary of the other. Though spontaneous, convective circulation does not necessarily occur merely because of a temperature difference.

Like thermodynamic work, heat transfer is a process involving two systems, not a property of any one system. In thermodynamics, energy transferred as heat contributes to change in the system's cardinal energy variable of state, for example its internal energy or its enthalpy; this is to be distinguished from the ordinary language conception of heat as a property of the system. Although heat flows spontaneously from a hotter body to a cooler one, it is possible to construct a heat pump or refrigeration system that does work to increase the difference in temperature between two systems. In contrast, a heat engine reduces an existing temperature difference to do work on another system. The amount of heat transferred in any process can be defined as the total amount of transferred energy excluding any macroscopic work done and any energy contained in transferred matter. For the precise definition of heat, it is necessary that it occur by a path that does not include transfer of matter. As an amount of energy, the SI unit of heat is the joule.

The conventional symbol used to represent the amount of heat transferred in a thermodynamic process is Q. Heat is measured by its effect on the states of interacting bodies, for example, by the amount of ice melted or a change in temperature; the quantification of heat via the temperature change of a body is called calorimetry. As a form of energy, heat has the unit joule in the International System of Units. However, in many applied fields in engineering the British thermal unit and the calorie are used. The standard unit for the rate of heat transfer is the watt, defined as one joule per second. Use of the symbol Q for the total amount of energy transferred as heat is due to Rudolf Clausius in 1850: "Let the amount of heat which must be imparted during the transition of the gas in a definite manner from any given state to another, in which its volume is v and its temperature t, be called Q." Heat released by a system into its surroundings is by convention a negative quantity. Heat transfer rate, or heat flow per unit time, is denoted by Q̇.

This should not be confused with a time derivative of a function of state, since heat is not a function of state. Heat flux is defined as the rate of heat transfer per unit cross-sectional area. In 1856, Rudolf Clausius, referring to closed systems, in which transfers of matter do not occur, defined the second fundamental theorem in the mechanical theory of heat: "if two transformations which, without necessitating any other permanent change, can mutually replace one another, be called equivalent, then the generations of the quantity of heat Q from work at the temperature T, has the equivalence-value" Q/T. In 1865, he came to define the entropy, symbolized by S, such that, due to the supply of the amount of heat Q at temperature T, the entropy of the system is increased by

ΔS = Q/T

In a transfer of energy as heat without work being done, there are changes of entropy in both the surroundings, which lose heat, and the system, which gains it. The increase, ΔS, of entropy in the system may be considered to consist of two parts: an increment ΔS′ that matches, or 'compensates', the change −ΔS′ of entropy in the surroundings, and a further increment ΔS′′ that may be considered to be 'generated' or 'produced' in the system and is said therefore to be 'uncompensated'.

Thus ΔS = ΔS′ + ΔS′′.
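A small numerical sketch of this decomposition: for heat Q passed from surroundings at T_hot to a system at T_cold with no work done, the system's entropy gain splits into a part that compensates the surroundings' loss and a generated, uncompensated remainder.

```python
# Entropy bookkeeping for a transfer of heat Q from hot surroundings at T_hot
# to a system at T_cold, with no work done. The 'compensated' part of the
# system's entropy gain matches the surroundings' loss; the remainder is the
# 'uncompensated' (generated) entropy, positive whenever T_hot > T_cold.

def entropy_transfer(Q, T_hot, T_cold):
    dS_surroundings = -Q / T_hot                  # surroundings lose heat Q
    dS_system = Q / T_cold                        # system gains the same heat Q
    dS_generated = dS_system + dS_surroundings    # uncompensated increment
    return dS_system, dS_surroundings, dS_generated

dS_sys, dS_surr, dS_gen = entropy_transfer(1000.0, 400.0, 300.0)
print(dS_sys, dS_surr, dS_gen)  # system gains more entropy than surroundings lose
```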

Thermodynamic temperature

Thermodynamic temperature is the absolute measure of temperature and is one of the principal parameters of thermodynamics. Thermodynamic temperature is defined by the third law of thermodynamics, in which the theoretically lowest temperature is the null or zero point. At this point, absolute zero, the particle constituents of matter have minimal motion and can become no colder. In the quantum-mechanical description, matter at absolute zero is in its ground state, its state of lowest energy. Thermodynamic temperature is also called absolute temperature, for two reasons: one, proposed by Kelvin, is that it does not depend on the properties of a particular material; the other is that its zero point is absolute zero, the temperature at which matter can become no colder. The International System of Units specifies a particular scale for thermodynamic temperature. It uses the kelvin scale for measurement and selects the triple point of water at 273.16 K as the fundamental fixing point. Other scales have been in use historically; the Rankine scale, using the degree Fahrenheit as its unit interval, is still in use as part of the English Engineering Units in the United States in some engineering fields.

ITS-90 gives a practical means of estimating the thermodynamic temperature to a high degree of accuracy. The temperature of a body at rest is a measure of the mean of the energy of the translational and rotational motions of matter's particle constituents, such as molecules and subatomic particles. The full variety of these kinetic motions, along with potential energies of particles and occasionally certain other types of particle energy in equilibrium with these, make up the total internal energy of a substance. Internal energy is loosely called the heat energy or thermal energy in conditions when no work is done upon the substance by its surroundings, or by the substance upon the surroundings. Internal energy may be stored in a number of ways within a substance, each way constituting a "degree of freedom". At equilibrium, each degree of freedom will have on average the same energy, k_B·T/2, where k_B is the Boltzmann constant, unless that degree of freedom is in the quantum regime. The internal degrees of freedom may be in the quantum regime at room temperature, but the translational degrees of freedom will be in the classical regime except at extremely low temperatures, so it may be said that, for most situations, the thermodynamic temperature is specified by the average translational kinetic energy of the particles.
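The equipartition value k_B·T/2 per classical degree of freedom can be evaluated directly; the example below assumes a monatomic gas with three translational degrees of freedom.

```python
# Average energy per classical degree of freedom at equilibrium: kB * T / 2.
# A monatomic ideal gas has three translational degrees of freedom, giving
# a mean translational kinetic energy of (3/2) * kB * T per particle.

kB = 1.380649e-23  # J/K, Boltzmann constant (exact in the 2019 SI)

def mean_translational_energy(T):
    """Mean translational kinetic energy per particle, in joules."""
    return 1.5 * kB * T

E = mean_translational_energy(300.0)
print(E)  # about 6.2e-21 J per particle near room temperature
```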

Temperature is a measure of the random submicroscopic motions and vibrations of the particle constituents of matter. These motions comprise the internal energy of a substance. More precisely, the thermodynamic temperature of any bulk quantity of matter is the measure of the average kinetic energy per classical degree of freedom of its constituent particles. Translational motions are almost always in the classical regime. Translational motions are ordinary, whole-body movements in three-dimensional space in which particles move about and exchange energy in collisions. Figure 1 below shows translational motion in gases. Thermodynamic temperature's null point, absolute zero, is the temperature at which the particle constituents of matter are as close as possible to complete rest; only quantum-mechanical zero-point motion remains in a substance at absolute zero. Throughout the scientific world where measurements are made in SI units, thermodynamic temperature is measured in kelvins. Many engineering fields in the U.S., however, measure thermodynamic temperature using the Rankine scale.

By international agreement, the unit kelvin and its scale are defined by two points: absolute zero and the triple point of Vienna Standard Mean Ocean Water. Absolute zero, the lowest possible temperature, is defined as being 0 K and −273.15 °C. The triple point of water is defined as being 273.16 K and 0.01 °C. This definition fixes the magnitude of the kelvin unit as being 1 part in 273.16 parts of the difference between absolute zero and the triple point of water. Temperatures expressed in kelvins are converted to degrees Rankine by multiplying by 1.8, and temperatures expressed in degrees Rankine are converted to kelvins by dividing by 1.8. Although the kelvin and Celsius scales are defined using absolute zero and the triple point of water, it is impractical to use this definition at temperatures that are different from the triple point of water. ITS-90 is designed to represent the thermodynamic temperature as closely as possible throughout its range. Many different thermometer designs are required to cover the entire range.
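The scale conversions stated above can be sketched as a few one-line helpers:

```python
# Conversions between the kelvin, Celsius, and Rankine scales described above.
# Kelvin and Rankine share the same zero (absolute zero); the Rankine degree
# is the Fahrenheit-sized interval, hence the factor of 1.8.

def kelvin_to_rankine(K):
    return K * 1.8

def rankine_to_kelvin(R):
    return R / 1.8

def kelvin_to_celsius(K):
    return K - 273.15

print(kelvin_to_rankine(273.16))   # triple point of water in degrees Rankine
print(kelvin_to_celsius(273.16))   # triple point of water: 0.01 degrees Celsius
```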

These include helium vapor pressure thermometers, helium gas thermometers, standard platinum resistance thermometers, and monochromatic radiation thermometers. For some types of thermometer the relationship between the property observed and temperature is close to linear, so for most purposes a linear scale is sufficient, without point-by-point calibration.

Entropy

In statistical mechanics, entropy is an extensive property of a thermodynamic system. It is related to the number Ω of microscopic configurations that are consistent with the macroscopic quantities that characterize the system. Under the assumption that each microstate is equally probable, the entropy S is the natural logarithm of the number of microstates, multiplied by the Boltzmann constant k_B. Formally, S = k_B ln Ω. Macroscopic systems have a very large number Ω of possible microscopic configurations. For example, the entropy of an ideal gas is proportional to the number of gas molecules N. Twenty liters of gas at room temperature and atmospheric pressure has N ≈ 6×10^23. At equilibrium, each of the Ω ≈ e^N configurations can be regarded as random and equally likely. The second law of thermodynamics states that the entropy of an isolated system never decreases; such systems spontaneously evolve towards the state with maximum entropy. Non-isolated systems may lose entropy, provided their environment's entropy increases by at least that amount, so that the total entropy increases.
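Boltzmann's formula can be evaluated directly for small systems; for a macroscopic Ω ≈ e^N the exponential would overflow, so one works with ln Ω = N instead. A minimal sketch:

```python
import math

# Boltzmann's formula S = kB * ln(Omega). For Omega ~ e^N the logarithm is
# simply N, so the entropy of the gas described above is of order N * kB;
# computing e^N directly would overflow for N ~ 6e23, so we pass ln(Omega).

kB = 1.380649e-23  # J/K

def boltzmann_entropy_from_lnOmega(ln_Omega):
    """Entropy in J/K, given the natural log of the microstate count."""
    return kB * ln_Omega

# Small system: 2^10 equally probable microstates
S_small = boltzmann_entropy_from_lnOmega(math.log(2**10))

# Macroscopic gas: Omega ~ e^N with N ~ 6e23, so ln(Omega) ~ 6e23
S_gas = boltzmann_entropy_from_lnOmega(6e23)
print(S_small, S_gas)  # ~1e-22 J/K for the small system, ~8.3 J/K for the gas
```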

Entropy is a function of the state of the system, so the change in entropy of a system is determined by its initial and final states. In the idealization that a process is reversible, the total entropy does not change, while irreversible processes always increase the total entropy. Because it is determined by the number of random microstates, entropy is related to the amount of additional information needed to specify the exact physical state of a system, given its macroscopic specification. For this reason, it is often said that entropy is an expression of the disorder, or randomness, of a system, or of the lack of information about it; the concept of entropy plays a central role in information theory. Boltzmann's constant, and therefore entropy, has dimensions of energy divided by temperature, with units of joules per kelvin in the International System of Units. The entropy of a substance is usually given as an intensive property: either entropy per unit mass or entropy per unit amount of substance. The French mathematician Lazare Carnot proposed in his 1803 paper Fundamental Principles of Equilibrium and Movement that in any machine the accelerations and shocks of the moving parts represent losses of moment of activity.

In other words, in any natural process there exists an inherent tendency towards the dissipation of useful energy. Building on this work, in 1824 Lazare's son Sadi Carnot published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric" falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to a cold body; he made the analogy with water falling in a water wheel. This was an early insight into the second law of thermodynamics. Carnot based his views of heat partly on the early 18th century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partly on the contemporary views of Count Rumford, who showed that heat could be created by friction, as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, "no change occurs in the condition of the working body".

The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes. In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave this "change" a mathematical interpretation by questioning the nature of the inherent loss of usable heat when work is done, e.g. heat produced by friction. Clausius described entropy as the transformation-content, i.e. dissipative energy use, of a thermodynamic system or working body of chemical species during a change of state. This was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877 Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy to be proportional to the natural logarithm of the number of microstates such a gas could occupy.

Henceforth, the essential problem in statistical thermodynamics has been, according to Erwin Schrödinger, to determine the distribution of a given amount of energy E over N identical systems. Carathéodory linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability. There are two related definitions of entropy: the thermodynamic definition and the statistical mechanics definition; the classical thermodynamics definition developed first. In the classical thermodynamics viewpoint, the system is composed of large numbers of constituents and the state of the system is described by the average thermodynamic properties of those constituents.

Thermodynamic databases for pure substances

Thermodynamic databases contain information about thermodynamic properties for substances, the most important being enthalpy and Gibbs free energy. Numerical values of these thermodynamic properties are collected as tables or are calculated from thermodynamic datafiles. Data is expressed as temperature-dependent values for one mole of substance at the standard pressure of 101.325 kPa, or 100 kPa. Both of these definitions for the standard condition for pressure are in use. Thermodynamic data is presented as a table or chart of function values for one mole of a substance. A thermodynamic datafile is a set of equation parameters from which the numerical data values can be calculated. Tables and datafiles are presented at a standard pressure of 1 bar or 1 atm, but in the case of steam and other industrially important gases, pressure may be included as a variable. Function values depend on the state of aggregation of the substance, which must be defined for the value to have any meaning; the state of aggregation for thermodynamic purposes is the standard state, sometimes called the reference state, defined by specifying certain conditions.

The normal standard state is defined as the most stable physical form of the substance at the specified temperature and a pressure of 1 bar or 1 atm. However, since any non-normal condition could be chosen as a standard state, it must be defined in the context of use. A physical standard state is one that exists for a time sufficient to allow measurements of its properties. The most common physical standard state is one that is thermodynamically stable; it has no tendency to transform into any other physical state. If a substance can exist but is not thermodynamically stable, it is said to be in a metastable state. A non-physical standard state is one whose properties are obtained by extrapolation from a physical state. Metastable liquids and solids are important because some substances can persist and be used in that state indefinitely. Thermodynamic functions that refer to conditions in the normal standard state are designated with a small superscript °. The relationship between certain physical and thermodynamic properties may be described by an equation of state.

It is difficult to measure the absolute amount of any thermodynamic quantity involving the internal energy, since the internal energy of a substance can take many forms, each of which has its own typical temperature at which it begins to become important in thermodynamic reactions. It is therefore the change in these functions that is of most interest. The isobaric change in enthalpy H above the common reference temperature of 298.15 K is called the high temperature heat content, the sensible heat, or the relative high-temperature enthalpy, called henceforth the heat content. Different databases designate this term in different ways. All of these terms mean the molar heat content for a substance in its normal standard state above a reference temperature of 298.15 K. Data for gases is for the hypothetical ideal gas at the designated standard pressure. The SI unit for enthalpy is J/mol, and the heat content is a positive number above the reference temperature. The heat content has been measured and tabulated for virtually all known substances, and is commonly expressed as a polynomial function of temperature.

The heat content of an ideal gas is independent of pressure, but the heat content of real gases varies with pressure, hence the need to define the state for the gas and the pressure. Note that for some thermodynamic databases, such as for steam, the reference temperature is 273.15 K. The heat capacity C is the ratio of heat added to the temperature increase. For an incremental isobaric addition of heat, Cp = (∂H/∂T)p; Cp is therefore the slope of a plot of isobaric heat content vs. temperature. The SI units for heat capacity are J/(mol·K). When heat is added to a condensed-phase substance, its temperature increases until a phase change temperature is reached. With further addition of heat, the temperature remains constant while the phase transition takes place; the amount of substance that transforms is a function of the amount of heat added. After the transition is complete, adding more heat increases the temperature. In other words, the enthalpy of a substance changes isothermally during the transition; the enthalpy change resulting from a phase transition is designated ΔH.
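Since Cp is the slope of heat content versus temperature, the heat content itself is recovered by integrating Cp from the reference temperature. The two-term polynomial below is a hypothetical illustration, not data for any real substance:

```python
# Heat content H(T) - H(298.15 K), obtained by integrating the isobaric heat
# capacity Cp = (dH/dT)_p from the reference temperature. The coefficients
# a and b are hypothetical, chosen only to illustrate the calculation.

T_REF = 298.15  # K, common reference temperature

def cp(T, a=25.0, b=0.01):
    """Hypothetical heat capacity a + b*T, in J/(mol K)."""
    return a + b * T

def heat_content(T, a=25.0, b=0.01):
    """Analytic integral of Cp from T_REF to T, in J/mol."""
    return a * (T - T_REF) + 0.5 * b * (T**2 - T_REF**2)

H = heat_content(1000.0)
print(f"{H:.0f} J/mol")  # positive, since T is above the reference temperature
```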

There are four types of enthalpy changes resulting from a phase transition: the enthalpy of transformation, which applies to transformations from one solid phase to another, such as the transformation from α-Fe to γ-Fe, designated ΔHtr; the enthalpy of fusion or melting, designated ΔHm; the enthalpy of vaporization, designated ΔHv; and the enthalpy of sublimation, designated ΔHs. Cp is infinite at phase transition temperatures. At the Curie temperature, Cp shows a sharp discontinuity. Values of ΔH are usually given for the transition at the normal standard state temperature for the two states, and, if so, are designated with a superscript °.

Density

The density, or more precisely the volumetric mass density, of a substance is its mass per unit volume. The symbol most often used for density is ρ, although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume: ρ = m/V, where ρ is the density, m is the mass, and V is the volume. In some cases, density is loosely defined as weight per unit volume, although this is scientifically inaccurate; that quantity is more properly called specific weight. For a pure substance the density has the same numerical value as its mass concentration. Different materials have different densities, and density may be relevant to buoyancy and packaging. Osmium and iridium are the densest known elements at standard conditions for temperature and pressure, but certain chemical compounds may be denser. To simplify comparisons of density across different systems of units, it is sometimes replaced by the dimensionless quantity "relative density" or "specific gravity", i.e. the ratio of the density of the material to that of a standard material, usually water.
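A minimal sketch of the definition and of relative density, using an approximate density of water as the reference:

```python
# Density as mass per unit volume, rho = m / V, and relative density
# (specific gravity) as the ratio to the density of water.

RHO_WATER = 1000.0  # kg/m^3, approximate density of water (assumed reference)

def density(mass_kg, volume_m3):
    return mass_kg / volume_m3

def specific_gravity(rho):
    return rho / RHO_WATER

rho_gold = density(19.3, 0.001)  # 19.3 kg of gold occupies about one litre
print(rho_gold, specific_gravity(rho_gold))  # about 19300 kg/m^3, SG about 19.3
```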

Thus a relative density less than one relative to water means that the substance floats in water. The density of a material varies with temperature and pressure; this variation is small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density. Increasing the temperature of a substance generally decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, due to the decrease in the density of the heated fluid; this causes it to rise relative to denser unheated material. The reciprocal of the density of a substance is called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property in that increasing the amount of a substance does not increase its density. In a well-known but apocryphal tale, Archimedes was given the task of determining whether King Hiero's goldsmith was embezzling gold during the manufacture of a golden wreath dedicated to the gods and replacing it with another, cheaper alloy.

Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass, but the king did not approve of this. Baffled, Archimedes is said to have taken an immersion bath and observed, from the rise of the water upon entering, that he could calculate the volume of the gold wreath through the displacement of the water. Upon this discovery, he leapt from his bath and ran naked through the streets shouting, "Eureka! Eureka!". As a result, the term "eureka" entered common parlance and is used today to indicate a moment of enlightenment. The story first appeared in written form in Vitruvius' books of architecture, two centuries after it supposedly took place. Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time. From the equation for density, mass density has units of mass divided by volume; as there are many units of mass and volume covering many different magnitudes, there are a large number of units for mass density in use.

The SI unit of kilogram per cubic metre and the cgs unit of gram per cubic centimetre are the most commonly used units for density. One g/cm3 is equal to one thousand kg/m3. One cubic centimetre is equal to one millilitre. In industry, other larger or smaller units of mass and/or volume are often more practical, and US customary units may be used. See below for a list of some of the most common units of density. A number of techniques as well as standards exist for the measurement of density of materials; such techniques include the use of a hydrometer, hydrostatic balance, immersed body method, air comparison pycnometer, oscillating densitometer, as well as pour and tap. However, each individual method or technique measures different types of density, and therefore it is necessary to have an understanding of the type of density being measured as well as the type of material in question. The density at all points of a homogeneous object equals its total mass divided by its total volume. The mass is normally measured with a scale or balance.

To determine the density of a liquid or a gas, a hydrometer, a dasymeter or a Coriolis flow meter may be used, respectively. Hydrostatic weighing uses the displacement of water due to a submerged object to determine the density of the object. If the body is not homogeneous, its density varies between different regions of the object. In that case the density around any given location is determined by calculating the density of a small volume around that location. In the limit of an infinitesimal volume the density of an inhomogeneous object at a point becomes ρ(r) = dm/dV, where dV is an elementary volume at position r. The mass of the body can then be expressed as m = ∫ ρ(r) dV.
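The relation m = ∫ ρ dV can be sketched numerically for a body with an assumed, position-dependent density; the linear profile below is chosen so the exact answer is easy to check by hand.

```python
# Mass of an inhomogeneous body from m = integral of rho dV. The rod below
# has a density that varies linearly along its length (an assumed profile,
# whose exact integral is 1.5 * rho0 * A * L).

rho0, A, L = 2000.0, 1e-4, 0.5  # kg/m^3, cross-section m^2, length m

def rho(x):
    """Assumed density profile at position x along the rod."""
    return rho0 * (1 + x / L)

def mass(n=10000):
    """Midpoint-rule integration of rho(x) * A over the rod's length."""
    dx = L / n
    return sum(rho((i + 0.5) * dx) * A * dx for i in range(n))

print(mass(), 1.5 * rho0 * A * L)  # numerical result vs. exact answer
```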

Thermal conductivity

The thermal conductivity of a material is a measure of its ability to conduct heat. It is commonly denoted by k, λ, or κ. Heat transfer occurs at a lower rate in materials of low thermal conductivity than in materials of high thermal conductivity. For instance, metals have high thermal conductivity and are efficient at conducting heat, while the opposite is true for insulating materials like Styrofoam. Correspondingly, materials of high thermal conductivity are used in heat sink applications and materials of low thermal conductivity are used as thermal insulation. The reciprocal of thermal conductivity is called thermal resistivity. The defining equation for thermal conductivity is q = −k∇T, where q is the heat flux, k is the thermal conductivity, and ∇T is the temperature gradient; this is known as Fourier's law for heat conduction. Although commonly expressed as a scalar, the most general form of thermal conductivity is a second-rank tensor. However, the tensorial description only becomes necessary in anisotropic materials.

Consider a solid material placed between two environments of different temperatures. Let T1 be the temperature at x = 0 and T2 be the temperature at x = L, and suppose T2 > T1. A possible realization of this scenario is a building on a cold winter day: the solid material in this case would be the building wall, separating the cold outdoor environment from the warm indoor environment. According to the second law of thermodynamics, heat will flow from the hot environment to the cold one in an attempt to equalize the temperature difference; this is quantified in terms of a heat flux q, which gives the rate, per unit area, at which heat flows in a given direction. In many materials, q is observed to be directly proportional to the temperature difference and inversely proportional to the separation:

q = −k · (T2 − T1) / L

The constant of proportionality k is the thermal conductivity. In the present scenario, since T2 > T1, heat flows in the minus x-direction and q is negative, which in turn means that k > 0.
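A sketch of the wall scenario above, with an assumed handbook-style conductivity for brick; the sign convention follows the text, so q comes out negative when the x = L side is hotter.

```python
# Steady one-dimensional conduction through the wall described above:
# q = -k * (T2 - T1) / L, with T1 at x = 0 and T2 at x = L. The conductivity
# below is a typical handbook value for brick, used only as an assumption.

def heat_flux(k, T1, T2, L):
    """Heat flux in W/m^2 along +x; negative means heat flows toward x = 0."""
    return -k * (T2 - T1) / L

q = heat_flux(k=0.7, T1=270.0, T2=293.0, L=0.3)  # cold at x = 0, warm at x = L
print(q)  # negative: heat flows from the warm side at x = L toward x = 0
```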

In general, k is always defined to be positive. The same definition of k can be extended to gases and liquids, provided other modes of energy transport, such as convection and radiation, are eliminated. For simplicity, we have assumed here that k does not vary as the temperature is varied from T1 to T2. Cases in which the temperature variation of k is non-negligible must be addressed using the more general definition of k discussed below. Thermal conduction is defined as the transport of energy due to random molecular motion across a temperature gradient. It is distinguished from energy transport by convection and molecular work in that it does not involve macroscopic flows or work-performing internal stresses. Energy flow due to thermal conduction is classified as heat and is quantified by the vector q(r, t), which gives the heat flux at position r and time t. According to the second law of thermodynamics, heat flows from high to low temperature. Hence, it is reasonable to postulate that q is proportional to the gradient of the temperature field T, i.e. q = −k∇T, where the constant of proportionality, k > 0, is the thermal conductivity.

This is called Fourier's law of heat conduction. In actuality, it is not a law but a definition of thermal conductivity in terms of the independent physical quantities q and T; as such, its usefulness depends on the ability to determine k for a given material under given conditions. Note that k