1.
Heat
–
In physics, heat is the amount of energy flowing from one body to another spontaneously due to their temperature difference, or by any means other than through work or the transfer of matter. Thus, energy exchanged as heat during a process changes the energy of each body by an equal amount: the quantity leaving one body equals the quantity entering the other. The sign of the quantity of heat indicates the direction of the transfer; for example, a positive quantity may denote energy flowing from system A to system B, while a negative quantity indicates energy flowing in the opposite direction. While heat flows spontaneously from hot to cold, it is possible to construct a heat pump or refrigeration system that does work to increase the difference in temperature between two systems; conversely, a heat engine reduces an existing temperature difference to do work on another system. Heat is a consequence of the motion of particles. When heat is transferred to an object or system, the energy of its particles increases, and as this occurs the arrangement of the particles becomes more and more disordered. In other words, heat is related to the concept of entropy. Historically, many energy units have been used for the measurement of heat; the standards-based unit in the International System of Units is the joule.

Heat is measured by its effect on the states of interacting bodies, for example by the amount of ice melted or by a change in temperature. The quantification of heat via the change of state of a body is called calorimetry. In calorimetry, sensible heat is defined with respect to a specific chosen state variable of the system: sensible heat causes a change of the temperature of the system while leaving the chosen state variable unchanged. Heat transfer that occurs at a constant system temperature but changes the state variable is called latent heat with respect to that variable. For infinitesimal changes, the total incremental heat transfer is the sum of the latent and sensible heat.
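The sum of sensible and latent heat described above can be sketched with a worked example: warming ice below 0 °C into liquid water requires sensible heat on either side of the melting point plus latent heat at the phase change. The material constants below are standard textbook values, used here only for illustration.

```python
# Sketch: sensible vs. latent heat when warming ice into liquid water.
# Constants are typical textbook values (illustrative, not authoritative).

C_ICE = 2100.0       # specific heat of ice, J/(kg*K)
C_WATER = 4186.0     # specific heat of liquid water, J/(kg*K)
L_FUSION = 334000.0  # latent heat of fusion of water, J/kg

def heat_ice_to_water(mass_kg, t_ice_c, t_water_c):
    """Total heat to take ice at t_ice_c (deg C, below 0) to water at t_water_c.

    Sensible heat changes the temperature; latent heat changes the phase
    at constant temperature (0 deg C here). The total is their sum.
    """
    sensible_ice = mass_kg * C_ICE * (0.0 - t_ice_c)        # warm ice to 0 C
    latent = mass_kg * L_FUSION                             # melt at 0 C
    sensible_water = mass_kg * C_WATER * (t_water_c - 0.0)  # warm the water
    return sensible_ice + latent + sensible_water
```

For 1 kg of ice at −10 °C brought to 20 °C, the latent term (334 kJ) dominates both sensible terms, which is why melting absorbs so much heat without any temperature change.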
Physicist James Clerk Maxwell, in his 1871 classic Theory of Heat, was one of many who began to build on the already-established idea that heat has something to do with matter in motion. This was the idea put forth by Benjamin Thompson in 1798. One of Maxwell's recommended books was Heat as a Mode of Motion. Maxwell outlined four stipulations for the definition of heat: it is something which may be transferred from one body to another, according to the second law of thermodynamics; it is a measurable quantity, and so can be treated mathematically; it cannot be treated as a substance, because it may be transformed into something that is not a material substance; and it is one of the forms of energy. This last was also the view of the historical pioneers of thermodynamics. Maxwell writes that convection as such is not a purely thermal phenomenon; in thermodynamics, convection in general is regarded as transport of internal energy.
2.
Thermodynamic temperature
–
Thermodynamic temperature is the absolute measure of temperature and is one of the principal parameters of thermodynamics. Thermodynamic temperature is defined by the third law of thermodynamics, in which the theoretically lowest temperature is the null or zero point. At this point, absolute zero, the constituents of matter have minimal motion; in the quantum-mechanical description, matter at absolute zero is in its ground state. The International System of Units specifies a particular scale for thermodynamic temperature: it uses the Kelvin scale for measurement and selects the triple point of water, at 273.16 K, as the fundamental fixing point. Other scales have been in use historically; the Rankine scale, using the degree Fahrenheit as its unit interval, is still in use as part of the English Engineering Units in the United States in some engineering fields. ITS-90 gives a means of estimating the thermodynamic temperature to a very high degree of accuracy. Internal energy is called heat energy or thermal energy in conditions when no work is done upon the substance by its surroundings. Internal energy may be stored in a number of ways within a substance, each way constituting a degree of freedom. At equilibrium, each degree of freedom will have on average the energy k_B T/2, where k_B is the Boltzmann constant. Temperature is a measure of the random submicroscopic motions and vibrations of the constituents of matter. These motions comprise the internal energy of a substance; more specifically, the thermodynamic temperature of any bulk quantity of matter is the measure of the average kinetic energy per classical degree of freedom of its constituent particles. Translational motions are almost always in the classical regime; they are ordinary, whole-body movements in three-dimensional space in which particles move about and exchange energy in collisions.
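The equipartition result quoted above can be sketched numerically: at equilibrium each classical degree of freedom carries k_B T/2 of energy on average, so a particle with three translational degrees of freedom carries 3 k_B T/2. The function names are illustrative.

```python
# Sketch of equipartition: average energy per classical degree of freedom.

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def mean_energy_per_dof(temperature_k):
    """Average energy per classical degree of freedom: k_B * T / 2."""
    return K_B * temperature_k / 2.0

def mean_translational_energy(temperature_k, dof=3):
    """Average translational kinetic energy over `dof` degrees of freedom."""
    return dof * mean_energy_per_dof(temperature_k)
```

At room temperature (300 K) this gives roughly 2×10⁻²¹ J per degree of freedom, which is why thermal energies are usually quoted per mole rather than per particle.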
Figure 1 below shows translational motion in gases; Figure 4 below shows translational motion in solids. Only the minimal, quantum-mechanical zero-point motion remains in a substance at absolute zero. Throughout the scientific world, measurements are made in SI units and thermodynamic temperature is therefore measured in kelvins; many engineering fields in the U.S., however, measure thermodynamic temperature using the Rankine scale. By international agreement, the kelvin and its scale are defined by two points: absolute zero, and the triple point of Vienna Standard Mean Ocean Water. Absolute zero, the lowest possible temperature, is defined as being precisely 0 K; the triple point of water is defined as being precisely 273.16 K and 0.01 °C. Among other things, this definition fixes the magnitude of the kelvin as being precisely 1 part in 273.16 of the difference between absolute zero and the triple point of water.
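The relations among the scales mentioned above can be sketched directly: the kelvin and Rankine scales are both absolute (zero at absolute zero), the Rankine degree has the size of a Fahrenheit degree, so T_R = 1.8 × T_K exactly, and 0 °C corresponds to 273.15 K by definition.

```python
# Conversions among the Celsius, Kelvin, Rankine, and Fahrenheit scales.

def celsius_to_kelvin(t_c):
    """0 deg C is defined as 273.15 K."""
    return t_c + 273.15

def kelvin_to_rankine(t_k):
    """Both scales are absolute; the Rankine degree is 5/9 of a kelvin."""
    return t_k * 1.8

def fahrenheit_to_rankine(t_f):
    """Rankine shifts Fahrenheit so that 0 deg R is absolute zero."""
    return t_f + 459.67
```

As a consistency check, the ice point (0 °C = 32 °F) comes out as 491.67 °R by either route.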
3.
State of matter
–
In physics, a state of matter is one of the distinct forms that matter takes on. Four states of matter are observable in everyday life: solid, liquid, gas, and plasma. Some other states are believed to be possible but remain theoretical for now; for a complete list, see the list of states of matter. Historically, the distinction is based on qualitative differences in properties. Matter in the solid state maintains a fixed volume and shape, with component particles close together and fixed into place. Matter in the liquid state maintains a fixed volume, but has a variable shape that adapts to fit its container; its particles are still close together but move freely. Matter in the gas state has both variable volume and shape, adapting both to fit its container; its particles are neither close together nor fixed in place. Matter in the plasma state has variable volume and shape, but as well as neutral atoms it contains a significant number of ions and electrons. Plasma is the most common form of matter in the universe. The term phase is sometimes used as a synonym for state of matter. In a solid the particles are closely packed together. The forces between particles are so strong that the particles cannot move freely but can only vibrate. As a result, a solid has a stable, definite shape; solids can only change their shape by force, as when broken or cut. In crystalline solids, the particles are packed in a regularly ordered, repeating pattern. There are various different crystal structures, and the same substance can have more than one structure; for example, iron has a cubic structure at temperatures below 912 °C. Ice has fifteen known crystal structures, or fifteen solid phases. Glasses and other non-crystalline, amorphous solids without long-range order are not thermal-equilibrium ground states; therefore they are described below as nonclassical states of matter.
4.
Thermal expansion
–
Thermal expansion is the tendency of matter to change in shape, area, and volume in response to a change in temperature. Temperature is a monotonic function of the average molecular kinetic energy of a substance. When a substance is heated, the kinetic energy of its molecules increases; thus, the molecules begin vibrating and moving more and usually maintain a greater average separation. Materials which contract with increasing temperature are unusual, and this effect is limited in size. The degree of expansion divided by the change in temperature is called the material's coefficient of thermal expansion, and it generally varies with temperature. If an equation of state is available, it can be used to predict the values of the thermal expansion at all the required temperatures and pressures. A number of materials contract on heating within certain temperature ranges; for example, the coefficient of thermal expansion of water drops to zero as it is cooled to about 4 °C. Also, fairly pure silicon has a negative coefficient of thermal expansion for temperatures between about 18 and 120 kelvins. Unlike gases or liquids, solid materials tend to keep their shape when undergoing thermal expansion; in general, liquids expand slightly more than solids. The thermal expansion of glasses is higher compared to that of crystals. At the glass transition temperature, rearrangements that occur in an amorphous material lead to characteristic discontinuities of the coefficient of thermal expansion and of the specific heat. These discontinuities allow detection of the glass transition temperature, where a supercooled liquid transforms to a glass. Absorption or desorption of water can also change the size of common materials; common plastics exposed to water can, in the long term, expand. The coefficient of thermal expansion describes how the size of an object changes with a change in temperature; specifically, it measures the fractional change in size per degree change in temperature at a constant pressure.
Several types of coefficients have been developed: volumetric, area, and linear. Which one is used depends on the particular application and on which dimensions are considered important; for solids, one might only be concerned with the change along a length or over some area. The volumetric thermal expansion coefficient is the most basic thermal expansion coefficient, and the most relevant for fluids. In general, substances expand or contract when their temperature changes; substances that expand at the same rate in every direction are called isotropic.
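The linear and volumetric coefficients described above can be sketched as follows. For an isotropic solid the volumetric coefficient is approximately three times the linear one; the aluminium value below is a typical handbook figure, used only as an illustration.

```python
# Sketch of linear and (isotropic) volumetric thermal expansion.

ALPHA_ALUMINIUM = 23e-6  # 1/K, approximate linear coefficient (illustrative)

def length_change(length, alpha, delta_t):
    """dL = alpha * L * dT, valid for small temperature changes."""
    return alpha * length * delta_t

def volume_change_isotropic(volume, alpha_linear, delta_t):
    """dV ~= 3 * alpha * V * dT for an isotropic solid."""
    return 3.0 * alpha_linear * volume * delta_t
```

For a 1 m aluminium bar heated by 100 K this predicts an expansion of about 2.3 mm, which is why expansion joints are needed in long metal structures.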
5.
Pressure
–
Pressure is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure is the pressure relative to the ambient pressure. Various units are used to express pressure. Pressure may also be expressed in terms of standard atmospheric pressure; the atmosphere (atm) is equal to this pressure, and the torr is defined as 1⁄760 of this. Manometric units such as the centimetre of water and the millimetre of mercury express pressures in terms of the height of a column of a particular fluid. Pressure is the amount of force acting per unit area. The symbol for it is p or P. The IUPAC recommendation for pressure is a lower-case p; however, upper-case P is widely used. The usage of P vs p depends upon the field in which one is working, and on the nearby presence of other symbols for quantities such as power and momentum. Mathematically, p = F/A, where p is the pressure, F is the magnitude of the normal force, and A is the area of the surface on contact; the relation links the vector surface element with the normal force acting on it. It is incorrect to say the pressure is directed in such or such direction: the pressure, as a scalar, has no direction, while the force given by the relationship does. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same. Pressure is distributed to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume. The SI unit for pressure is the pascal (Pa), equal to one newton per square metre; this name for the unit was added in 1971, and before that, pressure in SI was expressed simply in newtons per square metre. Other units of pressure, such as pounds per square inch and the bar, are also in common use. The CGS unit of pressure is the barye, equal to 1 dyn·cm−2 or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre, but using the names kilogram, gram, kilogram-force, or gram-force as units of force is expressly forbidden in SI.
The technical atmosphere is 1 kgf/cm2. Since a system under pressure has the potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume. It is therefore related to energy density and may be expressed in units such as joules per cubic metre. Similar pressures are given in kilopascals in most other fields, where the hecto- prefix is rarely used.
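The relation p = F/A and the unit relationships above can be sketched as a short example; the conversion constants come directly from the definitions quoted in this section (1 atm = 101325 Pa, 1 torr = 1⁄760 atm, 1 barye = 0.1 Pa).

```python
# Sketch of p = F/A and of conversions among the pressure units above.

PA_PER_ATM = 101325.0            # standard atmosphere, by definition
PA_PER_TORR = 101325.0 / 760.0   # torr = 1/760 atm
PA_PER_BARYE = 0.1               # CGS barye = 1 dyn/cm^2

def pressure(force_n, area_m2):
    """Pressure in pascals from a normal force (N) over an area (m^2)."""
    return force_n / area_m2

def atm_to_pa(p_atm):
    """Convert atmospheres to pascals."""
    return p_atm * PA_PER_ATM
```

Note that the pressure itself is a scalar: only the force on a given surface element has a direction.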
6.
Second law of thermodynamics
–
The second law of thermodynamics states that the total entropy of an isolated system can only increase over time. It can remain constant in ideal cases where the system is in a steady state (equilibrium) or undergoing a reversible process. The increase in entropy accounts for the irreversibility of natural processes. Historically, the law was an empirical finding that was accepted as an axiom of thermodynamic theory; statistical thermodynamics, classical or quantum, explains the microscopic origin of the law. The second law has been expressed in many ways. Its first formulation is credited to the French scientist Sadi Carnot, who in 1824 showed that there is an upper limit to the efficiency of conversion of heat to work in a heat engine. The first law of thermodynamics provides the definition of internal energy, associated with all thermodynamic systems. The second law is concerned with the direction of natural processes: it asserts that a natural process runs only in one sense, and is not reversible. For example, heat flows spontaneously from hotter to colder bodies. Its modern definition is in terms of entropy. Different notations are used for infinitesimal amounts of heat (δ) and infinitesimal amounts of entropy (d) because entropy is a function of state, while heat, like work, is not. For an actually possible infinitesimal process without exchange of matter with the surroundings, the second law requires that the increment in system entropy satisfy dS ≥ δQ/T. The second law allows a distinguished temperature scale, which defines an absolute, thermodynamic temperature, independent of the properties of any particular reference thermometric body. Some statements cast the law in general physical terms citing the impossibility of certain processes; the Clausius and the Kelvin statements have been shown to be equivalent. The historical origin of the second law of thermodynamics was in Carnot's principle. The Carnot engine is an idealized device of special interest to engineers who are concerned with the efficiency of heat engines.
Interpreted in the light of the first law, Carnot's principle is physically equivalent to the second law of thermodynamics. It states: the efficiency of a quasi-static or reversible Carnot cycle depends only on the temperatures of the two heat reservoirs, and is the same, whatever the working substance. A Carnot engine operated in this way is the most efficient possible heat engine using those two temperatures. The German scientist Rudolf Clausius laid the foundation for the second law of thermodynamics in 1850 by examining the relation between heat transfer and work. The statement by Clausius uses the concept of passage of heat; as is usual in thermodynamic discussions, this means net transfer of energy as heat, and does not refer to contributory transfers one way and the other.
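Carnot's principle above can be sketched as a formula: a reversible engine between reservoirs at absolute temperatures T_hot and T_cold has efficiency 1 − T_cold/T_hot, independent of the working substance. The function name and guard conditions are illustrative.

```python
# Sketch of Carnot's principle: maximum efficiency depends only on the
# two reservoir temperatures (in kelvins), not the working substance.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on heat-engine efficiency between two reservoirs."""
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("require t_hot_k > t_cold_k > 0")
    return 1.0 - t_cold_k / t_hot_k
```

For reservoirs at 500 K and 300 K the bound is 40%: no engine, however cleverly built, can convert more than that fraction of the heat drawn from the hot reservoir into work.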
7.
Heat capacity
–
Heat capacity or thermal capacity is a measurable physical quantity equal to the ratio of the heat added to an object to the resulting temperature change. The unit of heat capacity is the joule per kelvin (J/K). Specific heat is the amount of heat needed to raise the temperature of one kilogram of mass by 1 kelvin. Heat capacity is an extensive property of matter, meaning it is proportional to the size of the system. The molar heat capacity is the heat capacity per unit amount of a pure substance. In some engineering contexts, the volumetric heat capacity is used. Further contributions to the heat capacity can come from magnetic and electronic degrees of freedom in solids. For quantum-mechanical reasons, at any given temperature some of these degrees of freedom may be unavailable, or only partially available, to store thermal energy; in such cases, the heat capacity is a fraction of the maximum. As the temperature approaches absolute zero, the heat capacity of a system approaches zero. Quantum theory can be used to predict the heat capacity of simple systems. In a theory common in the early modern period, heat was thought to be a measurement of an invisible fluid, the caloric. Bodies were thought capable of holding a certain amount of this fluid, hence the term heat capacity. Heat is no longer considered a fluid, but rather a transfer of disordered energy; nevertheless, at least in English, the term heat capacity survives. In some other languages, the term thermal capacity is preferred. In the International System of Units, heat capacity has the unit joules per kelvin. If the temperature change is sufficiently small, the heat capacity may be assumed to be constant: C = Q/ΔT. Heat capacity is an extensive property, meaning it depends on the extent or size of the physical system studied. A sample containing twice the amount of substance as another sample requires the transfer of twice the amount of heat to achieve the same change in temperature. For many purposes it is convenient to report heat capacity as an intensive property.
In practice, this is most often an expression of the property in relation to a unit of mass. In science and engineering, international standards now recommend that specific heat capacity always refer to division by mass.
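The relations above can be sketched together: C = Q/ΔT for an object, and the mass-specific form Q = m·c·ΔT, which makes the extensivity explicit (doubling the mass doubles the heat required for the same ΔT). The value for water is a standard textbook figure, used only as an illustration.

```python
# Sketch of C = Q / dT and of the mass-specific form Q = m * c * dT.

C_WATER = 4186.0  # J/(kg*K), specific heat of liquid water (illustrative)

def heat_capacity(q_joules, delta_t):
    """C = Q / dT, assuming dT is small enough that C is constant."""
    return q_joules / delta_t

def heat_required(mass_kg, specific_heat, delta_t):
    """Q = m * c * dT: the extensive form, proportional to the mass."""
    return mass_kg * specific_heat * delta_t
```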
8.
Thermodynamics
–
Thermodynamics is a branch of science concerned with heat and temperature and their relation to energy and work. The behavior of these quantities is governed by the four laws of thermodynamics, which are explained in terms of microscopic constituents by statistical mechanics. Thermodynamics applies to a wide variety of topics in science and engineering, especially physical chemistry and chemical engineering. The initial application of thermodynamics to mechanical heat engines was extended early on to the study of chemical compounds. Chemical thermodynamics studies the role of entropy in chemical reactions and has provided the bulk of the expansion and knowledge of the field. Other formulations of thermodynamics emerged in the following decades: statistical thermodynamics, or statistical mechanics, concerned itself with statistical predictions of the collective motion of particles from their microscopic behavior, and in 1909 Constantin Carathéodory presented a purely mathematical approach to the field in his axiomatic formulation of thermodynamics. A description of any thermodynamic system employs the four laws of thermodynamics that form an axiomatic basis; the first law specifies that energy can be exchanged between physical systems as heat and work. In thermodynamics, interactions between large ensembles of objects are studied and categorized; central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles whose average motions define its properties, and those properties can in turn be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. This article is focused mainly on classical thermodynamics, which primarily studies systems in thermodynamic equilibrium; non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that nature abhors a vacuum. Shortly after Guericke, the English physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with the English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, which states that pressure and volume are inversely proportional. Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine; he did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. Black and Watt performed experiments together, but it was Watt who conceived the idea of the external condenser, which resulted in a large increase in steam engine efficiency. Drawing on all this previous work led Sadi Carnot, the father of thermodynamics, to publish Reflections on the Motive Power of Fire.
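Boyle's Law, mentioned above, can be sketched numerically: at fixed temperature, p1·V1 = p2·V2 for a fixed amount of gas, so halving the volume doubles the pressure. The function name is illustrative.

```python
# Sketch of Boyle's law: isothermal pressure-volume trade-off,
# p1 * V1 = p2 * V2 for a fixed amount of gas at fixed temperature.

def boyle_final_pressure(p1, v1, v2):
    """Pressure after an isothermal volume change, from p1*V1 = p2*V2."""
    if v2 <= 0:
        raise ValueError("final volume must be positive")
    return p1 * v1 / v2
```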
9.
Entropy
–
In statistical thermodynamics, entropy is a measure of the number of microscopic configurations Ω that a thermodynamic system can have when in a state specified by certain macroscopic variables. Formally, S = k_B ln Ω. For example, gas in a container with known volume, pressure, and temperature could have an enormous number of possible configurations of the collection of individual gas molecules. Each instantaneous configuration of the gas may be regarded as random. Entropy may be understood as a measure of disorder within a macroscopic system. The second law of thermodynamics states that an isolated system's entropy never decreases; such systems spontaneously evolve towards thermodynamic equilibrium, the state with maximum entropy. Non-isolated systems may lose entropy, provided their environment's entropy increases by at least that amount. Since entropy is a function of the state of the system, a change in entropy of a system is determined by its initial and final states. This applies whether the process is reversible or irreversible; however, irreversible processes increase the combined entropy of the system and its environment. The above definition is sometimes called the macroscopic definition of entropy because it can be used without regard to any microscopic description of the contents of a system. The concept of entropy has been found to be generally useful and has several other formulations. Entropy was discovered when it was noticed to be a quantity that behaves as a function of state. It has the dimension of energy divided by temperature, which has a unit of joules per kelvin (J/K) in the International System of Units. But the entropy of a substance is usually given as an intensive property: either entropy per unit mass or entropy per unit amount of substance. In statistical mechanics, zero entropy reflects that the ground state of a system is non-degenerate. Understanding the role of entropy in various processes requires an understanding of how and why it changes.
It is often said that entropy is an expression of the disorder, or randomness, of a system. The second law is now often seen as an expression of the fundamental postulate of statistical mechanics through the modern definition of entropy. In other words, in any natural process there exists an inherent tendency towards the dissipation of useful energy; Carnot made the analogy with how water falls in a water wheel, and this was an early insight into the second law of thermodynamics. Clausius described entropy as the transformation-content, i.e., dissipative energy use. This was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Later, scientists such as Ludwig Boltzmann and Josiah Willard Gibbs gave entropy a statistical basis. Henceforth, the essential problem in statistical thermodynamics, according to Erwin Schrödinger, has been to determine the distribution of a given amount of energy E over N identical systems. Carathéodory linked entropy with a mathematical definition of irreversibility, in terms of trajectories.
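The statistical definition quoted above, S = k_B ln Ω, can be sketched directly. The microstate counts in the test are toy values, used only to show the logarithmic growth; a real gas has astronomically large Ω.

```python
# Sketch of the Boltzmann entropy formula S = k_B * ln(Omega).

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(omega):
    """S = k_B * ln(Omega) for Omega accessible microstates."""
    if omega < 1:
        raise ValueError("Omega must be at least 1")
    return K_B * math.log(omega)
```

A system with a single accessible microstate (Ω = 1) has zero entropy, and entropy grows only logarithmically as the number of microstates multiplies.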
10.
Free expansion
–
Free expansion is an irreversible process in which a gas expands into an insulated evacuated chamber; it is also called Joule expansion. Real gases experience a temperature change during free expansion, whereas an ideal gas does not. Since the gas expands, Vf > Vi, which implies that the pressure does drop. During free expansion, no work is done by the gas. The gas passes through states that are not in thermodynamic equilibrium before reaching its final state; for example, the pressure changes locally from point to point, and the volume occupied by the gas is not a well-defined quantity. A free expansion is typically achieved by opening a stopcock that allows the gas to expand into a vacuum. Although it would be difficult to achieve in reality, it is instructive to imagine a free expansion caused by moving a piston faster than virtually any atom; no work is done because there is no pressure on the piston. No heat energy leaves or enters the gas. Nevertheless, there is an entropy change. But the well-known formula for entropy change, ΔS = ∫ dQ_rev/T, applies only along reversible paths; for the irreversible free expansion, the entropy change must be evaluated along a reversible path connecting the same initial and final states.
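Evaluating ΔS = ∫ dQ_rev/T along a reversible isothermal path between the same end states gives, for n moles of ideal gas, ΔS = n·R·ln(Vf/Vi). This is a sketch under the ideal-gas assumption; the function name is illustrative.

```python
# Sketch: entropy change of ideal-gas free expansion, computed along a
# reversible isothermal path between the same initial and final states.

import math

R = 8.314462618  # molar gas constant, J/(mol*K)

def free_expansion_entropy(n_moles, v_initial, v_final):
    """dS = n * R * ln(Vf / Vi); positive whenever the gas expands."""
    if v_initial <= 0 or v_final <= 0:
        raise ValueError("volumes must be positive")
    return n_moles * R * math.log(v_final / v_initial)
```

Doubling the volume of one mole of gas gives ΔS = R ln 2 ≈ 5.76 J/K, even though no heat actually flowed during the irreversible process itself.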
11.
Chemical thermodynamics
–
Chemical thermodynamics is the study of the interrelation of heat and work with chemical reactions or with physical changes of state, within the confines of the laws of thermodynamics. The structure of chemical thermodynamics is based on the first two laws of thermodynamics. Starting from the first and second laws, four equations called the fundamental equations of Gibbs can be derived; from these four, a multitude of equations relating the thermodynamic properties of the system can be derived using relatively simple mathematics. This outlines the mathematical framework of chemical thermodynamics. Gibbs' collection of papers provided the first unified body of thermodynamic theorems from the principles developed by others, such as Clausius, and two later books disseminated this work. The first was the 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall; this book was responsible for supplanting the chemical affinity with the term free energy in the English-speaking world. The second was the 1933 book Modern Thermodynamics by the Methods of Willard Gibbs, written by E. A. Guggenheim. The primary objective of chemical thermodynamics is the establishment of a criterion for the determination of the feasibility or spontaneity of a given transformation. The first of the three laws of thermodynamics states that the energy of the universe is constant. Breaking or making of chemical bonds involves energy or heat, which may be either absorbed by or evolved from a chemical system. Energy that can be released because of a reaction between a set of substances is equal to the difference between the energy content of the products and the reactants. This change in energy is called the change in internal energy of a chemical reaction, and it is equal to the heat change if it is measured under conditions of constant volume.
Another useful term is the heat of combustion, which is the energy released due to a combustion reaction; food is similar to hydrocarbon and carbohydrate fuels, and when it is oxidized its caloric content is similar. In chemical thermodynamics, the term used for the chemical potential energy is chemical potential. Even for homogeneous bulk materials, the energy functions depend on the composition, as do all the extensive thermodynamic potentials. If the quantities {Ni}, the numbers of each chemical species, are omitted from the formulae, it is impossible to describe compositional changes; for a bulk system they are the last remaining extensive variables. The expression for dG is especially useful at constant T and P, conditions which are easy to achieve experimentally and which approximate the conditions in living creatures: (dG)T,P = ∑i μi dNi. While this formulation is mathematically defensible, it is not particularly transparent, since one does not simply add or remove molecules from a system. There is always a process involved in changing the composition, e.g., a chemical reaction, and one should find a notation which does not seem to imply that the amounts of the components can be changed independently.
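The spontaneity criterion at constant T and P discussed above reduces, for an overall transformation, to ΔG = ΔH − TΔS with ΔG < 0 indicating a spontaneous process. This is a sketch; the numeric values used below are illustrative, not measured data.

```python
# Sketch of the constant-T, constant-P spontaneity criterion:
# dG = dH - T * dS; a transformation is spontaneous when dG < 0.

def gibbs_free_energy_change(delta_h, temperature_k, delta_s):
    """dG = dH - T*dS (dH in J/mol, T in K, dS in J/(mol*K))."""
    return delta_h - temperature_k * delta_s

def is_spontaneous(delta_h, temperature_k, delta_s):
    """True when the transformation lowers the Gibbs free energy."""
    return gibbs_free_energy_change(delta_h, temperature_k, delta_s) < 0.0
```

Note how the sign can flip with temperature: an endothermic reaction (ΔH > 0) with positive ΔS becomes spontaneous once T exceeds ΔH/ΔS.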
12.
Third law of thermodynamics
–
Entropy is related to the number of accessible microstates, and for a system consisting of many particles, quantum mechanics indicates that there is only one unique state (the ground state) with minimum energy. The constant value is called the residual entropy of the system. Here a condensed system refers to liquids and solids. A classical formulation by Nernst is: It is impossible for any process, no matter how idealized, to reduce the entropy of a system to its absolute-zero value in a finite number of operations. This formulation was proven in 2017 by Masanes and Oppenheim. The third law was developed by the chemist Walther Nernst during the years 1906–12, and is therefore often referred to as Nernst's theorem or Nernst's postulate. The third law of thermodynamics states that the entropy of a system at absolute zero is a well-defined constant. This is because a system at zero temperature exists in its ground state. In 1912 Nernst stated the law thus: It is impossible for any procedure to lead to the isotherm T = 0 in a finite number of steps. An alternative version of the third law of thermodynamics was stated by Gilbert N. Lewis and Merle Randall. This version states that ΔS will reach zero at 0 K; however, some crystals form defects which cause a residual entropy. This residual entropy disappears when the kinetic barriers to transitioning to one ground state are overcome. With the development of statistical mechanics, the third law of thermodynamics changed from a fundamental law to a derived law. The counting of states is from the reference state of absolute zero. In simple terms, the law states that the entropy of a perfect crystal of a pure substance approaches zero as the temperature approaches zero. The alignment of a perfect crystal leaves no ambiguity as to the location and orientation of each part of the crystal; as the energy of the crystal is reduced, the vibrations of the individual atoms are reduced to nothing, and the crystal becomes the same everywhere. The third law provides a reference point for the determination of entropy at any other temperature.
The entropy of a system, determined relative to this reference point, is then the absolute entropy of that system. Mathematically, the absolute entropy of any system at zero temperature is the natural log of the number of ground states times Boltzmann's constant, kB = 1.38×10−23 J/K. The entropy of a crystal lattice as defined by Nernst's theorem is zero provided that its ground state is unique, because ln(1) = 0. As a result, the initial value S0 = 0 is selected for convenience.
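The residual-entropy formula above, S0 = kB ln(number of ground states), can be sketched with a standard toy model: a crystal of N molecules each having g equivalent orientations has g^N ground states, so the residual entropy per mole is N_A·kB·ln(g) = R·ln(g). The two-orientation case (a CO-like crystal) is an illustrative assumption.

```python
# Sketch of residual entropy per mole: S0 = N_A * k_B * ln(g) = R * ln(g),
# where g is the number of equivalent ground-state orientations per molecule.

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def molar_residual_entropy(orientations_per_molecule):
    """Residual entropy per mole for g orientations per molecule."""
    if orientations_per_molecule < 1:
        raise ValueError("need at least one orientation")
    return N_A * K_B * math.log(orientations_per_molecule)
```

A unique ground state (g = 1) gives zero residual entropy, recovering Nernst's theorem; g = 2 gives R ln 2 ≈ 5.76 J/(mol·K).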