1.
Thermodynamics
–
Thermodynamics is a branch of physics concerned with heat and temperature and their relation to energy and work. The behavior of these quantities is governed by the four laws of thermodynamics, which are explained in terms of microscopic constituents by statistical mechanics. Thermodynamics applies to a wide variety of topics in science and engineering, especially physical chemistry and chemical engineering. The initial application of thermodynamics to mechanical heat engines was extended early on to the study of chemical compounds; chemical thermodynamics studies the role of entropy in chemical reactions and has provided the bulk of the expansion and knowledge of the field. Other formulations of thermodynamics emerged in the following decades: statistical thermodynamics, or statistical mechanics, concerns itself with statistical predictions of the collective motion of particles from their microscopic behavior. In 1909, Constantin Carathéodory presented a mathematical approach to the field in his axiomatic formulation of thermodynamics. A description of any thermodynamic system employs the four laws of thermodynamics, which form an axiomatic basis; the first law specifies that energy can be exchanged between physical systems as heat and work. In thermodynamics, interactions between large ensembles of objects are studied and categorized; central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles whose average motions define its properties, and these properties can in turn be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
With these tools, thermodynamics can describe how systems respond to changes in their environment, and this can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. This article focuses mainly on classical thermodynamics, which primarily studies systems in thermodynamic equilibrium; non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that nature abhors a vacuum. Shortly after Guericke, the English physicist and chemist Robert Boyle learned of Guericke's designs and, in 1656, in coordination with the English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time Boyle's Law was formulated, which states that the pressure and volume of a gas are inversely proportional. Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine; he did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, the engineer Thomas Savery built the first engine. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. Black and Watt performed experiments together, but it was Watt who conceived the idea of the external condenser, which resulted in a large increase in steam engine efficiency. Drawing on all this work led Sadi Carnot, the father of thermodynamics, to publish Reflections on the Motive Power of Fire.
2.
Carnot heat engine
–
A Carnot heat engine is an engine that operates on the reversible Carnot cycle. The basic model for this engine was developed by Nicolas Léonard Sadi Carnot in 1824; the Carnot engine model was graphically expanded upon by Benoît Paul Émile Clapeyron in 1834 and mathematically elaborated upon by Rudolf Clausius in 1857, from which work the concept of entropy emerged. Every thermodynamic system exists in a particular state; a thermodynamic cycle occurs when a system is taken through a series of different states and finally returned to its initial state. In the process of going through this cycle, the system may perform work on its surroundings: a heat engine acts by transferring energy from a warm region to a cool region of space and, in the process, converting some of that energy to mechanical work. The cycle may also be reversed. In the adjacent diagram, from Carnot's 1824 work, Reflections on the Motive Power of Fire, there are two bodies A and B, kept each at a constant temperature, that of A being higher than that of B: "These two bodies, to which we can give, or from which we can remove, the heat without causing their temperatures to vary, exercise the functions of two unlimited reservoirs of caloric. We will call the first the furnace and the second the refrigerator." Carnot then explains how we can obtain motive power, i.e. "work". The engine also acts as a cooler and hence can also act as a refrigerator. The previous image shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engines; the figure at right shows a diagram of a generic heat engine. In the diagram, the "working body", a term introduced by Clausius in 1850, could be any substance capable of expansion, as Carnot had postulated, such as vapor of water, vapor of alcohol, or vapor of mercury. The output work W here is the movement of the piston as it is used to turn a crank-arm; Carnot defined work as "weight lifted through a height".
The Carnot cycle, when acting as a heat engine, consists of the following steps. Reversible isothermal expansion of the gas at the hot temperature, TH: during this step the gas is allowed to expand and it does work on the surroundings. The temperature of the gas does not change during the process; the gas expansion is propelled by absorption of heat energy QH and of entropy ΔSH = QH/TH from the high-temperature reservoir. Reversible adiabatic expansion of the gas: for this step the piston and cylinder are assumed to be thermally insulated; the gas continues to expand, doing work on the surroundings and losing an equivalent amount of internal energy. The gas expansion causes it to cool to the cold temperature, TC. Reversible isothermal compression of the gas at the cold temperature, TC: now the surroundings do work on the gas, causing an amount of heat energy QC and of entropy ΔSC = QC/TC to flow out of the gas to the low-temperature reservoir. Reversible adiabatic compression: once again the piston and cylinder are assumed to be thermally insulated, and the surroundings do work on the gas, raising its temperature back to TH.
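The bookkeeping described above can be sketched in a few lines of code. This is an illustrative sketch, not from the article: the function names and the sample numbers are my own, and temperatures are assumed to be absolute (kelvin).

```python
# Sketch of ideal Carnot-cycle bookkeeping. Assumptions: reservoir
# temperatures t_hot, t_cold are absolute temperatures in kelvin and
# q_hot is the heat absorbed from the hot reservoir during the
# isothermal expansion. Names and numbers are illustrative.

def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of absorbed heat converted to work: 1 - TC/TH."""
    return 1.0 - t_cold / t_hot

def cycle_bookkeeping(q_hot, t_hot, t_cold):
    """Return (work output, heat rejected, entropy drawn from hot reservoir)."""
    w = carnot_efficiency(t_hot, t_cold) * q_hot
    q_cold = q_hot - w                    # first law over the full cycle
    ds_hot = q_hot / t_hot                # entropy absorbed, isothermal expansion
    ds_cold = q_cold / t_cold             # entropy rejected, isothermal compression
    assert abs(ds_hot - ds_cold) < 1e-9   # reversible cycle: no net entropy produced
    return w, q_cold, ds_hot
```

For example, 1000 J absorbed at 500 K with a 300 K cold reservoir yields at most 400 J of work, with 600 J rejected; the entropy absorbed (2 J/K) exactly equals the entropy rejected, as required for a reversible cycle.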
3.
Statistical mechanics
–
Statistical mechanics is a branch of theoretical physics that uses probability theory to study the average behaviour of a mechanical system whose state is uncertain. A common use of statistical mechanics is in explaining the thermodynamic behaviour of large systems. The branch of statistical mechanics that treats and extends classical thermodynamics is known as statistical thermodynamics or equilibrium statistical mechanics. Statistical mechanics also finds use outside equilibrium: an important subbranch, non-equilibrium statistical mechanics, deals with microscopically modelling the speed of irreversible processes that are driven by imbalances; examples of such processes include chemical reactions and flows of particles. In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics. Statistical mechanics fills the disconnection between the laws of mechanics and the experience of incomplete knowledge by adding some uncertainty about which state the system is in, in the form of a statistical ensemble: a probability distribution over all states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points; in quantum statistical mechanics, it is a probability distribution over pure states and can be compactly summarized as a density matrix. These two meanings are equivalent for many purposes and will be used interchangeably in this article. However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion; thus the ensemble itself also evolves, as the systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation or the von Neumann equation. One special class of ensembles is those that do not evolve over time.
These ensembles are known as equilibrium ensembles, and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics; non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems. The primary goal of statistical thermodynamics is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles. Whereas statistical mechanics proper involves dynamics, here the attention is focused on statistical equilibrium. Statistical equilibrium does not mean that the particles have stopped moving; rather, only that the ensemble is not evolving. A sufficient condition for statistical equilibrium of an isolated system is that the probability distribution is a function only of conserved properties. There are many different equilibrium ensembles that can be considered, and additional postulates are necessary to motivate why the ensemble for a given system should have one form or another. A common approach found in textbooks is to take the equal a priori probability postulate.
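A concrete equilibrium ensemble can be written down directly. The sketch below, with level energies and temperature chosen purely for illustration, computes the canonical (Boltzmann) probabilities for a system with discrete energy levels; because the distribution depends only on the energy, a conserved quantity, it is stationary under the dynamics, which is the statistical-equilibrium condition described above.

```python
import math

# Sketch: canonical ensemble probabilities p_i = exp(-E_i/kT) / Z for a
# system with discrete energy levels. The energies and kT passed in are
# illustrative assumptions, in the same (arbitrary) energy units.

def boltzmann_probs(energies, kT):
    """Probability of each microstate in the canonical ensemble."""
    weights = [math.exp(-e / kT) for e in energies]
    z = sum(weights)                  # partition function
    return [w / z for w in weights]
```

At any temperature the probabilities sum to one and lower-energy states are never less probable than higher-energy ones; in the limit of equal energies the distribution reduces to the equal a priori probabilities mentioned above.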
4.
Chemical thermodynamics
–
Chemical thermodynamics is the study of the interrelation of heat and work with chemical reactions, or with physical changes of state, within the confines of the laws of thermodynamics. The structure of chemical thermodynamics is based on the first two laws of thermodynamics: starting from them, four equations called the fundamental equations of Gibbs can be derived, and from these four, a multitude of equations relating the thermodynamic properties of the system can be derived using relatively simple mathematics. This outlines the framework of chemical thermodynamics. Gibbs' collection of papers provided the first unified body of thermodynamic theorems from the principles developed by others, such as Clausius. Two later books consolidated the subject: the first was the 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall, which was responsible for supplanting chemical affinity with free energy in the English-speaking world; the second was the 1933 book Modern Thermodynamics by the Methods of Willard Gibbs, written by E. A. Guggenheim. The primary objective of chemical thermodynamics is the establishment of a criterion for determining the feasibility or spontaneity of a given transformation. A common summary of the laws begins: the energy of the universe is constant. Breaking or making chemical bonds involves energy or heat, which may be either absorbed by or evolved from a chemical system. The energy that can be released in a reaction between a set of substances is equal to the difference between the energy content of the products and that of the reactants. This change is called the change in internal energy of a chemical reaction, and it is equal to the heat change if measured under conditions of constant volume.
Another useful term is the heat of combustion, which is the energy released due to a combustion reaction; food is similar to hydrocarbon and carbohydrate fuels, and when it is oxidized its caloric content is similar. In chemical thermodynamics the term used for the chemical potential energy is chemical potential. Even for homogeneous bulk materials, the energy functions depend on the composition, as do all the extensive thermodynamic potentials, so they are incomplete if the quantities of the chemical species, the mole numbers Ni, are omitted from the formulae. For a bulk system they are the last remaining extensive variables. The expression for dG is especially useful at constant T and P, conditions which are easy to achieve experimentally and which approximate the conditions in living creatures: (dG)T,P = ∑i μi dNi. While this formulation is mathematically defensible, it is not particularly transparent, since one does not simply add or remove molecules from a system. There is always a process involved in changing the composition, e.g. a chemical reaction, and we should find a notation which does not seem to imply that the amounts of the components can be changed independently.
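The composition term (dG)T,P = ∑i μi dNi, and the way a reaction ties the dNi together through stoichiometric coefficients νi (dNi = νi dξ for reaction extent ξ), can be sketched numerically. The chemical potentials and coefficients below are invented for illustration.

```python
# Sketch: Gibbs energy change from composition changes at constant T and P.
# mu: chemical potentials mu_i; dN: mole-number changes dN_i. The numbers
# used in examples are illustrative, not real substance data.

def dG_constant_T_P(mu, dN):
    """dG = sum_i mu_i * dN_i at constant temperature and pressure."""
    return sum(m * dn for m, dn in zip(mu, dN))

def dG_reaction(mu, nu, d_xi):
    """Same, with dN_i = nu_i * d_xi tied to a single reaction extent xi."""
    return dG_constant_T_P(mu, [n * d_xi for n in nu])
```

The second function addresses the transparency problem raised above: the amounts cannot be changed independently, so all the dNi are expressed through one reaction-extent increment dξ.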
5.
Equilibrium thermodynamics
–
Equilibrium thermodynamics is the systematic study of transformations of matter and energy in systems in terms of a concept called thermodynamic equilibrium. The word equilibrium implies a state of balance. Equilibrium thermodynamics, in its origins, derives from analysis of the Carnot cycle. Here, typically a system, such as a cylinder of gas, initially in its own state of thermodynamic equilibrium, is set out of balance via heat input from a combustion reaction; then, through a series of steps, the system settles into a final equilibrium state. In an equilibrium state the potentials, or driving forces, within the system are in exact balance. An equilibrium state is mathematically ascertained by seeking the extrema of a thermodynamic potential function, whose nature depends on the constraints imposed on the system. For example, a chemical reaction at constant temperature and pressure will reach equilibrium at a minimum of its components' Gibbs free energy. In equilibrium thermodynamics, by contrast with non-equilibrium treatments, the state of the system is considered uniform throughout, defined macroscopically by such quantities as temperature and pressure. Systems are studied in terms of change from one equilibrium state to another; such a change is called a thermodynamic process. Ruppeiner geometry is a type of information geometry used to study thermodynamics; it claims that thermodynamic systems can be represented by Riemannian geometry.
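The statement that equilibrium at constant T and P sits at a minimum of the Gibbs free energy can be demonstrated with a toy model. The curve below, for an idealized A ⇌ B mixture with mole fraction x of B (a linear standard-state term plus an ideal-mixing entropy term), is my own illustrative assumption; at its minimum the composition ratio reproduces exp(-ΔG°/RT), the equilibrium constant.

```python
import math

# Sketch: equilibrium as the minimum of a thermodynamic potential.
# Toy molar Gibbs energy for an ideal A <-> B mixture; dG0 is an
# assumed standard Gibbs-energy difference in J/mol.

R, T = 8.314, 298.15
dG0 = -1000.0  # illustrative value

def g(x):
    """Toy Gibbs energy at mole fraction x of B (0 < x < 1)."""
    return x * dG0 + R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))

def equilibrium_fraction(n=100000):
    """Locate the minimum of g on a grid; the extremum is the equilibrium."""
    xs = (i / n for i in range(1, n))
    return min(xs, key=g)
```

At the minimum, x/(1-x) equals exp(-dG0/(R*T)), i.e. the grid search recovers the analytic equilibrium condition dg/dx = 0.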
6.
Non-equilibrium thermodynamics
–
Non-equilibrium thermodynamics is concerned with transport processes and with the rates of chemical reactions. It relies on what may be thought of as more or less nearness to thermodynamic equilibrium. Non-equilibrium thermodynamics is a work in progress, not an established edifice; this article will try to sketch some approaches to it and some concepts important for it. Some systems and processes are, however, in a useful sense, near enough to thermodynamic equilibrium to allow description with useful accuracy by currently known non-equilibrium thermodynamics. Nevertheless, many systems and processes will always remain far beyond the scope of non-equilibrium thermodynamic methods, because of the very small size of atoms as compared with macroscopic systems. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. Another fundamental and very important difference is the difficulty, or impossibility, of defining entropy at an instant of time in macroscopic terms for systems not in thermodynamic equilibrium. A profound difference separates equilibrium from non-equilibrium thermodynamics: equilibrium thermodynamics ignores the time-courses of physical processes, whereas non-equilibrium thermodynamics attempts to describe their time-courses in continuous detail. Equilibrium thermodynamics restricts its considerations to processes that have initial and final states of thermodynamic equilibrium; the time-courses of processes are deliberately ignored. For example, in equilibrium thermodynamics a process is allowed to include even a violent explosion that cannot be described by non-equilibrium thermodynamics. Equilibrium thermodynamics does, however, for theoretical development, use the idealized concept of the quasi-static process. A quasi-static process is a conceptual smooth mathematical passage along a path of states of thermodynamic equilibrium.
It is an exercise in differential geometry rather than a process that could occur in actuality. Non-equilibrium thermodynamics, on the other hand, attempting to describe continuous time-courses, needs its state variables to have a very close connection with those of equilibrium thermodynamics. This profoundly restricts the scope of non-equilibrium thermodynamics and places heavy demands on its conceptual framework. The suitable relationship that defines non-equilibrium thermodynamic state variables requires, among other things, that measuring probes be small enough, and rapidly enough responding, to capture relevant non-uniformity. In reality these requirements are demanding, and it may be difficult or practically, or even theoretically, impossible to satisfy them. This is part of why non-equilibrium thermodynamics is a work in progress. One problem of interest is the thermodynamic study of non-equilibrium steady states, in which entropy production and some flows are non-zero, but there is no time variation of physical variables.
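A non-equilibrium steady state of the kind just described can be made concrete with steady heat conduction through a rod between two reservoirs: the heat flow and the entropy production rate are non-zero and constant, yet no physical variable changes in time. The geometry, conductivity, and temperatures below are illustrative assumptions.

```python
# Sketch: a non-equilibrium steady state. Heat conducts steadily through
# a rod (Fourier's law); entropy is produced at a constant, positive
# rate. k: thermal conductivity (W/m/K), area (m^2), length (m),
# temperatures in kelvin. All numbers used are illustrative.

def heat_flow(k, area, length, t_hot, t_cold):
    """Steady conductive heat current in watts."""
    return k * area * (t_hot - t_cold) / length

def entropy_production_rate(q_dot, t_hot, t_cold):
    """Total entropy production rate in W/K: cold gains q/Tc, hot loses q/Th."""
    return q_dot / t_cold - q_dot / t_hot
```

For t_hot > t_cold the production rate is strictly positive, which is the thermodynamic signature of a steady but irreversible process.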
7.
Laws of thermodynamics
–
The four laws of thermodynamics define fundamental physical quantities that characterize thermodynamic systems at thermal equilibrium. The laws describe how these quantities behave under various circumstances. The four laws of thermodynamics are as follows. Zeroth law of thermodynamics: if two systems are in thermal equilibrium with a third system, they are in thermal equilibrium with each other; this law helps define the notion of temperature. First law of thermodynamics: when energy passes, as work, as heat, or with matter, into or out of a system, the system's internal energy changes in accord with the law of conservation of energy; equivalently, perpetual motion machines of the first kind are impossible. Second law of thermodynamics: in a natural thermodynamic process, the sum of the entropies of the interacting thermodynamic systems increases; equivalently, perpetual motion machines of the second kind are impossible. Third law of thermodynamics: the entropy of a system approaches a constant value as the temperature approaches absolute zero; with the exception of non-crystalline solids, the entropy of a system at absolute zero is typically close to zero. There have been suggestions of additional laws, but none of them achieves the generality of the four accepted laws. The laws of thermodynamics are important fundamental laws in physics, and they are applicable in other natural sciences. The zeroth law is intended to allow the existence of an empirical parameter, the temperature, as a property of a system such that systems in thermal equilibrium with each other have the same temperature. Though this version of the law is one of the most commonly stated, some statements go further, so as to supply the important physical fact that temperature is one-dimensional. Because the law was recognized as fundamental only after the first three laws had been named, it was numbered the zeroth law.
The importance of the zeroth law as a foundation for the other laws is that it allows the definition of temperature in a non-circular way, without reference to entropy; such a temperature definition is said to be empirical. The first law of thermodynamics may be stated in several ways: the increase in internal energy of a closed system is equal to the total of the energy added to the system. In particular, if energy entering the system is supplied as heat and energy leaves the system as work, the heat is accounted as positive and the work as negative. This states that energy can be neither created nor destroyed; however, energy can change forms, and energy can flow from one place to another. A particular consequence of the law of conservation of energy is that the total energy of an isolated system does not change. The concept of internal energy is connected to temperature as follows: if a system has a definite temperature, then its total energy has three distinguishable components. If the system is in motion as a whole, it has kinetic energy; if the system as a whole is in an externally imposed force field, it has potential energy relative to some reference point in space. Finally, it has internal energy, which is a fundamental quantity of thermodynamics.
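The first-law bookkeeping and the three-component decomposition above can be sketched directly. The sign convention follows the text (heat added positive, work done by the system positive); all numbers are illustrative.

```python
# Sketch of first-law accounting for a closed system. Sign convention,
# as in the text: heat entering the system is positive, work done *by*
# the system on its surroundings is positive. Numbers are illustrative.

def delta_internal_energy(q_in, w_out):
    """First law: dU = Q - W for a closed system."""
    return q_in - w_out

def total_energy(u_internal, kinetic, potential):
    """The three components named above: internal + kinetic + potential."""
    return u_internal + kinetic + potential
```

For an isolated system both Q and W vanish, so the internal energy, and hence the total energy, does not change, which is the consequence of conservation of energy stated above.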
8.
First law of thermodynamics
–
The first law of thermodynamics is a version of the law of conservation of energy, adapted for thermodynamic systems. The law of conservation of energy states that the total energy of an isolated system is constant; energy can be transformed from one form to another, but can be neither created nor destroyed. Equivalently, perpetual motion machines of the first kind are impossible. Investigations into the nature of heat and work and their relationship began with the invention of the first engines used to extract water from mines. Improvements to such engines, so as to increase their efficiency and power output, came first from mechanics who worked with such machines; deeper investigations that placed those on a mathematical and physical basis came later. The first law of thermodynamics was developed empirically over about half a century. The first full statements of the law came in 1850 from Rudolf Clausius and from William Rankine; Rankine's statement is considered less distinct than Clausius's. A main aspect of the struggle was to deal with the previously proposed caloric theory of heat. In 1840, Germain Hess stated a conservation law for the so-called heat of reaction of chemical reactions; his law was later recognized as a consequence of the first law of thermodynamics. The primitive notion of heat was taken as empirically established, especially through calorimetry regarded as a subject in its own right. Jointly primitive with this notion of heat were the notions of empirical temperature and thermal equilibrium. This framework also took as primitive the notion of transfer of energy as work, but did not presume a concept of energy in general; by one author, this framework has been called the thermodynamic approach. The first explicit statement of the first law of thermodynamics was given by Rudolf Clausius in 1850. Because of its definition in terms of increments, the value of the internal energy of a system is not uniquely defined; it is defined only up to an additive constant of integration.
This non-uniqueness is in keeping with the mathematical nature of the internal energy; the internal energy is customarily stated relative to a conventionally chosen standard reference state of the system. The concept of internal energy is considered by Bailyn to be of enormous interest: its quantity cannot be measured directly, but can only be inferred. Bailyn likens it to the energy states of an atom, which were revealed by Bohr's energy relation hν = En2 − En1. In each case, a quantity is revealed by considering the difference of measured quantities. In 1907, George H. Bryan wrote about systems between which there is no transfer of matter: "Definition. When energy flows from one system or part of a system to another otherwise than by the performance of mechanical work, the energy so transferred is called heat."
9.
Second law of thermodynamics
–
The second law of thermodynamics states that the total entropy of an isolated system can only increase over time. It can remain constant in ideal cases, where the system is in a steady state or undergoing a reversible process. The increase in entropy accounts for the irreversibility of natural processes. Historically, the law was an empirical finding that was accepted as an axiom of thermodynamic theory; statistical thermodynamics, classical or quantum, explains the microscopic origin of the law. The second law has been expressed in many ways. Its first formulation is credited to the French scientist Sadi Carnot, who in 1824 showed that there is an upper limit to the efficiency of the conversion of heat to work in a heat engine. The first law of thermodynamics provides the definition of internal energy, associated with all thermodynamic systems. The second law is concerned with the direction of natural processes: it asserts that a natural process runs only in one sense and is not reversible. For example, heat flows spontaneously from hotter to colder bodies, never the reverse. Its modern definition is in terms of entropy. Different notations are used for infinitesimal amounts of heat and infinitesimal amounts of entropy because entropy is a function of state, while heat, like work, is not; for an actually possible infinitesimal process without exchange of matter with the surroundings, the second law requires dS ≥ δQ/T, with equality only for a reversible process. The second law allows a distinguished temperature scale, which defines an absolute, thermodynamic temperature, independent of the properties of any particular reference thermometric body. The Clausius and Kelvin statements cast the law in general physical terms, citing the impossibility of certain processes, and have been shown to be equivalent. The historical origin of the second law of thermodynamics was in Carnot's principle. The Carnot engine is a device of special interest to engineers who are concerned with the efficiency of heat engines.
Interpreted in the light of the first law, Carnot's principle is physically equivalent to the second law of thermodynamics. It states: the efficiency of a quasi-static or reversible Carnot cycle depends only on the temperatures of the two heat reservoirs, and is the same, whatever the working substance. A Carnot engine operated in this way is the most efficient possible heat engine using those two temperatures. The German scientist Rudolf Clausius laid the foundation for the second law of thermodynamics in 1850 by examining the relation between heat transfer and work. His statement uses the concept of passage of heat; as is usual in thermodynamic discussions, this means net transfer of energy as heat and does not refer to contributory transfers one way and the other.
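The Clausius statement can be checked numerically: a finite amount of heat passing from a hot body to a cold one raises total entropy, while the reverse passage would lower it and is therefore ruled out for an unaided process. This is an illustrative sketch with assumed temperatures, not a formula from the article.

```python
# Sketch: sign of the total entropy change when heat q leaves a body at
# temperature t_from and enters a body at t_to (both in kelvin, both
# large enough that their temperatures barely change). Illustrative only.

def total_entropy_change(q, t_from, t_to):
    """dS_total = -q/t_from + q/t_to for net passage of heat q."""
    return -q / t_from + q / t_to
```

Hot to cold gives a positive total entropy change (allowed); cold to hot unaided gives a negative one (forbidden by the second law).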
10.
Third law of thermodynamics
–
Entropy is related to the number of accessible microstates, and for a system consisting of many particles, quantum mechanics indicates that there is only one unique state of minimum energy. The constant value to which the entropy tends is called the residual entropy of the system. Here a condensed system refers to liquids and solids. A classical formulation by Nernst is: it is impossible for any process, no matter how idealized, to reduce the entropy of a system to its absolute-zero value in a finite number of operations. This unattainability principle was proven in 2017 by Masanes and Oppenheim. The third law was developed by the chemist Walther Nernst during the years 1906–12, and is therefore often referred to as Nernst's theorem or Nernst's postulate. The third law of thermodynamics states that the entropy of a system at absolute zero is a well-defined constant. This is because a system at zero temperature exists in its ground state. In 1912 Nernst stated the law thus: it is impossible for any procedure to lead to the isotherm T = 0 in a finite number of steps. An alternative version of the third law of thermodynamics, as stated by Gilbert N. Lewis and Merle Randall, states not only that ΔS will reach zero at 0 K, but that S itself will also reach zero for a perfectly crystalline substance. Some crystals form defects, which cause a residual entropy; this residual entropy disappears when the kinetic barriers to transitioning to one ground state are overcome. With the development of statistical mechanics, the third law of thermodynamics changed from a fundamental law to a derived law; the counting of states is from the reference state of absolute zero. In simple terms, the third law states that the entropy of a perfect crystal of a pure substance approaches zero as the temperature approaches zero. The perfect alignment of such a crystal leaves no ambiguity as to the location and orientation of each part of the crystal; as the energy of the crystal is reduced, the vibrations of the individual atoms are reduced to nothing, and the crystal becomes the same everywhere. The third law provides a reference point for the determination of entropy at any other temperature.
The entropy of a system, determined relative to this reference point, is then the absolute entropy of that system. Mathematically, the absolute entropy of any system at zero temperature is the natural log of the number of ground states times Boltzmann's constant kB = 1.38×10⁻²³ J K⁻¹. The entropy of a perfect crystal lattice, as defined by Nernst's theorem, is zero provided that its ground state is unique, because ln(1) = 0. As a result, the initial value S₀ = 0 is selected for convenience.
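The zero-temperature formula S = kB ln(number of ground states) is easy to evaluate. The sketch below also illustrates residual entropy for the textbook case of a molecular crystal where each molecule can freeze into one of two orientations; the per-mole figure it produces, R ln 2 ≈ 5.76 J/(mol K), is the standard illustration, while the function names are my own.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

# Sketch: third-law reference entropy. A unique ground state gives
# S(0) = k_B * ln(1) = 0; multiple accessible ground states per particle
# give a non-zero residual entropy.

def zero_point_entropy(ground_states_per_particle, n_particles=1):
    """S(0) = N * k_B * ln(W) for W ground states per particle."""
    return n_particles * K_B * math.log(ground_states_per_particle)
```

A unique ground state gives exactly zero, while two orientations per molecule over a mole of molecules give kB NA ln 2 = R ln 2 of residual entropy.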
11.
Thermodynamic system
–
Usually, by default, a thermodynamic system is taken to be in its own internal state of thermodynamic equilibrium, as opposed to a non-equilibrium state. The thermodynamic system is enclosed by walls that separate it from its surroundings. The thermodynamic state of a system is its internal state as specified by its state variables. In addition to the state variables, a thermodynamic account also requires a special kind of quantity called a state function; for example, if the state variables are internal energy, volume and mole amounts, a needed state function is the entropy. These quantities are inter-related by one or more functional relationships called equations of state. Thermodynamics imposes restrictions on the possible equations of state and on the characteristic equation; the restrictions are imposed by the laws of thermodynamics. The only states considered in equilibrium thermodynamics are equilibrium states. In 1824 Sadi Carnot described a thermodynamic system as the working substance of any heat engine under study. The very existence of such systems may be considered a fundamental postulate of equilibrium thermodynamics; according to Bailyn, the commonly rehearsed statement of the zeroth law of thermodynamics is a consequence of this fundamental postulate. In equilibrium thermodynamics the state variables do not include fluxes, because in a state of thermodynamic equilibrium all fluxes have zero values by postulation. Non-equilibrium thermodynamics allows its state variables to include non-zero fluxes, which describe transfers of matter, energy or entropy between a system and its surroundings. Thermodynamic equilibrium is characterized by the absence of flow of matter or energy. Equilibrium thermodynamics, as a subject in physics, considers macroscopic bodies of matter and energy in states of internal thermodynamic equilibrium. It uses the concept of thermodynamic processes, by which bodies pass from one equilibrium state to another by transfer of matter and energy between them.
The term thermodynamic system is used to refer to such bodies of matter. The possible equilibria between bodies are determined by the physical properties of the walls that separate the bodies. Equilibrium thermodynamics in general does not measure time. Equilibrium thermodynamics is a relatively simple and well settled subject; one reason for this is the existence of a well defined physical quantity called the entropy of a body. Non-equilibrium thermodynamics, by contrast, is characterized by the presence of flows of matter and energy. For this topic, very often the bodies considered have smooth spatial inhomogeneities, so that spatial gradients, for example a temperature gradient, are well enough defined. Thus the description of non-equilibrium thermodynamic systems is a field theory.
12.
Thermodynamic state
–
Once such a set of values of thermodynamic variables has been specified for a system, the values of all thermodynamic properties of the system are uniquely determined. Usually, by default, a thermodynamic state is taken to be one of thermodynamic equilibrium; this means that the state is not merely the condition of the system at a specific time, but a reproducible condition specified by a few variables. Thermodynamics sets up an idealized formalism that can be summarized by a system of postulates of thermodynamics. A thermodynamic system is not simply a physical system: it is a macroscopic object, the microscopic details of which are not explicitly considered in its thermodynamic description. The number of state variables required to specify the state depends on the system; it is always two or more, and usually not more than some dozen. The choice is usually made on the basis of the walls and surroundings that are relevant for the thermodynamic processes that are to be considered for the system. Some state variables can identify non-equilibrium states; such non-equilibrium identifying state variables indicate that some non-zero flow may be occurring within the system or between system and surroundings. State functions are uniquely determined by the thermodynamic state as it has been identified by the original state variables. For an idealized continuous or quasi-static process, this means that infinitesimal incremental changes in such variables are exact differentials; together, the incremental changes throughout the process, and the initial and final states, fully determine the idealized process. In the most commonly cited example, an ideal gas, the thermodynamic variables would be any three of amount of gas, pressure, temperature, and volume; thus the thermodynamic state would range over a three-dimensional state space. The remaining variable, as well as other quantities such as the internal energy and the entropy, would then be expressed as state functions of these three variables. The state functions satisfy certain constraints, expressed in the laws of thermodynamics.
Various thermodynamic diagrams have been developed to model the transitions between thermodynamic states. Physical systems found in nature are practically always dynamic and complex, but in many cases macroscopic physical systems are amenable to description based on proximity to ideal conditions. One such ideal condition is that of an equilibrium state. Such a state is an object of classical or equilibrium thermodynamics. Based on many observations, thermodynamics postulates that all systems that are isolated from the environment will evolve so as to approach unique stable equilibrium states. A few different types of equilibrium are listed below. Thermal equilibrium: when the temperature throughout a system is uniform, the system is in thermal equilibrium
13.
Equation of state
–
In physics and thermodynamics, an equation of state is a thermodynamic equation relating state variables which describes the state of matter under a given set of physical conditions. It is an equation which provides a mathematical relationship between two or more state functions associated with the matter, such as its temperature, pressure, and volume. Equations of state are useful in describing the properties of fluids, mixtures of fluids, and solids; the most prominent use of an equation of state is to correlate densities of gases and liquids to temperatures and pressures. One of the simplest equations of state for this purpose is the ideal gas law. However, it becomes increasingly inaccurate at higher pressures and lower temperatures. Therefore, a number of more accurate equations of state have been developed for gases and liquids; at present, there is no single equation of state that accurately predicts the properties of all substances under all conditions. Measurements of equation-of-state parameters, especially at high pressures, can be made using lasers. In addition, there are equations of state describing solids, and equations that model the interior of stars, including neutron stars and dense matter. A related concept is the perfect fluid equation of state used in cosmology. In practical contexts, equations of state are instrumental for PVT calculations in process engineering problems, especially in petroleum gas/liquid equilibrium calculations. A successful PVT model based on an equation of state can be helpful to determine the state of the flow regime. Boyle's Law was perhaps the first expression of an equation of state. In 1662, the Irish physicist and chemist Robert Boyle performed a series of experiments employing a J-shaped glass tube, which was sealed on one end. Mercury was added to the tube, trapping a fixed quantity of air in the short, sealed end; then the volume of gas was measured as additional mercury was added to the tube. 
The pressure of the gas could be determined by the difference between the mercury level in the short end of the tube and that in the long, open end. Through these experiments, Boyle noted that the gas volume varied inversely with the pressure; in mathematical form, this can be stated as pV = constant. The above relationship has also been attributed to Edme Mariotte and is sometimes referred to as Mariotte's law; however, Mariotte's work was not published until 1676. In 1787 the French physicist Jacques Charles found that oxygen, nitrogen, hydrogen, carbon dioxide, and air expand to roughly the same extent over the same 80 kelvin interval. Later, in 1802, Joseph Louis Gay-Lussac published results of similar experiments. Dalton's Law of partial pressures states that the pressure of a mixture of gases is equal to the sum of the pressures of all of the constituent gases alone
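Boyle's relation pV = constant can be sketched numerically. The pressures and volume below are illustrative values, not Boyle's measured data:

```python
# Boyle's law: at fixed temperature, p*V is constant for a fixed gas sample.
def boyle_volume(p1, v1, p2):
    """Volume after an isothermal pressure change, from p1*v1 = p2*v2."""
    return p1 * v1 / p2

# Doubling the pressure on 2 L (2e-3 m^3) of gas at 100 kPa halves its volume:
v2 = boyle_volume(100e3, 2.0e-3, 200e3)
```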
14.
Ideal gas
–
An ideal gas is a theoretical gas composed of many randomly moving point particles whose only interactions are perfectly elastic collisions. The ideal gas concept is useful because it obeys the ideal gas law, a simple equation of state. One mole of an ideal gas has a volume of 22.710947 litres at STP as defined by IUPAC since 1982. At normal conditions of temperature and pressure, most real gases behave qualitatively like an ideal gas. Many gases such as nitrogen, oxygen, hydrogen, and the noble gases can be treated as ideal gases within reasonable tolerances. The ideal gas model tends to fail at lower temperatures or higher pressures, when intermolecular forces and molecular size become important. It also fails for most heavy gases, such as many refrigerants. At high pressures, the volume of a real gas is often considerably greater than that of an ideal gas; at low temperatures, the pressure of a real gas is often considerably less than that of an ideal gas. At some point of low temperature and high pressure, real gases undergo a phase transition; the model of an ideal gas, however, does not describe or allow phase transitions. These must be modeled by more complex equations of state. The deviation from ideal gas behaviour can be described by a dimensionless quantity, the compressibility factor, Z. The ideal gas model has been explored in both Newtonian dynamics and in quantum mechanics; it has also been used to model the behavior of electrons in a metal, and it is one of the most important models in statistical mechanics. There are three classes of ideal gas: the classical or Maxwell–Boltzmann ideal gas, the ideal quantum Bose gas, composed of bosons, and the ideal quantum Fermi gas, composed of fermions. The classical ideal gas can be separated into two types: the classical thermodynamic ideal gas and the ideal quantum Boltzmann gas. The ideal quantum Boltzmann gas overcomes a limitation of the classical thermodynamic treatment, which leaves certain constants unspecified, by taking the high-temperature limit of the quantum gases; the behavior of a quantum Boltzmann gas is the same as that of a classical ideal gas except for the specification of these constants. 
The ideal gas law is an extension of experimentally discovered gas laws; real fluids at low density and high temperature approximate the behavior of a classical ideal gas, and the deviation is expressed as a compressibility factor. The classical thermodynamic properties of an ideal gas can be described by two equations of state. The first is the ideal gas law itself, obtained by combining Boyle's law (V ∝ 1/P at fixed T and n), Charles's law (V ∝ T at fixed P and n), and Avogadro's law (V ∝ n at fixed P and T): together these give V ∝ nT/P, and with R as the constant of proportionality, PV = nRT. The other equation of state of an ideal gas must express Joule's law, that the internal energy of a fixed mass of ideal gas depends only on its temperature. In order to switch from macroscopic quantities to microscopic ones, we use nR = N kB, where N is the number of gas particles and kB is the Boltzmann constant. The probability distribution of particles by velocity or energy is given by the Maxwell speed distribution. The assumption of spherical particles is necessary so that there are no rotational modes allowed, unlike in a diatomic gas
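As a sketch, the molar volume quoted earlier can be checked against pV = nRT in SI units; the STP conditions of 0 °C and 100 kPa follow the IUPAC definition mentioned above:

```python
R = 8.314462618  # J/(mol*K), molar gas constant

def ideal_gas_pressure(n, T, V):
    """Pressure from the ideal gas law: p = n*R*T/V."""
    return n * R * T / V

# One mole at 273.15 K occupying 22.710947 L gives p of about 100 kPa:
p = ideal_gas_pressure(1.0, 273.15, 22.710947e-3)
```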
15.
Real gas
–
Real gases are non-hypothetical gases whose molecules occupy space and have interactions; consequently, they do not adhere exactly to the ideal gas law. The deviation from ideality can be described by the compressibility factor Z. One widely used model is the Redlich–Kwong equation of state; it is almost always more accurate than the van der Waals equation, and often more accurate than some equations with more than two parameters. The equation is p = RT/(Vm − b) − a/(√T Vm(Vm + b)), where a and b are two empirical parameters that are not the same parameters as in the van der Waals equation. The virial equation derives from a perturbative treatment of statistical mechanics: pVm = RT(1 + B/Vm + C/Vm² + …), or alternatively pVm = RT(1 + B′p + C′p² + …), where B, C, B′, C′, … are temperature-dependent virial coefficients. The Peng–Robinson equation of state has the interesting property of being useful in modeling some liquids as well as real gases; in it, the γ constant is a derivative of the constant α
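As a rough sketch of how a two-parameter cubic equation departs from ideal behavior, the Redlich–Kwong form can be evaluated for carbon dioxide. The critical constants (Tc ≈ 304.13 K, pc ≈ 7.377 MPa) and the chosen state are illustrative assumptions:

```python
R = 8.314462618  # J/(mol*K)

def redlich_kwong_pressure(T, Vm, Tc, pc):
    """p = R*T/(Vm - b) - a/(sqrt(T)*Vm*(Vm + b)), with a, b from critical data."""
    a = 0.42748 * R**2 * Tc**2.5 / pc   # standard Redlich-Kwong parameterization
    b = 0.08664 * R * Tc / pc
    return R * T / (Vm - b) - a / (T**0.5 * Vm * (Vm + b))

T, Vm = 300.0, 1.0e-3  # illustrative state: 300 K, molar volume 1 L/mol
p_rk = redlich_kwong_pressure(T, Vm, Tc=304.13, pc=7.377e6)
p_ideal = R * T / Vm   # at this density the attractive term lowers p below ideal
```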
16.
State of matter
–
In physics, a state of matter is one of the distinct forms that matter takes on. Four states of matter are observable in everyday life: solid, liquid, gas, and plasma. Some other states are believed to be possible but remain theoretical for now. For a complete list, see the list of states of matter. Historically, the distinction is based on qualitative differences in properties. Matter in the solid state maintains a fixed volume and shape, with component particles close together and fixed into place. Matter in the liquid state maintains a fixed volume, but has a variable shape that adapts to fit its container; its particles are close together but move freely. Matter in the gaseous state has both variable volume and shape, adapting both to fit its container; its particles are neither close together nor fixed in place. Matter in the plasma state has variable volume and shape, but as well as neutral atoms, it contains a significant number of ions and electrons. Plasma is the most common form of matter in the universe. The term phase is sometimes used as a synonym for state of matter. In a solid the particles are packed closely together. The forces between particles are so strong that the particles cannot move freely but can only vibrate. As a result, a solid has a stable, definite shape; solids can only change their shape by force, as when broken or cut. In crystalline solids, the particles are packed in a regularly ordered, repeating pattern; there are various different crystal structures, and the same substance can have more than one structure. For example, iron has a body-centred cubic structure at temperatures below 912 °C. Ice has fifteen known crystal structures, or fifteen solid phases. Glasses and other non-crystalline, amorphous solids without long-range order are not thermal equilibrium ground states; therefore they are described below as nonclassical states of matter
17.
Thermodynamic equilibrium
–
Thermodynamic equilibrium is an axiomatic concept of thermodynamics. It is an internal state of a single thermodynamic system, or a relation between several thermodynamic systems connected by more or less permeable or impermeable walls. In thermodynamic equilibrium there are no net macroscopic flows of matter or of energy; in a system in its own state of internal thermodynamic equilibrium, no macroscopic change occurs. Systems in mutual thermodynamic equilibrium are simultaneously in mutual thermal, mechanical, and chemical equilibria. Systems can be in one kind of mutual equilibrium, though not in others. In thermodynamic equilibrium, all kinds of equilibrium hold at once and indefinitely. In a macroscopic equilibrium, almost or perfectly exactly balanced microscopic exchanges occur; this is the physical explanation of the notion of macroscopic equilibrium. A thermodynamic system in its own state of thermodynamic equilibrium has a spatially uniform temperature. Its intensive properties, other than temperature, may be driven to spatial inhomogeneity by a long range force field imposed on it by its surroundings. In non-equilibrium systems, by contrast, there are net flows of matter or energy. If such changes can be triggered to occur in a system in which they are not already occurring, it is said to be in a metastable equilibrium. Though it is not a widely named law, it is an axiom of thermodynamics that there exist states of thermodynamic equilibrium. Classical thermodynamics deals with states of dynamic equilibrium. The state of a system at equilibrium is the one for which some thermodynamic potential is minimized, or for which the entropy is maximized. Thermodynamic equilibrium is the stable stationary state that is approached or eventually reached as the system interacts with its surroundings over a long time. The above-mentioned potentials are mathematically constructed to be the thermodynamic quantities that are minimized under the conditions in the specified surroundings. 
For a completely isolated system, the entropy S is maximum at thermodynamic equilibrium. For a system with controlled constant temperature and volume, the Helmholtz free energy A is minimum at thermodynamic equilibrium; for a system with controlled constant temperature and pressure, the Gibbs free energy G is minimum at thermodynamic equilibrium. The various types of equilibrium are achieved as follows: two systems are in thermal equilibrium when their temperatures are the same; two systems are in mechanical equilibrium when their pressures are the same; two systems are in diffusive equilibrium when their chemical potentials are the same. In equilibrium, all forces are balanced and there is no significant external driving force. Often the surroundings of a thermodynamic system may also be regarded as another thermodynamic system. In this view, one may consider the system and its surroundings as two systems in contact, with long-range forces also linking them
18.
Control volume
–
In continuum mechanics and thermodynamics, a control volume is a mathematical abstraction employed in the process of creating mathematical models of physical processes. In an inertial frame of reference, it is a volume fixed in space or moving with constant flow velocity through which the continuum flows; the surface enclosing the control volume is referred to as the control surface. At steady state, a control volume can be thought of as an arbitrary volume in which the mass of the continuum remains constant. As a continuum moves through the control volume, the mass entering the control volume is equal to the mass leaving the control volume. At steady state, and in the absence of work and heat transfer, the energy within the control volume also remains constant. It is analogous to the classical mechanics concept of the free body diagram. Typically, to understand how a given physical law applies to the system under consideration, one first begins by considering how it applies to a small, control volume. There is nothing special about a particular control volume; it simply represents a small part of the system to which physical laws can be easily applied. This gives rise to what is termed a volumetric, or volume-wise, formulation of the mathematical model; in this way, the corresponding point-wise formulation of the mathematical model can be developed so it can describe the physical behaviour of an entire system. In continuum mechanics the conservation equations are in integral form. Finding forms of the equations that are independent of the control volumes allows simplification of the integral signs. Computations in continuum mechanics often require that the regular time derivative operator d/dt is replaced by the substantive derivative operator D/Dt. This can be seen as follows: consider a bug that is moving through a volume where there is some scalar, e.g. pressure, that varies with time and position, p = p(t, x, y, z). If the bug is just moving with the flow, the same formula applies with the bug's velocity replaced by the flow velocity; the last parenthesized expression is then the convective part of the derivative of the scalar pressure. 
Since the pressure p in this computation is an arbitrary scalar field, we may abstract it: the substantive derivative D/Dt = ∂/∂t + u·∇ applies to any scalar field carried by the flow
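The bug analogy can be sketched in one dimension. The pressure field p(x, t) = x·t and the velocity below are illustrative choices whose partial derivatives are known in closed form:

```python
# Substantial (material) derivative in 1-D: Dp/Dt = dp/dt + u * dp/dx,
# the rate of change of a scalar seen by an observer moving with velocity u.
def material_derivative(dp_dt, dp_dx, u):
    return dp_dt + u * dp_dx

x, t, u = 2.0, 3.0, 0.5
dp_dt = x   # partial dp/dt of the field p = x*t
dp_dx = t   # partial dp/dx of the field p = x*t
Dp_Dt = material_derivative(dp_dt, dp_dx, u)
```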
19.
Thermodynamic instruments
–
A thermodynamic instrument is any device which facilitates the quantitative measurement of thermodynamic systems. In order for a thermodynamic parameter to be truly defined, a technique for its measurement must be specified. For example, the ultimate definition of temperature is "what a thermometer reads". The question follows: what is a thermometer? There are two types of thermodynamic instruments: the meter and the reservoir. A thermodynamic meter is any device which measures any parameter of a thermodynamic system. A thermodynamic reservoir is a system which is so large that it does not appreciably alter its state parameters when brought into contact with the test system. The meter and the reservoir are two general complementary tools, and it is important that these two types of instruments remain distinct. A meter does not perform its task accurately if it behaves like a reservoir of the variable it is trying to measure. If, for example, a thermometer were to act as a temperature reservoir, it would alter the temperature of the system being measured. Ideal meters have no effect on the variables of the system they are measuring. A meter is a system which displays some aspect of its thermodynamic state to the observer. The nature of its contact with the system it is measuring can be controlled; the theoretical thermometer described below is just such a meter. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law of thermodynamics states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample of an ideal gas at constant pressure. 
From the ideal gas law, the volume of such a sample can be used as an indicator of temperature. Although pressure is defined mechanically, a pressure-measuring device called a barometer may also be constructed from a sample of an ideal gas held at a constant temperature. A calorimeter is a device which is used to measure and define the internal energy of a system. Some common thermodynamic meters are: Thermometer - a device which measures temperature as described above. Barometer - a device which measures pressure; an ideal gas barometer may be constructed by mechanically connecting an ideal gas to the system being measured, while thermally insulating it; the volume will then measure pressure, by the ideal gas equation P = NkT/V. Calorimeter - a device which measures the heat energy added to a system
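The ideal-gas thermometer described above can be sketched as follows; the sample size and pressure are illustrative choices:

```python
k_B = 1.380649e-23  # J/K, Boltzmann constant

def temperature_from_volume(p, V, N):
    """Ideal-gas thermometer: at fixed p and N, volume indicates T via pV = N*k_B*T."""
    return p * V / (N * k_B)

# One mole of particles at 100 kPa occupying 22.710947 L reads about 273.15 K:
T = temperature_from_volume(100e3, 22.710947e-3, 6.02214076e23)
```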
20.
Thermodynamic process
–
Classical thermodynamics considers three main kinds of thermodynamic process: change in a system, cycles in a system, and flow processes. Defined by change in a system, a thermodynamic process is a passage of a thermodynamic system from an initial to a final state of thermodynamic equilibrium. The initial and final states are the defining elements of the process; the actual course of the process is not the primary concern. This is the customary default meaning of the term thermodynamic process. A state of thermodynamic equilibrium endures unchangingly unless it is interrupted by a thermodynamic operation that initiates a thermodynamic process; the process may then be described by a process function that does depend on the path. Such idealized processes are useful for thermodynamic theory. Defined by a cycle of transfers into and out of a system, a cyclic process is described by the quantities transferred in the several stages of the cycle, which recur unchangingly. The descriptions of the states of the system are not the primary concern. Cyclic processes were important conceptual devices in the early days of thermodynamical investigation. Defined by flows through a system, a flow process is a steady state of flows into and out of a vessel. The internal state of the vessel's contents is not the primary concern. The quantities of primary concern describe the states of the inflow and outflow materials, and, on the side, the transfers of heat and work. Flow processes are of interest in engineering. 
The descriptions of the states of the system may be of little or even no interest. A cycle is a sequence of thermodynamic processes that, indefinitely often repeated, returns the system to its original state. For this, the states themselves are not necessarily described; what matters are the quantities transferred in each stage
21.
Isobaric process
–
An isobaric process is a thermodynamic process in which the pressure stays constant: ΔP = 0. The heat transferred to the system does work, but also changes the internal energy of the system. This article uses the sign convention for work where positive work is work done on the system. Using this convention, by the first law of thermodynamics, Q = ΔU − W, where W is work, U is internal energy, and Q is heat. Pressure-volume work by the system is defined as W = −∫ p dV, where Δ means change over the whole process. Since pressure is constant, this means that W = −p ΔV. Applying the ideal gas law, this becomes W = −nR ΔT, assuming that the quantity of gas stays constant, e.g. there is no phase transition during a chemical reaction. According to the equipartition theorem, the change in internal energy is related to the temperature of the system by ΔU = n cV ΔT. Substituting the last two equations into the first equation produces Q = n cV ΔT + nR ΔT = n (cV + R) ΔT = n cP ΔT, where cP is molar specific heat at constant pressure. To find the molar specific heat capacity of the gas involved, the heat capacity ratio is needed. The property γ is either called the adiabatic index or the heat capacity ratio; some published sources might use k instead of γ. Molar isochoric specific heat: cV = R/(γ − 1). Molar isobaric specific heat: cp = γR/(γ − 1). The values for γ are γ = 7/5 for diatomic gases like air and its major components, and γ = 5/3 for monatomic gases like the noble gases. If the process moves towards the right on a pressure-volume diagram, then it is an expansion; if the process moves towards the left, then it is a compression. The motivation for the specific sign conventions of thermodynamics comes from the early development of heat engines. When designing a heat engine, the goal is to have the system produce and deliver work output; the source of energy in a heat engine is a heat input. If the volume compresses (ΔV < 0), then the gas does negative work on its surroundings during isobaric compression, or equivalently, the environment does positive work on the gas
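The isobaric relations above can be sketched numerically, using the same work-on-system sign convention as the text; the amount of gas and temperature change are illustrative:

```python
R = 8.314462618  # J/(mol*K)

def isobaric(n, dT, gamma):
    """Isobaric heating of an ideal gas; positive work means work done ON the gas."""
    cV = R / (gamma - 1.0)           # molar isochoric heat capacity
    cP = gamma * R / (gamma - 1.0)   # molar isobaric heat capacity
    W = -n * R * dT                  # W = -p*dV: negative when the gas expands
    Q = n * cP * dT                  # heat supplied to the gas
    dU = n * cV * dT                 # change in internal energy
    return W, Q, dU

W, Q, dU = isobaric(n=1.0, dT=100.0, gamma=7 / 5)  # diatomic gas such as air
```

Note that the results obey the first law in the form used above, ΔU = Q + W.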
22.
Isochoric process
–
The isochoric process considered here should be a quasi-static process. An isochoric thermodynamic process is characterized by constant volume, i.e. ΔV = 0. The process does no pressure-volume work, since such work is defined by ΔW = P ΔV, where P is pressure. The sign convention is such that positive work is performed by the system on the environment. If the process is not quasi-static, work can perhaps still be done in a constant-volume thermodynamic process. For the heat transferred, ΔQ = m cv ΔT, where cv is the specific heat capacity at constant volume, T1 is the initial temperature and T2 is the final temperature. On a pressure-volume diagram, an isochoric process appears as a straight vertical line; its thermodynamic conjugate, an isobaric process, would appear as a straight horizontal line. If an ideal gas is used in an isochoric process, the heat added raises both its temperature and its pressure. Take for example a gas heated in a rigid container: the pressure and temperature of the gas will increase while the volume remains the same. The ideal Otto cycle is an example of an isochoric process: it is assumed that the burning of the gasoline-air mixture in an internal combustion engine car is instantaneous, so there is an increase in the temperature and the pressure of the gas inside the cylinder while the volume remains the same. The noun isochor and the adjective isochoric are derived from the Greek words ἴσος meaning equal and χώρος meaning space. See also: isobaric process, adiabatic process, cyclic process, isothermal process, polytropic process
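The constant-volume heat balance above can be sketched as follows; the mass and the specific heat of air are assumed textbook values for illustration:

```python
def isochoric_heat(m, cv, T1, T2):
    """Constant-volume process: no p-V work, so all heat becomes internal energy,
    dQ = dU = m*cv*(T2 - T1)."""
    return m * cv * (T2 - T1)

# 2 kg of air (cv of roughly 718 J/(kg*K), an assumed value) heated by 50 K:
Q = isochoric_heat(2.0, 718.0, 300.0, 350.0)
```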
23.
Isothermal process
–
An isothermal process is a change of a system in which the temperature remains constant: ΔT = 0. In contrast, an adiabatic process is one in which a system exchanges no heat with its surroundings. In other words, in an isothermal process, ΔT = 0 and (for an ideal gas) therefore ΔU = 0, but Q ≠ 0, while in an adiabatic process, Q = 0 but ΔT ≠ 0. Isothermal processes can occur in any kind of system that has some means of regulating the temperature, including highly structured machines. Some parts of the cycles of some heat engines are carried out isothermally. In the thermodynamic analysis of chemical reactions, it is usual to first analyze what happens under isothermal conditions. Phase changes, such as melting or evaporation, are also isothermal processes when, as is usually the case, they occur at constant pressure. Isothermal processes are often used as a starting point in analyzing more complex, non-isothermal processes. Isothermal processes are of special interest for ideal gases. This is a consequence of Joule's second law, which states that the internal energy of a fixed amount of an ideal gas depends only on its temperature. Thus, in an isothermal process the internal energy of an ideal gas is constant. This is a result of the fact that in an ideal gas there are no intermolecular forces. Note that this is true only for ideal gases; the internal energy depends on pressure as well as on temperature for liquids, solids, and real gases. In the isothermal compression of a gas, work is done on the system to decrease the volume. Doing work on the gas increases the internal energy and will tend to increase the temperature; to maintain the constant temperature, energy must leave the system as heat. If the gas is ideal, the amount of energy entering the environment is equal to the work done on the gas, because internal energy does not change. For details of the calculations, see calculation of work. For an adiabatic process, in which no heat flows into or out of the gas because its container is well insulated, Q = 0. If there is also no work done, i.e. a free expansion, there is no change in internal energy. 
For an ideal gas, this means that the process is also isothermal; thus, specifying that a process is isothermal is not sufficient to specify a unique process. For the special case of a gas to which Boyle's law applies, the product pV is constant, and the value of the constant is nRT, where n is the number of moles of gas present and R is the ideal gas constant. In other words, the ideal gas law pV = nRT applies. This means that p = nRT/V = constant/V holds. The family of curves generated by this equation is shown in the graph in Figure 1
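The isothermal work integral for an ideal gas follows directly from p = nRT/V; the state values below are illustrative:

```python
import math

R = 8.314462618  # J/(mol*K)

def isothermal_work_by_gas(n, T, Vi, Vf):
    """Work done BY an ideal gas at constant T: W = integral of p dV = n*R*T*ln(Vf/Vi)."""
    return n * R * T * math.log(Vf / Vi)

# One mole at 300 K doubling its volume; since dU = 0 here, the heat absorbed Q = W:
W = isothermal_work_by_gas(1.0, 300.0, 1.0e-3, 2.0e-3)
```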
24.
Adiabatic process
–
In thermodynamics, an adiabatic process is one that occurs without transfer of heat or matter between a thermodynamic system and its surroundings. In an adiabatic process, energy is transferred only as work. The adiabatic process provides a rigorous conceptual basis for the theory used to expound the first law of thermodynamics, and as such it is a key concept in thermodynamics. The adiabatic flame temperature is the temperature that would be achieved by a flame if the process of combustion took place in the absence of heat loss to the surroundings. A process that does not involve the transfer of heat or matter into or out of a system, so that Q = 0, is called an adiabatic process, and the assumption that a process is adiabatic is a frequently made simplifying assumption. For example, the compression of a gas within a cylinder of an engine is often assumed to occur so rapidly that little of the system's energy can be transferred out as heat, even though the cylinders are not insulated and are quite conductive; the same can be said of the expansion process of such a system. The assumption of adiabatic isolation of a system is a useful one. The behaviour of actual machines deviates from these idealizations, but the assumption of such perfect behaviour provides a useful first approximation of how the real world works. According to Laplace, when sound travels in a gas, there is no loss of heat in the medium and the propagation of sound is adiabatic. For this adiabatic process, the modulus of elasticity is E = γP, where γ is the ratio of specific heats at constant pressure and at constant volume. A process that is both adiabatic and reversible is called an isentropic process. Fictively, if such a process is reversed, the energy added as work can be recovered entirely as work done by the system. If the walls of a system are not adiabatic, and energy is transferred in as heat, entropy is transferred into the system with the heat. Such a process is neither adiabatic nor isentropic, having Q > 0. Naturally occurring adiabatic processes are irreversible. 
The transfer of energy as work into an adiabatically isolated system can be imagined as being of two idealized extreme kinds. In one such kind, there is no entropy produced within the system; in nature, this ideal kind occurs only approximately, because it demands an infinitely slow process and no sources of dissipation. The other extreme kind is isochoric work, for which energy is added as work solely through friction or viscous dissipation within the system. The second law of thermodynamics observes that a natural process, of transfer of energy as work, always consists at least partly of isochoric work. Every natural process, adiabatic or not, is irreversible, with ΔS > 0. The adiabatic compression of a gas causes a rise in temperature of the gas; adiabatic expansion against pressure, or against a spring, causes a drop in temperature. In contrast, free expansion is an isothermal process for an ideal gas. Adiabatic heating occurs when the pressure of a gas is increased from work done on it by its surroundings. This finds practical application in diesel engines, which rely on the lack of quick heat dissipation during their compression stroke to elevate the fuel vapor temperature sufficiently to ignite it. Adiabatic heating also occurs in the Earth's atmosphere when an air mass descends, for example in a katabatic wind or Foehn wind
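The temperature rise from adiabatic compression can be sketched with the ideal-gas relation T·V^(γ−1) = constant for a reversible adiabatic process; the diesel-like compression ratio of 15 is an assumed illustrative value:

```python
def adiabatic_final_temperature(T1, compression_ratio, gamma):
    """Reversible adiabatic compression of an ideal gas:
    T*V**(gamma-1) is constant, so T2 = T1*(V1/V2)**(gamma-1)."""
    return T1 * compression_ratio ** (gamma - 1.0)

# Air (gamma of about 1.4) at 300 K compressed 15:1 heats to roughly 886 K,
# hot enough to ignite diesel fuel vapor:
T2 = adiabatic_final_temperature(300.0, 15.0, 1.4)
```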
25.
Isentropic process
–
An isentropic process is an idealized thermodynamic process that is both adiabatic and reversible; such an idealized process is useful in engineering as a model of and basis of comparison for real processes. The word isentropic is occasionally, though not customarily, interpreted in another way, as merely meaning constant entropy even for irreversible processes; this is contrary to its original and customarily used definition. The second law of thermodynamics states that T dS ≥ δQ, where δQ is the amount of energy the system gains by heating, T is the temperature of the system, and dS is the change in entropy. The equal sign refers to a reversible process, which is an imagined idealized theoretical limit. For an isentropic process, which by definition is reversible, there is no transfer of energy as heat because the process is adiabatic. For reversible processes, an isentropic transformation is carried out by thermally insulating the system from its surroundings. The entropy of a given mass does not change during a process that is internally reversible and adiabatic; a process during which the entropy remains constant is called an isentropic process, written ΔS = 0 or s1 = s2. Some devices commonly idealized as isentropic include pumps, gas compressors, turbines, and nozzles. Real-world cycles have inherent losses due to inefficient compressors and turbines; real-world systems are not truly isentropic but are rather idealized as isentropic for calculation purposes. In fluid dynamics, an isentropic flow is a fluid flow that is both adiabatic and reversible: that is, no heat is added to the flow. For an isentropic flow of a perfect gas, several relations can be derived to define the pressure, density and temperature along a streamline. Note that energy can still be exchanged with the flow in an isentropic transformation, as long as it is not exchanged as heat; an example of such an exchange would be an isentropic expansion or compression that entails work done on or by the flow. For an isentropic flow, entropy density can vary between different streamlines; if the entropy density is the same everywhere, then the flow is said to be homentropic. 
All reversible adiabatic processes are isentropic. For any transformation of an ideal gas, it is always true that dU = n Cv dT and dH = n Cp dT. Using the general results derived above for dU and dH, for an isentropic process dU = n Cv dT = −p dV and dH = n Cp dT = V dp. So for an isentropic process of an ideal gas, the heat capacity ratio can be written as γ = Cp/Cv = −(V dp)/(p dV). Hence, on integrating the above equation, assuming a calorically perfect gas, we get pV^γ = constant
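The integrated relation pV^γ = constant lets one propagate a state along an isentropic process; the initial state and volume change below are illustrative:

```python
def isentropic_final_pressure(p1, V1, V2, gamma):
    """From p*V**gamma = constant along an isentrope: p2 = p1*(V1/V2)**gamma."""
    return p1 * (V1 / V2) ** gamma

# Monatomic ideal gas (gamma = 5/3) at 100 kPa, volume halved isentropically;
# the pressure rises by a factor of 2**(5/3), roughly 3.17:
p2 = isentropic_final_pressure(100e3, 2.0e-3, 1.0e-3, 5.0 / 3.0)
```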
26.
Isenthalpic process
–
An isenthalpic process or isoenthalpic process is a process that proceeds without any change in enthalpy, H, or specific enthalpy, h. The throttling process is an example of an isenthalpic process. Consider the lifting of a relief valve or safety valve on a pressure vessel. The specific enthalpy of the fluid inside the vessel is the same as the specific enthalpy of the fluid as it escapes from the valve. With a knowledge of the specific enthalpy of the fluid and the pressure outside the vessel, the state of the escaping fluid can be determined
27.
Quasistatic process
–
In thermodynamics, a quasi-static process is a thermodynamic process that happens slowly enough for the system to remain in internal equilibrium. An example of this is quasi-static compression, where the volume of a system changes at a rate slow enough to allow the pressure to remain uniform throughout the system. Any reversible process is necessarily a quasi-static one; however, quasi-static processes involving entropy production are irreversible. Some ambiguity exists in the literature concerning the distinction between quasi-static and reversible processes, as these are sometimes taken as synonyms. The reason is the theorem that any reversible process is also a quasi-static one, even though the converse does not hold. The definition given above is closer to the intuitive understanding of the word "quasi-" (almost) "static"
28.
Polytropic process
–
A polytropic process is a thermodynamic process that obeys the relation p v^n = C, where p is the pressure, v is specific volume, n is the polytropic index, and C is a constant. The polytropic process equation can describe multiple expansion and compression processes that include heat transfer. In addition, when the ideal gas law applies, n = 1 is an isothermal process and n = γ is an adiabatic process. Consider an ideal gas in a closed system undergoing a slow process with negligible changes in kinetic and potential energy. Assuming the coefficient K remains constant during the transformation, the relation can be integrated to give p v^(K + γ) = C, where C is a constant; thus the process is polytropic, with index n = K + γ. This derivation can be expanded to include polytropic processes in open systems, including instances where the kinetic energy is significant, and to include irreversible polytropic processes. For certain values of the index, the process is synonymous with other common processes; some examples of the effects of varying index values are given in the table. When the index n lies between any two of the former values, the polytropic curve will cut through the curves of the two bounding indices. For an ideal gas, 1 < γ < 2, since by Mayer's relation γ = c_p/c_v = (c_v + R)/c_v = 1 + R/c_v = c_p/(c_p − R). A solution to the Lane–Emden equation using a polytropic fluid is known as a polytrope.
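The boundary work of a quasi-static polytropic process follows directly from p v^n = C. The sketch below uses the standard closed-form result; the pressures, volumes, and indices are assumed example values.

```python
import math

# Work done BY the gas in a quasi-static polytropic process p*V**n = C:
#   W = (p1*V1 - p2*V2) / (n - 1)  for n != 1
#   W = p1*V1 * ln(V2/V1)          in the isothermal limit n = 1

def polytropic_work(p1, V1, V2, n):
    if math.isclose(n, 1.0):
        return p1 * V1 * math.log(V2 / V1)   # isothermal case
    p2 = p1 * (V1 / V2) ** n                 # final pressure from p*V**n = C
    return (p1 * V1 - p2 * V2) / (n - 1.0)

W_isothermal = polytropic_work(100e3, 0.01, 0.02, 1.0)   # n = 1
W_adiabatic  = polytropic_work(100e3, 0.01, 0.02, 1.4)   # n = gamma
```

As expected, the adiabatic expansion (n = γ) delivers less work than the isothermal one between the same volumes, since no heat is supplied to sustain the pressure.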
29.
Free expansion
–
Free expansion is an irreversible process in which a gas expands into an insulated evacuated chamber; it is also called Joule expansion. Real gases experience a temperature change during free expansion, whereas the temperature of an ideal gas remains unchanged. Since the gas expands, Vf > Vi, which implies that the pressure drops. During free expansion, no work is done by the gas. The gas goes through states that are not in thermodynamic equilibrium before reaching its final state; for example, the pressure changes locally from point to point, and the volume occupied by the gas is not a well-defined quantity. A free expansion is typically achieved by opening a stopcock that allows the gas to expand into a vacuum. Although it would be difficult to achieve in reality, it is instructive to imagine a free expansion caused by moving a piston faster than virtually any atom; no work is done because there is no pressure on the piston. No heat energy leaves or enters the system. Nevertheless, there is an entropy change, which must be computed from the well-known formula ΔS = ∫ dQ_rev/T evaluated along a reversible path connecting the same initial and final states.
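For an ideal gas, evaluating ΔS = ∫ dQ_rev/T along a reversible isothermal path between the same end states gives the familiar result ΔS = nR ln(Vf/Vi). The sketch below uses assumed example values for the mole count and volumes.

```python
import math

R = 8.314  # J/(mol K), universal gas constant

def free_expansion_entropy(n_mol, V_i, V_f):
    """Entropy change of an ideal gas in free expansion: n*R*ln(Vf/Vi).
    Computed along a reversible isothermal path between the same states,
    since the temperature of an ideal gas is unchanged by Joule expansion."""
    return n_mol * R * math.log(V_f / V_i)

dS = free_expansion_entropy(1.0, 1.0, 2.0)   # doubling the volume
```

The result is positive, as it must be for an irreversible process in an isolated system, even though no heat was exchanged.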
30.
Reversible process (thermodynamics)
–
Throughout an entire reversible process, the system is in thermodynamic equilibrium with its surroundings. Since it would take an infinite amount of time for the process to finish, perfectly reversible processes are impossible. However, if the system undergoing the changes responds much faster than the applied change, the deviation from reversibility may be negligible. In a reversible cycle, a reversible process which is cyclic, the system and its surroundings will be returned to their original states if the forward cycle is followed by the reverse cycle. Thermodynamic processes can be carried out in one of two ways: reversibly or irreversibly. Reversibility refers to performing a reaction continuously at equilibrium. The phenomenon of maximized work and minimized heat can be visualized on a pressure-volume curve, as the area beneath the equilibrium curve. In order to maximize work, one must follow the equilibrium curve closely. Irreversibility, when described in terms of pressure and volume, occurs when the pressure or the volume of a system changes so dramatically and instantaneously that the other does not have time to catch up. A classic example of irreversibility is allowing a certain volume of gas to be released into a vacuum by releasing pressure on a sample and thus allowing it to occupy a large space; significant work would be required, with a corresponding amount of energy dissipated as heat flow to the environment, in order to reverse the process. An alternative definition of a reversible process is a process that, after it has taken place, can be reversed, returning the system and its surroundings to their initial states. In thermodynamic terms, a process taking place refers to its transition from its initial state to its final state. In an irreversible process, finite changes are made, so the system is not at equilibrium throughout the process; at the same point in an irreversible cycle, the system will be in the same state, but the surroundings are permanently changed after each cycle. A reversible process changes the state of a system in such a way that the net change in the entropy of the system and its surroundings is zero.
In some cases, it is important to distinguish between reversible and quasistatic processes: reversible processes are always quasistatic, but the converse is not always true. For example, a compression of a gas in a cylinder where there exists friction between the piston and the cylinder is a quasistatic but not reversible process. Historically, the term Tesla principle was used to describe certain reversible processes invented by Nikola Tesla; however, this phrase is no longer in conventional use. The principle stated that some systems could be reversed and operated in a complementary manner; it was developed during Tesla's research in alternating currents, where the current's magnitude and direction varied cyclically. During a demonstration of the Tesla turbine, the disks revolved; if the turbine's operation was reversed, the disks acted as a pump.
31.
Irreversible process
–
In science, a process that is not reversible is called irreversible. This concept arises frequently in thermodynamics. A system that undergoes an irreversible process may still be capable of returning to its initial state; however, the impossibility occurs in restoring the environment to its own initial conditions. An irreversible process increases the entropy of the universe; however, because entropy is a state function, the change in entropy of the system is the same whether the process is reversible or irreversible. The second law of thermodynamics can be used to determine whether a process is reversible or not. All complex natural processes are irreversible: a certain amount of energy is dissipated as the molecules of the working body do work on each other when they change from one state to another. Many biological processes that were once thought to be reversible have been found to actually be a pairing of two irreversible processes. Thermodynamics defines the statistical behaviour of large numbers of entities, whose exact behavior is given by more specific laws. The irreversibility described by thermodynamics must be statistical in nature; that is, a decrease in entropy must be highly unlikely, but not impossible. The German physicist Rudolf Clausius, in the 1850s, was the first to quantify the discovery of irreversibility in nature through his introduction of the concept of entropy. For example, a cup of hot coffee placed in a room-temperature area will transfer heat to its surroundings. However, that same initial cup of coffee will never absorb heat from its surroundings, growing even hotter while the temperature of the room decreases; therefore, the process of the coffee cooling down is irreversible unless extra energy is added to the system. However, a paradox arose when attempting to reconcile microanalysis of a system with observations of its macrostate: many processes are mathematically reversible in their microstate when analyzed using classical Newtonian mechanics.
His formulas quantified the work of William Thomson, 1st Baron Kelvin, who had argued that energy tends to dissipate. In 1890, Henri Poincaré published his first explanation of nonlinear dynamics, also called chaos theory. Sensitivity to the initial conditions of the system and its environment compounds into an exhibition of irreversible characteristics within the observable, physical realm. In the physical realm, many processes are present to which the inability to achieve 100% efficiency in energy transfer can be attributed. The following is a list of events which contribute to the irreversibility of processes. In a free expansion, the internal energy of the gas remains the same while the volume increases; the original state cannot be recovered by simply compressing the gas to its original volume, since the internal energy will be increased by this compression. The original state can only be recovered by then cooling the re-compressed system; the diagram to the right applies only if the first expansion is free.
32.
Endoreversible thermodynamics
–
Endoreversible thermodynamics is a subset of irreversible thermodynamics aimed at making more realistic assumptions about heat transfer than are typically made in reversible thermodynamics. Endoreversible thermodynamics was developed in simultaneous work by Novikov and Chambadal. For some typical cycles, the above equation gives results showing that the endoreversible efficiency much more closely models the observed data than the Carnot efficiency does. However, such an engine violates Carnot's principle, which states that work can be done any time there is a difference in temperature. The fact that the hot and cold reservoirs are not at the same temperature as the working fluid they are in contact with means that work can be, and is, done at the hot and cold reservoirs. The result is tantamount to coupling the high- and low-temperature parts of the cycle, so that the cycle collapses. It is well known that the effective temperature is the geometric mean √(T_H T_L), so that the efficiency is the Carnot efficiency for an engine working between T_H and √(T_H T_L). Due to occasional confusion about the origins of the above equation, it is sometimes attributed variously to Chambadal, Novikov, Curzon, and Ahlborn. An introduction to endoreversible thermodynamics is given in the thesis by Katharina Wagner; it is also introduced by Hoffman et al. A thorough discussion of the concept, together with many applications in engineering, is given in the book by Hans Ulrich Fuchs.
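The comparison between the Carnot and endoreversible efficiencies can be sketched as follows. The reservoir temperatures are illustrative assumptions, chosen only to show that the endoreversible value sits well below the Carnot bound.

```python
import math

# Carnot efficiency vs the endoreversible (Curzon-Ahlborn) efficiency.
# The latter is the Carnot efficiency between T_H and sqrt(T_H * T_L).

def carnot(T_hot, T_cold):
    return 1.0 - T_cold / T_hot

def endoreversible(T_hot, T_cold):
    """eta = 1 - sqrt(T_cold / T_hot)."""
    return 1.0 - math.sqrt(T_cold / T_hot)

eta_c  = carnot(840.0, 300.0)          # ~0.64 (ideal limit)
eta_ca = endoreversible(840.0, 300.0)  # ~0.40 (closer to observed plants)
```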
33.
Thermodynamic cycle
–
In the process of passing through a cycle, the working fluid may convert heat from a warm source into useful work and dispose of the remaining heat to a cold sink, thereby acting as a heat engine. Conversely, the cycle may be reversed and use work to move heat from a cold source to a warm sink, thereby acting as a heat pump. During a closed cycle, the system returns to its original thermodynamic state of temperature and pressure. Process quantities, such as heat and work, are process dependent; Ein might be the work and heat input during the cycle and Eout would be the work and heat output during the cycle. The first law of thermodynamics also dictates that the net heat input is equal to the net work output over a cycle. The repeating nature of the process path allows for continuous operation, making the cycle an important concept in thermodynamics. Thermodynamic cycles are often represented mathematically as quasistatic processes in the modeling of the workings of an actual device. Two primary classes of thermodynamic cycles are power cycles and heat pump cycles. Power cycles are cycles which convert some heat input into a mechanical work output; heat pump cycles use mechanical work to transfer heat from low to high temperatures. Cycles composed entirely of quasistatic processes can operate as power or heat pump cycles by controlling the process direction. On a pressure-volume (PV) diagram or temperature-entropy diagram, the clockwise and counterclockwise directions indicate power and heat pump cycles, respectively. Because the net variation in state properties during a thermodynamic cycle is zero, the cycle forms a closed loop on a PV diagram. A PV diagram's Y axis shows pressure and its X axis shows volume. If the cyclic process moves clockwise around the loop, then W will be positive, and it represents a heat engine; if it moves counterclockwise, then W will be negative, and it represents a heat pump. Isothermal: the process is at a constant temperature during that part of the cycle; this does not exclude energy transfer as heat or work.
Isobaric: pressure in that part of the cycle remains constant; this does not exclude energy transfer as heat or work. Isochoric: the process is at constant volume; this does not exclude energy transfer as heat or work. Isentropic: the process is one of constant entropy; this excludes the transfer of heat but not work. Thermodynamic power cycles are the basis for the operation of heat engines. Power cycles can be organized into two categories: real cycles and ideal cycles. Cycles encountered in real-world devices are difficult to analyze because of the presence of complicating effects. Power cycles can also be divided according to the type of heat engine they seek to model. The cycles most commonly used to model internal combustion engines are the Otto cycle, which models gasoline engines, and the Diesel cycle, which models diesel engines. There is no difference between a refrigeration cycle and a heat pump cycle, except that the purpose of the refrigerator is to cool a very small space while the heat pump is intended to warm a house; both work by moving heat from a cool space to a warm space.
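The ideal cycles mentioned above admit closed-form efficiencies. As a minimal sketch, the air-standard Otto cycle's efficiency depends only on the compression ratio r and the heat capacity ratio γ; the value r = 9 is an assumed example, roughly typical of a gasoline engine.

```python
# Air-standard Otto cycle efficiency: eta = 1 - r**(1 - gamma),
# where r is the compression ratio and gamma the heat capacity ratio.

def otto_efficiency(r, gamma=1.4):
    return 1.0 - r ** (1.0 - gamma)

eta = otto_efficiency(9.0)   # ideal-cycle value; real engines achieve less
```

Real gasoline engines reach well under this ideal figure, consistent with the distinction the text draws between real and ideal cycles.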
34.
Heat engine
–
In thermodynamics, a heat engine is a system that converts heat or thermal energy, and sometimes chemical energy, to mechanical energy, which can then be used to do mechanical work. It does this by bringing a working substance from a higher temperature state to a lower temperature state. A heat source generates thermal energy that brings the working substance to the high temperature state. The working substance generates work in the working body of the engine while transferring heat to the colder sink until it reaches a low temperature state. During this process some of the thermal energy is converted into work by exploiting the properties of the working substance. The working substance can be any system with a non-zero heat capacity, though it usually is a gas or liquid. During this process, some heat is lost to the surroundings and cannot be converted to work. In general, an engine converts energy to mechanical work. Heat engines distinguish themselves from other types of engines by the fact that their efficiency is fundamentally limited by Carnot's theorem. The heat source that supplies energy to the engine can be powered by virtually any kind of energy. Heat engines are often confused with the cycles they attempt to implement; typically, the term engine is used for a physical device and cycle for the model. In thermodynamics, heat engines are often modeled using an engineering model such as the Otto cycle. The theoretical model can be refined and augmented with data from an operating engine, since very few implementations of heat engines exactly match their underlying thermodynamic cycles. In general terms, the larger the difference in temperature between the hot source and the cold sink, the larger is the potential thermal efficiency of the cycle. The efficiency of heat engines proposed or used today has a large range: about 25% for most automotive gasoline engines, and 49% for a supercritical coal-fired power station such as the Avedøre Power Station. All these processes gain their efficiency from the temperature drop across them.
Significant energy may also be consumed by auxiliary equipment, such as pumps, which effectively reduces efficiency. Heat engines can be characterized by their specific power, which is typically given in kilowatts per litre of engine displacement. The result offers an approximation of the peak power output of an engine.
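The power approximation from specific power can be sketched in one line. Both the displacement and the kW-per-litre figure below are assumed illustrative numbers, not values from the text.

```python
# Approximate peak power from specific power (kW per litre of displacement).

def engine_power_kw(displacement_litres, specific_power_kw_per_l):
    """Power estimate = displacement * specific power."""
    return displacement_litres * specific_power_kw_per_l

P = engine_power_kw(2.0, 50.0)   # a 2.0 L engine at an assumed 50 kW/L
```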
35.
Heat pump and refrigeration cycle
–
Thermodynamic heat pump cycles or refrigeration cycles are the conceptual and mathematical models for heat pumps and refrigerators. A heat pump is a machine or device that moves heat from one location at a lower temperature to another location at a higher temperature using mechanical work or a high-temperature heat source. Thus a heat pump may be thought of as a heater if the objective is to warm the heat sink, or as a refrigerator if the objective is to cool the heat source; in either case, the operating principles are identical: heat is moved from a cold place to a warm place. According to the second law of thermodynamics, heat cannot spontaneously flow from a colder location to a hotter area. An air conditioner requires work to cool a living space, moving heat from the cooler interior to the warmer outdoors. Similarly, a refrigerator moves heat from inside the cold icebox to the warmer room-temperature air of the kitchen. The operating principle of the refrigeration cycle was described mathematically by Sadi Carnot in 1824 as a heat engine; a heat pump can be thought of as a heat engine which is operating in reverse. Heat pump and refrigeration cycles can be classified as vapor compression, vapor absorption, or gas cycle types. The vapor-compression cycle is used in most household refrigerators as well as in many large commercial and industrial refrigeration systems. Figure 1 provides a schematic diagram of the components of a typical vapor-compression refrigeration system; the thermodynamics of the cycle can be analysed on a diagram as shown in Figure 2. In this cycle, a circulating refrigerant such as Freon enters the compressor as a vapor. The vapor is compressed at constant entropy and exits the compressor superheated; it is then cooled and condensed into a liquid in the condenser. The liquid refrigerant goes through the expansion valve, where its pressure abruptly decreases, causing flash evaporation and auto-refrigeration of, typically, less than half of the liquid. That results in a mixture of liquid and vapor at a lower temperature and pressure.
The cold liquid-vapor mixture then travels through the evaporator coil or tubes and is completely vaporized by cooling the warm air being blown by a fan across the coil or tubes. The resulting refrigerant vapor returns to the compressor inlet to complete the thermodynamic cycle. The absorption cycle is similar to the compression cycle, except for the method of raising the pressure of the refrigerant vapor. Some work is required by the pump but, for a given quantity of refrigerant, it is much smaller than that needed by the compressor in the vapor-compression cycle. In an absorption refrigerator, a suitable combination of refrigerant and absorbent is used.
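The performance of the heat pump and refrigeration cycles above is usually expressed as a coefficient of performance (COP), whose upper bound follows from the Carnot cycle run in reverse. The temperatures below are assumed example values.

```python
# Carnot (ideal) limits on the coefficient of performance.

def cop_heating_carnot(T_hot, T_cold):
    """Heat pump upper bound: COP = T_hot / (T_hot - T_cold)."""
    return T_hot / (T_hot - T_cold)

def cop_cooling_carnot(T_hot, T_cold):
    """Refrigerator upper bound: COP = T_cold / (T_hot - T_cold)."""
    return T_cold / (T_hot - T_cold)

cop_hp     = cop_heating_carnot(293.0, 273.0)   # warming a 20 C room from 0 C air
cop_fridge = cop_cooling_carnot(293.0, 273.0)   # cooling an icebox against the room
```

Note the heating COP always exceeds the cooling COP by exactly 1, since the work input itself is delivered to the hot side.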
36.
Thermal efficiency
–
For a power cycle, thermal efficiency indicates the extent to which the energy added by heat is converted to net work output. In the case of a refrigeration or heat pump cycle, thermal efficiency indicates the extent to which the energy added by work is converted to net heat output. In general, energy efficiency is the ratio between the useful output of a device and the input, in energy terms. For thermal efficiency, the input, Q_in, to the device is heat; the desired output is mechanical work, W_out, or heat, Q_out, or possibly both. Because the input heat normally has a real financial cost, a memorable, generic definition of thermal efficiency is the benefit obtained divided by what is paid for it. From the first law of thermodynamics, the energy output cannot exceed the input, so 0 ≤ η_th < 1 when expressed as a fraction. Efficiency is typically less than 100% because there are inefficiencies such as friction and heat loss. The largest diesel engine in the world peaks at 51.7%; in a combined cycle plant, thermal efficiencies are approaching 60%. Such a real-world value may be used as a figure of merit for the device. For engines where a fuel is burned there are two types of efficiency: indicated thermal efficiency and brake thermal efficiency. This efficiency is only appropriate when comparing similar types of devices. For other systems the specifics of the calculations of efficiency vary, but the non-dimensional input is still the same: efficiency = output energy / input energy. Heat engines transform thermal energy, or heat, Qin, into mechanical energy, or work, Wout; they cannot do this task perfectly, so the energy lost to the environment by heat engines is a major waste of energy resources. This inefficiency can be attributed to three causes. First, there is an overall theoretical limit to the efficiency of any heat engine due to temperature, called the Carnot efficiency. Second, specific types of engines have lower limits on their efficiency due to the inherent irreversibility of the engine cycle they use. Thirdly, the non-ideal behavior of real engines, such as mechanical friction, reduces efficiency further.
The second law of thermodynamics puts a fundamental limit on the thermal efficiency of all heat engines. Even an ideal, frictionless engine cannot convert anywhere near 100% of its input heat into work. No device converting heat into mechanical energy, regardless of its construction, can exceed this limit.
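The definitions above can be sketched numerically: efficiency as useful output over heat input, checked against the Carnot bound set by the reservoir temperatures. All numbers are illustrative assumptions.

```python
# Thermal efficiency eta = W_out / Q_in, bounded above by the Carnot limit
# eta_max = 1 - T_cold / T_hot for the given reservoir temperatures.

def thermal_efficiency(W_out, Q_in):
    return W_out / Q_in

def carnot_limit(T_hot, T_cold):
    return 1.0 - T_cold / T_hot

eta   = thermal_efficiency(360.0, 1000.0)   # 360 J of work from 1000 J of heat
limit = carnot_limit(800.0, 300.0)          # bound for an 800 K / 300 K engine
```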
37.
Conjugate variables (thermodynamics)
–
In thermodynamics, the internal energy of a system is expressed in terms of pairs of conjugate variables such as temperature and entropy or pressure and volume. In fact, all thermodynamic potentials are expressed in terms of conjugate pairs, and the product of two quantities that are conjugate has units of energy or sometimes power. For a mechanical system, a small increment of energy is the product of a force times a small displacement. A similar situation exists in thermodynamics, and these forces and their associated displacements are called conjugate variables. The thermodynamic force is always an intensive variable and the displacement is always an extensive variable. The intensive variable is the derivative of the internal energy with respect to the extensive variable, while all other extensive variables are held constant. The thermodynamic square can be used as a tool to recall some of these relations. In the above description, the product of two conjugate variables yields an energy; in other words, the pairs are conjugate with respect to energy. In general, conjugate pairs can be defined with respect to any thermodynamic state function. Conjugate pairs with respect to entropy are often used, in which the product of the conjugate pairs yields an entropy; such conjugate pairs are particularly useful in the analysis of irreversible processes, as exemplified in the derivation of the Onsager reciprocal relations. The present article is concerned only with energy-conjugate variables. For example, consider the p–V conjugate pair. The pressure p acts as a generalized force: pressure differences force a change in volume dV, and their product is the energy lost by the system due to work. Here, pressure is the driving force and volume is the associated displacement.
In a similar way, temperature differences drive changes in entropy, and their product is the energy transferred as heat. The thermodynamic force is always an intensive variable and the displacement is always an extensive variable, yielding an extensive energy transfer. The intensive variable is the derivative of the internal energy with respect to the extensive variable. The theory of thermodynamic potentials is not complete until one considers the number of particles in a system as a variable on par with the other extensive quantities such as volume and entropy. The number of particles is, like volume and entropy, the displacement variable in a conjugate pair; the generalized force component of this pair is the chemical potential. The chemical potential may be thought of as a force which drives an exchange of particles; in cases where there are a mixture of chemicals and phases, this is a useful concept.
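The conjugate pairs discussed above can be collected into the fundamental relation for the internal energy, with each term an intensive "force" multiplying the differential of its extensive "displacement":

```latex
% Fundamental thermodynamic relation assembling the conjugate pairs
% (T, S), (-p, V), and (\mu, N) described in the text:
dU = T\,dS - p\,dV + \mu\,dN
```

Each intensive variable is recovered as a partial derivative of U, e.g. T = (∂U/∂S) at constant V and N, matching the definition given in the text.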
38.
Thermodynamic diagrams
–
Thermodynamic diagrams are diagrams used to represent the thermodynamic states of a material and the consequences of manipulating this material. For instance, a diagram may be used to demonstrate the behavior of a fluid as it is changed by a compressor. Especially in meteorology, they are used to analyze the state of the atmosphere derived from the measurements of radiosondes. In such diagrams, temperature and humidity values are displayed with respect to pressure; thus the diagram gives at a first glance the actual atmospheric stratification and vertical water vapor distribution. Further analysis gives the base and top height of convective clouds or possible instabilities in the stratification. The main feature of thermodynamic diagrams is the equivalence between the area in the diagram and energy. The P-alpha diagram shows a strong deformation of the grid for atmospheric conditions and is therefore not useful in atmospheric sciences; three other diagrams are constructed from the P-alpha diagram by using appropriate coordinate transformations. Not a thermodynamic diagram in a strict sense, since it does not display the energy-area equivalence, is the Stüve diagram; due to its simpler construction it is preferred in education. With the help of these lines, parameters such as cloud condensation level, level of free convection, etc. can be derived from the soundings. The Physics of Atmospheres by John Houghton, Cambridge University Press, 2002. www.met.tamu.edu/…/aws-tr79-006.pdf, a very large technical manual on how to use the diagrams. www.comet.ucar.edu/…/sld010.htm, a course on how to use diagrams at COMET, the Cooperative Program for Operational Meteorology, Education and Training.
39.
Intensive and extensive properties
–
Physical properties of materials and systems can often be categorized as being either intensive or extensive quantities, according to how the property changes when the size of the system changes. According to IUPAC, an intensive property is one whose magnitude is independent of the size of the system, and an extensive property is one whose magnitude is additive for subsystems. An intensive property is a bulk property, meaning that it is a physical property of a system that does not depend on the system size or the amount of material in the system. Examples of intensive properties include temperature, T, refractive index, n, density, ρ, and hardness. When a diamond is cut, the pieces maintain their intrinsic hardness, so hardness is independent of the size of the system. By contrast, an extensive property is additive for subsystems. For example, both the mass, m, and the volume, V, of a diamond are directly proportional to the amount that is left after cutting it from the raw mineral; mass and volume are extensive properties, but hardness is intensive. The ratio of two extensive properties of the same object or system is an intensive property; for example, the ratio of an object's mass and volume, its density, does not depend on the object's size. The terms intensive and extensive quantities were introduced by Richard C. Tolman. An intensive property is a physical quantity whose value does not depend on the amount of the substance for which it is measured. For example, the temperature of a system in thermal equilibrium is the same as the temperature of any part of it; if the system is divided, the temperature of each subsystem is identical. The same applies to the density of a homogeneous system: if the system is divided in half, the mass and the volume change in the identical ratio and the density remains unchanged. Additionally, the boiling point of a substance is another example of an intensive property; for example, the boiling point of water is 100 °C at a pressure of one atmosphere.
The distinction between intensive and extensive properties has some theoretical uses; other intensive properties are derived from those two variables. The IUPAC Gold Book defines an extensive property as a physical quantity whose magnitude is additive for subsystems. The value of such a property is proportional to the size of the system it describes. For example, the amount of heat required to melt ice at constant temperature and pressure is an extensive property: the amount of heat required to melt one ice cube is much less than the amount of heat required to melt an iceberg.
40.
State function
–
State functions do not depend on the path by which the system arrived at its present state; a state function describes the equilibrium state of a system. In contrast, mechanical work and heat are process quantities or path functions. This mode of description breaks down for quantities exhibiting hysteresis effects. A thermodynamic system is described by a number of thermodynamic parameters which are not necessarily independent. The number of parameters needed to describe the system is the dimension of the state space of the system. For example, a gas having a fixed number of particles is a simple case of a two-dimensional system. In this example, any system is specified by two parameters, such as pressure and volume, or perhaps pressure and temperature; these are simply different coordinate systems in the two-dimensional thermodynamic state space. Given pressure and temperature, the volume is calculable from them; likewise, given pressure and volume, the temperature is calculable from them. An analogous statement holds for higher-dimensional spaces, as described by the state postulate. If the state space is two-dimensional as in the above example, one may visualize a state function as a surface in a three-dimensional graph. The labels of the axes are not unique, since there are more state variables than three in this case, and any two independent variables suffice to define the state. When a system changes state continuously, it traces out a path in the state space. The path can be specified by noting the values of the state parameters as the system traces out the path, perhaps as a function of time. For example, we might have the pressure P and the volume V as functions of time from time t0 to t1; this will specify a path in our two-dimensional state space example. We can now define all sorts of functions of time which we may integrate over the path. For example, the work done by the system along the path is W = ∫ P dV; it is clear that in order to calculate this work, we have to know the functions P(t) and V(t) at each time t.
A state function is a function of the parameters of the system which depends only upon the parameters' values at the endpoints of the path. For example, suppose we wish to calculate the work plus the integral of V dP over the path. This sum equals ∫ P dV + ∫ V dP = ∫ d(PV) = P(t1)V(t1) − P(t0)V(t0), which depends only on the endpoints; the product PV is therefore a state function of the system.
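The contrast between the path-dependent work integral and the state function PV can be illustrated numerically. The two paths and the end states below are assumed examples: both run between the same endpoints, yet yield different work.

```python
# W = ∫ P dV is path-dependent; the product P*V depends only on the endpoints.

def work(path):
    """Trapezoidal approximation of ∫ P dV along a list of (P, V) points."""
    W = 0.0
    for (p1, V1), (p2, V2) in zip(path, path[1:]):
        W += 0.5 * (p1 + p2) * (V2 - V1)
    return W

# Two different paths between the same end states (P, V): (2, 1) -> (1, 2).
path_a = [(2.0, 1.0), (2.0, 2.0), (1.0, 2.0)]   # expand at high P, then drop P
path_b = [(2.0, 1.0), (1.0, 1.0), (1.0, 2.0)]   # drop P first, then expand

W_a, W_b = work(path_a), work(path_b)            # different: path-dependent
PV_change = 1.0 * 2.0 - 2.0 * 1.0                # same for any path: zero here
```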
41.
Thermodynamic temperature
–
Thermodynamic temperature is the absolute measure of temperature and is one of the principal parameters of thermodynamics. Thermodynamic temperature is defined by the third law of thermodynamics, in which the theoretically lowest temperature is the null or zero point. At this point, absolute zero, the particle constituents of matter have minimal motion. In the quantum-mechanical description, matter at absolute zero is in its ground state. The International System of Units specifies a particular scale for thermodynamic temperature: it uses the Kelvin scale for measurement and selects the triple point of water at 273.16 K as the fundamental fixing point. Other scales have been in use historically. The Rankine scale, using the degree Fahrenheit as its unit interval, is still in use as part of the English Engineering Units in the United States in some engineering fields. ITS-90 gives a practical means of estimating the thermodynamic temperature to a very high degree of accuracy. Internal energy is called the heat energy or thermal energy in conditions when no work is done upon the substance by its surroundings. Internal energy may be stored in a number of ways within a substance, each way constituting a degree of freedom. At equilibrium, each degree of freedom will have on average the same energy, kB T/2, where kB is the Boltzmann constant. Temperature is a measure of the random submicroscopic motions and vibrations of the constituents of matter; these motions comprise the internal energy of a substance. More specifically, the thermodynamic temperature of any bulk quantity of matter is the measure of the average kinetic energy per classical degree of freedom of its constituent particles. Translational motions are almost always in the classical regime; translational motions are ordinary, whole-body movements in three-dimensional space in which particles move about and exchange energy in collisions.
Figure 1 below shows translational motion in gases; Figure 4 below shows translational motion in solids. Zero kinetic energy remains in a substance at absolute zero. Throughout the scientific world where measurements are made in SI units, thermodynamic temperature is measured in kelvins; many engineering fields in the U.S., however, measure thermodynamic temperature using the Rankine scale. By international agreement, the kelvin and its scale are defined by two points: absolute zero, and the triple point of Vienna Standard Mean Ocean Water. Absolute zero, the lowest possible temperature, is defined as being precisely 0 K and −273.15 °C; the triple point of water is defined as being precisely 273.16 K and 0.01 °C. This definition does three things: first, it fixes the magnitude of the kelvin unit as being precisely 1 part in 273.16 of the difference between absolute zero and the triple point of water.
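The equipartition result quoted above, kB·T/2 per classical degree of freedom, can be sketched directly. The temperature below is an assumed example value.

```python
# Average energy per classical degree of freedom is k_B * T / 2, so a particle
# with three translational degrees of freedom carries (3/2) k_B T on average.

k_B = 1.380649e-23   # J/K, Boltzmann constant

def mean_translational_ke(T):
    """Mean translational kinetic energy per particle: (3/2) k_B T."""
    return 1.5 * k_B * T

E = mean_translational_ke(300.0)   # ~6.2e-21 J per particle near room temperature
```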
42.
Entropy
–
In statistical thermodynamics, entropy is a measure of the number of microscopic configurations Ω that a thermodynamic system can have when in a state as specified by some macroscopic variables. Formally, S = kB ln Ω. For example, gas in a container with known volume, pressure, and temperature could have an enormous number of possible configurations of the collection of individual gas molecules. Each instantaneous configuration of the gas may be regarded as random. Entropy may be understood as a measure of disorder within a macroscopic system. The second law of thermodynamics states that an isolated system's entropy never decreases; such systems spontaneously evolve towards thermodynamic equilibrium, the state with maximum entropy. Non-isolated systems may lose entropy, provided their environment's entropy increases by at least that amount. Since entropy is a function of the state of the system, a change in entropy of a system is determined by its initial and final states. This applies whether the process is reversible or irreversible; however, irreversible processes increase the combined entropy of the system and its environment. The above definition is called the macroscopic definition of entropy because it can be used without regard to any microscopic description of the contents of a system. The concept of entropy has been found to be generally useful and has several other formulations. Entropy was discovered when it was noticed to be a quantity that behaves as a function of state. It has the dimension of energy divided by temperature, which has a unit of joules per kelvin in the International System of Units. But the entropy of a substance is usually given as an intensive property: either entropy per unit mass or entropy per unit amount of substance. In statistical mechanics this reflects that the state of a system is generally non-degenerate. Understanding the role of entropy in various processes requires an understanding of how and why entropy changes.
It is often said that entropy is an expression of the disorder, or randomness, of a system. The second law is now often seen as an expression of the fundamental postulate of statistical mechanics through the modern definition of entropy. In other words, in any natural process there exists an inherent tendency towards the dissipation of useful energy; the analogy was made with how water falls in a water wheel. This was an early insight into the second law of thermodynamics. Clausius described entropy as the transformation-content, i.e. dissipative energy use. This was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Later came scientists such as Ludwig Boltzmann and Josiah Willard Gibbs. Henceforth, the essential problem in statistical thermodynamics, according to Erwin Schrödinger, has been to determine the distribution of a given amount of energy E over N identical systems. Carathéodory linked entropy with a definition of irreversibility, in terms of trajectories
43.
Introduction to entropy
–
Entropy is an important concept in the branch of science known as thermodynamics. The idea of irreversibility is central to the understanding of entropy, and everyone has an intuitive understanding of irreversibility. If one watches a movie of everyday life running forward and in reverse, the difference is easy to spot: the intuitive meaning of expressions such as "you can't unscramble an egg" or "you can't take the cream out of the coffee" is that these are irreversible processes. No matter how long you wait, the cream won't jump out of the coffee into the creamer. All real physical processes involving systems in everyday life, with many atoms or molecules, are irreversible. For an irreversible process in an isolated system, the thermodynamic state variable known as entropy is always increasing. The reason that the movie in reverse is so easily recognized is that it shows processes for which entropy is decreasing. In everyday life, there may be processes in which the increase of entropy is practically unobservable, almost zero. In these cases, a movie of the process run in reverse will not seem unlikely. In thermodynamics, one says that such a process is practically reversible. The statement of the fact that the entropy of an isolated system never decreases is known as the second law of thermodynamics. Classical thermodynamics is a theory which describes a system in terms of the thermodynamic variables of the system or its parts. Some thermodynamic variables are familiar: temperature, pressure, volume. Entropy is a thermodynamic variable which is less familiar and not as easily understood. A system is any region of space containing matter and energy: a cup of coffee, a glass of ice water, an automobile. Thermodynamic variables do not give a complete picture of the system. Thermodynamics deals with matter in a macroscopic sense; it would be valid even if the atomic theory of matter were wrong. 
This is an important quality, because it means that reasoning based on thermodynamics is unlikely to require alteration as new facts about atomic structure are discovered. The essence of thermodynamics is embodied in the four laws of thermodynamics. Unfortunately, thermodynamics provides little insight into what is happening at a microscopic level. Statistical mechanics is a theory which explains thermodynamics in microscopic terms. It explains thermodynamics in terms of the possible detailed microscopic situations the system may be in when the thermodynamic variables of the system are known. These are known as microstates, while the description of the system in thermodynamic terms specifies the macrostate of the system; many different microstates can yield the same macrostate. It is important to understand that statistical mechanics does not define temperature, pressure, or entropy; they are already defined by thermodynamics
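The microstate/macrostate distinction above can be made concrete with a toy example of my own choosing: for N tossed coins, a macrostate is the total number of heads, while each specific head/tail sequence is a microstate. Counting the sequences per head-count is a binomial coefficient.

```python
from math import comb

# For N coins, a "macrostate" is the total number of heads;
# each specific head/tail sequence is one "microstate".
N = 4
for heads in range(N + 1):
    # comb(N, heads) counts the microstates realizing this macrostate
    print(heads, comb(N, heads))

# The even-split macrostate has the most microstates (6 of the 16
# sequences), which is why it is the most probable one to observe.
```

Many different microstates yielding the same macrostate is exactly the degeneracy Ω that enters the statistical definition of entropy.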
44.
Pressure
–
Pressure is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure is the pressure relative to the ambient pressure. Various units are used to express pressure. Pressure may also be expressed in terms of standard atmospheric pressure: the atmosphere is equal to this pressure, and the torr is defined as 1⁄760 of it. Manometric units, such as the centimetre of water and the millimetre of mercury, are also used. Pressure is the amount of force acting per unit area. The symbol for it is p or P; the IUPAC recommendation for pressure is a lower-case p, but upper-case P is widely used. The usage of P vs p depends upon the field in which one is working and on the nearby presence of other symbols for quantities such as power and momentum. Mathematically, p = F/A, where p is the pressure, F is the normal force, and A is the area; this relates the vector surface element with the normal force acting on it. It is incorrect to say the pressure is directed in such or such direction: the pressure, as a scalar, has no direction. The force given by the relationship to the quantity has a direction. If we change the orientation of the surface element, the direction of the normal force changes accordingly. Pressure is distributed to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume. The SI unit for pressure is the pascal, equal to one newton per square metre. This name for the unit was added in 1971; before that, pressure in SI was expressed simply in newtons per square metre. Other units of pressure, such as pounds per square inch, are also in use. The CGS unit of pressure is the barye, equal to 1 dyn·cm−2 or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre, but using the names kilogram, gram, kilogram-force, or gram-force as units of force is expressly forbidden in SI. 
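The relation p = F/A is trivially computable; this minimal sketch (the function name is mine) also shows how the same force on a smaller area gives a proportionally larger pressure.

```python
def pressure(force_n: float, area_m2: float) -> float:
    """Pressure in pascals: normal force (N) divided by area (m^2)."""
    return force_n / area_m2

# 100 N spread over 0.25 m^2 gives 400 Pa;
# the same force on a tenth of the area gives ten times the pressure.
print(pressure(100.0, 0.25))   # 400.0
print(pressure(100.0, 0.025))  # 4000.0
```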
The technical atmosphere is 1 kgf/cm2. Since a system under pressure has the potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume. It is therefore related to energy density and may be expressed in units such as joules per cubic metre. Similar pressures are given in kilopascals in most other fields, where the hecto- prefix is rarely used
45.
Volume (thermodynamics)
–
In thermodynamics, the volume of a system is an important extensive parameter for describing its thermodynamic state. The specific volume, an intensive property, is the system's volume per unit of mass. Volume is a function of state and is interdependent with other properties such as pressure and temperature. For example, volume is related to the pressure and temperature of an ideal gas by the ideal gas law. The physical volume of a system may or may not coincide with a control volume used to analyze the system. The volume of a thermodynamic system typically refers to the volume of the working fluid, such as, for example, the fluid within a piston. Changes to this volume may be made through an application of work. An isochoric process, however, operates at constant volume, and thus no work can be produced. Many other thermodynamic processes will result in a change in volume. A polytropic process, in particular, causes changes to the system so that the quantity pV^n is constant. Note that for specific polytropic indexes, a polytropic process will be equivalent to a constant-property process; for instance, for very large values of n approaching infinity, the process becomes constant-volume. Gases are compressible, so their volumes may be subject to change during thermodynamic processes; liquids, however, are nearly incompressible, so their volumes can often be taken as constant. In general, compressibility is defined as the relative volume change of a fluid or solid as a response to a pressure change. Similarly, thermal expansion is the tendency of matter to change in volume in response to a change in temperature. Many thermodynamic cycles are made up of varying processes, some which maintain a constant volume and some which do not. A vapor-compression refrigeration cycle, for example, follows a sequence where the refrigerant fluid transitions between the liquid and vapor states of matter. Typical units for volume are m³, L, and ft³. 
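The ideal gas law mentioned above gives volume directly from pressure and temperature. A minimal sketch, using the CODATA molar gas constant (the function name is my own):

```python
R = 8.314462618  # molar gas constant, J/(mol K)

def ideal_gas_volume(n_mol: float, temp_k: float, pressure_pa: float) -> float:
    """V = n R T / p for an ideal gas."""
    return n_mol * R * temp_k / pressure_pa

# One mole at 273.15 K and 101325 Pa (1 atm) occupies about
# 0.0224 m^3, the familiar 22.4 L molar volume.
v = ideal_gas_volume(1.0, 273.15, 101325.0)
print(round(v, 4))  # 0.0224
```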
Mechanical work performed on a working fluid causes a change in the mechanical constraints of the system; in other words, for work to occur, the volume must change. Hence volume is an important parameter in characterizing many thermodynamic processes where an exchange of energy in the form of work is involved. Volume is one of a pair of conjugate variables, the other being pressure. As with all conjugate pairs, the product is a form of energy. The product pV is the energy lost to a system due to mechanical work. This product is one term which makes up enthalpy H: H = U + pV, where U is the internal energy. The second law of thermodynamics describes constraints on the amount of useful work which can be extracted from a thermodynamic system. Similarly, the value of heat capacity to use in a given process depends on whether the process produces a change in volume
46.
Vapor quality
–
In thermodynamics, vapour quality is the mass fraction in a saturated mixture that is vapour; i.e. saturated vapour has a quality of 100%, and saturated liquid has a quality of 0%. Vapour quality is an intensive property which can be used in conjunction with other independent intensive properties to specify the thermodynamic state of the working fluid of a thermodynamic system. It has no meaning for substances which are not saturated mixtures. Quality χ can be calculated by dividing the mass of the vapour by the mass of the total mixture: χ = m_vapour / m_total, where m indicates mass. Another definition, used by chemical engineers, defines the quality q of a fluid as the fraction that is saturated liquid; by this definition, a saturated liquid has q = 0 and a saturated vapour has q = 1. An alternative definition is the equilibrium thermodynamic quality. It can be used only for single-component mixtures and can take values < 0 and > 1: χ_eq = (h − h_f) / h_fg, where subscripts f and g refer to saturated liquid and saturated gas respectively, and fg refers to vaporisation. Another expression of the same concept is χ = m_v / (m_l + m_v), where m_v is the vapour mass and m_l is the liquid mass. The genesis of the idea of quality lies in the origins of thermodynamics. Low quality steam would contain a high moisture percentage and therefore damage components more easily; high quality steam would not corrode the steam engine. Steam engines use water vapour to drive pistons or turbines which create work. The quality of steam can be quantitatively described by steam quality, the proportion of saturated steam in a saturated water/steam mixture. A steam quality of 0 indicates 100% water while a quality of 1 indicates 100% steam. The quality of steam on which whistles are blown is variable. Steam quality determines the velocity of sound, which declines with decreasing dryness due to the inertia of the liquid phase; also, the specific volume of steam for a given temperature decreases with decreasing dryness. 
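The mass-fraction definition χ = m_v / (m_l + m_v), together with χ_eq = (h − h_f)/h_fg rearranged to h = h_f + χ h_fg, can be sketched as follows. The saturation enthalpies used are approximate handbook values for water at 100 °C, quoted only for illustration.

```python
def vapour_quality(m_vapour: float, m_liquid: float) -> float:
    """chi = m_v / (m_l + m_v) for a saturated liquid/vapour mixture."""
    return m_vapour / (m_liquid + m_vapour)

def mixture_enthalpy(chi: float, h_f: float, h_fg: float) -> float:
    """h = h_f + chi * h_fg, interpolating between saturated liquid and vapour."""
    return h_f + chi * h_fg

# 0.2 kg of vapour in a 1.0 kg mixture gives quality 0.2.
chi = vapour_quality(0.2, 0.8)
print(chi)  # 0.2
# Approximate values for water at 100 C: h_f = 419 kJ/kg, h_fg = 2257 kJ/kg.
print(mixture_enthalpy(chi, 419.0, 2257.0))  # about 870.4 kJ/kg
```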
Steam quality is very useful in determining the enthalpy of saturated water/steam mixtures, since the enthalpy of steam is many orders of magnitude higher than the enthalpy of water
47.
Reduced properties
–
In thermodynamics, the reduced properties of a fluid are a set of state variables normalized by the fluid's state properties at its critical point. These dimensionless thermodynamic coordinates, taken together with a substance's compressibility factor, provide the basis for the simplest form of the theorem of corresponding states. Reduced properties are also used to define the Peng–Robinson equation of state, a model designed to provide reasonable accuracy near the critical point. They are also used to define critical exponents, which describe the behaviour of physical quantities near continuous phase transitions. Both the reduced temperature and the reduced pressure are often used in thermodynamical formulas like the Peng–Robinson equation of state
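Normalizing by the critical-point value is a one-line operation; the sketch below uses approximate critical constants for water (Tc ≈ 647.1 K, pc ≈ 22.064 MPa), quoted only as illustrative handbook values.

```python
def reduced(value: float, critical: float) -> float:
    """A reduced property: the actual value divided by its critical-point value."""
    return value / critical

# Approximate critical constants for water.
T_C_WATER_K = 647.1
P_C_WATER_MPA = 22.064

# Room temperature and atmospheric pressure, in reduced coordinates:
t_r = reduced(300.0, T_C_WATER_K)
p_r = reduced(0.101325, P_C_WATER_MPA)  # both pressures in MPa
print(round(t_r, 3), round(p_r, 4))
```

Because both coordinates are dimensionless, very different fluids at the same (t_r, p_r) behave similarly, which is the content of the corresponding-states idea.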
48.
Process function
–
As an example, mechanical work and heat are process functions because they describe quantitatively the transition between equilibrium states of a thermodynamic system. Path functions depend on the path taken to reach one state from another. Examples of path functions include work, heat and arc length. In contrast to path functions, state functions are independent of the path taken. Thermodynamic state variables are point functions, differing from path functions: for a given state, considered as a point, there is a definite value for each state variable and state function. Infinitesimal changes in a process function X are often indicated by δX to distinguish them from changes in a state function Y, which is written dY. The quantity dY is an exact differential, while δX is not. In general, a process function X may be either holonomic or non-holonomic. For a holonomic process function, an auxiliary state function λ may be defined such that Y = λX is a state function; for a non-holonomic process function, no such function may be defined. In other words, for a holonomic process function, λ may be defined such that dY = λ δX is an exact differential. For example, thermodynamic work is a holonomic process function since the integrating factor λ = 1/p yields the exact differential of the volume state function: dV = δW / p
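Path dependence of work can be shown numerically. The toy example below, with made-up pressures and volumes in arbitrary units, takes a system between the same two endpoint states along two different paths; the volume change (a state function) is identical, but the work W = ∫ p dV differs.

```python
# Two paths between the same endpoints (p1, v1) -> (p2, v2):
def work_isobaric_then_isochoric(p1, p2, v1, v2):
    """Expand at constant p1 from v1 to v2, then drop pressure at constant volume."""
    return p1 * (v2 - v1)  # only the expansion leg does work

def work_isochoric_then_isobaric(p1, p2, v1, v2):
    """Drop pressure at constant volume first, then expand at constant p2."""
    return p2 * (v2 - v1)

# Same initial and final states, different work along each path:
print(work_isobaric_then_isochoric(3.0, 1.0, 1.0, 2.0))  # 3.0
print(work_isochoric_then_isobaric(3.0, 1.0, 1.0, 2.0))  # 1.0
```

This is exactly why an infinitesimal amount of work is written δW rather than dW.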
49.
Work (thermodynamics)
–
Thermodynamic work is a version of the concept of work in physics. The external factors may be electromagnetic, gravitational, or pressure/volume or other simply mechanical constraints. Thermodynamic work is defined to be measurable solely from knowledge of such external macroscopic forces. These forces are associated with state variables of the system that always occur in conjugate pairs, for example pressure and volume, or magnetic flux density and magnetization. In the SI system of measurement, work is measured in joules; the rate at which work is performed is power. Work, i.e. "weight lifted through a height", was defined in 1824 by Sadi Carnot in his famous paper Reflections on the Motive Power of Fire. Specifically, according to Carnot: "We use here motive power to express the effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised." In 1845, the English physicist James Joule wrote a paper On the mechanical equivalent of heat for the British Association meeting in Cambridge. In his experiment, the friction and agitation of a paddle-wheel on a body of water caused heat to be generated as a weight fell; both the temperature change ΔT of the water and the height of the fall Δh of the weight mg were recorded. Using these values, Joule was able to determine the mechanical equivalent of heat, which he estimated to be 819 ft·lbf/Btu. The modern-day definitions of heat, work, temperature, and energy all have connection to this experiment. Thermodynamic work is performed by actions such as compression, and including shaft work, stirring, and rubbing. In the simplest cases, for example, there is work of change of volume against a resisting pressure. An example of isochoric work is when an outside agency, in the surroundings of the system, drives a frictional action on the surface of the system. 
The amount of energy transferred as work is measured through quantities defined externally to the system of interest. In an important sign convention, work that adds to the internal energy of the system is counted as positive. On the other hand, for historical reasons, an oft-encountered sign convention is to consider work done by the system on its surroundings as positive. Thermodynamic work does not account for any energy transferred between systems as heat or through transfer of matter. In particular, all forms of work can be converted into the mechanical work of lifting a weight, which was the original form of thermodynamic work considered by Carnot and Joule. Some authors have considered this equivalence to the lifting of a weight as a defining characteristic of work. In contrast, the conversion of heat into work in a heat engine can never exceed the Carnot efficiency
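Carnot's measure of motive power, weight times height, is W = m g h in modern terms. A minimal sketch (function name mine, g the standard-gravity constant):

```python
G = 9.80665  # standard gravity, m/s^2

def work_to_lift(mass_kg: float, height_m: float) -> float:
    """Mechanical work (J) to raise a weight: W = m g h,
    Carnot's original measure of motive power."""
    return mass_kg * G * height_m

# Raising 1 kg through 1 m takes about 9.81 J.
print(round(work_to_lift(1.0, 1.0), 2))  # 9.81
```

Since all forms of thermodynamic work are equivalent to lifting a weight, this single formula fixes the unit in which every kind of work is measured.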
50.
Heat
–
WWE Heat was a professional wrestling television program produced by World Wrestling Entertainment. Heat was most recently streamed on WWE.com on Friday afternoons for North American viewers; the final episode was uploaded to WWE.com. The show was replaced internationally with WWE Vintage Collection, a program featuring classic matches. The show was originally introduced on the USA Network on August 2, 1998 in the United States. The one-hour show would be broadcast on Sunday nights, live most weeks. It was the second primary program of the WWF's weekly television line-up, serving as a supplement to the Monday Night Raw program. On scheduled WWF pay-per-view event nights, Heat would also serve as a pre-show to the events. The show was signed for only 6 episodes but was very popular and was continued. With the premiere of SmackDown! in August 1999, coverage of Heat was significantly reduced in favor of the newer show, which also led to Heat being taped before SmackDown! With matches for WWF syndication programs like Jakked/Metal being taped before Raw broadcasts, Heat briefly became a complete weekly summary show, featuring occasional interviews and music videos. After only a few weeks following the change, Heat began airing exclusive matches again. Occasionally, heavily promoted special editions of the show aired; for Super Bowl XXXIII in 1999, Heat aired as Halftime Heat on the USA Network during halftime of the Super Bowl. These specials ended following the move to MTV in 2000. When the show started airing on MTV in late 2000, it was broadcast live from WWF New York; WWF personalities and performers would appear at the restaurant as special guests while Michael Cole and Tazz provided commentary to matches. 
The United Kingdom's coverage of Heat began in January 2000, when Channel 4 started broadcasting the show at 4pm on Sundays. These one-hour shows were a magazine-type show, usually featuring three or four brief matches as well as highlights from Raw and SmackDown, which were aired on this version of the show. A separate commentary team was used on airings in the United Kingdom, with references aimed more at that specific audience. The two-person announce team was a mix of individuals including Kevin Kelly, Michael Cole and Michael Hayes. During the middle of 2000, Heat started to get moved around the Channel 4 schedule, usually between the afternoon and midnight. Towards the end of 2000, the show was moved to being broadcast in the early hours of Monday mornings. The show stayed in that time-slot until December 2001, when Channel 4's deal with the World Wrestling Federation expired in the United Kingdom. In April 2002, the show returned to its original filming schedule, again before Raw. Eventually, the live from WWF New York format was retired. Ratings were still moderate for Heat, although the show lost some popularity once SmackDown
51.
Material properties (thermodynamics)
–
The thermodynamic properties of materials are intensive thermodynamic parameters which are specific to a given material. Each is directly related to a second-order differential of a thermodynamic potential. For a single-component system, only three second derivatives are needed in order to derive all others, and so only three material properties are needed to derive all others
52.
Thermodynamic databases for pure substances
–
Thermodynamic databases contain information about the thermodynamic properties of substances, the most important being enthalpy, entropy, and Gibbs free energy. Numerical values of thermodynamic properties are collected as tables or are calculated from thermodynamic datafiles. Data is expressed as temperature-dependent values for one mole of substance at a standard pressure of 101.325 kPa (1 atm) or 100 kPa (1 bar). Unfortunately, both of these definitions for the standard condition for pressure are in use. Thermodynamic data is presented as a table or chart of function values for one mole of a substance. A thermodynamic datafile is a set of equation parameters from which the numerical data values can be calculated. Tables and datafiles are usually presented at a standard pressure of 1 bar or 1 atm. Function values depend on the state of aggregation of the substance. The state of aggregation for thermodynamic purposes is the standard state, sometimes called the reference state, and defined by specifying certain conditions. The normal standard state is defined as the most stable physical form of the substance at the specified temperature. However, any non-normal condition could also be chosen as a standard state. A physical standard state is one that exists for a time sufficient to allow measurements of its properties. The most common physical standard state is one that is stable thermodynamically: it has no tendency to transform into any other physical state. If a substance can exist but is not thermodynamically stable, it is called a metastable state. A non-physical standard state is one whose properties are obtained by extrapolation from a physical state. Metastable liquids and solids are important because some substances can persist in that state. Thermodynamic functions that refer to conditions in the normal standard state are designated with a small superscript °. The relationship between physical and thermodynamic properties may be described by an equation of state. 
It is therefore the change in these functions that is of most interest. Different databases designate this term in different ways, for example HT−H298, H°−H°298, H°T−H°298 or H°−H°Tr, where Tr means the reference temperature. All of these terms mean the molar heat content for a substance in its standard state above a reference temperature of 298.15 K. Data for gases is for the ideal gas at the designated standard pressure. The SI unit for enthalpy is J/mol, and the heat content is a positive number above the reference temperature
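The tabulated heat content H°T − H°298 is the integral of the molar heat capacity Cp from the reference temperature up to T. As a simplified sketch, assume a constant Cp of 29.1 J/(mol K), roughly a handbook value for N2 gas; real datafiles use temperature-dependent Cp polynomials instead.

```python
T_REF = 298.15  # standard reference temperature, K

def heat_content_above_ref(cp_j_per_mol_k: float, temp_k: float) -> float:
    """H(T) - H(T_ref), assuming a constant molar heat capacity Cp.
    For a constant Cp the integral of Cp dT reduces to Cp * (T - T_ref)."""
    return cp_j_per_mol_k * (temp_k - T_REF)

# A 100 K rise at Cp = 29.1 J/(mol K) stores 2910 J/mol above the reference.
print(round(heat_content_above_ref(29.1, 398.15), 1))  # 2910.0
```

Note the result is positive above the reference temperature, as the source states, and would be negative below it.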
53.
Heat capacity
–
Heat capacity or thermal capacity is a measurable physical quantity equal to the ratio of the heat added to an object to the resulting temperature change. The SI unit of heat capacity is the joule per kelvin (J/K). Specific heat is the amount of heat needed to raise the temperature of one kilogram of mass by 1 kelvin. Heat capacity is an extensive property of matter, meaning it is proportional to the size of the system. The molar heat capacity is the heat capacity per unit amount of a pure substance. In some engineering contexts, the volumetric heat capacity is used. Other contributions can come from magnetic and electronic degrees of freedom in solids, but for quantum mechanical reasons, at any given temperature, some of these degrees of freedom may be unavailable, or only partially available, to store thermal energy. In such cases, the heat capacity is a fraction of the maximum. As the temperature approaches absolute zero, the heat capacity of a system approaches zero. Quantum theory can be used to predict the heat capacity of simple systems. In a previous theory of heat common in the early modern period, heat was thought to be a measurement of an invisible fluid. Bodies were capable of holding a certain amount of this fluid, hence the term heat capacity. Heat is no longer considered a fluid, but rather a transfer of disordered energy; nevertheless, at least in English, the term heat capacity survives. In some other languages, the term thermal capacity is preferred. In the International System of Units, heat capacity has the unit joules per kelvin. If the temperature change is sufficiently small, the heat capacity may be assumed to be constant: C = Q / ΔT. Heat capacity is an extensive property, meaning it depends on the extent or size of the physical system studied. A sample containing twice the amount of substance as another sample requires the transfer of twice the amount of heat to achieve the same change in temperature. For many purposes it is convenient to report heat capacity as an intensive property. 
In practice, this is most often an expression of the property in relation to a unit of mass. In science and engineering, international standards now recommend that specific heat capacity always refer to division by mass
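The two relations above, C = Q/ΔT and its mass-specific form, can be sketched directly; the 4184 J/(kg K) figure used below is the familiar approximate specific heat of liquid water, quoted for illustration.

```python
def heat_capacity(heat_j: float, delta_t_k: float) -> float:
    """Extensive heat capacity C = Q / dT, in J/K,
    assuming dT is small enough that C is constant."""
    return heat_j / delta_t_k

def specific_heat(heat_j: float, mass_kg: float, delta_t_k: float) -> float:
    """Intensive form c = Q / (m dT), in J/(kg K)."""
    return heat_j / (mass_kg * delta_t_k)

# Roughly 4184 J warms 1 kg of water by 1 K:
print(specific_heat(4184.0, 1.0, 1.0))  # 4184.0
# A 2 kg sample needs twice the heat for the same temperature rise,
# but the extensive C simply doubles:
print(heat_capacity(8368.0, 1.0))       # 8368.0
```

The doubling in the second call is exactly what "extensive" means: the capacity scales with the amount of substance, while the specific heat does not.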
54.
Compressibility
–
In thermodynamics and fluid mechanics, compressibility is a measure of the relative volume change of a fluid or solid as a response to a pressure change: β = −(1/V) ∂V/∂p, where V is volume and p is pressure. The specification above is incomplete, because for any object or system the magnitude of the compressibility depends strongly on whether the process is adiabatic or isothermal. Accordingly, the isothermal compressibility is defined as β_T = −(1/V) (∂V/∂p)_T, where the subscript T indicates that the partial differential is to be taken at constant temperature. The isentropic compressibility is defined as β_S = −(1/V) (∂V/∂p)_S, where S is entropy. For a solid, the distinction between the two is usually negligible. The minus sign makes the compressibility positive in the case that an increase in pressure induces a reduction in volume. The speed of sound is defined in classical mechanics as c² = (∂p/∂ρ)_S, where ρ is the density of the material. It follows, by replacing the partial derivatives, that the isentropic compressibility can be expressed as β_S = 1/(ρc²). The inverse of the compressibility is called the bulk modulus. That page also contains some examples for different materials. The compressibility equation relates the isothermal compressibility to the structure of the liquid. The term compressibility is also used in thermodynamics to describe the deviation of the properties of a real gas from those expected of an ideal gas. The compressibility factor is defined as Z = p V_m / (R T), where p is the pressure of the gas, T is its temperature, and V_m is its molar volume. The deviation from ideal gas behavior tends to become particularly significant near the critical point; in these cases, a generalized compressibility chart or an alternative equation of state better suited to the problem must be utilized to produce accurate results. 
This pressure-dependent transition occurs for atmospheric oxygen in the 2500 K to 4000 K temperature range. In transition regions, where this pressure-dependent dissociation is incomplete, both beta and the differential, constant-pressure heat capacity greatly increase. For moderate pressures, above 10,000 K the gas further dissociates into free electrons and ions; Z for the resulting plasma can similarly be computed for a mole of initial air, producing values between 2 and 4 for partially or singly ionized gas. Each dissociation absorbs a great deal of energy in a reversible process; ions or free radicals transported to the object surface by diffusion may release this extra energy if the surface catalyzes the slower recombination process. The isothermal compressibility is related to the isentropic compressibility by the relation β_T / β_S = γ, where γ is the heat capacity ratio. The Earth sciences use compressibility to quantify the ability of a soil or rock to reduce in volume under applied pressure. This concept is important for specific storage, when estimating groundwater reserves in confined aquifers. Geologic materials are made up of two portions, solids and voids, and the void space can be full of liquid or gas. A geologic material reduces in volume only when the void spaces are reduced. This can happen over a period of time, resulting in settlement, and it is an important concept in geotechnical engineering in the design of certain structural foundations
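The compressibility factor Z = pV_m/(RT) can be checked numerically: a hypothetical gas obeying the ideal gas law has Z = 1 by construction. A minimal sketch (names mine, R the CODATA molar gas constant):

```python
R = 8.314462618  # molar gas constant, J/(mol K)

def compressibility_factor(p_pa: float, v_molar_m3: float, temp_k: float) -> float:
    """Z = p V_m / (R T); Z = 1 for an ideal gas, and deviations
    from 1 mark real-gas behavior."""
    return p_pa * v_molar_m3 / (R * temp_k)

# Molar volume of an ideal gas at 300 K and 1 atm, then Z computed back:
v_ideal = R * 300.0 / 101325.0
print(round(compressibility_factor(101325.0, v_ideal, 300.0), 6))  # 1.0
```

For a real gas near its critical point, Z computed from measured V_m can fall well below 1, which is where the generalized compressibility charts mentioned above come in.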
55.
Thermal expansion
–
Thermal expansion is the tendency of matter to change in shape, area, and volume in response to a change in temperature. Temperature is a monotonic function of the average molecular kinetic energy of a substance. When a substance is heated, the kinetic energy of its molecules increases. Thus, the molecules begin vibrating/moving more and usually maintain a greater average separation. Materials which contract with increasing temperature are unusual; this effect is limited in size. The degree of expansion divided by the change in temperature is called the material's coefficient of thermal expansion and generally varies with temperature. If an equation of state is available, it can be used to predict the values of the thermal expansion at all the required temperatures and pressures. A number of materials contract on heating within certain temperature ranges. For example, the coefficient of expansion of water drops to zero as it is cooled to 3.983 °C. Also, fairly pure silicon has a negative coefficient of thermal expansion for temperatures between about 18 and 120 kelvins. Unlike gases or liquids, solid materials tend to keep their shape when undergoing thermal expansion. In general, liquids expand slightly more than solids. The thermal expansion of glasses is higher compared to that of crystals. At the glass transition temperature, rearrangements that occur in an amorphous material lead to characteristic discontinuities of the coefficient of thermal expansion and the specific heat. These discontinuities allow detection of the glass transition temperature where a supercooled liquid transforms to a glass. Absorption or desorption of water can change the size of common materials; common plastics exposed to water can, in the long term, expand by many percent. The coefficient of thermal expansion describes how the size of an object changes with a change in temperature. Specifically, it measures the fractional change in size per degree change in temperature at a constant pressure. 
Several types of coefficients have been developed: volumetric, area, and linear. Which is used depends on the particular application and which dimensions are considered important. For solids, one might only be concerned with the change along a length. The volumetric thermal expansion coefficient is the most basic thermal expansion coefficient, and the most relevant for fluids. In general, substances expand or contract when their temperature changes. Substances that expand at the same rate in every direction are called isotropic
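For small temperature changes, the linear coefficient gives ΔL = α L ΔT. A minimal sketch, using 1.2e-5 /K as a typical handbook value for steel (quoted only as an illustrative assumption):

```python
def linear_expansion(length_m: float, alpha_per_k: float, delta_t_k: float) -> float:
    """Change in length dL = alpha * L * dT, valid for small
    temperature changes where alpha is roughly constant."""
    return alpha_per_k * length_m * delta_t_k

# A 10 m steel beam (alpha ~ 1.2e-5 /K) warmed by 50 K
# lengthens by about 6 mm.
print(round(linear_expansion(10.0, 1.2e-5, 50.0) * 1000.0, 2))  # 6.0 (mm)
```

For an isotropic solid, the volumetric coefficient is approximately three times the linear one, since each of the three dimensions expands by the same fraction.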
56.
Thermodynamic equations
–
Thermodynamics is based on a fundamental set of postulates, which became the laws of thermodynamics. Carnot used the phrase motive power for work. In the footnotes to his famous On the Motive Power of Fire, he states that this effect can always be likened to the elevation of a weight to a certain height: "It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised." The equilibrium state of a thermodynamic system is described by specifying its state. The state of a system is specified by a number of extensive quantities, the most familiar of which are volume and internal energy. Extensive parameters are properties of the entire system, as contrasted with intensive parameters which can be defined at a single point. The extensive parameters are generally conserved in some way as long as the system is insulated from changes to that parameter from the outside. The truth of this statement for volume is trivial; for particles one might say that the total particle number of each atomic element is conserved. In the case of energy, the statement of the conservation of energy is known as the first law of thermodynamics. A thermodynamic system is in equilibrium when it is no longer changing in time. This may happen in a short time, or it may happen with glacial slowness. A thermodynamic system may be composed of many subsystems which may or may not be insulated from each other with respect to the various extensive quantities. If we have a thermodynamic system in equilibrium in which we relax some of its constraints, it will move to a new equilibrium state. The thermodynamic parameters may now be thought of as variables and the state may be thought of as a point in a space of thermodynamic parameters. The change in the state of the system can be seen as a path in this state space; this change is called a thermodynamic process. Thermodynamic equations are now used to express the relationships between the state parameters at these different equilibrium states. 
The concept which governs the path that a thermodynamic system traces in state space as it goes from one equilibrium state to another is that of entropy. The entropy is first viewed as a function of all of the extensive thermodynamic parameters. The second law of thermodynamics specifies that the equilibrium state it moves to is in fact the one with the greatest entropy. Once we know the entropy as a function of the extensive variables of the system, we will be able to predict the final equilibrium state
57.
Carnot's theorem (thermodynamics)
–
Carnot's theorem, developed in 1824 by Nicolas Léonard Sadi Carnot, also called Carnot's rule, is a principle that specifies limits on the maximum efficiency any heat engine can obtain. The efficiency of a Carnot engine depends solely on the temperatures of the hot and cold reservoirs. Carnot's theorem states: all heat engines between two heat reservoirs are less efficient than a Carnot heat engine operating between the same reservoirs, and every Carnot heat engine between a pair of reservoirs is equally efficient, regardless of the working substance employed or the operation details. In modern thermodynamics, Carnot's theorem is a result of the second law of thermodynamics; historically, however, it was based on contemporary caloric theory. The proof of the Carnot theorem is a proof by contradiction, or reductio ad absurdum, as illustrated by the figure showing two heat engines operating between two reservoirs of different temperature. The heat engine with more efficiency (η_M) is driving the heat engine with less efficiency (η_L), and this pair of engines receives no outside energy, operating solely on the energy released when heat is transferred from the hot into the cold reservoir. However, if η_M > η_L, then the net heat flow would be backwards, i.e. into the hot reservoir; it is generally agreed that this is impossible because it violates the second law of thermodynamics. We begin by verifying the values of work and heat flow depicted in the figure. First, we must point out an important caveat: the engine with less efficiency is being driven as a heat pump, and therefore must be a reversible engine. If the less efficient engine is not reversible, then a perpetual-motion-like device could be built, because a reversible heat engine with low thermodynamic efficiency W/Q_h delivers more heat to the hot reservoir for a given amount of work when it is being driven as a heat pump.
Having established that the heat flow values shown in the figure are correct, Carnot's theorem may be proven for irreversible and reversible heat engines. If the driving engine were more efficient, then, as the figure shows, the pair would cause heat to flow from the cold to the hot reservoir without any work or outside energy, which is impossible. Therefore both reversible heat engines have the same efficiency, and we conclude that: all reversible engines that operate between the same two heat reservoirs have the same efficiency. This is an important result because it helps establish the Clausius theorem, which implies that the entropy change ΔS = ∫ab δQrev/T is the same over all paths between two states a and b; if this integral were not path independent, then entropy S would not be a state function. If one of the engines is irreversible, it must be the driving engine, placed so that it reverse drives the less efficient but reversible engine. But if this irreversible engine is more efficient than the reversible engine, the second law is violated as before; hence an irreversible engine cannot be more efficient than a reversible one. The efficiency of the engine is the work divided by the heat introduced to the system, η = wcy/qH = 1 − qC/qH, where wcy is the work done per cycle; thus the efficiency depends only on qC/qH. Because all reversible engines operating between the same pair of temperatures have the same efficiency, that ratio of heats can depend only on the two temperatures, q3/q1 = f(T1,T3). This can only be the case if f(T1,T3) = q3/q1 = (q2·q3)/(q1·q2) = f(T1,T2)·f(T2,T3). Specializing to the case that T1 is a fixed reference temperature, the temperature of the triple point of water, then for any T2 and T3, f(T2,T3) = f(T1,T3)/f(T1,T2) = (273.16·f(T1,T3))/(273.16·f(T1,T2)), which motivates defining the thermodynamic temperature as T = 273.16·f(T1,T). Fuel cells and batteries are not bound by this limit, because Carnot's theorem applies to engines converting thermal energy to work, whereas fuel cells and batteries instead convert chemical energy to work.
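The Carnot bound itself is simple to compute. The sketch below assumes reservoir temperatures in kelvin; the 800 K / 300 K values are illustrative, not taken from the text, and the function name is ours.

```python
# Maximum (Carnot) efficiency for any heat engine between two reservoirs:
# eta = 1 - T_cold / T_hot, with temperatures in kelvin.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("require t_hot > t_cold > 0 (kelvin)")
    return 1.0 - t_cold_k / t_hot_k

# Illustrative reservoirs at 800 K and 300 K: no engine between them,
# however cleverly built, can exceed this efficiency.
eta = carnot_efficiency(800.0, 300.0)   # 0.625
```

Note the bound depends only on the two temperatures, exactly as the theorem states: nothing about the working substance enters the formula.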
58.
Clausius theorem
–
The equality holds in the reversible case and the strict inequality holds in the irreversible case. The reversible case is used to introduce the state function entropy: in a cyclic process the variation of a state function is zero. The Clausius theorem is an explanation of the second law of thermodynamics; Clausius developed it in his efforts to explain entropy and define it quantitatively. In more direct terms, the theorem gives us a way to determine if a cyclical process is reversible or irreversible. The Clausius theorem provides a quantitative formula for understanding the second law. Clausius was one of the first to work on the idea of entropy and is responsible for giving it that name. What is now known as the Clausius theorem was first published in 1862 in Clausius' sixth memoir. Clausius sought to show a proportional relationship between entropy and the energy flow by heating (δQ) into a system; in a system, this heat energy can be transformed into work, and work can be transformed into heat through a cyclical process. Clausius writes that "The algebraic sum of all the transformations occurring in a cyclical process can only be less than zero, or, as an extreme case, equal to nothing." Clausius then took this a step further and determined that the following relation must hold true for any cyclical process that is possible, reversible or not; this relation is the Clausius inequality: ∮ δQ/T ≤ 0. Now that this is known, a relation must be developed between the Clausius inequality and entropy. The amount of entropy added to the system over a cycle is, in general, distinct from the amount of energy added as heat and as work; in a cyclic process, the entropy of the system at the beginning of the cycle must equal the entropy at the end of the cycle. In the irreversible case, entropy will be created in the system; in the reversible case, no entropy is created and the amount of entropy added is equal to the amount extracted. The temperature that enters in the denominator of the integrand in the Clausius inequality is actually the temperature of the external reservoir with which the system exchanges heat.
At each instant of the process, the system is in contact with an external reservoir; summing the contributions δQ/T over the whole cycle yields ∮ δQ/T ≤ 0, and hence the Clausius theorem is proved.
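The inequality can be checked numerically. The sketch below assumes an ideal-gas Carnot cycle with illustrative reservoir temperatures and an illustrative expansion ratio: over the reversible cycle the sum of Q/T vanishes exactly, while exchanging the same heats across finite temperature offsets to the reservoirs (an irreversible variant) makes the sum strictly negative.

```python
import math

n_R = 1.0                       # n*R in J/K (illustrative)
T_hot, T_cold = 500.0, 300.0
V1, V2 = 1.0, 2.0               # isothermal expansion ratio on the hot leg

# Reversible ideal-gas Carnot cycle: heat is exchanged only on the isotherms.
Q_hot = n_R * T_hot * math.log(V2 / V1)      # heat absorbed at T_hot
Q_cold = -n_R * T_cold * math.log(V2 / V1)   # heat rejected at T_cold
clausius_sum = Q_hot / T_hot + Q_cold / T_cold     # 0 for the reversible cycle

# Irreversible variant: the same heats cross finite temperature differences,
# so each delta-Q is divided by the *reservoir* temperature and the sum < 0.
clausius_sum_irrev = Q_hot / (T_hot + 50.0) + Q_cold / (T_cold - 50.0)
```

Dividing each heat by the reservoir temperature, as the text emphasizes, is what makes the irreversible sum come out negative.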
59.
Fundamental thermodynamic relation
–
dU = T dS − P dV. Here, U is internal energy, T is absolute temperature, S is entropy, P is pressure, and V is volume. This relation applies to a reversible change, or to a change in a closed system of uniform temperature and pressure at constant composition. This is only one expression of the fundamental thermodynamic relation; it may be expressed in other ways, using different variables. However, since U, S, and V are thermodynamic state functions, the above relation holds also for non-reversible changes in a system of uniform pressure and temperature at constant composition. For a reversible process the dissipative term must be zero. The above derivation uses the first and second laws of thermodynamics. The first law of thermodynamics is essentially a definition of heat; however, the second law of thermodynamics is not a defining relation for the entropy. The fundamental definition of the entropy of an isolated system containing an amount of energy E is S = k log Ω(E), where Ω(E) is the number of quantum states in a small interval between E and E + δE. Here δE is a small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of δE; however, in the thermodynamic limit the specific entropy (the entropy per unit of volume or mass) does not depend on δE. The entropy is thus a measure of the uncertainty about exactly which quantum state the system is in, and this allows us to extract all the thermodynamic quantities of interest. Suppose that the system has some external parameter, x, that can be changed; in general, the energy eigenstates of the system will depend on x. The generalized force, X, corresponding to the external parameter x is defined such that X dx is the work performed by the system if x is increased by an amount dx (e.g., if x is the volume, then X is the pressure). Suppose we change x to x + dx, and group the energy eigenstates by the rate Y at which their energies change with x, so that an eigenstate in the group shifts in energy by Y dx. Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E − Y dx to E move from below E to above E; there are N_Y(E) = (Ω_Y(E)/δE) Y dx such energy eigenstates, where Ω_Y(E) counts the eigenstates of that group within δE of E.
If Y dx ≤ δE, all these energy eigenstates will move into the range between E and E + δE and contribute to an increase in Ω. The number of energy eigenstates that move from below E + δE to above E + δE is N_Y(E + δE); the difference N_Y(E) − N_Y(E + δE) is thus the net contribution to the increase in Ω. Note that if Y dx is larger than δE there will be energy eigenstates that move from below E to above E + δE; they are counted in both N_Y(E) and N_Y(E + δE), therefore the above expression is also valid in that case. The first term in the resulting expression for the entropy derivative is intensive, i.e. it does not scale with system size; in contrast, the last term scales as the inverse system size and thus vanishes in the thermodynamic limit.
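As a concrete check of dU = T dS − P dV, the sketch below assumes the standard textbook form U(S, V) = c·V^(−2/3)·exp(2S/(3nR)) for a monatomic ideal gas (taken as given here, not derived), and confirms symbolically that T = (∂U/∂S)_V and P = −(∂U/∂V)_S reproduce U = (3/2)nRT and PV = nRT.

```python
import sympy as sp

S, V, n, R, c = sp.symbols('S V n R c', positive=True)
# Monatomic ideal gas: U(S, V) = c * V**(-2/3) * exp(2S/(3nR)); the constant
# c is fixed by the entropy reference point and drops out of the checks.
U = c * V**sp.Rational(-2, 3) * sp.exp(2*S/(3*n*R))

T = sp.diff(U, S)     # T = (dU/dS)_V, from the fundamental relation
P = -sp.diff(U, V)    # P = -(dU/dV)_S

check_energy = sp.simplify(U - sp.Rational(3, 2)*n*R*T)   # 0: U = (3/2) n R T
check_gas_law = sp.simplify(P*V - n*R*T)                  # 0: P V = n R T
```

Both residuals simplify to zero, so the two partial derivatives of this U(S, V) really are the temperature and pressure of a monatomic ideal gas.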
60.
Ideal gas law
–
The ideal gas law is the equation of state of a hypothetical ideal gas. It is a good approximation of the behavior of many gases under many conditions. It was first stated by Émile Clapeyron in 1834 as a combination of the empirical Boyle's law, Charles's law and Avogadro's law; it can also be derived microscopically from kinetic theory, as was achieved by August Krönig in 1856 and Rudolf Clausius in 1857. The state of an amount of gas is determined by its pressure, volume, and temperature; the modern form of the equation relates these simply in two main forms. The temperature used in the equation of state is an absolute temperature. In SI units, P is measured in pascals, V is measured in cubic metres, n is measured in moles, and T in kelvins; R has the value 8.314 J/(K·mol) ≈ 2 cal/(K·mol). How much gas is present could be specified by giving the mass instead of the chemical amount of gas; therefore, an alternative form of the ideal gas law may be useful. The chemical amount n is equal to the mass m of the gas divided by the molar mass M. By replacing n with m/M and subsequently introducing density ρ = m/V, we get, defining the specific gas constant Rspecific as the ratio R/M, P = ρ Rspecific T. This form of the gas law is very useful because it links pressure, density, and temperature. Alternatively, the law may be written in terms of the specific volume v. It is common, especially in engineering applications, to represent the specific gas constant by the symbol R; in such cases, the universal gas constant is usually given a different symbol such as R̄ to distinguish it. In any case, the context and/or units of the gas constant should make it clear as to whether the universal or specific gas constant is being referred to. The law may also be written as PV = N kB T with kB = R/NA; this formulation, using the number of molecules N, contrasts with the formulation using n, the number of moles, and implies that R = NA kB, where NA is Avogadro's constant. In extreme conditions the principles of statistical mechanics may break down, as some of the assumptions relating a real-life gas to an ideal gas become untrue.
In SI units, P is measured in pascals, V in cubic metres, N is a dimensionless number (the number of molecules), and kB has the value 1.38×10⁻²³ J/K. According to the assumptions of the kinetic theory of gases, there are no intermolecular attractions between the molecules of an ideal gas, so its potential energy is zero. Hence, all the energy possessed by the gas is kinetic energy: E = (3/2) R T is the kinetic energy of one mole of an ideal gas.
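A minimal numeric sketch of both forms of the law follows. The molar mass of dry air is a standard reference value; the function and variable names are ours, not from any library.

```python
# Quick uses of the ideal gas law P V = n R T (SI units throughout).
R = 8.314  # J/(K·mol), universal gas constant

def pressure(n_mol: float, t_kelvin: float, v_m3: float) -> float:
    """Pressure in pascals from amount, temperature, and volume."""
    return n_mol * R * t_kelvin / v_m3

# One mole at 273.15 K in 22.414 litres gives about standard atmospheric
# pressure (~101,325 Pa), the familiar molar-volume fact.
p = pressure(1.0, 273.15, 22.414e-3)

# Specific-gas-constant form P = rho * R_specific * T, e.g. for dry air:
M_air = 0.028964                          # kg/mol, standard molar mass of air
R_specific_air = R / M_air                # about 287 J/(kg·K)
rho_air = 101_325 / (R_specific_air * 288.15)   # sea-level density, ~1.225 kg/m3
```

The second computation is the P = ρ R_specific T form from the text, rearranged for density.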
61.
Maxwell relations
–
Maxwell's relations are a set of equations in thermodynamics which are derivable from the symmetry of second derivatives and from the definitions of the thermodynamic potentials. These relations are named for the nineteenth-century physicist James Clerk Maxwell. The structure of Maxwell relations is a statement of equality among the second derivatives of continuous functions; it follows directly from the fact that the order of differentiation of a function of two variables is irrelevant. It is seen that for every thermodynamic potential there are n(n − 1)/2 possible Maxwell relations, where n is the number of natural variables for that potential. The thermodynamic square can be used as a mnemonic to recall and derive these relations. The usefulness of these relations lies in their quantifying entropy changes, which are not directly measurable, in terms of measurable quantities like temperature, volume, and pressure. Maxwell relations are based on simple partial differentiation rules, in particular the total differential of a function and the symmetry of second partial derivatives. One compact derivation goes through the identity dP dV = dT dS; the physical meaning of this identity can be seen by noting that the two sides are equivalent ways of writing the work done in an infinitesimal Carnot cycle. An equivalent way of writing the identity is the Jacobian statement ∂(T,S)/∂(P,V) = 1. The Maxwell relations now follow directly. For example, (∂T/∂V)_S = ∂(T,S)/∂(V,S) = ∂(P,V)/∂(V,S) = −(∂P/∂S)_V; the critical step is the penultimate one, where the identity is applied. The other Maxwell relations follow in similar fashion; for example, (∂S/∂P)_T = ∂(S,T)/∂(P,T) = ∂(V,P)/∂(P,T) = −(∂V/∂T)_P. The above are not the only Maxwell relationships. For example, if we have a single-component gas, then the number of particles N is also a natural variable of the above four thermodynamic potentials, and a Maxwell relationship for the enthalpy with respect to pressure and particle number can then be written. In addition, there are other thermodynamic potentials besides the four that are commonly used, and each of these potentials will yield a set of Maxwell relations.
Each equation can be re-expressed using the reciprocal rule (∂y/∂x)_z = 1/(∂x/∂y)_z; the resulting equations are also known as Maxwell relations.
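One of these relations can be verified symbolically. The sketch below assumes the monatomic-ideal-gas entropy S(T, V) = nR ln V + (3/2)nR ln T (up to an additive constant, a standard model taken as given) and checks the relation (∂S/∂V)_T = (∂P/∂T)_V.

```python
import sympy as sp

T, V, n, R = sp.symbols('T V n R', positive=True)
# Monatomic ideal-gas entropy up to an additive constant (assumed model):
S = n*R*sp.log(V) + sp.Rational(3, 2)*n*R*sp.log(T)
P = n*R*T / V                      # ideal gas law

lhs = sp.diff(S, V)                # (dS/dV)_T, the hard-to-measure side
rhs = sp.diff(P, T)                # (dP/dT)_V, the measurable side
maxwell_ok = sp.simplify(lhs - rhs) == 0    # True: both equal n R / V
```

This is the practical use the text describes: the entropy derivative on the left is not directly measurable, but the pressure derivative on the right is.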
62.
Onsager reciprocal relations
–
Reciprocal relations occur between different pairs of forces and flows in a variety of physical systems. For example, consider fluid systems described in terms of temperature, matter density, and pressure. Perhaps surprisingly, the heat flow per unit of pressure difference and the density (matter) flow per unit of temperature difference are equal. This equality was shown to be necessary by Lars Onsager using statistical mechanics as a consequence of the time reversibility of microscopic dynamics. Onsager's reciprocity in the thermoelectric effect manifests itself in the equality of the Peltier and Seebeck coefficients; similarly, the so-called direct piezoelectric and reverse piezoelectric coefficients are equal. Experimental verifications of the Onsager reciprocal relations were collected and analyzed by D. G. Miller; in this classical review, chemical reactions are considered as cases with meager and inconclusive evidence. Further theoretical analysis and experiments support the relations for chemical kinetics with transport. For his discovery of these relations, Lars Onsager was awarded the 1968 Nobel Prize in Chemistry. Some authors have even described Onsager's relations as the fourth law of thermodynamics. The basic thermodynamic potential is internal energy. The conservation of mass is expressed locally by the fact that the mass density ρ satisfies the continuity equation ∂ρ/∂t + ∇·J_ρ = 0, where J_ρ is the mass flux vector. In the absence of matter flows, Fourier's law is written J_u = −k ∇T, where k is the thermal conductivity. Onsager's contribution was to demonstrate that not only is the matrix of coefficients L_αβ positive semi-definite, it is also symmetric; in other words, the cross-coefficients L_uρ and L_ρu are equal (the fact that they are at least proportional follows from simple dimensional analysis). The rate of entropy production for the above simple example uses only two entropic forces and a 2×2 Onsager phenomenological matrix.
The expression for the linear approximation to the fluxes and the rate of entropy production can very often be expressed in an analogous way for many more general systems. Let x1, x2, …, xn denote fluctuations from equilibrium values in several thermodynamic quantities, and let S(x1, x2, …, xn) be the entropy.
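The 2×2 phenomenological picture can be sketched in a few lines. The matrix entries and forces below are hypothetical illustrative numbers, not data for any material; the point is the structure: fluxes linear in forces, a symmetric L (reciprocity), and non-negative entropy production.

```python
import numpy as np

# Hypothetical symmetric Onsager matrix (illustrative, not material data):
L = np.array([[2.0, 0.5],
              [0.5, 1.0]])         # L[0,1] == L[1,0] is Onsager reciprocity
X = np.array([0.3, -0.2])          # thermodynamic forces (e.g. gradients)

J = L @ X                          # conjugate fluxes, J = L X
sigma = float(J @ X)               # entropy production rate, sum_i J_i X_i
# Because L is symmetric positive definite, sigma >= 0 for any forces X,
# consistent with the second law.
```

Swapping which force drives which flux leaves the cross-coefficient unchanged, which is exactly the equality of, e.g., the Peltier and Seebeck coefficients mentioned above.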
63.
Bridgman's thermodynamic equations
–
The equations are named after the American physicist Percy Williams Bridgman. The extensive variables of the system are fundamental; only the entropy S, the volume V and the four most common thermodynamic potentials will be considered. Many simple thermodynamic equations are expressed in terms of partial derivatives. For example, the expression for the heat capacity at constant pressure is C_P = (∂H/∂T)_P, which is the partial derivative of the enthalpy with respect to temperature while holding pressure constant. Bridgman's method expresses such derivatives as ratios of tabulated building blocks: for example, from the tabulated equations we have (∂H)_P = C_P and (∂T)_P = 1; dividing the two recovers (∂H/∂T)_P = C_P.
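The C_P example can be made concrete symbolically. The sketch below assumes the monatomic-ideal-gas enthalpy H = (5/2)nRT (a standard result, not from the text) and differentiates it at constant pressure.

```python
import sympy as sp

T, n, R = sp.symbols('T n R', positive=True)
H = sp.Rational(5, 2) * n * R * T    # enthalpy of a monatomic ideal gas
C_P = sp.diff(H, T)                  # C_P = (dH/dT)_P = (5/2) n R
```

Since this H is linear in T, the heat capacity comes out constant, (5/2)nR, as expected for a monatomic ideal gas.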
64.
Table of thermodynamic equations
–
This article is a summary of common equations and quantities in thermodynamics. SI units are used for absolute temperature, not Celsius or Fahrenheit. Many of the definitions below are also used in the thermodynamics of chemical reactions. The equations in this article are classified by subject. The statistical definition of entropy is S = kB ln Ω, where kB is the Boltzmann constant and Ω denotes the volume of the macrostate in phase space, otherwise called the thermodynamic probability. The differential relation dS = δQ/T holds for reversible processes only. Below are useful results from the Maxwell–Boltzmann distribution for a gas; the distribution is valid for atoms or molecules constituting ideal gases, and corollaries of the non-relativistic Maxwell–Boltzmann distribution are given. For quasi-static and reversible processes, the first law of thermodynamics is dU = δQ − δW, where δQ is the heat supplied to the system and δW is the work done by the system.
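The first-law sign convention stated above (δQ into the system, δW done by the system) is easy to get wrong, so here is a minimal bookkeeping sketch with illustrative numbers.

```python
def internal_energy_change(heat_in_j: float, work_by_system_j: float) -> float:
    """First law for a quasi-static step: dU = Q - W, where Q is heat
    supplied to the system and W is work done *by* the system."""
    return heat_in_j - work_by_system_j

# Illustrative step: 500 J of heat enters while the gas expands,
# doing 200 J of work on its surroundings; U rises by 300 J.
dU = internal_energy_change(500.0, 200.0)
```

With the alternative convention (W as work done on the system), the sign of the second term flips; the convention here matches the article's dU = δQ − δW.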
65.
Thermodynamic potential
–
A thermodynamic potential is a scalar quantity used to represent the thermodynamic state of a system. The concept of thermodynamic potentials was introduced by Pierre Duhem in 1886; Josiah Willard Gibbs in his papers used the term fundamental functions. One main thermodynamic potential that has a physical interpretation is the internal energy U. It is the energy of configuration of a given system of conservative forces, and expressions for all other thermodynamic potentials are derivable via Legendre transforms from an expression for U. In thermodynamics, certain forces, such as gravity, are typically disregarded when formulating expressions for potentials. Five common thermodynamic potentials are the internal energy U, the Helmholtz free energy A = U − TS, the enthalpy H = U + pV, the Gibbs free energy G = U + pV − TS, and the Landau (grand) potential Ω = U − TS − μN, where T = temperature, S = entropy, p = pressure, V = volume, μ = chemical potential and N = number of particles. The Helmholtz free energy is often denoted by the symbol F, but the use of A is preferred by IUPAC, ISO and IEC. Ni is the number of particles of type i in the system; for the sake of completeness, the set of all Ni are also included as natural variables, although they are sometimes ignored. These five common potentials are all energy potentials, but there are also entropy potentials. The thermodynamic square can be used as a tool to recall and derive some of the potentials. Gibbs energy is the capacity to do non-mechanical work; enthalpy is the capacity to do non-mechanical work plus the capacity to release heat; Helmholtz free energy is the capacity to do mechanical plus non-mechanical work. Thermodynamic potentials are very useful when calculating the equilibrium results of a chemical reaction, or when measuring the properties of materials in a chemical reaction. Just as in mechanics, the system will tend towards lower values of potential, and at equilibrium, under the given constraints, the potential will take on an unchanging minimum value. The thermodynamic potentials can also be used to estimate the total amount of energy available from a thermodynamic system under the appropriate constraint. In particular: when the entropy and external parameters of a system are held constant, the internal energy will decrease and reach a minimum value at equilibrium.
This follows from the first and second laws of thermodynamics and is called the principle of minimum energy. The following three statements are directly derivable from this principle: when the temperature and external parameters of a system are held constant, the Helmholtz free energy will decrease and reach a minimum value at equilibrium; when the pressure and external parameters of a system are held constant, the enthalpy will decrease and reach a minimum value at equilibrium; and when the temperature, pressure and external parameters of a system are held constant, the Gibbs free energy will decrease and reach a minimum value at equilibrium. The variables that are held constant in this process are termed the natural variables of that potential.
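The definitions of the common energy potentials listed above can be sketched directly. The state values below are illustrative SI numbers, not data for a specific substance; the identities G = H − TS and G = A + PV hold by construction.

```python
def potentials(U, T, S, P, V):
    """The common energy potentials built from U by Legendre-style shifts."""
    H = U + P * V            # enthalpy
    A = U - T * S            # Helmholtz free energy (IUPAC symbol A)
    G = U + P * V - T * S    # Gibbs free energy, G = H - T S = A + P V
    return H, A, G

# Illustrative state (SI units): U in J, T in K, S in J/K, P in Pa, V in m^3.
U, T, S, P, V = 1000.0, 300.0, 2.0, 101_325.0, 0.001
H, A, G = potentials(U, T, S, P, V)
```

Each potential is the one minimized under its own natural-variable constraints, per the three statements above.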
66.
Thermodynamic free energy
–
The thermodynamic free energy is the amount of work that a thermodynamic system can perform. The concept is useful in the thermodynamics of chemical or thermal processes in engineering and science. The free energy is the internal energy of a system minus the amount of energy that cannot be used to perform work; this unusable energy is given by the entropy of the system multiplied by the temperature of the system. Like the internal energy, the free energy is a thermodynamic state function. Energy is a generalization of free energy, since energy is the ability to do work. Free energy is that portion of any first-law energy that is available to perform thermodynamic work, i.e. work mediated by thermal energy, and free energy is subject to irreversible loss in the course of such work. Since first-law energy is always conserved, it is evident that free energy is an expendable, second-law kind of energy. Several free energy functions may be formulated based on system criteria; free energy functions are Legendre transformations of the internal energy. The Helmholtz free energy has a special theoretical importance since it is proportional to the logarithm of the partition function for the canonical ensemble in statistical mechanics. The historically earlier Helmholtz free energy is defined as A = U − TS, where U is the internal energy, T is the absolute temperature, and S is the entropy. Its change is equal to the amount of reversible work done on, or obtainable from, the system at constant T; thus its appellation "work content", and the designation A from Arbeit, the German word for work. The Gibbs free energy is given by G = H − TS, where H is the enthalpy. Historically, these terms have been used inconsistently. In physics, free energy most often refers to the Helmholtz free energy, denoted by A (or F), while in chemistry it most often refers to the Gibbs free energy. Since both fields use both functions, a compromise has been suggested, using A to denote the Helmholtz function and G for the Gibbs function.
While A is preferred by IUPAC, G is sometimes still in use, the use of the words “latent heat” implied a similarity to latent heat in the more usual sense, it was regarded as chemically bound to the molecules of the body. In the adiabatic compression of a gas, the heat remained constant. During the early 19th century, the concept of perceptible or free caloric began to be referred to as “free heat” or heat set free. In 1824, for example, the French physicist Sadi Carnot, in his famous “Reflections on the Motive Power of Fire”, an increasing number of books and journal articles do not include the attachment “free”, referring to G as simply Gibbs energy
67.
Free entropy
–
A thermodynamic free entropy is an entropic thermodynamic potential analogous to the free energy. Free entropies are also known as Massieu, Planck, or Massieu–Planck potentials (or functions). In statistical mechanics, free entropies frequently appear as the logarithm of a partition function; the Onsager reciprocal relations, in particular, are developed in terms of entropic potentials. In mathematics, free entropy means something quite different: it is a generalization of entropy defined in the subject of free probability. A free entropy is generated by a Legendre transform of the entropy; the different potentials correspond to different constraints to which the system may be subjected. The most common examples are the Massieu potential Φ and the Planck potential Ξ. Note that the use of the terms Massieu and Planck for explicit Massieu–Planck potentials is somewhat obscure, and in particular "Planck potential" has alternative meanings. The most standard notation for a free entropic potential is ψ, used by both Planck and Schrödinger. Free entropies were invented by the French engineer François Massieu in 1869 and actually predate Gibbs's free energy. Starting from the entropy S(U, V, N1, …, Ns), the definition of the total differential gives dS = (∂S/∂U) dU + (∂S/∂V) dV + Σi (∂S/∂Ni) dNi, and from the equations of state, dS = (1/T) dU + (P/T) dV + Σi (−μi/T) dNi. The differentials in this equation are all of extensive variables, so the equation may be integrated. For the transformed potentials the differentials are not all of extensive variables, so the equations may not be directly integrated; instead, from dΦ we see that the natural variables of the Massieu potential are Φ = Φ(1/T, V, N), and from dΞ that the natural variables of the Planck potential are Ξ = Ξ(1/T, P/T, N).
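The Massieu potential can be sketched numerically. The values below are illustrative, in units where entropy is dimensionless (k_B = 1); the sketch also checks the standard link Φ = −A/T to the Helmholtz free energy, which follows directly from Φ = S − U/T and A = U − TS.

```python
def massieu(S, U, T):
    """Massieu potential Phi = S - U/T, the Legendre transform of the
    entropy with respect to internal energy."""
    return S - U / T

# Illustrative state values (k_B = 1 units, not data for a real substance):
S, U, T = 5.0, 900.0, 300.0
A = U - T * S               # Helmholtz free energy, for comparison
phi = massieu(S, U, T)      # equals -A/T, linking the entropic and
                            # energetic potentials
```

This is the sense in which free entropies are "entropic" analogues of the free energies: maximizing Φ at fixed 1/T and V is equivalent to minimizing A at fixed T and V.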
68.
Internal energy
–
It keeps account of the gains and losses of energy of the system that are due to changes in its internal state. The internal energy of a system can be changed by transfers of matter or heat or by doing work. When matter transfer is prevented by impermeable containing walls, the system is said to be closed; then the first law of thermodynamics states that the increase in internal energy is equal to the total heat added plus the work done on the system by its surroundings. If the containing walls pass neither matter nor energy, the system is said to be isolated. The first law of thermodynamics may be regarded as establishing the existence of the internal energy. The internal energy is one of the two cardinal state functions of the state variables of a thermodynamic system. The internal energy of a given state of a system cannot be directly measured; it is determined through some convenient chain of thermodynamic operations and processes by which the state can be prepared starting from a reference state. Such a chain, or path, can be described by certain extensive state variables of the system, namely its entropy, S, and its volume, V. The internal energy, U, is a function of those. Sometimes, to that list are appended other extensive state variables, for example electric dipole moment. Customarily, thermodynamic descriptions include only items relevant to the processes under study; thermodynamics is chiefly concerned only with changes in the internal energy, not with its absolute value. The internal energy is a state function of a system, because its value depends only on the current state of the system. It is the one and only cardinal thermodynamic potential: through it, by use of Legendre transforms, the other thermodynamic potentials are mathematically constructed. These are functions of variable lists in which some extensive variables are replaced by their conjugate intensive variables. Legendre transformation is necessary because mere substitutive replacement of extensive variables by intensive variables does not lead to thermodynamic potentials.
Mere substitution leads to a less informative formula, an equation of state. Though it is a macroscopic quantity, internal energy can be explained in microscopic terms by two theoretical virtual components. One is the microscopic kinetic energy due to the microscopic motion of the system's particles. The other is the microscopic potential energy associated with the microscopic forces, including the chemical bonds. If thermonuclear reactions are specified as a topic of concern, then the static rest mass energy of the constituents of matter is also counted. There is no simple universal relation between these quantities of microscopic energy and the quantities of energy gained or lost by the system in work, heat, or matter transfer. The SI unit of energy is the joule. Sometimes it is convenient to use a corresponding density called specific internal energy, which is internal energy per unit of mass of the system in question.
69.
Enthalpy
–
Enthalpy /ˈɛnθəlpi/ is a measurement of energy in a thermodynamic system. It is the thermodynamic quantity equivalent to the total heat content of a system, equal to the internal energy of the system plus the product of pressure and volume. Enthalpy is defined as a state function that depends only on the prevailing equilibrium state, identified by the system's internal energy, pressure, and volume. The unit of measurement for enthalpy in the International System of Units is the joule, but other historical, conventional units are still in use, such as the British thermal unit and the calorie. At constant pressure, the enthalpy change equals the energy transferred from the environment through heating or work other than expansion work. The total enthalpy, H, of a system cannot be measured directly; the same situation exists in classical mechanics, where only a change or difference in energy carries physical meaning. Enthalpy itself is a thermodynamic potential, so in order to measure the enthalpy of a system, we must refer to a defined reference point; therefore what we measure is the change in enthalpy, ΔH. ΔH is positive in endothermic reactions, and negative in heat-releasing exothermic processes. For processes under constant pressure, ΔH is equal to the heat absorbed or released; this means that the change in enthalpy under such conditions is the heat absorbed by the material through a reaction or by external heat transfer. Enthalpies for chemical substances at constant pressure assume standard state, most commonly 1 bar pressure. Standard state does not, strictly speaking, specify a temperature, but expressions for enthalpy generally reference the standard heat of formation at 25 °C. Enthalpy of ideal gases and of incompressible solids and liquids does not depend on pressure, unlike entropy. Real materials at common temperatures and pressures usually closely approximate this behavior, which greatly simplifies enthalpy calculation and use in practical designs and analyses.
The word enthalpy stems from the Ancient Greek verb enthalpein, which means "to warm in"; it combines the Classical Greek prefix ἐν- en-, meaning "to put into", and the verb θάλπειν thalpein, meaning "to heat". The word enthalpy is often incorrectly attributed to Benoît Paul Émile Clapeyron. This misconception was popularized by the 1927 publication of The Mollier Steam Tables; however, neither the concept, the word, nor the symbol for enthalpy existed until well after Clapeyron's death. The earliest writings to contain the concept of enthalpy did not appear until 1875, in the work of Josiah Willard Gibbs; however, Gibbs did not use the word enthalpy in his writings. The actual word first appears in the scientific literature in a 1909 publication by J. P. Dalton.
70.
Helmholtz free energy
–
In thermodynamics, the Helmholtz free energy is a thermodynamic potential that measures the "useful" work obtainable from a closed thermodynamic system at constant temperature and volume. The negative of the difference in the Helmholtz energy is equal to the maximum amount of work that the system can perform in a thermodynamic process in which volume is held constant. If the volume is not held constant, part of this work will be performed as boundary work. The Helmholtz energy is therefore used for systems held at constant volume, since in this case no expansion work is performed on the environment. For a system at constant temperature and volume, the Helmholtz energy is minimized at equilibrium. The Helmholtz free energy was developed by Hermann von Helmholtz, a German physician and physicist. The IUPAC recommends the letter A as well as the use of the name Helmholtz energy; in physics, the letter F can also be used, and the quantity is variously referred to as the Helmholtz function, Helmholtz free energy, or simply free energy. For example, in explosives research the Helmholtz free energy is often used, since explosive reactions by their nature induce pressure changes. It is also used to define fundamental equations of state of pure substances. The Helmholtz energy is the Legendre transform of the internal energy, U. Suppose the system is kept at fixed volume and is in contact with a heat bath at some constant temperature. Conservation of energy implies ΔU_bath + ΔU + W = 0. The volume of the system is kept constant; this means that the volume of the heat bath does not change either, and we can conclude that the heat bath does not perform any work. This implies that the amount of heat that flows into the heat bath is given by ΔU_bath = −(ΔU + W). This result seems to contradict the equation dA = −S dT − P dV, as keeping T and V constant seems to imply dA = 0; to allow for spontaneous processes at constant T and V, one needs to enlarge the thermodynamical state space of the system.
In case of a chemical reaction, one must allow for changes in the numbers Nj of particles of each type j; the differential of the free energy then generalizes accordingly, and the resulting equation is valid for both reversible and non-reversible changes. In case of a spontaneous change at constant T and V without electrical work, the Helmholtz free energy can only decrease. A system kept at constant volume, temperature, and particle number is described by the canonical ensemble; the fact that the system does not have a unique energy means that the various thermodynamical quantities must be defined as expectation values.
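The link to the canonical ensemble mentioned above, A = −kT ln Z, can be checked on the smallest possible example. The sketch below assumes a two-level system with energies 0 and eps, in units where k_B = 1 (the level spacing and temperature are illustrative), and cross-checks A against U − TS computed from the ensemble averages.

```python
import math

def helmholtz(eps: float, T: float) -> float:
    """A = -T ln Z for a two-level system with energies 0 and eps (k_B = 1)."""
    Z = 1.0 + math.exp(-eps / T)     # canonical partition function
    return -T * math.log(Z)

eps, T = 1.0, 0.5                    # illustrative level spacing and temperature
A = helmholtz(eps, T)

# Cross-check A = U - T S using ensemble expectation values:
p1 = math.exp(-eps / T) / (1.0 + math.exp(-eps / T))     # upper-level occupancy
U = p1 * eps                                             # mean energy
S = -(p1 * math.log(p1) + (1 - p1) * math.log(1 - p1))   # Gibbs entropy
# A agrees with U - T*S to rounding, as the canonical formalism requires.
```

This makes concrete the statement that, in the canonical ensemble, thermodynamic quantities like U and S are expectation values rather than sharp numbers, yet they still satisfy A = U − TS.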
71.
Gibbs free energy
–
Just as in mechanics, where the decrease in potential energy is defined as the maximum useful work that can be performed, different potentials have different meanings. The Gibbs energy is the potential that is minimized when a system reaches chemical equilibrium at constant pressure and temperature; its derivative with respect to the reaction coordinate of the system vanishes at the equilibrium point. As such, a reduction in G is a necessary condition for the spontaneity of processes at constant pressure and temperature. The Gibbs free energy, originally called available energy, was developed in the 1870s by the American scientist Josiah Willard Gibbs, and his ideas were developed at length in his 1876 magnum opus On the Equilibrium of Heterogeneous Substances. The initial state of the body, according to Gibbs, is supposed to be such that the body can be made to pass from it to states of dissipated energy by reversible processes. According to the second law of thermodynamics, for systems reacting at STP there is a general natural tendency to achieve a minimum of the Gibbs free energy. A quantitative measure of the favorability of a given reaction at constant temperature and pressure is the change ΔG in Gibbs free energy that is caused by the reaction. As a necessary condition for the reaction to occur at constant temperature and pressure, ΔG must be smaller than the non-PV work; ΔG equals the maximum amount of non-PV work that can be performed as a result of the chemical reaction for the case of a reversible process. The equation can also be seen from the perspective of the system taken together with its surroundings. First assume that the given reaction at constant temperature and pressure is the only one that is occurring. Then the entropy released or absorbed by the system equals the entropy that the environment must absorb or release, respectively. The reaction will only be allowed if the total entropy change of the universe is zero or positive.
This is reflected in a negative ΔG, and such a reaction is called exergonic; if reactions are coupled, an otherwise endergonic chemical reaction can be made to happen. In traditional use, the term free was included in Gibbs free energy to mean available in the form of useful work; the characterization becomes more precise if we add the qualification that it is the energy available for non-volume work. However, a number of books and journal articles do not include the adjective free. This is the result of a 1988 IUPAC meeting to set unified terminologies for the scientific community; that standard, however, has not yet been universally adopted. In the description used by Gibbs, ε refers to the energy of the body and η refers to its entropy
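The sign rule above, ΔG = ΔH − TΔS, can be illustrated with a short calculation. The numbers below are illustrative assumptions (roughly those often quoted for ammonia synthesis), not values taken from this article; the example shows how an exothermic but entropy-decreasing reaction flips from spontaneous to non-spontaneous as temperature rises:

```python
def gibbs_free_energy_change(delta_H, delta_S, T):
    """Return ΔG = ΔH − T·ΔS (SI units: J/mol, J/(mol·K), K)."""
    return delta_H - T * delta_S

# Illustrative figures (treated as assumptions):
delta_H = -92.2e3    # J/mol, exothermic
delta_S = -198.7     # J/(mol·K), entropy of the system decreases

for T in (298.0, 600.0):
    dG = gibbs_free_energy_change(delta_H, delta_S, T)
    print(T, dG, "spontaneous" if dG < 0 else "non-spontaneous")
```

With these numbers the reaction is exergonic at 298 K but endergonic at 600 K, because the unfavorable −TΔS term grows with temperature.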
72.
History of thermodynamics
–
The history of thermodynamics is a fundamental strand in the history of physics, the history of chemistry, and the history of science in general. The development of thermodynamics both drove and was driven by atomic theory; it also, albeit in a subtle manner, motivated new directions in probability and statistics (see, for example, the timeline of thermodynamics). The ancients viewed heat as related to fire. In 3000 BC, the ancient Egyptians viewed heat as related to origin mythologies; the Empedoclean element of fire is perhaps the principal ancestor of later concepts such as phlogiston and caloric. Around 500 BC, the Greek philosopher Heraclitus became famous as the flux and fire philosopher; he argued that the three principal elements in nature were fire, earth, and water. Atomism is a central part of today's relationship between thermodynamics and statistical mechanics. Ancient thinkers such as Leucippus and Democritus, and later the Epicureans, advanced atomism; until experimental proof of atoms was provided in the 20th century, the atomic theory was driven largely by philosophical considerations and scientific intuition. The view that nature contains no vacuum was supported by the arguments of Aristotle but was criticized by Leucippus; from antiquity to the Middle Ages various arguments were put forward to prove or disprove the existence of a vacuum, and several attempts were made to construct one, but all proved unsuccessful. These attempts may have been influenced by an earlier device, constructed by Philo of Byzantium, which could expand and contract the air. Around 1600, the English philosopher and scientist Francis Bacon surmised, Heat itself, its essence and quiddity is motion. In 1643, Galileo Galilei, while generally accepting the sucking explanation of horror vacui proposed by Aristotle, believed that nature's vacuum-abhorrence is limited: pumps operating in mines had proven that nature would only fill a vacuum with water up to a height of ~30 feet. 
Knowing this curious fact, Galileo encouraged his former pupil Evangelista Torricelli to investigate these supposed limitations. Torricelli did not believe that vacuum-abhorrence, in the sense of Aristotle's sucking perspective, was responsible for raising the water; rather, he reasoned, it was the result of the pressure exerted on the liquid by the surrounding air. To prove this theory, he filled a long glass tube with mercury and upended it into a dish also containing mercury. Only a portion of the tube emptied; ~30 inches of the liquid remained. As the mercury emptied, a vacuum was created at the top of the tube. The gravitational force on the heavy element mercury prevented it from filling the vacuum. The theory of phlogiston arose in the 17th century, late in the period of alchemy; its replacement by caloric theory in the 18th century is one of the markers of the transition from alchemy to chemistry. Phlogiston was a substance that was presumed to be liberated from combustible substances during burning
73.
History of entropy
–
Over the next two centuries, physicists investigated this puzzle of lost energy; the result was the concept of entropy. Clausius continued to develop his ideas of lost energy and coined the term entropy. Since the mid-20th century the concept of entropy has found application in the field of information theory, describing an analogous loss of data in information transmission systems. In 1803, the mathematician Lazare Carnot published a work entitled Fundamental Principles of Equilibrium and Movement, which includes a discussion of the efficiency of fundamental machines, i.e. pulleys and inclined planes. Lazare Carnot saw through all the details of the mechanisms to develop a general discussion on the conservation of mechanical energy; from this he drew the inference that perpetual motion was impossible. Lazare Carnot died in exile in 1823. His son Sadi Carnot, studying the idealized heat engine, discovered that its efficiency depended only on the temperatures of the heat reservoirs between which the engine was working, and not on the type of working fluid. No real heat engine could realize the Carnot cycle's reversibility, and this loss of usable caloric was a precursory form of the increase in entropy as we now know it. Though formulated in terms of caloric rather than entropy, this was an insight into the second law of thermodynamics. Clausius then discusses the three categories into which heat Q may be divided: heat employed in increasing the heat actually existing in the body, heat employed in producing the interior work, and heat employed in producing the exterior work. In modern terminology, we think of this equivalence-value as entropy, symbolized by S; the equivalence-value was an early formulation of entropy. Quantitatively, Clausius stated the mathematical expression for this theorem; it was an early formulation of the second law and one of the original forms of the concept of entropy. These concepts were further developed by James Clerk Maxwell and Max Planck. 
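Sadi Carnot's result that the idealized efficiency depends only on the reservoir temperatures is, in modern notation, η = 1 − T_cold/T_hot with temperatures in kelvin. A minimal sketch, using made-up reservoir temperatures for illustration:

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum efficiency of any heat engine between two reservoirs (kelvin)."""
    if T_cold >= T_hot:
        raise ValueError("T_hot must exceed T_cold")
    return 1.0 - T_cold / T_hot

# A boiler at ~450 K exhausting to ~300 K surroundings (illustrative figures):
eta = carnot_efficiency(450.0, 300.0)
print(eta)  # → about 0.333; no engine between these reservoirs can do better
```

Note that the working fluid never appears in the formula, which is precisely Carnot's point.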
In 1877, Ludwig Boltzmann developed a statistical mechanical evaluation of the entropy S. It may be written as S = k_B ln Ω, where k_B denotes Boltzmann's constant and Ω denotes the number of microstates consistent with the given equilibrium macrostate. Boltzmann himself did not actually write this formula with the named constant k_B. Boltzmann saw entropy as a measure of statistical mixedupness or disorder. This concept was refined by J. Willard Gibbs and is now regarded as one of the cornerstones of the theory of statistical mechanics. Erwin Schrödinger made use of Boltzmann's work in his book What is Life? to explain why living systems have far fewer replication errors than would be predicted from statistical thermodynamics. He postulated a local decrease of entropy for living systems, in terms of the number of states that are prevented from randomly distributing. Schrödinger's separation of random and non-random energy states is one of the few explanations for why entropy could be low in the past but continually increasing now. An analog to thermodynamic entropy is information entropy: in 1948, while working at Bell Telephone Laboratories, electrical engineer Claude Shannon set out to mathematically quantify the statistical nature of lost information in phone-line signals
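The formula S = k_B ln Ω and Shannon's information entropy can be compared directly. The sketch below uses illustrative numbers only; it shows that doubling the number of microstates adds k_B ln 2 to the thermodynamic entropy, just as one fair binary choice adds one bit to the Shannon entropy:

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def boltzmann_entropy(omega):
    """S = k_B ln Ω for Ω equally likely microstates."""
    return k_B * math.log(omega)

def shannon_entropy(probs):
    """H = -Σ p_i log2 p_i in bits, Shannon's analogue of thermodynamic entropy."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Doubling the microstate count adds exactly k_B ln 2 ...
delta_S = boltzmann_entropy(2) - boltzmann_entropy(1)
# ... just as a single fair coin flip carries exactly one bit.
print(delta_S, shannon_entropy([0.5, 0.5]))
```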
74.
History of perpetual motion machines
–
The history of perpetual motion machines dates back to the Middle Ages. For millennia, it was not clear whether perpetual motion devices were possible or not; despite this, many attempts have been made to construct such machines, continuing into modern times. Modern designers and proponents sometimes use other terms, such as overunity. There are some unsourced claims that a perpetual motion machine called the magic wheel appeared in 8th-century Bavaria; this historical claim appears to be unsubstantiated, though often repeated. Early designs of perpetual motion machines were made by the Indian mathematician–astronomer Bhaskara II, who described a wheel that he claimed would run forever. A drawing of a perpetual motion machine appeared in the sketchbook of Villard de Honnecourt, which was concerned with mechanics and architecture. Following the example of Villard, Peter of Maricourt designed a globe which, if it were mounted without friction parallel to the celestial axis, would turn once a day; it was intended to serve as an automatic armillary sphere. Leonardo da Vinci made a number of drawings of devices he hoped would make free energy; he was generally against such devices, but drew them nonetheless. Mark Anthony Zimara, a 16th-century Italian scholar, proposed a self-blowing windmill. Various scholars in this period investigated the topic. In 1607 Cornelius Drebbel, in Wonder-vondt van de eeuwighe bewegingh, dedicated a perpetual motion machine to James I of England; it was described by Heinrich Hiesserle von Chodaw in 1621. Robert Boyle devised the perpetual vase, which was discussed by Denis Papin in the Philosophical Transactions for 1685. Johann Bernoulli proposed a fluid energy machine. In 1686, Georg Andreas Böckler designed a self-operating, self-powered water mill and several perpetual motion machines using balls and variants of Archimedes' screws. 
In 1712, Johann Bessler investigated 300 different perpetual motion models. In the 1760s, James Cox and John Joseph Merlin developed Cox's timepiece. Cox claimed that the timepiece was a perpetual motion machine, but as the device is powered by changes in atmospheric pressure via a mercury barometer, it is not. In 1775, the Royal Academy of Sciences in Paris made the statement that the Academy will no longer accept or deal with proposals concerning perpetual motion. In 1812, Charles Redheffer, in Philadelphia, claimed to have developed a generator that could power other machines. Upon investigation, it was deduced that the power was being supplied from another connected machine; Robert Fulton exposed Redheffer's scheme during an exhibition of the device in New York City. Removing some concealing wooden strips, Fulton found that a belt drive went through a wall to an attic, where a man was turning a crank to power the device. The device had an inclined plane over pulleys; at the top and bottom there travelled an endless band of sponge, and the whole stood over the surface of still water
75.
Philosophy of thermal and statistical physics
–
The philosophy of thermal and statistical physics is that part of the philosophy of physics whose subject matter is classical thermodynamics, statistical mechanics, and related theories. Its central questions include: What is entropy, and what does the second law of thermodynamics say about it? Does either thermodynamics or statistical mechanics contain an element of time-irreversibility? If so, what does statistical mechanics tell us about the arrow of time? Thermodynamics is the study of the behaviour of physical systems under the influence of exchange of heat and work; it is not concerned with the microscopic properties of these systems. At the very heart of contemporary thermodynamics lies the idea of thermodynamic equilibrium; in orthodox versions of thermodynamics, properties such as temperature and entropy are defined for equilibrium states only. The idea of the existence of states of equilibrium has been central, and it has recently been dubbed the minus first law of thermodynamics. Traditionally, thermodynamics has often been described as a theory of principle: a theory in which a few empirical generalisations are taken for granted. According to this view, there is a strong correspondence between three empirical facts and the first three laws of thermodynamics. (One further law is not discussed here.) Thermal equilibrium between systems is a relation, and this is the substance of the zeroth law of thermodynamics. In simplest terms, the First Law states that the energy of an isolated system is a constant; hence energy in minus energy out equals the change in energy. The understanding of the First Law embodied in classical physics can be summarized by the saying, Energy can be neither created nor destroyed. 
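The "energy in minus energy out" statement of the First Law is simple enough to express directly. In the usual sign convention for a closed system, ΔU = Q − W, where Q is heat added to the system and W is work done by the system; the figures below are made up for illustration:

```python
def internal_energy_change(Q_in, W_out):
    """First law for a closed system: ΔU = Q − W
    (heat added to the system minus work done by the system)."""
    return Q_in - W_out

# 500 J of heat added while the system performs 180 J of work (made-up numbers):
print(internal_energy_change(500.0, 180.0))  # → 320.0
```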
This is the central postulate of statistical mechanics: that equivalent energy states cannot be distinguished. Thus, as the number of accessible energy states increases, the energy of the system is spread among more and more states, thereby increasing the entropy of the system. The Second Law can be summarized by sayings such as: the entropy of the universe cannot decrease. Some wags have proposed the following summary of the First and Second Laws: the first law says you can't win; the second law says you can't break even. There are various interpretations of the Second Law, one being Boltzmann's H-theorem. James Clerk Maxwell, in his 1871 book Theory of Heat, proposed an experiment showing why the Second Law might just be a temporary condition. This thought experiment came to be called Maxwell's Demon: the demon's work would result in slow molecules on one side of the gated barrier, and fast molecules, hence heat, on the other side
76.
Entropy (arrow of time)
–
Entropy is the only quantity in the physical sciences that requires a particular direction for time, sometimes called an arrow of time. As one goes forward in time, the second law of thermodynamics says, the entropy of an isolated system can increase, but not decrease. Hence, from one perspective, entropy measurement is a way of distinguishing the past from the future. In systems that are not isolated, however, entropy can decrease locally; examples of such systems and phenomena include the formation of typical crystals, the workings of a refrigerator, and living organisms. Entropy, like temperature, is an abstract concept, yet everyone has an intuitive sense of its effects: watching a movie, it is easy to determine whether it is being run forward or in reverse. When run in reverse, broken glasses spontaneously reassemble, smoke goes down a chimney, and wood unburns, cooling the environment. No physical laws are broken in the reverse movie except the second law of thermodynamics, which reflects the time-asymmetry of entropy. An intuitive understanding of the irreversibility of certain physical phenomena allows one to make this determination. By contrast, physical processes occurring at the atomic level, such as mechanics, do not pick out an arrow of time. Going forward in time, an atom might move to the left, whereas going backward in time the same atom might move to the right; the behavior of the atom is not qualitatively different in either case. It would, however, be an improbable event if a macroscopic amount of gas that originally filled a container evenly spontaneously shrank to occupy only half the container. This improbability is linked to the thermodynamic arrow of time. The Second Law of Thermodynamics allows for the entropy to remain the same regardless of the direction of time; if the entropy were constant in either direction of time, there would be no preferred direction. However, the entropy can only be a constant if the system is in the highest possible state of disorder, such as a gas that always was, and always will be, uniformly spread out in its container. 
The existence of an arrow of time implies that the system is highly ordered in one time direction only; thus this law is about the boundary conditions rather than the equations of motion of our world. The Second Law of Thermodynamics is statistical in nature, and therefore its reliability arises from the huge number of particles present in macroscopic systems. T-symmetry is the symmetry of physical laws under a time-reversal transformation; although in restricted contexts one may find this symmetry, the observable universe itself does not show symmetry under time reversal, primarily due to the second law of thermodynamics. The thermodynamic arrow is often linked to the cosmological arrow of time. According to the Big Bang theory, the Universe was initially very hot with energy distributed uniformly; for a system in which gravity is important, such as the universe, this is a low-entropy state
77.
Brownian ratchet
–
Detailed analysis by Feynman and others showed why it cannot actually do this. The device consists of a gear known as a ratchet that rotates freely in one direction but is prevented from rotating in the opposite direction by a pawl. The ratchet is connected by an axle to a paddle wheel that is immersed in a fluid of molecules at temperature T1. The molecules constitute a heat bath in that they undergo random Brownian motion with a mean kinetic energy that is determined by the temperature. The device is imagined as being small enough that the impulse from a single molecular collision can turn the paddle. Although such collisions would tend to turn the rod in either direction with equal probability, the pawl allows the ratchet to rotate in one direction only. The net effect of such random collisions would seem to be that the ratchet rotates continuously in that direction, and the ratchet's motion could then be used to do work on other systems; the energy necessary to do this work apparently would come from the heat bath, without any heat gradient. The reason this fails is that the pawl, since it is at the same temperature as the paddle, will also undergo Brownian motion, bouncing up and down. It therefore will intermittently fail by allowing a ratchet tooth to slip backward under the pawl while it is up. A simple but rigorous proof that no net motion occurs, no matter what shape the teeth are, was given by Magnasco. If, on the other hand, the temperature T2 of the pawl is smaller than T1, the ratchet will indeed move forward. In this case, though, the energy is extracted from the temperature gradient between the two thermal reservoirs, and some waste heat is exhausted into the lower-temperature reservoir by the pawl. In other words, the device functions as a miniature heat engine. Conversely, if T2 is greater than T1, the device will rotate in the opposite direction. Diodes are an electrical analog of the ratchet and pawl, and for the same reason cannot produce useful work by rectifying Johnson noise in a circuit at uniform temperature. 
Millonas as well as Mahato extended the notion to correlation ratchets driven by mean-zero nonequilibrium noise with a nonvanishing correlation function of odd order greater than one. The ratchet and pawl was first discussed as a Second-Law-violating device by Gabriel Lippmann in 1900. In 1912, Polish physicist Marian Smoluchowski gave the first correct qualitative explanation of why the device fails: thermal motion of the pawl allows the ratchet's teeth to slip backwards. Magnasco and Stolovitzky extended this analysis to consider the full ratchet device. A paper in 2000 by Derek Abbott, Bruce R. Davis and Juan Parrondo reanalyzed the problem and extended it to the case of multiple ratchets, showing a link with Parrondo's paradox. Léon Brillouin in 1950 discussed an electrical circuit analogue that uses a rectifier instead of a ratchet; the idea was that the diode would rectify the Johnson-noise thermal current fluctuations produced by the resistor, generating a direct current which could be used to perform work. In the detailed analysis it was shown that the thermal fluctuations within the diode generate a force that cancels the voltage from the rectified current fluctuations
78.
Maxwell's demon
–
In the thought experiment, a demon controls a small door between two chambers of gas. As individual gas molecules approach the door, the demon quickly opens and shuts it so that fast molecules pass into the other chamber, while slow molecules remain in the first chamber. Because faster molecules are hotter, the demon's behavior causes one chamber to warm up as the other cools, thus decreasing entropy. The thought experiment first appeared in a letter Maxwell wrote to Peter Guthrie Tait on 11 December 1867; it appeared again in a letter to John William Strutt in 1871. In his letters and books, Maxwell described the agent opening the door between the chambers as a finite being. William Thomson was the first to use the word demon for Maxwell's concept, in the journal Nature in 1874. The second law is expressed as the assertion that in an isolated system entropy never decreases. Maxwell conceived the thought experiment as a way of furthering the understanding of the second law, and his description of the experiment concludes: He will thus, without expenditure of work, raise the temperature of B and lower that of A. In other words, Maxwell imagines one container divided into two parts, A and B, both filled with the same gas at equal temperatures. Observing the molecules on both sides, an imaginary demon guards a trapdoor between the two parts. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule flies from A to B; likewise, when a slower-than-average molecule from B flies towards the trapdoor, the demon lets it pass from B to A. The average speed of the molecules in B will have increased while in A they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B, contrary to the second law of thermodynamics. A heat engine operating between the thermal reservoirs A and B could then extract useful work from this temperature difference. The essence of the counter-argument is to show, by calculation, that any such demon must generate more entropy in sorting the molecules than it removes from the gas. 
One of the most famous responses to this question was suggested in 1929 by Leó Szilárd. Szilárd pointed out that a real-life Maxwell's demon would need some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy. Since the demon and the gas are interacting, we must consider their total entropy together: the expenditure of energy by the demon causes an increase in the entropy of the demon, which is larger than the lowering of the entropy of the gas. In 1960, Rolf Landauer raised an exception to this argument: he realized that some measuring processes need not increase thermodynamic entropy as long as they are thermodynamically reversible. He suggested these reversible measurements could be used to sort the molecules. However, due to the connection between thermodynamic entropy and information entropy, this also meant that the recorded measurement must not be erased
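Landauer's line of argument leads to a quantitative bound: erasing one bit of recorded information must dissipate at least k_B T ln 2 of heat, so the demon ultimately pays for its memory. A small sketch of that bound (the temperature chosen here is an assumed illustrative value):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(T):
    """Minimum heat dissipated to erase one bit of information: k_B·T·ln 2."""
    return k_B * T * math.log(2)

# At an assumed room temperature of 300 K, the demon pays at least this much
# energy for every bit of its molecular-speed record that it erases:
print(landauer_limit(300.0))  # ≈ 2.87e-21 J
```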
79.
Caloric theory
–
The caloric theory is an obsolete scientific theory that heat consists of a self-repellent fluid called caloric that flows from hotter bodies to colder bodies. Caloric was also thought of as a weightless gas that could pass in and out of pores in solids and liquids. In the history of thermodynamics, the explanations of heat were thoroughly confused with explanations of combustion, especially after J. J. Becher and Georg Ernst Stahl introduced the phlogiston theory of combustion in the 17th century. One version of caloric theory was introduced by Antoine Lavoisier, who developed the explanation of combustion in terms of oxygen in the 1770s. According to this theory, the quantity of caloric is constant throughout the universe, and it flows from warmer to colder bodies. Indeed, Lavoisier was one of the first to use a calorimeter to measure the heat changes during chemical reaction. In the 1780s, some believed that cold was a fluid of its own; Pierre Prévost argued that cold was simply a lack of caloric. Since heat was a material substance in caloric theory, it could neither be created nor destroyed. The introduction of the theory was also influenced by the experiments of Joseph Black related to the thermal properties of materials. Besides the caloric theory, another theory existed in the eighteenth century that could explain the phenomenon of heat: the kinetic theory. The two theories were considered to be equivalent at the time, but kinetic theory was the more modern one, as it used a few ideas from atomic theory. Quite a number of successful explanations can be, and were, made from caloric theory. We can explain the cooling of a cup of tea at room temperature: caloric is self-repelling, and thus slowly flows from regions dense in caloric to regions less dense in caloric. We can explain the expansion of air under heat: caloric is absorbed into the air. Sadi Carnot developed his principle of the Carnot cycle, which still forms the basis of heat engine theory, solely from the caloric viewpoint. 
However, one of the greatest apparent confirmations of the theory was Pierre-Simon Laplace's theoretical correction of Sir Isaac Newton's calculation of the speed of sound: Newton had assumed an isothermal process, while Laplace, a calorist, treated it as adiabatic. A challenge to the theory came from Count Rumford's cannon-boring experiments: he had found that boring a cannon repeatedly does not result in a loss of its ability to produce heat, and therefore no loss of caloric. This suggested that caloric could not be a conserved substance, though the experimental uncertainties in his experiment were widely debated, and his results were not seen as a threat to caloric theory at the time, as this theory was considered to be equivalent to the alternative kinetic theory. In fact, to some of his contemporaries, the results added to the understanding of caloric theory
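Laplace's adiabatic correction multiplies Newton's isothermal estimate √(p/ρ) by √γ. The sketch below uses typical sea-level values for air as assumptions, not data from the article, to show the size of the correction:

```python
import math

# Illustrative sea-level values for air (assumptions, not measured data):
p = 101325.0    # pressure, Pa
rho = 1.204     # density, kg/m^3
gamma = 1.4     # adiabatic index of diatomic air

v_newton = math.sqrt(p / rho)            # isothermal assumption (Newton)
v_laplace = math.sqrt(gamma * p / rho)   # adiabatic correction (Laplace)

print(round(v_newton), round(v_laplace))  # roughly 290 vs 343 m/s
```

With these assumed values, Newton's estimate falls well short of the observed speed of sound, while Laplace's corrected figure lands close to it, which is why the correction was taken as a confirmation of caloric-style reasoning at the time.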