In thermodynamics, an isentropic process is an idealized thermodynamic process that is both adiabatic and reversible. The work transfers of the system are frictionless, and there is no transfer of heat or matter; such an idealized process is useful in engineering as a model of, and basis of comparison for, real processes. The word "isentropic" is occasionally, though not customarily, interpreted in another way, reading it as if its meaning were deducible from its etymology; this contrasts with the customarily used definition. In that occasional reading, it simply means any process in which the entropy of the system remains unchanged. For example, this could occur in a system where the work done on the system includes friction internal to the system, and heat is withdrawn from the system in just the right amount to compensate for the internal friction, so as to leave the entropy unchanged. The second law of thermodynamics states that T_surr dS ≥ δQ, where δQ is the amount of energy the system gains by heating, T_surr is the temperature of the surroundings, and dS is the change in entropy. The equal sign refers to a reversible process, an imagined idealized theoretical limit, never occurring in physical reality, with equal temperatures of system and surroundings.
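The two conditions can be chained into a one-line sketch of the argument, using the notation of the formula above (a sketch of the reversible limit, not a full derivation):

```latex
\[
  T_{\mathrm{surr}}\,\mathrm{d}S \ge \delta Q
  \;\xrightarrow{\text{reversible: } T_{\mathrm{surr}} = T}\;
  \mathrm{d}S = \frac{\delta Q_{\mathrm{rev}}}{T}
  \;\xrightarrow{\text{adiabatic: } \delta Q = 0}\;
  \mathrm{d}S = 0 \quad\Longrightarrow\quad S_2 = S_1 .
\]
```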
For an isentropic process, which by definition is reversible, there is no transfer of energy as heat because the process is adiabatic: δQ = 0. In an irreversible process of transfer of energy as work, entropy is produced within the system. For reversible processes, an isentropic transformation is carried out by thermally "insulating" the system from its surroundings. Temperature is the thermodynamic conjugate variable to entropy, so the conjugate process would be an isothermal process, in which the system is thermally "connected" to a constant-temperature heat bath. The entropy of a given mass does not change during a process that is internally reversible and adiabatic. A process during which the entropy remains constant is called an isentropic process, written Δs = 0 or s_1 = s_2. Some examples of theoretically isentropic thermodynamic devices are pumps, gas compressors, turbines, nozzles and diffusers. Most steady-flow devices operate under adiabatic conditions, and the ideal process for these devices is the isentropic process.
The parameter that describes how efficiently a device approximates a corresponding isentropic device is called the isentropic or adiabatic efficiency. Isentropic efficiency of turbines: η_t = (actual turbine work)/(isentropic turbine work) = W_a/W_s ≅ (h_1 − h_2a)/(h_1 − h_2s). Isentropic efficiency of compressors: η_c = (isentropic compressor work)/(actual compressor work) = W_s/W_a ≅ (h_2s − h_1)/(h_2a − h_1). Isentropic efficiency of nozzles: η_n = (actual KE at nozzle exit)/(isentropic KE at nozzle exit) = V_2a²/V_2s² ≅ (h_1 − h_2a)/(h_1 − h_2s). For all the above equations: h_1 is the specific enthalpy at the entrance state, h_2a is the specific enthalpy at the exit state for the actual process, and h_2s is the specific enthalpy at the exit state for the isentropic process. Note: the isentropic assumptions are only applicable with ideal cycles. Real cycles have inherent losses due to compressor and turbine inefficiencies and the second law of thermodynamics. Real systems are not isentropic, but isentropic behavior is an adequate approximation for many calculation purposes.
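As an illustration of how these definitions are applied, the following Python sketch evaluates the three efficiencies from specific enthalpies; the function names and the numerical enthalpies (in kJ/kg) are made-up placeholders, not values from any particular cycle or reference.

```python
def turbine_efficiency(h1, h2a, h2s):
    """Isentropic turbine efficiency: actual work / isentropic work."""
    return (h1 - h2a) / (h1 - h2s)

def compressor_efficiency(h1, h2a, h2s):
    """Isentropic compressor efficiency: isentropic work / actual work."""
    return (h2s - h1) / (h2a - h1)

def nozzle_efficiency(h1, h2a, h2s):
    """Isentropic nozzle efficiency: actual exit KE / isentropic exit KE."""
    return (h1 - h2a) / (h1 - h2s)

# Placeholder specific enthalpies in kJ/kg (illustrative only).
h1, h2a, h2s = 3230.9, 2500.0, 2407.4
print(f"turbine efficiency ~ {turbine_efficiency(h1, h2a, h2s):.3f}")
```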
In fluid dynamics, an isentropic flow is a fluid flow that is both adiabatic and reversible. That is, no heat is added to the flow, and no energy transformations occur due to friction or dissipative effects.
State of matter
In physics, a state of matter is one of the distinct forms in which matter can exist. Four states of matter are observable in everyday life: solid, liquid, gas and plasma. Many other states are known to exist, such as glass or liquid crystal, and some only exist under extreme conditions, such as Bose–Einstein condensates, neutron-degenerate matter and quark–gluon plasma, which only occur in situations of extreme cold, extreme density or extremely high energy; some other states remain theoretical for now. For a complete list of all exotic states of matter, see the list of states of matter. The distinction is made based on qualitative differences in properties. Matter in the solid state maintains a fixed volume and shape, with component particles close together and fixed into place. Matter in the liquid state maintains a fixed volume, but has a variable shape that adapts to fit its container; its particles move freely among themselves. Matter in the gaseous state has both variable volume and shape, adapting both to fit its container; its particles are neither close together nor fixed in place.
Matter in the plasma state has variable volume and shape, but as well as neutral atoms, it contains a significant number of ions and electrons, both of which can move around freely. The term phase is sometimes used as a synonym for state of matter, but a system can contain several immiscible phases of the same state of matter. In a solid, the constituent particles are packed closely together; the forces between particles are so strong that the particles cannot move freely but can only vibrate. As a result, a solid has a stable, definite shape and a definite volume. Solids can only change their shape under an outside force, as when broken or cut. In crystalline solids, the particles are packed in an ordered, repeating pattern. There are various different crystal structures, and the same substance can have more than one structure. For example, iron has a body-centred cubic structure at temperatures below 912 °C, and a face-centred cubic structure between 912 and 1,394 °C. Ice has fifteen known crystal structures, or fifteen solid phases, which exist at various temperatures and pressures.
Glasses and other non-crystalline, amorphous solids without long-range order are not thermal equilibrium ground states. Solids can be transformed into liquids by melting, and liquids can be transformed into solids by freezing. Solids can also change directly into gases through the process of sublimation, and gases can likewise change directly into solids through deposition. A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a constant volume independent of pressure; the volume is definite if the temperature and pressure are constant. When a solid is heated above its melting point, it becomes liquid, given that the pressure is higher than the triple point of the substance. Intermolecular forces are still important, but the molecules have enough energy to move relative to each other and the structure is mobile; this means that the shape of a liquid is determined by its container. The volume is usually greater than that of the corresponding solid, the best known exception being water, H2O. The highest temperature at which a given liquid can exist is its critical temperature.
A gas is a compressible fluid. Not only will a gas conform to the shape of its container, it will also expand to fill the container. In a gas, the molecules have enough kinetic energy that the effect of intermolecular forces is small, and the typical distance between neighboring molecules is much greater than the molecular size. A gas occupies the entire container in which it is confined. A liquid may be converted to a gas by heating at constant pressure to the boiling point, or else by reducing the pressure at constant temperature. At temperatures below its critical temperature, a gas is also called a vapor, and can be liquefied by compression alone without cooling. A vapor can exist in equilibrium with a liquid, in which case the gas pressure equals the vapor pressure of the liquid. A supercritical fluid is a gas whose temperature and pressure are above the critical temperature and critical pressure respectively. In this state, the distinction between liquid and gas disappears. A supercritical fluid has the physical properties of a gas, but its high density confers solvent properties in some cases, which leads to useful applications.
For example, supercritical carbon dioxide is used to extract caffeine in the manufacture of decaffeinated coffee. Like a gas, plasma does not have a definite shape or volume. Unlike gases, plasmas are electrically conductive, produce magnetic fields and electric currents, and respond to electromagnetic forces. Positively charged nuclei swim in a "sea" of freely-moving dissociated electrons, similar to the way such charges exist in conductive metal, where this electron "sea" allows matter in the plasma state to conduct electricity. A gas is usually converted to a plasma in one of two ways: either from a huge voltage difference between two points, or by exposing it to extremely high temperatures. Heating matter to high temperatures causes electrons to leave the atoms, resulting in the presence of free electrons; this creates a so-called ionised plasma. At very high temperatures, such as those present in stars, it is assumed that essentially all electrons are "free", and that a high-energy plasma is essentially bare nuclei swimming in a sea of electrons.
In statistical mechanics, entropy is an extensive property of a thermodynamic system. It is related to the number Ω of microscopic configurations that are consistent with the macroscopic quantities that characterize the system. Under the assumption that each microstate is equally probable, the entropy S is the natural logarithm of the number of microstates, multiplied by the Boltzmann constant k_B. Formally, S = k_B ln Ω. Macroscopic systems typically have a very large number Ω of possible microscopic configurations. For example, the entropy of an ideal gas is proportional to the number of gas molecules N. Twenty liters of gas at room temperature and atmospheric pressure has N ≈ 6×10^23. At equilibrium, each of the Ω ≈ e^N configurations can be regarded as random and equally likely. The second law of thermodynamics states that the entropy of an isolated system never decreases; such systems spontaneously evolve towards the state with maximum entropy. Non-isolated systems may lose entropy, provided their environment's entropy increases by at least that amount so that the total entropy increases.
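To make the quoted orders of magnitude concrete, a small sketch, assuming as above that N ≈ 6×10^23 and Ω ≈ e^N (so that ln Ω ≈ N), evaluates S = k_B ln Ω:

```python
k_B = 1.380649e-23   # Boltzmann constant, J/K
N = 6e23             # number of gas molecules, order of magnitude from the text

# With Omega ~ e^N we have ln(Omega) ~ N, so the logarithm is taken
# analytically instead of evaluating the astronomically large Omega itself.
S = k_B * N          # S = k_B * ln(Omega) ~ k_B * N
print(f"S ~ {S:.1f} J/K")   # on the order of 10 J/K for this amount of gas
```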
Entropy is a function of the state of the system, so the change in entropy of a system is determined by its initial and final states. In the idealization that a process is reversible, the entropy does not change, while irreversible processes always increase the total entropy. Because it is determined by the number of random microstates, entropy is related to the amount of additional information needed to specify the exact physical state of a system, given its macroscopic specification. For this reason, it is often said that entropy is an expression of the disorder, or randomness, of a system, or of the lack of information about it; the concept of entropy plays a central role in information theory. Boltzmann's constant, and therefore entropy, have dimensions of energy divided by temperature, with the unit of joules per kelvin in the International System of Units. The entropy of a substance is usually given as an intensive property—either entropy per unit mass or entropy per unit amount of substance. The French mathematician Lazare Carnot proposed in his 1803 paper Fundamental Principles of Equilibrium and Movement that in any machine the accelerations and shocks of the moving parts represent losses of moment of activity.
In other words, in any natural process there exists an inherent tendency towards the dissipation of useful energy. Building on this work, in 1824 Lazare's son Sadi Carnot published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric" falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to a cold body; he made the analogy with water falling through a height to drive a water wheel. This was an early insight into the second law of thermodynamics. Carnot based his views of heat partly on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partly on the contemporary views of Count Rumford, who showed that heat could be created by friction, as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, "no change occurs in the condition of the working body".
The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes. In the 1850s and 1860s, the German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave this "change" a mathematical interpretation by questioning the nature of the inherent loss of usable heat when work is done, e.g. heat produced by friction. Clausius described entropy as the transformation-content, i.e. dissipative energy use, of a thermodynamic system or working body of chemical species during a change of state. This was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Scientists such as Ludwig Boltzmann, Josiah Willard Gibbs and James Clerk Maxwell later gave entropy a statistical basis. In 1877 Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy to be proportional to the natural logarithm of the number of microstates such a gas could occupy.
Henceforth, the essential problem in statistical thermodynamics has been, according to Erwin Schrödinger, to determine the distribution of a given amount of energy E over N identical systems. Carathéodory linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability. There are two related definitions of entropy: the thermodynamic definition and the statistical mechanics definition; the classical thermodynamics definition was developed first. In the classical thermodynamics viewpoint, the system is composed of large numbers of constituents and the state of the system is described by the average thermodynamic properties of those constituents.
Thermodynamic databases for pure substances
Thermodynamic databases contain information about thermodynamic properties for substances, the most important being enthalpy and Gibbs free energy. Numerical values of these thermodynamic properties are collected as tables or are calculated from thermodynamic datafiles. Data is expressed as temperature-dependent values for one mole of substance at the standard pressure of 101.325 kPa (1 atm) or 100 kPa (1 bar); both of these definitions for the standard condition for pressure are in use. Thermodynamic data is presented as a table or chart of function values for one mole of a substance. A thermodynamic datafile is a set of equation parameters from which the numerical data values can be calculated. Tables and datafiles are ordinarily presented at a standard pressure of 1 bar or 1 atm, but in the case of steam and other industrially important gases, pressure may be included as a variable. Function values depend on the state of aggregation of the substance, which must be defined for the value to have any meaning. The state of aggregation for thermodynamic purposes is the standard state, sometimes called the reference state, and is defined by specifying certain conditions.
The normal standard state is commonly defined as the most stable physical form of the substance at the specified temperature and a pressure of 1 bar or 1 atm. However, since any non-normal condition could be chosen as a standard state, it must be defined in the context of use. A physical standard state is one that exists for a time sufficient to allow measurements of its properties. The most common physical standard state is one that is thermodynamically stable; it has no tendency to transform into any other physical state. If a substance can exist but is not thermodynamically stable, it is said to be in a metastable state. A non-physical standard state is one whose properties are obtained by extrapolation from a physical state. Metastable liquids and solids are important because some substances can persist and be used in that state indefinitely. Thermodynamic functions that refer to conditions in the normal standard state are designated with a small superscript °. The relationship between certain physical and thermodynamic properties may be described by an equation of state.
It is difficult to measure the absolute amount of any thermodynamic quantity involving the internal energy, since the internal energy of a substance can take many forms, each of which has its own typical temperature at which it begins to become important in thermodynamic reactions. It is therefore the change in these functions that is of most interest. The isobaric change in enthalpy H above the common reference temperature of 298.15 K is called the high temperature heat content, the sensible heat, or the relative high-temperature enthalpy, and is called henceforth the heat content. Different databases designate this term in different ways. All of these terms mean the molar heat content for a substance in its normal standard state above a reference temperature of 298.15 K. Data for gases is for the hypothetical ideal gas at the designated standard pressure. The SI unit for enthalpy is J/mol, and the heat content is a positive number above the reference temperature. The heat content has been measured and tabulated for virtually all known substances, and is commonly expressed as a polynomial function of temperature.
The heat content of an ideal gas is independent of pressure, but the heat content of real gases varies with pressure, hence the need to define the state for the gas and the pressure. Note that for some thermodynamic databases, such as for steam, the reference temperature is 273.15 K. The heat capacity C is the ratio of heat added to the temperature increase. For an incremental isobaric addition of heat, Cp = (∂H/∂T)p; Cp is therefore the slope of a plot of isobaric heat content versus temperature. The SI units for heat capacity are J/(mol·K). When heat is added to a condensed-phase substance, its temperature increases until a phase change temperature is reached. With further addition of heat, the temperature remains constant while the phase transition takes place; the amount of substance that transforms is a function of the amount of heat added. After the transition is complete, adding more heat increases the temperature. In other words, the enthalpy of a substance changes isothermally during the transition; the enthalpy change resulting from a phase transition is designated ΔH.
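As an illustrative sketch of how a datafile turns equation parameters into heat-content values, the snippet below integrates a simple heat-capacity polynomial Cp(T) = a + bT + c/T² upward from the 298.15 K reference temperature; the coefficients are hypothetical placeholders, not taken from any published database.

```python
T_REF = 298.15   # common reference temperature, K

# Hypothetical polynomial coefficients for Cp(T) = a + b*T + c/T**2 in J/(mol*K);
# real databases publish fitted values per substance and phase.
a, b, c = 25.0, 5.0e-3, -1.5e5

def cp(T):
    """Molar isobaric heat capacity, J/(mol*K)."""
    return a + b * T + c / T**2

def heat_content(T):
    """H(T) - H(298.15 K) in J/mol: analytic integral of Cp from T_REF to T."""
    return (a * (T - T_REF)
            + 0.5 * b * (T**2 - T_REF**2)
            - c * (1.0 / T - 1.0 / T_REF))

print(f"H(1000 K) - H(298.15 K) ~ {heat_content(1000.0):.0f} J/mol")
```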
There are four types of enthalpy changes resulting from a phase transition, to wit: Enthalpy of transformation, which applies to the transformations from one solid phase to another, such as the transformation from α-Fe to γ-Fe, designated ΔHtr. Enthalpy of fusion or melting, designated ΔHm. Enthalpy of vaporization, designated ΔHv. Enthalpy of sublimation, designated ΔHs. Cp is infinite at phase transition temperatures, because the enthalpy changes isothermally there. At the Curie temperature, Cp shows a sharp discontinuity. Values of ΔH are usually given for the transition at the normal standard state temperature for the two states, and if so, are designated with a superscript °.
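When the temperature range of interest crosses a phase transition, the tabulated heat content simply includes the transition enthalpy. The sketch below does this for heating ice into liquid water, using rough textbook-level constants (Cp of ice ≈ 38 J/(mol·K), Cp of liquid water ≈ 75 J/(mol·K), ΔHm ≈ 6.01 kJ/mol) purely for illustration, and treating the heat capacities as constant.

```python
CP_ICE = 38.0       # J/(mol*K), approximate constant value
CP_WATER = 75.3     # J/(mol*K), approximate constant value
DH_MELT = 6010.0    # J/mol, approximate enthalpy of fusion at the melting point
T_MELT = 273.15     # K

def heat_content_ice_to_water(T_start, T_end):
    """Approximate H(T_end) - H(T_start) in J/mol, with T_start below the melting point."""
    if T_end <= T_MELT:
        return CP_ICE * (T_end - T_start)
    # Heat the solid to the melting point, melt it isothermally, then heat the liquid.
    return (CP_ICE * (T_MELT - T_start)
            + DH_MELT
            + CP_WATER * (T_end - T_MELT))

print(f"H(298.15 K) - H(250 K) ~ {heat_content_ice_to_water(250.0, 298.15):.0f} J/mol")
```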
Introduction to entropy
Entropy is an important concept in the branch of physics known as thermodynamics. The idea of "irreversibility" is central to the understanding of entropy. Everyone has an intuitive understanding of irreversibility. If one watches a movie of everyday life running forward and in reverse, it is easy to distinguish between the two; the movie running in reverse shows impossible things happening – water jumping out of a glass into a pitcher above it, smoke going down a chimney, water in a glass freezing to form ice cubes, crashed cars reassembling themselves, and so on. The intuitive meaning of expressions such as "you can't unscramble an egg", or "you can't take the cream out of the coffee" is that these are irreversible processes. No matter how long you wait, the cream won't jump out of the coffee into the creamer. In thermodynamics, one says that the "forward" processes – pouring water from a pitcher, smoke going up a chimney, etc. – are "irreversible": they cannot happen in reverse. All real physical processes involving systems in everyday life, with many atoms or molecules, are irreversible.
For an irreversible process in an isolated system, the thermodynamic state variable known as entropy never decreases. In everyday life, there may be processes in which the increase of entropy is practically unobservable, almost zero. In these cases, a movie of the process run in reverse will not seem unlikely. For example, in a 1-second video of the collision of two billiard balls, it will be hard to distinguish the forward and the backward case, because the increase of entropy during that time is small. In thermodynamics, one says that this process is practically "reversible", with an entropy increase that is practically zero. The statement of the fact that the entropy of an isolated system never decreases is known as the second law of thermodynamics. Classical thermodynamics is a physical theory which describes a "system" in terms of the thermodynamic variables of the system or its parts. Some thermodynamic variables are familiar: temperature, volume. Entropy is a thermodynamic variable that is less familiar and not as well understood. A "system" is any region of space containing matter and energy: a cup of coffee, a glass of icewater, an automobile, an egg.
Thermodynamic variables do not give a "complete" picture of the system. Thermodynamics makes no assumptions about the microscopic nature of a system and does not describe, nor take into account, the positions and velocities of the individual atoms and molecules which make up the system. Thermodynamics deals with matter in a macroscopic sense; this is an important quality, because it means that reasoning based on thermodynamics is unlikely to require alteration as new facts about atomic structure and atomic interactions are found. The essence of thermodynamics is embodied in the four laws of thermodynamics. Thermodynamics provides little insight into what is happening at a microscopic level. Statistical mechanics is a physical theory which explains thermodynamics in terms of the possible detailed microscopic situations the system may be in when the thermodynamic variables of the system are known. These are known as "microstates", whereas the description of the system in thermodynamic terms specifies the "macrostate" of the system.
Many different microstates can yield the same macrostate. It is important to understand that statistical mechanics does not define temperature, entropy, etc.; they are defined by thermodynamics. Statistical mechanics serves to explain thermodynamics in terms of the microscopic behavior of the atoms and molecules in the system. In statistical mechanics, the entropy of a system is described as a measure of how many different microstates there are that could give rise to the macrostate that the system is in. The entropy of the system is given by Ludwig Boltzmann's famous equation S = k log W, where S is the entropy of the macrostate, k is Boltzmann's constant, and W is the total number of possible microstates that might yield the macrostate. The concept of irreversibility stems from the idea that if you have a system in an "unlikely" macrostate, it will soon move to the "most likely" macrostate and the entropy S will increase. A glass of warm water with an ice cube in it is unlikely to just happen; it must have been created, and the system will move to a more likely macrostate in which the ice cube is partially or entirely melted and the water is cooled.
Statistical mechanics shows that the number of microstates which give ice and warm water is much smaller than the number of microstates that give the reduced ice mass and cooler water. The concept of thermodynamic entropy arises from the second law of thermodynamics; this law of entropy increase quantifies the reduction in the capacity of a system for change, or determines whether a thermodynamic process may occur. For example, heat always flows from a region of higher temperature to one with lower temperature until temperature becomes uniform. Entropy is calculated in two ways. The first is the entropy change of a system containing a sub-system which undergoes heat transfer to its surroundings; it is based on the macroscopic relationship between heat flow into the sub-system and the temperature at which it occurs, summed over the boundary of that sub-system. The second calculates the absolute entropy of a system based on the microscopic behaviour of its individual particles; this is based on the natural logarithm of the number of microstates possible in a particular macrostate.
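The first, macroscopic route can be illustrated with a few lines of arithmetic: for a small amount of heat flowing from a warmer body to a colder one, summing δQ/T on each side gives a positive total entropy change. The heat quantity and temperatures below are illustrative values only, and each body is treated as a reservoir at fixed temperature.

```python
def total_entropy_change(q, t_hot, t_cold):
    """Entropy change (J/K) when heat q (J) leaves a body at t_hot (K)
    and enters a body at t_cold (K), each treated as a reservoir."""
    ds_hot = -q / t_hot    # the hot body loses entropy
    ds_cold = q / t_cold   # the cold body gains more entropy than the hot one lost
    return ds_hot + ds_cold

# 100 J flowing from warm water at 300 K to ice water at 273 K (illustrative values).
print(f"total dS ~ {total_entropy_change(100.0, 300.0, 273.0):.4f} J/K")  # positive
```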
An isobaric process is a thermodynamic process in which the pressure stays constant: ΔP = 0. The heat transferred to the system does work, but also changes the internal energy of the system. This article uses the chemistry sign convention for work, where positive work is work done on the system. Using this convention, by the first law of thermodynamics, Q = ΔU − W, where W is work, U is internal energy, and Q is heat. Pressure-volume work by the closed system is defined as W = −∫p dV, where Δ means change over the whole process, whereas d denotes a differential. Since pressure is constant, this means that W = −pΔV. Applying the ideal gas law, this becomes W = −nRΔT, assuming that the quantity of gas stays constant, e.g. there is no phase transition during a chemical reaction. According to the equipartition theorem, the change in internal energy is related to the temperature of the system by ΔU = n c_V ΔT, where c_V is the molar specific heat at constant volume. Substituting the last two equations into the first equation produces Q = n c_V ΔT + n R ΔT = n(c_V + R)ΔT = n c_P ΔT, where c_P is the molar specific heat at constant pressure, and c_V + R was brought to c_P by Mayer's relation.
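A minimal numerical sketch of these relations for an ideal gas, assuming a diatomic γ = 7/5 (the c_V = R/(γ − 1) relation used here is quoted in the next paragraph) and illustrative values of n and ΔT:

```python
R = 8.314          # molar gas constant, J/(mol*K)
gamma = 7.0 / 5.0  # diatomic ideal gas (e.g. air), assumed for this example

c_V = R / (gamma - 1.0)   # molar isochoric specific heat
c_P = gamma * c_V          # molar isobaric specific heat (Mayer: c_P = c_V + R)

n = 1.0      # mol, illustrative
dT = 50.0    # K, illustrative temperature rise at constant pressure

W = -n * R * dT     # work done on the gas (chemistry sign convention)
dU = n * c_V * dT   # change in internal energy
Q = dU - W          # first law in the form Q = dU - W

assert abs(Q - n * c_P * dT) < 1e-9   # Q equals n*c_P*dT, as derived above
print(f"Q = {Q:.1f} J, dU = {dU:.1f} J, W = {W:.1f} J")
```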
To find the molar specific heat capacity of the gas involved, the following equations apply for any general gas that is calorically perfect. The property γ is called either the adiabatic index or the heat capacity ratio; some published sources might use k instead of γ. Molar isochoric specific heat: c_V = R/(γ − 1). Molar isobaric specific heat: c_P = γR/(γ − 1). The values for γ are γ = 7/5 for diatomic gases like air and its major components, and γ = 5/3 for monatomic gases like the noble gases. The formulas for the specific heats reduce in these special cases. Monatomic: c_V = (3/2)R and c_P = (5/2)R. Diatomic: c_V = (5/2)R and c_P = (7/2)R. An isobaric process is shown on a P–V diagram as a straight horizontal line connecting the initial and final thermostatic states. If the process moves towards the right, it is an expansion. If the process moves towards the left, it is a compression. The motivation for the specific sign conventions of thermodynamics comes from the early development of heat engines. When designing a heat engine, the goal is to have the system deliver work output.
The source of energy in a heat engine is a heat input. If the volume compresses (ΔV < 0), then W < 0 (in this paragraph W denotes the work done by the gas on its environment, following the heat-engine convention). That is, during isobaric compression the gas does negative work, or the environment does positive work. Restated, the environment does positive work on the gas. If the volume expands (ΔV > 0), then W > 0. That is, during isobaric expansion the gas does positive work, or equivalently, the environment does negative work. Restated, the gas does positive work on the environment. If heat is added to the system, then Q > 0. That is, during isobaric expansion/heating, positive heat is added to the gas, or equivalently, the environment receives negative heat. Restated, the gas receives positive heat from the environment. If the system rejects heat, then Q < 0. That is, during isobaric compression/cooling, negative heat is added to the gas, or equivalently, the environment receives positive heat. Restated, the environment receives positive heat from the gas. An isochoric process is described by the equation Q = ΔU; it would be convenient to have a similar equation for isobaric processes.
Substituting the second equation into the first yields Q = ΔU + Δ(pV) = Δ(U + pV). The quantity U + pV is a state function, so it can be given a name: it is called enthalpy, and is denoted H. Therefore, an isobaric process can be more succinctly described as Q = ΔH. Enthalpy and the isobaric specific heat capacity are useful mathematical constructs for analyzing constant-pressure processes in this way.
The Joule expansion is an irreversible process in thermodynamics in which a volume of gas is kept in one side of a thermally isolated container, with the other side of the container being evacuated. The partition between the two parts of the container is then opened, and the gas fills the whole container. The Joule expansion, treated as a thought experiment involving ideal gases, is a useful exercise in classical thermodynamics. It provides a convenient example for calculating changes in thermodynamic quantities, including the increase in the entropy of the universe that results from this inherently irreversible process. An actual Joule expansion experiment necessarily involves real gases. This type of expansion is named after James Prescott Joule, who used this expansion in 1845 in his study of the mechanical equivalent of heat; but the expansion was known long before Joule, e.g. to John Leslie at the beginning of the 19th century, and it was studied by Joseph-Louis Gay-Lussac in 1807 with results similar to those obtained by Joule.
The Joule expansion should not be confused with the Joule–Thomson effect. The process begins with gas under some pressure P_i, at temperature T_i, confined to one half of a thermally isolated container. The gas occupies an initial volume V_i, mechanically separated from the other part of the container, which has a volume V_0 and is under near-zero pressure. The tap between the two halves of the container is then suddenly opened, and the gas expands to fill the entire container, which has a total volume of V_f = V_i + V_0. A thermometer inserted into the compartment on the left measures the temperature of the gas before and after the expansion. The system in this experiment consists of both compartments. Because this system is thermally isolated, it cannot exchange heat with its surroundings; and since the system's total volume is kept constant, the system cannot perform work on its surroundings. As a result, the change in internal energy, ΔU, is zero. Internal energy consists of internal kinetic energy (due to the motion of the molecules) and internal potential energy (due to intermolecular forces); when the molecular motion is random, temperature is the measure of the internal kinetic energy.
In this case, the internal kinetic energy is called heat. If the chambers have not reached equilibrium, there will be some kinetic energy of flow, which is not detectable by a thermometer. Thus, a change in temperature indicates a change in kinetic energy, and some of this change will not appear as heat until and unless thermal equilibrium is reestablished. When heat is transferred into kinetic energy of flow, this causes a decrease in temperature. In practice, the simple two-chamber free expansion experiment often incorporates a 'porous plug' through which the expanding air must flow to reach the lower-pressure chamber; the purpose of this plug is to inhibit directional flow, thereby quickening the reestablishment of thermal equilibrium. Since the total internal energy does not change, the stagnation of flow in the receiving chamber converts kinetic energy of flow back into random motion, so that the temperature climbs to its predicted value. If the initial air temperature is low enough that non-ideal gas properties cause condensation, some internal energy is converted into latent heat in the liquid products.
Thus, at low temperatures the Joule expansion process provides information about intermolecular forces. If the gas is ideal, both the initial and final conditions follow the ideal gas law, so that initially P_i V_i = n R T_i and, after the tap is opened, P_f V_f = n R T_f. Here n is the number of moles of gas and R is the molar ideal gas constant. Because the internal energy does not change and the internal energy of an ideal gas is solely a function of temperature, the temperature of the gas does not change. This implies that P_i V_i = P_f V_f = n R T_i, so the pressure falls in proportion to the increase in volume.
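A short sketch of the ideal-gas case, with illustrative initial conditions, confirms that the temperature is unchanged and the pressure falls in proportion to the volume; the entropy increase uses the standard ideal-gas result ΔS = nR ln(V_f/V_i), which is stated here as an assumption rather than derived in the text above.

```python
import math

R = 8.314        # molar gas constant, J/(mol*K)

# Illustrative initial state: 1 mol of ideal gas in half of a 50 L container.
n = 1.0          # mol
T_i = 300.0      # K
V_i = 0.025      # m^3 (25 L), the occupied half
V_0 = 0.025      # m^3, the evacuated half
V_f = V_i + V_0

P_i = n * R * T_i / V_i   # ideal gas law in the initial state
T_f = T_i                 # internal energy (and hence T) unchanged for an ideal gas
P_f = n * R * T_f / V_f   # pressure after the expansion

dS = n * R * math.log(V_f / V_i)   # entropy increase of the gas (and of the universe)
print(f"P_i = {P_i/1000:.1f} kPa, P_f = {P_f/1000:.1f} kPa, dS = {dS:.2f} J/K")
```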