{{short description|Physical law for entropy and heat}} {{Thermodynamics|laws}} The '''second law of thermodynamics''' is a [[physical law]] based on universal experience concerning [[heat]] and [[Energy transformation|energy interconversions]]. One simple statement of the law is that heat always moves from hotter objects to colder objects (or "downhill"), unless [[potential energy|potential]] or [[kinetic energy|kinetic]] [[energy]], in whatever form it may take, is supplied to reverse the direction of [[heat flow]]. Another definition is: "Not all heat energy can be converted into [[Work (thermodynamics)|work]] in a [[cyclic process]]."{{cite book |last=Reichl |first=Linda |author-link=Linda Reichl |date=1980 |title=A Modern Course in Statistical Physics |url= |location= |publisher=Edward Arnold |page=9 |isbn=0-7131-2789-9}}Young, H. D; Freedman, R. A. (2004). ''University Physics'', 11th edition. Pearson. p. 764. The second law of thermodynamics in other versions establishes the concept of [[entropy]] as a physical property of a [[thermodynamic system]]. It can be used to predict whether processes are forbidden despite obeying the requirement of [[conservation of energy]] as expressed in the [[first law of thermodynamics]] and provides necessary criteria for [[spontaneous process]]es. The second law may be formulated by the observation that the entropy of [[isolated system]]s left to spontaneous evolution cannot decrease, as they always arrive at a state of [[thermodynamic equilibrium]] where the entropy is highest at the given internal energy.{{cite web|url=http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node38.html#SECTION05224000000000000000|title=5.2 Axiomatic Statements of the Laws of Thermodynamics|publisher=[[Massachusetts Institute of Technology]]|website=www.web.mit.edu}} An increase in the combined entropy of system and surroundings accounts for the [[irreversibility]] of natural processes, often referred to in the concept of the [[arrow of time]].{{Cite book |last=Carroll |first=Sean |title=[[From Eternity to Here: The Quest for the Ultimate Theory of Time]] |date=2010 |isbn=978-0-525-95133-9}} Historically, the second law was an [[Empirical evidence|empirical finding]] that was accepted as an [[axiom]] of [[thermodynamics|thermodynamic theory]]. [[Statistical mechanics]] provides a microscopic explanation of the law in terms of [[probability distribution]]s of the states of large assemblies of [[atom]]s or [[molecule]]s. The second law has been expressed in many ways. Its first formulation, which preceded the proper definition of entropy and was based on [[caloric theory]], is [[Carnot's theorem (thermodynamics)|Carnot's theorem]], formulated by the French scientist [[Nicolas Léonard Sadi Carnot|Sadi Carnot]], who in 1824 showed that the efficiency of conversion of heat to work in a heat engine has an upper limit.{{cite book | last1=Jaffe | first1=R.L. | last2=Taylor | first2=W. | title=The Physics of Energy | publisher=Cambridge University Press |location=Cambridge UK | year=2018 | isbn=978-1-107-01665-1 | url=https://books.google.com/books?id=drZDDwAAQBAJ | page=150,n259, 772, 743}}{{cite web|url=http://news.mit.edu/2010/explained-carnot-0519|title=Explained: The Carnot Limit|author=David L. 
Chandler|date=2011-05-19}} The first rigorous definition of the second law based on the concept of entropy came from German scientist [[Rudolf Clausius]] in the 1850s and included his statement that heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time. The second law of thermodynamics allows the definition of the concept of [[thermodynamic temperature]], relying also on the [[zeroth law of thermodynamics]]. ==Introduction== [[File:Heat flow hot to cold.png|thumb|upright|Heat flowing from hot water to cold water]] The [[first law of thermodynamics]] provides the definition of the [[internal energy]] of a [[thermodynamic system]], and expresses its change for a [[closed system]] in terms of [[Work (thermodynamics)|work]] and [[heat]].[[Max Planck|Planck, M.]] (1897/1903), pp. 40–41. It can be linked to the law of [[conservation of energy]].Munster A. (1970), pp. 8–9, 50–51. The second law is concerned with the direction of natural processes.{{harvnb|Mandl|1988}} It asserts that a natural process runs only in one sense, and is not reversible. For example, when a path for conduction or radiation is made available, heat always flows spontaneously from a hotter to a colder body. Such [[Phenomenon|phenomena]] are accounted for in terms of [[entropy|entropy change]].[[Max Planck|Planck, M.]] (1897/1903), pp. 79–107.Bailyn, M. (1994), Section 71, pp. 113–154. If an isolated system containing distinct subsystems is held initially in internal thermodynamic equilibrium by internal partitioning by impermeable walls between the subsystems, and then some operation makes the walls more permeable, then the system spontaneously evolves to reach a final new internal thermodynamic equilibrium, and its total entropy, S, increases. In a [[Reversible process (thermodynamics)|reversible]] or [[Quasistatic process|quasi-static]], idealized process of transfer of energy as heat to a [[closed system|closed]] thermodynamic system of interest, (which allows the entry or exit of energy – but not transfer of matter), from an auxiliary thermodynamic system, an infinitesimal increment (\mathrm d S) in the entropy of the system of interest is defined to result from an infinitesimal transfer of heat (\delta Q) to the system of interest, divided by the common thermodynamic temperature (T) of the system of interest and the auxiliary thermodynamic system:Bailyn, M. (1994), p. 120. : \mathrm dS = \frac{\delta Q}{T} \,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \text {(closed system; idealized, reversible process)}. Different notations are used for an infinitesimal amount of heat (\delta) and infinitesimal change of entropy (\mathrm d) because entropy is a [[function of state]], while heat, like work, is not. For an actually possible infinitesimal process without exchange of mass with the surroundings, the second law requires that the increment in system entropy fulfills the [[Clausius Theorem|inequality]]{{cite book | last=Mortimer | first=R.G. | title=Physical Chemistry | publisher=Elsevier Science | year=2008 | isbn=978-0-12-370617-1 | url=https://books.google.com/books?id=5CXWAQAACAAJ | page=120}}{{cite book | last=Fermi | first=E. 
| title=Thermodynamics | publisher=Dover Publications | series=Dover Books on Physics | year=2012 | isbn=978-0-486-13485-7 | url=https://books.google.com/books?id=xCjDAgAAQBAJ | page=48}} : \mathrm dS > \frac{\delta Q}{T_\text{surr}} \,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \text {(closed system; actually possible, irreversible process).} This is because a general process for this case (no mass exchange between the system and its surroundings) may include work being done on the system by its surroundings, which can have frictional or viscous effects inside the system, because a chemical reaction may be in progress, or because heat transfer actually occurs only irreversibly, driven by a finite difference between the system temperature ({{math|''T''}}) and the temperature of the surroundings ({{math|''T''surr}}).Adkins, C.J. (1968/1983), p. 75. Note that the equality still applies for pure heat flow (only heat flow, no change in chemical composition and mass), : \mathrm dS = \frac{\delta Q}{T} \,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \text {(actually possible quasistatic irreversible process without composition change).} which is the basis of the accurate determination of the absolute entropy of pure substances from measured heat capacity curves and entropy changes at phase transitions, i.e. by calorimetry.Oxtoby, D. W; Gillis, H.P., [[Laurie Butler|Butler, L. J.]] (2015).''Principles of Modern Chemistry'', Brooks Cole. p. 617. {{ISBN|978-1305079113}} Introducing a set of internal variables \xi to describe the deviation of a thermodynamic system from a chemical equilibrium state in physical equilibrium (with the required well-defined uniform pressure ''P'' and temperature ''T''), one can record the equality : \mathrm dS = \frac{\delta Q}{T} - \frac{1}{T} \sum_{j} \, \Xi_{j} \,\delta \xi_j \,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \text {(closed system; actually possible quasistatic irreversible process).} The second term represents work of internal variables that can be perturbed by external influences, but the system cannot perform any positive work via internal variables. This statement introduces the impossibility of the reversion of evolution of the thermodynamic system in time and can be considered as a formulation of ''the second principle of thermodynamics'' – the formulation, which is, of course, equivalent to the formulation of the principle in terms of entropy.Pokrovskii V.N. (2005) Extended thermodynamics in a discrete-system approach, Eur. J. Phys. vol. 26, 769–781.{{Cite journal | doi=10.1155/2013/906136|title = A Derivation of the Main Relations of Nonequilibrium Thermodynamics| journal=ISRN Thermodynamics| volume=2013| pages=1–9|year = 2013|last1 = Pokrovskii|first1 = Vladimir N.|doi-access=free}} The [[zeroth law of thermodynamics]] in its usual short statement allows recognition that two bodies in a relation of thermal equilibrium have the same temperature, especially that a test body has the same temperature as a reference thermometric body.{{cite book|author=J. S. 
Dugdale|title=Entropy and its Physical Meaning|url=https://archive.org/details/entropyitsphysic00dugd|url-access=limited|publisher=Taylor & Francis|year=1996|isbn=978-0-7484-0569-5|page=[https://archive.org/details/entropyitsphysic00dugd/page/n23 13]|quote=This law is the basis of temperature.}} For a body in thermal equilibrium with another, there are indefinitely many empirical temperature scales, in general respectively depending on the properties of a particular reference thermometric body. The second law allows{{clarify|date=August 2018}} a distinguished temperature scale, which defines an absolute, [[thermodynamic temperature]], independent of the properties of any particular reference thermometric body.[[Mark Zemansky|Zemansky, M.W.]] (1968), pp. 207–209.Quinn, T.J. (1983), p. 8. ==Various statements of the law== The second law of thermodynamics may be expressed in many specific ways,{{cite web|title=Concept and Statements of the Second Law|url=http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node37.html|access-date=2010-10-07 |publisher=web.mit.edu}} the most prominent classical statements{{sfnp|Lieb|Yngvason|1999}} being the statement by [[Rudolf Clausius]] (1854), the statement by [[William Thomson, 1st Baron Kelvin|Lord Kelvin]] (1851), and the statement in axiomatic thermodynamics by [[Constantin Carathéodory]] (1909). These statements cast the law in general physical terms citing the impossibility of certain processes. The Clausius and the Kelvin statements have been shown to be equivalent.{{sfnp|Rao|2004|p=213}} ===Carnot's principle=== The historical origin[[Nicolas Léonard Sadi Carnot|Carnot, S.]] (1824/1986). of the second law of thermodynamics was in [[Nicolas Léonard Sadi Carnot|Sadi Carnot]]'s theoretical analysis of the flow of heat in steam engines (1824). The centerpiece of that analysis, now known as a [[Carnot engine]], is an ideal [[heat engine]] fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibrium. It represents the theoretical maximum efficiency of a heat engine operating between any two given thermal or heat reservoirs at different temperatures. Carnot's principle was recognized by Carnot at a time when the [[caloric theory]] represented the dominant understanding of the nature of heat, before the recognition of the [[first law of thermodynamics]], and before the mathematical expression of the concept of entropy. Interpreted in the light of the first law, Carnot's analysis is physically equivalent to the second law of thermodynamics, and remains valid today. Some samples from his book are: ::...''wherever there exists a difference of temperature, motive power can be produced.''Carnot, S. (1824/1986), p. 51. ::The production of motive power is then due in steam engines not to an actual consumption of caloric, but ''to its transportation from a warm body to a cold body ...''Carnot, S. (1824/1986), p. 46. ::''The motive power of heat is independent of the agents employed to realize it; its quantity is fixed solely by the temperatures of the bodies between which is effected, finally, the transfer of caloric.''Carnot, S. (1824/1986), p. 68. In modern terms, Carnot's principle may be stated more precisely: ::The efficiency of a quasi-static or reversible Carnot cycle depends only on the temperatures of the two heat reservoirs, and is the same, whatever the working substance. 
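Expressed in terms of the absolute thermodynamic temperatures ''T''H and ''T''C of the hot and cold reservoirs (a temperature scale defined later in this article, and a modern restatement rather than Carnot's original caloric-based expression), this maximum efficiency takes the familiar quantitative form
: \eta_\text{max} = 1 - \frac{T_C}{T_H} \,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \text {(reversible Carnot cycle between reservoirs at } T_H > T_C \text{)}.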
A Carnot engine operated in this way is the most efficient possible heat engine using those two temperatures.[[Clifford Truesdell|Truesdell, C.]] (1980), Chapter 5.Adkins, C.J. (1968/1983), pp. 56–58.Münster, A. (1970), p. 11.Kondepudi, D., [[Ilya Prigogine|Prigogine, I.]] (1998), pp.67–75.Lebon, G., Jou, D., Casas-Vázquez, J. (2008), p. 10.Eu, B.C. (2002), pp. 32–35. ===Clausius statement=== The German scientist [[Rudolf Clausius]] laid the foundation for the second law of thermodynamics in 1850 by examining the relation between heat transfer and work.{{sfnp|Clausius|1850}} His formulation of the second law, which was published in German in 1854, is known as the ''Clausius statement'':
Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.{{sfnp|Clausius|1854|p=86}}
The statement by Clausius uses the concept of 'passage of heat'. As is usual in thermodynamic discussions, this means 'net transfer of energy as heat', and does not refer to contributory transfers one way and the other. Heat cannot spontaneously flow from cold regions to hot regions without external work being performed on the system, which is evident from ordinary experience of [[refrigeration]], for example. In a refrigerator, heat is transferred from cold to hot, but only when forced by an external agent, the refrigeration system. ===Kelvin statements=== [[William Thomson, 1st Baron Kelvin|Lord Kelvin]] expressed the second law in several wordings. ::It is impossible for a self-acting machine, unaided by any external agency, to convey heat from one body to another at a higher temperature. ::It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects.{{sfnp|Thomson|1851}} ===Equivalence of the Clausius and the Kelvin statements=== [[Image:Deriving Kelvin Statement from Clausius Statement.svg|thumb|Deriving the Kelvin statement from the Clausius statement]] Suppose there is an engine violating the Kelvin statement: that is, one that drains heat and converts it completely into work, in a cyclic fashion, without any other result. Now pair it with a reversed [[Carnot engine]] as shown in the figure. The [[Heat engine#Efficiency|efficiency]] of a normal heat engine is ''η'', so the reversed heat engine delivers 1/''η'' units of heat to the hotter reservoir for every unit of work it consumes. The net and sole effect of the combined pair of engines is to transfer heat \Delta Q = Q\left(\frac{1}{\eta}-1\right) from the cooler reservoir to the hotter one, which violates the Clausius statement. This is a consequence of the [[first law of thermodynamics]]: for the total system's energy to remain the same, \text{Input}+\text{Output}=0 \implies (Q + Q_c) - \frac{Q}{\eta} = 0 , and therefore Q_c=Q\left( \frac{1}{\eta}-1\right) , where (1) the sign convention for heat is used in which heat entering (leaving) an engine is positive (negative) and (2) \frac{Q}{\eta} is obtained from [[Heat engine#Efficiency|the definition of efficiency]] of the engine when the engine operation is not reversed. Thus a violation of the Kelvin statement implies a violation of the Clausius statement, i.e. the Clausius statement implies the Kelvin statement. We can prove in a similar manner that the Kelvin statement implies the Clausius statement, and hence the two are equivalent. ===Planck's proposition=== Planck offered the following proposition as derived directly from experience. This is sometimes regarded as his statement of the second law, but he regarded it as a starting point for the derivation of the second law. ::It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the raising of a weight and cooling of a heat reservoir.[[Max Planck|Planck, M.]] (1897/1903), p. 86.Roberts, J.K., Miller, A.R. (1928/1960), p. 319. ===Relation between Kelvin's statement and Planck's proposition=== It is almost customary in textbooks to speak of the "Kelvin–Planck statement" of the law, as for example in the text by [[Dirk ter Haar|ter Haar]] and [[Harald Wergeland|Wergeland]].[[Dirk ter Haar|ter Haar, D.]], [[Harald Wergeland|Wergeland, H.]] (1966), p. 17. 
This version, also known as the '''heat engine statement''', of the second law states that ::It is impossible to devise a [[thermodynamic cycle|cyclically]] operating device, the sole effect of which is to absorb energy in the form of heat from a single [[heat reservoir|thermal reservoir]] and to deliver an equivalent amount of [[Work (physics)|work]].{{cite book|last=Rao|first=Y. V. C.|title=Chemical Engineering Thermodynamics|publisher=Universities Press|isbn=978-81-7371-048-3|page=158|year=1997}} ===Planck's statement=== Planck stated the second law as follows. ::Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased. In the limit, i.e. for reversible processes, the sum of the entropies remains unchanged.[[Max Planck|Planck, M.]] (1897/1903), p. 100.[[Max Planck|Planck, M.]] (1926), p. 463, translation by Uffink, J. (2003), p. 131.Roberts, J.K., Miller, A.R. (1928/1960), p. 382. This source is partly verbatim from Planck's statement, but does not cite Planck. This source calls the statement the principle of the increase of entropy. Rather like Planck's statement is that of Uhlenbeck and Ford for ''irreversible phenomena''. ::... in an irreversible or spontaneous change from one equilibrium state to another (as for example the equalization of temperature of two bodies A and B, when brought in contact) the entropy always increases.[[George Uhlenbeck|Uhlenbeck, G.E.]], Ford, G.W. (1963), p. 16. ===Principle of Carathéodory=== [[Constantin Carathéodory]] formulated thermodynamics on a purely mathematical axiomatic foundation. His statement of the second law is known as the Principle of Carathéodory, which may be formulated as follows:[[Constantin Carathéodory|Carathéodory, C.]] (1909).
In every neighborhood of any state S of an adiabatically enclosed system there are states inaccessible from S.Buchdahl, H.A. (1966), p. 68.
With this formulation, he described the concept of [[adiabatic accessibility]] for the first time and provided the foundation for a new subfield of classical thermodynamics, often called [[Ruppeiner geometry|geometrical thermodynamics]]. It follows from Carathéodory's principle that quantity of energy quasi-statically transferred as heat is a holonomic [[process function]], in other words, \delta Q=TdS.{{cite book |last=Sychev |first=V. V. |title=The Differential Equations of Thermodynamics |year=1991 |publisher=Taylor & Francis |isbn=978-1-56032-121-7}} Though it is almost customary in textbooks to say that Carathéodory's principle expresses the second law and to treat it as equivalent to the Clausius or to the Kelvin-Planck statements, such is not the case. To get all the content of the second law, Carathéodory's principle needs to be supplemented by Planck's principle, that isochoric work always increases the internal energy of a closed system that was initially in its own internal thermodynamic equilibrium.Münster, A. (1970), p. 45.{{sfnp|Lieb|Yngvason|1999|p=49}}[[Max Planck|Planck, M.]] (1926).Buchdahl, H.A. (1966), p. 69. {{clarify|date=February 2014}} ===Planck's principle=== In 1926, [[Max Planck]] wrote an important paper on the basics of thermodynamics.Uffink, J. (2003), pp. 129–132. He indicated the principle ::The internal energy of a closed system is increased by an adiabatic process, throughout the duration of which, the volume of the system remains constant.{{sfnp |Lieb|Yngvason|1999|p=49}} This formulation does not mention heat and does not mention temperature, nor even entropy, and does not necessarily implicitly rely on those concepts, but it implies the content of the second law. A closely related statement is that "Frictional pressure never does positive work."[[Clifford Truesdell|Truesdell, C.]], Muncaster, R.G. (1980). ''Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas, Treated as a Branch of Rational Mechanics'', Academic Press, New York, {{ISBN|0-12-701350-4}}, p. 15. Planck wrote: "The production of heat by friction is irreversible."[[Max Planck|Planck, M.]] (1897/1903), p. 81.[[Max Planck|Planck, M.]] (1926), p. 457, Wikipedia editor's translation. Not mentioning entropy, this principle of Planck is stated in physical terms. It is very closely related to the Kelvin statement given just above.Lieb, E.H., Yngvason, J. (2003), p. 149. It is relevant that for a system at constant volume and [[Mole (unit)|mole numbers]], the entropy is a monotonic function of the internal energy. Nevertheless, this principle of Planck is not actually Planck's preferred statement of the second law, which is quoted above, in a previous sub-section of the present section of this present article, and relies on the concept of entropy. A statement that in a sense is complementary to Planck's principle is made by Borgnakke and Sonntag. They do not offer it as a full statement of the second law: ::... there is only one way in which the entropy of a [closed] system can be decreased, and that is to transfer heat from the system.Borgnakke, C., Sonntag., R.E. (2009), p. 304. Differing from Planck's just foregoing principle, this one is explicitly in terms of entropy change. Removal of matter from a system can also decrease its entropy. 
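A minimal numerical sketch of Planck's principle (the values below are illustrative assumptions, not taken from the sources cited above): isochoric frictional work, such as stirring, done adiabatically on an ideal gas in a rigid container raises both the internal energy and the entropy of the gas, and no adiabatic process can undo the change.
<syntaxhighlight lang="python">
import math

# Minimal sketch of Planck's principle (illustrative values, not from the cited sources):
# isochoric, adiabatic frictional work on an ideal gas raises its internal energy and entropy.
n = 1.0       # amount of gas, mol (assumed)
Cv = 12.5     # molar heat capacity at constant volume, J/(mol*K), roughly (3/2)R for a monatomic gas
T1 = 300.0    # initial temperature, K (assumed)
W = 500.0     # frictional (stirring) work done on the system, J (assumed)

T2 = T1 + W / (n * Cv)            # first law with no heat exchange: dU = W, so the temperature rises
dS = n * Cv * math.log(T2 / T1)   # entropy change of an ideal gas at constant volume

print(T2)  # ~340 K
print(dS)  # ~1.56 J/K > 0: the entropy of the closed system has increased
</syntaxhighlight>
This is consistent with the Borgnakke and Sonntag remark above: the only way to reduce the entropy of the closed system again is to transfer heat out of it (or to remove matter).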
===Statement for a system that has a known expression of its internal energy as a function of its extensive state variables=== The second law has been shown to be equivalent to the [[internal energy]] ''U'' being a weakly [[convex function]], when written as a function of extensive properties (mass, volume, entropy, ...).{{cite book |last1=van Gool |first1=W. |last2=Bruggink |first2=J.J.C. (Eds) |title=Energy and time in the economic and physical sciences |publisher=North-Holland |year=1985 |pages=41–56 |isbn=978-0-444-87748-2}} {{clarify|date=February 2014}} ==Corollaries== ===Perpetual motion of the second kind=== {{main article|Perpetual motion}} Before the establishment of the second law, many people who were interested in inventing a perpetual motion machine had tried to circumvent the restrictions of the [[first law of thermodynamics]] by extracting the massive internal energy of the environment as the power of the machine. Such a machine is called a "perpetual motion machine of the second kind". The second law declared the impossibility of such machines. ===Carnot theorem=== [[Carnot theorem (thermodynamics)|Carnot's theorem]] (1824) is a principle that limits the maximum efficiency for any possible engine. The maximum efficiency depends only on the temperatures of the hot and cold thermal reservoirs. Carnot's theorem states: *All irreversible heat engines between two heat reservoirs are less efficient than a [[Carnot engine]] operating between the same reservoirs. *All reversible heat engines between two heat reservoirs are exactly as efficient as a Carnot engine operating between the same reservoirs. In his ideal model, the caloric converted into work could be reinstated by reversing the motion of the cycle, a concept subsequently known as [[thermodynamic reversibility]]. Carnot, however, further postulated that some caloric is lost, not being converted to mechanical work. Hence, no real heat engine could realize the [[Carnot cycle]]'s reversibility and was condemned to be less efficient. Though formulated in terms of caloric (see the obsolete [[caloric theory]]), rather than [[entropy]], this was an early insight into the second law. ===Clausius inequality=== The [[Clausius theorem]] (1854) states that in a cyclic process : \oint \frac{\delta Q}{T_\text{surr}} \leq 0. The equality holds in the reversible case[http://scienceworld.wolfram.com/physics/ClausiusTheorem.html ''Clausius theorem''] at [[Wolfram Research]] and the strict inequality holds in the irreversible case, with ''T''surr as the temperature of the heat bath (surroundings) here. The reversible case is used to introduce the state function [[entropy]]. This is because, by the definition of a state function, its variation over any cyclic process is zero. ===Thermodynamic temperature=== {{main article|Thermodynamic temperature}} For an arbitrary heat engine, the efficiency is: {{NumBlk|: |\eta = \frac {|W_n|}{q_H} = \frac{q_H+q_C}{q_H} = 1 - \frac{|q_C|}{|q_H|}|{{EquationRef|1}}}} where ''W''n is the net work done by the engine per cycle, ''q''''H'' > 0 is the heat added to the engine from a hot reservoir, and ''q''''C'' = - |''q''''C''| < 0{{cite book |last=Planck |first=M. |title=Treatise on Thermodynamics |page=§90 |quote=eq.(39) & (40) |publisher=Dover Publications |year=1945}} is waste [[Heat|heat given off]] to a cold reservoir from the engine. Thus the efficiency depends only on the ratio |''q''''C''| / |''q''''H''|. 
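The bookkeeping in equation (1) can be checked with a brief numerical sketch (the figures below are illustrative assumptions, not data from the cited sources): given the heat absorbed from the hot reservoir and the waste heat rejected per cycle, the net work and the efficiency follow directly.
<syntaxhighlight lang="python">
# Illustrative check of equation (1); the numbers are assumed, not from the cited sources.
q_H = 1000.0   # heat added to the engine from the hot reservoir per cycle, J (q_H > 0)
q_C = -600.0   # waste heat given off to the cold reservoir per cycle, J (q_C < 0)

W_net = q_H + q_C                 # net work per cycle, from the first law applied to one full cycle
eta = 1.0 - abs(q_C) / abs(q_H)   # efficiency as in equation (1)

print(W_net)  # 400.0 J of net work per cycle
print(eta)    # 0.4; the efficiency depends only on the ratio |q_C| / |q_H|
</syntaxhighlight>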
[[Carnot theorem (thermodynamics)|Carnot's theorem]] states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures ''T''H and ''T''C must have the same efficiency, that is to say, the efficiency is a function of temperatures only: {{NumBlk|:|\frac{|q_C|}{|q_H|} = f(T_H,T_C).|{{EquationRef|2}}}} In addition, a reversible heat engine operating between temperatures ''T''1 and ''T''3 must have the same efficiency as one consisting of two cycles, one between ''T''1 and another (intermediate) temperature ''T''2, and the second between ''T''2 and ''T''3, where ''T''1 > ''T''2 > ''T''3. This is because, if a part of the two-cycle engine is hidden such that it is recognized as an engine between the reservoirs at the temperatures ''T''1 and ''T''3, then the efficiency of this engine must be the same as that of the other engine operating between the same reservoirs. If we choose the engines such that the work done by the one-cycle engine and the two-cycle engine is the same, then the efficiency of each heat engine is written as below. : \eta _1 = 1 - \frac{|q_3|}{|q_1|} = 1 - f(T_1, T_3), : \eta _2 = 1 - \frac{|q_2|}{|q_1|} = 1 - f(T_1, T_2), : \eta _3 = 1 - \frac{|q_3|}{|q_2|} = 1 - f(T_2, T_3). Here, engine 1 is the one-cycle engine, and engines 2 and 3 together make up the two-cycle engine with the intermediate reservoir at ''T''2. We have also used the fact that the heat q_2 passes through the intermediate thermal reservoir at T_2 without losing any of its energy. This fact can be proved as follows. : \begin{align} & {{\eta }_{2}}=1-\frac{|{{q}_{2}}|}{|{{q}_{1}}|}\to |{{w}_{2}}|=|{{q}_{1}}|-|{{q}_{2}}|,\\ & {{\eta }_{3}}=1-\frac{|{{q}_{3}}|}{|{{q}_{2}}^{*}|}\to |{{w}_{3}}|=|{{q}_{2}}^{*}|-|{{q}_{3}}|,\\ & |{{w}_{2}}|+|{{w}_{3}}|=(|{{q}_{1}}|-|{{q}_{2}}|)+(|{{q}_{2}}^{*}|-|{{q}_{3}}|),\\ & {{\eta}_{1}}=1-\frac{|{{q}_{3}}|}{|{{q}_{1}}|}=\frac{(|{{w}_{2}}|+|{{w}_{3}}|)}{|{{q}_{1}}|}=\frac{(|{{q}_{1}}|-|{{q}_{2}}|)+(|{{q}_{2}}^{*}|-|{{q}_{3}}|)}{|{{q}_{1}}|}.\\ \end{align} For the last equation to be consistent, the heat q_2 flowing from engine 2 into the intermediate reservoir must be equal to the heat q_2^* flowing out of the reservoir into engine 3. Then : f(T_1,T_3) = \frac{|q_3|}{|q_1|} = \frac{|q_2| |q_3|} {|q_1| |q_2|} = f(T_1,T_2)f(T_2,T_3). Now consider the case where T_1 is a fixed reference temperature: the temperature of the [[triple point]] of water, so that T_1 = 273.16 K. Then for any ''T''2 and ''T''3, : f(T_2,T_3) = \frac{f(T_1,T_3)}{f(T_1,T_2)} = \frac{273.16 \text{ K} \cdot f(T_1,T_3)}{273.16 \text{ K} \cdot f(T_1,T_2)}. Therefore, if the thermodynamic temperature ''T''* is defined by : T^* = 273.16 \text{ K} \cdot f(T_1,T) then the function ''f'', viewed as a function of thermodynamic temperatures, is simply : f(T_2,T_3) = f(T_2^*,T_3^*) = \frac{T_3^*}{T_2^*}, and the reference temperature ''T''1* = 273.16 K × ''f''(''T''1,''T''1) = 273.16 K. (Any reference temperature and any positive numerical value could be used{{snd}}the choice here corresponds to the [[Kelvin]] scale.) ===Entropy=== {{main article|Entropy (classical thermodynamics)}} According to the [[Clausius theorem|Clausius equality]], for a ''reversible process'' : \oint \frac{\delta Q}{T}=0 That means the line integral \int_L \frac{\delta Q}{T} is path independent for reversible processes. 
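For instance, for a reversible Carnot cycle, heat is exchanged only along the two isothermal branches, and with the result of the previous subsection that |''q''C| / |''q''H| = ''T''C/''T''H, the two contributions cancel:
: \oint \frac{\delta Q}{T} = \frac{q_H}{T_H} + \frac{q_C}{T_C} = \frac{|q_H|}{T_H} - \frac{|q_C|}{T_C} = 0.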
So we can define a state function S called entropy, which for a reversible process or for pure heat transfer satisfies : dS = \frac{\delta Q}{T} Integrating this formula yields only differences of entropy. To obtain the absolute value, we need the [[third law of thermodynamics]], which states that ''S'' = 0 at [[absolute zero]] for perfect crystals. For any irreversible process, since entropy is a state function, we can always connect the initial and terminal states with an imaginary reversible process and integrate along that path to calculate the difference in entropy. Now reverse the reversible process and combine it with that irreversible process. Applying the [[Clausius inequality]] to this loop, with ''T''surr as the temperature of the surroundings, : -\Delta S+\int\frac{\delta Q}{T_{surr}}=\oint\frac{\delta Q}{T_{surr}} \leq 0 Thus, : \Delta S \ge \int \frac{\delta Q}{T_{surr}} where the equality holds if the transformation is reversible. Notice that if the process is an [[adiabatic process]], then \delta Q=0, so \Delta S \ge 0. ===Energy, available useful work=== {{See also|Exergy}} An important and revealing idealized special case is to consider applying the second law to the scenario of an isolated system (called the total system or universe), made up of two parts: a sub-system of interest, and the sub-system's surroundings. These surroundings are imagined to be so large that they can be considered as an ''unlimited'' heat reservoir at temperature ''TR'' and pressure ''PR'' {{snd}}so that no matter how much heat is transferred to (or from) the sub-system, the temperature of the surroundings will remain ''TR''; and no matter how much the volume of the sub-system expands (or contracts), the pressure of the surroundings will remain ''PR''. Whatever changes to ''dS'' and ''dSR'' occur in the entropies of the sub-system and the surroundings individually, according to the second law the entropy ''S''tot of the isolated total system must not decrease: : dS_{\mathrm{tot}}= dS + dS_R \ge 0 According to the [[first law of thermodynamics]], the change ''dU'' in the internal energy of the sub-system is the sum of the heat ''δq'' added to the sub-system, ''less'' any work ''δw'' done ''by'' the sub-system, ''plus'' any net chemical energy entering the sub-system ''d'' Σ''μiRNi'', so that: : dU = \delta q - \delta w + d\left(\sum \mu_{iR}N_i\right) where ''μ''''iR'' are the [[chemical potential]]s of chemical species in the external surroundings. Now the heat leaving the reservoir and entering the sub-system is : \delta q = T_R (-dS_R) \le T_R dS where we have first used the definition of entropy in classical thermodynamics (alternatively, in statistical thermodynamics, the relation between entropy change, temperature and absorbed heat can be derived); and then the Second Law inequality from above. 
It therefore follows that any net work ''δw'' done by the sub-system must obey : \delta w \le - dU + T_R dS + \sum \mu_{iR} dN_i It is useful to separate the work ''δw'' done by the subsystem into the ''useful'' work ''δwu'' that can be done ''by'' the sub-system, over and beyond the work ''pR dV'' done merely by the sub-system expanding against the surrounding external pressure, giving the following relation for the useful work (exergy) that can be done: : \delta w_u \le -d \left(U - T_R S + p_R V - \sum \mu_{iR} N_i \right) It is convenient to define the right-hand-side as the exact derivative of a thermodynamic potential, called the ''availability'' or ''[[exergy]]'' ''E'' of the subsystem, : E = U - T_R S + p_R V - \sum \mu_{iR} N_i The Second Law therefore implies that for any process which can be considered as divided simply into a subsystem, and an unlimited temperature and pressure reservoir with which it is in contact, : dE + \delta w_u \le 0 i.e. the change in the subsystem's exergy plus the useful work done ''by'' the subsystem (or, the change in the subsystem's exergy less any work, additional to that done by the pressure reservoir, done ''on'' the system) must be less than or equal to zero. In sum, if a proper ''infinite-reservoir-like'' reference state is chosen as the system surroundings in the real world, then the second law predicts a decrease in ''E'' for an irreversible process and no change for a reversible process. : dS_{tot} \ge 0 is equivalent to dE + \delta w_u \le 0 This expression together with the associated reference state permits a [[design engineer]] working at the macroscopic scale (above the [[thermodynamic limit]]) to utilize the second law without directly measuring or considering entropy change in a total isolated system. (''Also, see [[process engineer]]''). Those changes have already been considered by the assumption that the system under consideration can reach equilibrium with the reference state without altering the reference state. An efficiency for a process or collection of processes that compares it to the reversible ideal may also be found (''See [[Exergy efficiency|second law efficiency]]''.) This approach to the second law is widely utilized in [[engineering]] practice, [[environmental accounting]], [[systems ecology]], and other disciplines. ==Direction of spontaneous processes== The second law determines whether a proposed physical or chemical process is forbidden or may occur spontaneously. For [[isolated system]]s, no energy is provided by the surroundings and the second law requires that the entropy of the system alone must increase: Δ''S'' > 0. Examples of spontaneous physical processes in isolated systems include the following: * 1) [[Heat transfer|Heat can be transferred]] from a region of higher temperature to a lower temperature (but not the reverse). * 2) Mechanical energy can be converted to thermal energy (but not the reverse). * 3) A solute can move from a region of higher concentration to a region of lower concentration (but not the reverse). However, for some non-isolated systems which can exchange energy with their surroundings, the surroundings exchange enough heat with the system, or do sufficient work on the system, so that the processes occur in the opposite direction. This is possible provided the total entropy change of the system plus the surroundings is positive as required by the second law: Δ''S''tot = Δ''S'' + Δ''S''R > 0. 
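A minimal sketch of this criterion (the temperatures and heat below are assumed, illustrative values): for pure heat flow between two reservoirs, Δ''S''tot is positive only when heat flows from the hotter to the colder body, which is why the reverse direction requires external work or some other compensating change, as in the examples discussed next.
<syntaxhighlight lang="python">
# Illustrative check of the criterion dS_tot = dS + dS_R > 0 for pure heat flow
# between two reservoirs; the numbers are assumed, not from the cited sources.
def total_entropy_change(q, T_source, T_sink):
    """Total entropy change (J/K) when heat q (J) flows from T_source to T_sink (K)."""
    return -q / T_source + q / T_sink

print(total_entropy_change(100.0, 500.0, 300.0))  # hot -> cold: +0.13 J/K > 0, may occur spontaneously
print(total_entropy_change(100.0, 300.0, 500.0))  # cold -> hot: -0.13 J/K < 0, forbidden on its own
</syntaxhighlight>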
For the three examples given above: * 1) Heat can be transferred from a region of lower temperature to a higher temperature in a [[refrigerator]] or in a [[heat pump]]. These machines must provide sufficient work to the system. * 2) Thermal energy can be converted to mechanical work in a [[heat engine]], if sufficient heat is also expelled to the surroundings. * 3) A solute can move from a region of lower concentration to a region of higher concentration in the biochemical process of [[active transport]], if sufficient work is provided by a concentration gradient of a chemical such as [[Adenosine triphosphate|ATP]] or by an [[electrochemical gradient]]. ===The second law in chemical thermodynamics=== For a [[spontaneous process|spontaneous chemical process]] in a closed system at constant temperature and pressure without non-''PV'' work, the Clausius inequality Δ''S'' > ''Q/T''surr transforms into a condition for the change in [[Gibbs free energy]] : \Delta G < 0 or d''G'' < 0. For a similar process at constant temperature and volume, the change in [[Helmholtz free energy]] must be negative, \Delta A < 0 . Thus, a negative value of the change in free energy (''G'' or ''A'') is a necessary condition for a process to be spontaneous. This is the most useful form of the second law of thermodynamics in chemistry, where free-energy changes can be calculated from tabulated enthalpies of formation and standard molar entropies of reactants and products. The chemical equilibrium condition at constant ''T'' and ''p'' without electrical work is d''G'' = 0. ==History== {{See also|History of entropy}} [[File:Sadi Carnot.jpeg|thumb|upright|Nicolas Léonard Sadi Carnot in the traditional uniform of a student of the [[École Polytechnique]]]] The first theory of the conversion of heat into mechanical work is due to [[Nicolas Léonard Sadi Carnot]] in 1824. He was the first to realize correctly that the efficiency of this conversion depends on the difference of temperature between an engine and its surroundings. Recognizing the significance of [[James Prescott Joule]]'s work on the conservation of energy, [[Rudolf Clausius]] was the first to formulate the second law during 1850, in this form: heat does not flow ''spontaneously'' from cold to hot bodies. While common knowledge now, this was contrary to the [[caloric theory]] of heat popular at the time, which considered heat as a fluid. From there he was able to infer the principle of Sadi Carnot and the definition of entropy (1865). Established during the 19th century, the [[Kelvin-Planck statement|Kelvin-Planck statement of the Second Law]] says, "It is impossible for any device that operates on a [[cyclic process|cycle]] to receive heat from a single [[heat reservoir|reservoir]] and produce a net amount of work." This was shown to be equivalent to the statement of Clausius. The [[ergodic hypothesis]] is also important for the [[Boltzmann]] approach. It says that, over long periods of time, the time spent in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e. that all accessible microstates are equally probable over a long period of time. Equivalently, it says that time average and average over the statistical ensemble are the same. There is a traditional doctrine, starting with Clausius, that entropy can be understood in terms of molecular 'disorder' within a [[Macroscopic bodies|macroscopic system]]. This doctrine is obsolescent.Denbigh, K.G., Denbigh, J.S. (1985). 
''Entropy in Relation to Incomplete Knowledge'', Cambridge University Press, Cambridge UK, {{ISBN|0-521-25677-1}}, pp. 43–44.Grandy, W.T., Jr (2008). ''Entropy and the Time Evolution of Macroscopic Systems'', Oxford University Press, Oxford, {{ISBN|978-0-19-954617-6}}, pp. 55–58.[http://entropysite.oxy.edu Entropy Sites — A Guide] Content selected by [[Frank L. Lambert]] ===Account given by Clausius=== [[File:Clausius-1.jpg|thumb|upright|Rudolf Clausius]] In 1865, the German physicist [[Rudolf Clausius]] stated what he called the "second fundamental theorem in the [[mechanical theory of heat]]" in the following form:{{sfnp|Clausius|1867}} : \int \frac{\delta Q}{T} = -N where ''Q'' is heat, ''T'' is temperature and ''N'' is the "equivalence-value" of all uncompensated transformations involved in a cyclical process. Later that year, Clausius would come to define "equivalence-value" as entropy. On the heels of this definition, that same year, the most famous version of the second law was read in a presentation at the Philosophical Society of Zurich on April 24, at the end of which Clausius concludes:
The entropy of the universe tends to a maximum.
This statement is the best-known phrasing of the second law. Because of the looseness of its language, e.g. [[universe]], as well as lack of specific conditions, e.g. open, closed, or isolated, many people take this simple statement to mean that the second law of thermodynamics applies virtually to every subject imaginable. This is not true; this statement is only a simplified version of a more extended and precise description. In terms of time variation, the mathematical statement of the second law for an [[isolated system]] undergoing an arbitrary transformation is: : \frac{dS}{dt} \ge 0 where : ''S'' is the entropy of the system and : ''t'' is [[time]]. The equality sign applies after equilibration. An alternative way of formulating of the second law for isolated systems is: : \frac{dS}{dt} = \dot S_{i} with \dot S_{i} \ge 0 with \dot S_{i} the sum of the rate of [[entropy production]] by all processes inside the system. The advantage of this formulation is that it shows the effect of the entropy production. The rate of entropy production is a very important concept since it determines (limits) the efficiency of thermal machines. Multiplied with ambient temperature T_{a} it gives the so-called dissipated energy P_{diss}=T_{a}\dot S_{i}. The expression of the second law for closed systems (so, allowing heat exchange and moving boundaries, but not exchange of matter) is: : \frac{dS}{dt} = \frac{\dot Q}{T}+\dot S_{i} with \dot S_{i} \ge 0 Here : \dot Q is the heat flow into the system : T is the temperature at the point where the heat enters the system. The equality sign holds in the case that only reversible processes take place inside the system. If irreversible processes take place (which is the case in real systems in operation) the >-sign holds. If heat is supplied to the system at several places we have to take the algebraic sum of the corresponding terms. For open systems (also allowing exchange of matter): : \frac{dS}{dt} = \frac{\dot Q}{T}+\dot S+\dot S_{i} with \dot S_{i} \ge 0 Here \dot S is the flow of entropy into the system associated with the flow of matter entering the system. It should not be confused with the time derivative of the entropy. If matter is supplied at several places we have to take the algebraic sum of these contributions. ==Statistical mechanics== [[Statistical mechanics]] gives an explanation for the second law by postulating that a material is composed of atoms and molecules which are in constant motion. A particular set of positions and velocities for each particle in the system is called a [[microstate (statistical mechanics)|microstate]] of the system and because of the constant motion, the system is constantly changing its microstate. Statistical mechanics postulates that, in equilibrium, each microstate that the system might be in is equally likely to occur, and when this assumption is made, it leads directly to the conclusion that the second law must hold in a statistical sense. That is, the second law will hold on average, with a statistical variation on the order of 1/{{radic|''N''}} where ''N'' is the number of particles in the system. For everyday (macroscopic) situations, the probability that the second law will be violated is practically zero. However, for systems with a small number of particles, thermodynamic parameters, including the entropy, may show significant statistical deviations from that predicted by the second law. Classical thermodynamic theory does not deal with these statistical variations. 
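The 1/{{radic|''N''}} scaling mentioned above can be made concrete with a short order-of-magnitude sketch (an illustrative estimate, not a derivation): relative statistical deviations are appreciable for a handful of particles but utterly negligible for macroscopic particle numbers.
<syntaxhighlight lang="python">
import math

# Order-of-magnitude sketch of the relative size of statistical fluctuations,
# which scale as 1/sqrt(N) with the number of particles N.
for N in (30, 1_000, 6.022e23):        # a few particles, a small cluster, about one mole
    print(N, 1.0 / math.sqrt(N))
# ~0.18 for 30 particles, ~0.03 for 1000, ~1.3e-12 for a mole:
# macroscopically observable violations of the second law are, in practice, never seen.
</syntaxhighlight>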
==Derivation from statistical mechanics== {{Further|H-theorem}} The first mechanical argument of the [[Kinetic theory of gases]] that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium was due to [[James Clerk Maxwell]] in 1860;{{Cite journal | last1 = Gyenis | first1 = Balazs | doi = 10.1016/j.shpsb.2017.01.001 | title = Maxwell and the normal distribution: A colored story of probability, independence, and tendency towards equilibrium | journal = Studies in History and Philosophy of Modern Physics | volume = 57 | pages = 53–65 | year = 2017| arxiv = 1702.01411 | bibcode = 2017SHPMP..57...53G | s2cid = 38272381 }} [[Ludwig Boltzmann]] with his [[H-theorem]] of 1872 also argued that due to collisions gases should over time tend toward the [[Maxwell–Boltzmann distribution]]. Due to [[Loschmidt's paradox]], derivations of the Second Law have to make an assumption regarding the past, namely that the system is [[Correlation and dependence|uncorrelated]] at some time in the past; this allows for simple probabilistic treatment. This assumption is usually thought as a [[boundary condition]], and thus the second Law is ultimately a consequence of the initial conditions somewhere in the past, probably at the beginning of the universe (the [[Big Bang]]), though [[Boltzmann brain|other scenarios]] have also been suggested.{{cite journal|last=Hawking|first=SW|title=Arrow of time in cosmology|journal=Phys. Rev. D|year=1985|volume=32|issue=10|pages=2489–2495|doi=10.1103/PhysRevD.32.2489|pmid=9956019|bibcode = 1985PhRvD..32.2489H }}{{cite book | last = Greene | first = Brian | author-link = Brian Greene | title = The Fabric of the Cosmos | url = https://archive.org/details/fabricofcosmossp00gree | url-access = registration | publisher = Alfred A. Knopf | year = 2004 | page = [https://archive.org/details/fabricofcosmossp00gree/page/171 171] | isbn = 978-0-375-41288-2}}{{cite journal|last=Lebowitz|first=Joel L.|title= Boltzmann's Entropy and Time's Arrow|journal=Physics Today|date=September 1993|volume=46|issue=9|pages=32–38|url=http://users.df.uba.ar/ariel/materias/FT3_2008_1C/papers_pdf/lebowitz_370.pdf|access-date=2013-02-22|doi=10.1063/1.881363|bibcode = 1993PhT....46i..32L }} Given these assumptions, in statistical mechanics, the Second Law is not a postulate, rather it is a consequence of the [[Statistical mechanics#Fundamental postulate|fundamental postulate]], also known as the equal prior probability postulate, so long as one is clear that simple probability arguments are applied only to the future, while for the past there are auxiliary sources of information which tell us that it was low entropy.{{citation needed|date=August 2012}} The first part of the second law, which states that the entropy of a thermally isolated system can only increase, is a trivial consequence of the equal prior probability postulate, if we restrict the notion of the entropy to systems in thermal equilibrium. The entropy of an isolated system in thermal equilibrium containing an amount of energy of E is: : S = k_{\mathrm B} \ln\left[\Omega\left(E\right)\right] where \Omega\left(E\right) is the number of quantum states in a small interval between E and E +\delta E. Here \delta E is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of \delta E. However, in the thermodynamic limit (i.e. 
in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on \delta E. Suppose we have an isolated system whose macroscopic state is specified by a number of variables. These macroscopic variables can, e.g., refer to the total volume, the positions of pistons in the system, etc. Then \Omega will depend on the values of these variables. If a variable is not fixed (e.g., we do not clamp a piston in a certain position), then, because all the accessible states are equally likely in equilibrium, the free variable in equilibrium will be such that \Omega is maximized at the given energy of the isolated systemYoung, H. D; Freedman, R. A. (2004). ''University Physics'', 11th edition. Pearson. p. 731. as that is the most probable situation in equilibrium. If the variable was initially fixed to some value then, upon release, once the new equilibrium has been reached, the fact that the variable adjusts itself so that \Omega is maximized implies that the entropy will have increased, or will have stayed the same (if the value at which the variable was fixed happened to be the equilibrium value). Suppose we start from an equilibrium situation and we suddenly remove a constraint on a variable. Then right after we do this, there are a number \Omega of accessible microstates, but equilibrium has not yet been reached, so the actual probabilities of the system being in some accessible state are not yet equal to the prior probability of 1/\Omega. We have already seen that in the final equilibrium state, the entropy will have increased or have stayed the same relative to the previous equilibrium state. Boltzmann's [[H-theorem]], however, proves that the quantity {{math|''H''}} decreases monotonically as a function of time during the intermediate out-of-equilibrium state; since the entropy is proportional to {{math|−''H''}}, this corresponds to a monotonic increase in entropy. ===Derivation of the entropy change for reversible processes=== The second part of the Second Law states that the entropy change of a system undergoing a reversible process is given by: : dS =\frac{\delta Q}{T} where the temperature is defined as: :\frac{1}{k_{\mathrm B} T}\equiv\beta\equiv\frac{d\ln\left[\Omega\left(E\right)\right]}{dE} See [[Microcanonical ensemble|here]] for the justification for this definition. Suppose that the system has some external parameter, ''x'', that can be changed. In general, the energy eigenstates of the system will depend on ''x''. According to the [[adiabatic theorem]] of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in. The generalized force, ''X'', corresponding to the external variable ''x'' is defined such that X dx is the work performed by the system if ''x'' is increased by an amount ''dx''. For example, if ''x'' is the volume, then ''X'' is the pressure. The generalized force for a system known to be in energy eigenstate E_{r} is given by: : X = -\frac{dE_{r}}{dx} Since the system can be in any energy eigenstate within an interval of \delta E, we define the generalized force for the system as the expectation value of the above expression: : X = -\left\langle\frac{dE_{r}}{dx}\right\rangle\, To evaluate the average, we partition the \Omega\left(E\right) energy eigenstates by counting how many of them have a value for \frac{dE_{r}}{dx} within a range between Y and Y + \delta Y. 
Calling this number \Omega_{Y}\left(E\right), we have: : \Omega\left(E\right)=\sum_{Y}\Omega_{Y}\left(E\right)\, The average defining the generalized force can now be written: : X = -\frac{1}{\Omega\left(E\right)}\sum_{Y} Y\Omega_{Y}\left(E\right)\, We can relate this to the derivative of the entropy with respect to ''x'' at constant energy ''E'' as follows. Suppose we change ''x'' to ''x'' + ''dx''. Then \Omega\left(E\right) will change because the energy eigenstates depend on ''x'', causing energy eigenstates to move into or out of the range between E and E+\delta E. Let's focus again on the energy eigenstates for which \frac{dE_{r}}{dx} lies within the range between Y and Y + \delta Y. Since these energy eigenstates increase in energy by ''Y dx'', all such energy eigenstates that are in the interval ranging from ''E'' – ''Y'' ''dx'' to ''E'' move from below ''E'' to above ''E''. There are : N_{Y}\left(E\right)=\frac{\Omega_{Y}\left(E\right)}{\delta E} Y dx\, such energy eigenstates. If Y dx\leq\delta E, all these energy eigenstates will move into the range between E and E+\delta E and contribute to an increase in \Omega. The number of energy eigenstates that move from below E+\delta E to above E+\delta E is given by N_{Y}\left(E+\delta E\right). The difference : N_{Y}\left(E\right) - N_{Y}\left(E+\delta E\right)\, is thus the net contribution to the increase in \Omega. Note that if ''Y dx'' is larger than \delta E there will be the energy eigenstates that move from below ''E'' to above E+\delta E. They are counted in both N_{Y}\left(E\right) and N_{Y}\left(E+\delta E\right), therefore the above expression is also valid in that case. Expressing the above expression as a derivative with respect to ''E'' and summing over ''Y'' yields the expression: : \left(\frac{\partial\Omega}{\partial x}\right)_{E} = -\sum_{Y}Y\left(\frac{\partial\Omega_{Y}}{\partial E}\right)_{x}= \left(\frac{\partial\left(\Omega X\right)}{\partial E}\right)_{x}\, The logarithmic derivative of \Omega with respect to ''x'' is thus given by: : \left(\frac{\partial\ln\left(\Omega\right)}{\partial x}\right)_{E} = \beta X +\left(\frac{\partial X}{\partial E}\right)_{x}\, The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and will thus vanish in the thermodynamic limit. We have thus found that: : \left(\frac{\partial S}{\partial x}\right)_{E} = \frac{X}{T}\, Combining this with : \left(\frac{\partial S}{\partial E}\right)_{x} = \frac{1}{T}\, gives: : dS = \left(\frac{\partial S}{\partial E}\right)_{x}dE+\left(\frac{\partial S}{\partial x}\right)_{E}dx = \frac{dE}{T} + \frac{X}{T} dx=\frac{\delta Q}{T}\, ===Derivation for systems described by the canonical ensemble=== If a system is in thermal contact with a heat bath at some temperature ''T'' then, in equilibrium, the probability distribution over the energy eigenvalues are given by the [[canonical ensemble]]: : P_{j}=\frac{\exp\left(-\frac{E_{j}}{k_{\mathrm B} T}\right)}{Z} Here ''Z'' is a factor that normalizes the sum of all the probabilities to 1, this function is known as the [[Partition function (statistical mechanics)|partition function]]. We now consider an infinitesimal reversible change in the temperature and in the external parameters on which the energy levels depend. 
It follows from the general formula for the entropy: : S = -k_{\mathrm B}\sum_{j}P_{j}\ln\left(P_{j}\right) that : dS = -k_{\mathrm B}\sum_{j}\ln\left(P_{j}\right)dP_{j} Inserting the formula for P_{j} for the canonical ensemble in here gives: : dS = \frac{1}{T}\sum_{j}E_{j}dP_{j}=\frac{1}{T}\sum_{j}d\left(E_{j}P_{j}\right) - \frac{1}{T}\sum_{j}P_{j}dE_{j}= \frac{dE + \delta W}{T}=\frac{\delta Q}{T} ===Initial conditions at the Big Bang=== As elaborated above, it is thought that the second law of thermodynamics is a result of the very low-entropy initial conditions at the [[Big Bang]]. From a statistical point of view, these were very special conditions. On the other hand, they were quite simple, as the universe - or at least the part thereof from which the [[observable universe]] developed - seems to have been extremely uniform.Carroll, S. (2017). The big picture: on the origins of life, meaning, and the universe itself. Penguin. This may seem somewhat paradoxical, since in many physical systems uniform conditions (e.g. mixed rather than separated gases) have high entropy. The paradox is solved once realizing that gravitational systems have [[Heat capacity#Negative heat capacity|negative heat capacity]], so that when gravity is important, uniform conditions (e.g. gas of uniform density) in fact have lower entropy compared to non-uniform ones (e.g. black holes in empty space).Greene, B. (2004). The fabric of the cosmos: Space, time, and the texture of reality. Knopf. Yet another approach is that the universe had high (or even maximal) entropy given its size, but as the universe grew it rapidly came out of thermodynamic equilibrium, its entropy only slightly increased compared to the increase in maximal possible entropy, and thus it has arrived at a very low entropy when compared to the much larger possible maximum given its later size.Davies, P. C. (1983). Inflation and time asymmetry in the universe. Nature, 301(5899), 398-400. As for the reason why initial conditions were such, one suggestion is that [[cosmological inflation]] was enough to wipe off non-smoothness, while another is that the universe was [[Hartle–Hawking state|created spontaneously]] where the mechanism of creation implies low-entropy initial conditions.[https://www.quantamagazine.org/physicists-debate-hawkings-idea-that-the-universe-had-no-beginning-20190606/ Physicists Debate Hawking's Idea That the Universe Had No Beginning. Wolchover, N. Quantmagazine, June 6, 2019. Retrieved 2020-11-28] ==Living organisms== There are two principal ways of formulating thermodynamics, (a) through passages from one state of thermodynamic equilibrium to another, and (b) through cyclic processes, by which the system is left unchanged, while the total entropy of the surroundings is increased. These two ways help to understand the processes of life. The thermodynamics of living organisms has been considered by many authors, such as [[What is Life?|Erwin Schrödinger]], [[Léon Brillouin]]{{cite book | last=Brillouin | first=L. | title=Science and Information Theory | publisher=Dover Publications, Incorporated | series=Dover Books on Physics | year=2013 | isbn=978-0-486-49755-6 | url=https://books.google.com/books?id=tPXVbiw_1P0C | access-date=26 March 2021 | page=}} and [[Life and Energy|Isaac Asimov]]. To a fair approximation, living organisms may be considered as examples of (b). Approximately, an animal's physical state cycles by the day, leaving the animal nearly unchanged. 
Animals take in food, water, and oxygen, and, as a result of [[metabolism]], give out breakdown products and heat. Plants [[Photosynthesis|take in radiative energy]] from the sun, which may be regarded as heat, together with carbon dioxide and water. They give out oxygen. In this way they grow. Eventually they die, and their remains rot away, turning mostly back into carbon dioxide and water. This can be regarded as a cyclic process. Overall, the sunlight is from a high-temperature source, the sun, and its energy is passed to a lower-temperature sink, i.e. radiated into space. This is an increase in the entropy of the surroundings of the plant. Thus animals and plants obey the second law of thermodynamics, considered in terms of cyclic processes. Furthermore, the ability of living organisms to grow and increase in complexity, as well as to form correlations with their environment in the form of adaptation and memory, is not opposed to the second law – rather, it is akin to general results following from it: under some definitions, an increase in entropy also results in an increase in complexity,{{cite journal | last1=Ladyman | first1=James | last2=Lambert | first2=James | last3=Wiesner | first3=Karoline | title=What is a complex system? | journal=European Journal for Philosophy of Science | publisher=Springer Science and Business Media LLC | volume=3 | issue=1 | date=19 June 2012 | issn=1879-4912 | doi=10.1007/s13194-012-0056-8 | pages=33–67| s2cid=18787276 }} and for a finite system interacting with finite reservoirs, an increase in entropy is equivalent to an increase in correlations between the system and the reservoirs.{{cite journal | last1=Esposito | first1=Massimiliano | last2=Lindenberg | first2=Katja |author-link2=Katja Lindenberg | last3=Van den Broeck | first3=Christian | title=Entropy production as correlation between system and reservoir | journal=New Journal of Physics | volume=12 | issue=1 | date=15 January 2010 | issn=1367-2630 | doi=10.1088/1367-2630/12/1/013013 | page=013013| arxiv=0908.1125 | bibcode=2010NJPh...12a3013E | doi-access=free }} Living organisms may be considered as open systems, because matter passes into and out of them. Thermodynamics of open systems is currently often considered in terms of passages from one state of thermodynamic equilibrium to another, or in terms of flows in the approximation of local thermodynamic equilibrium. The problem for living organisms may be further simplified by the approximation of assuming a steady state with unchanging flows. General principles of entropy production for such approximations remain the subject of [[Non-equilibrium thermodynamics|ongoing debate and research]].

==Gravitational systems==
Commonly, systems for which gravity is not important have a positive [[heat capacity]], meaning that their temperature rises with their internal energy. Therefore, when energy flows from a high-temperature object to a low-temperature object, the source temperature decreases while the sink temperature increases; hence temperature differences tend to diminish over time. This is not always the case for systems in which the gravitational force is important: systems that are bound by their own gravity, such as stars, can have negative heat capacities. As they contract, both their total energy and their entropy decrease{{cite web |last1=Baez |first1=John |title=Can Gravity Decrease Entropy?
|url=http://math.ucr.edu/home/baez/entropy.html |website=UC Riverside Department of Mathematics |publisher=University of California Riverside |access-date=7 June 2020 |date=7 August 2000 |quote=... gravitationally bound ball of gas has a negative specific heat!}} but [[Kelvin-Helmholtz mechanism|their internal temperature may increase]].
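A standard order-of-magnitude argument (a textbook illustration, not tied to any particular object) shows why: by the [[virial theorem]], a ball of monatomic ideal gas bound by its own gravity and in mechanical equilibrium satisfies 2\langle K\rangle+\langle U_{\mathrm{grav}}\rangle=0, so its total energy is
: E = \langle K\rangle+\langle U_{\mathrm{grav}}\rangle = -\langle K\rangle = -\tfrac{3}{2}Nk_{\mathrm B}T,
and its heat capacity,
: C = \frac{\mathrm{d}E}{\mathrm{d}T} = -\tfrac{3}{2}Nk_{\mathrm B},
is negative: as the ball radiates energy away, it contracts and its temperature rises.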
This can be significant for [[protostars]] and even gas giant planets such as [[Jupiter]]. As gravity is the most important force operating on cosmological scales, it may be difficult or impossible to apply the second law to the universe as a whole.Grandy, W.T. (Jr) (2008), p. 151.

==Non-equilibrium states==
{{main article|Non-equilibrium thermodynamics}}
The theory of classical or [[thermodynamic equilibrium|equilibrium thermodynamics]] is idealized. A main postulate or assumption, often not even explicitly stated, is the existence of systems in their own internal states of thermodynamic equilibrium. In general, a region of space containing a physical system at a given time, as found in nature, is not in thermodynamic equilibrium, read in the most stringent terms. In looser terms, nothing in the entire universe is or has ever been truly in exact thermodynamic equilibrium.Callen, H.B. (1960/1985), p. 15. For purposes of physical analysis, it is often convenient to make an assumption of [[thermodynamic equilibrium]]. Such an assumption may rely on trial and error for its justification. If the assumption is justified, it can often be very valuable and useful because it makes available the theory of thermodynamics. Elements of the equilibrium assumption are that a system is observed to be unchanging over an indefinitely long time, and that there are so many particles in the system that its particulate nature can be entirely ignored. Under such an equilibrium assumption, in general, there are no macroscopically detectable [[thermal fluctuations|fluctuations]]. There is an exception, the case of [[critical point (thermodynamics)|critical states]], which exhibit to the naked eye the phenomenon of [[critical opalescence]]. For laboratory studies of critical states, exceptionally long observation times are needed. In all cases, the assumption of thermodynamic equilibrium, once made, implies as a consequence that no putative candidate "fluctuation" alters the entropy of the system. It can easily happen that a physical system exhibits internal macroscopic changes that are fast enough to invalidate the assumption of the constancy of the entropy, or that a physical system has so few particles that its particulate nature is manifest in observable fluctuations. Then the assumption of thermodynamic equilibrium is to be abandoned. There is no unqualified general definition of entropy for non-equilibrium states.Lieb, E.H., Yngvason, J. (2003), p. 190. There are intermediate cases, in which the assumption of local thermodynamic equilibrium is a very good approximation,Gyarmati, I. (1967/1970), pp. 4–14.Glansdorff, P., Prigogine, I. (1971).[[Gottfried Wilhelm Leibniz Prize|Müller, I.]] (1985).[[Gottfried Wilhelm Leibniz Prize|Müller, I.]] (2003). but strictly speaking it is still an approximation, not theoretically ideal. For non-equilibrium situations in general, it may be useful to consider statistical mechanical definitions of other quantities that may be conveniently called 'entropy', but they should not be confused or conflated with thermodynamic entropy properly defined for the second law. These other quantities indeed belong to statistical mechanics, not to thermodynamics, the primary realm of the second law. The physics of macroscopically observable fluctuations is beyond the scope of this article.

==Arrow of time==
{{See also|Arrow of time|Entropy (arrow of time)}}
The second law of thermodynamics is a physical law that is not symmetric to reversal of the time direction. This does not conflict with symmetries observed in the fundamental laws of physics (particularly [[CPT symmetry]]) since the second law applies statistically under time-asymmetric boundary conditions.{{cite encyclopedia|first=Craig|last=Callender|url=https://plato.stanford.edu/archives/fall2011/entries/time-thermo/|title=Thermodynamic Asymmetry in Time|encyclopedia=Stanford Encyclopedia of Philosophy|date=29 July 2011}} The second law has been related to the difference between moving forwards and backwards in time, or to the principle that cause precedes effect ([[Arrow of time#Causal arrow of time|the causal arrow of time]], or [[causality]]).{{cite book | first = J.J.| last = Halliwell | title = Physical Origins of Time Asymmetry| publisher = Cambridge | year = 1994| isbn = 978-0-521-56837-1|display-authors=etal}} chapter 6

==Irreversibility==
Irreversibility in [[thermodynamic process]]es is a consequence of the asymmetric character of thermodynamic operations, and not of any internally irreversible microscopic properties of the bodies. Thermodynamic operations are macroscopic external interventions imposed on the participating bodies, not derived from their internal properties. There are reputed "paradoxes" that arise from failure to recognize this.

===Loschmidt's paradox===
{{main article|Loschmidt's paradox}}
[[Loschmidt's paradox]], also known as the reversibility paradox, is the objection that it should not be possible to deduce an irreversible process from the time-symmetric dynamics that describe the microscopic evolution of a macroscopic system. In the opinion of [[Erwin Schrödinger|Schrödinger]], "It is now quite obvious in what manner you have to reformulate the law of entropy{{snd}}or for that matter, all other irreversible statements{{snd}}so that they be capable of being derived from reversible models. You must not speak of one isolated system but at least of two, which you may for the moment consider isolated from the rest of the world, but not always from each other."[[Erwin Schrödinger|Schrödinger, E.]] (1950), p. 192. The two systems are isolated from each other by the wall, until it is removed by the thermodynamic operation, as envisaged by the law. The thermodynamic operation is externally imposed, not subject to the reversible microscopic dynamical laws that govern the constituents of the systems. It is the cause of the irreversibility. The statement of the law in the present article complies with Schrödinger's advice. The cause–effect relation is logically prior to the second law, not derived from it.

===Poincaré recurrence theorem===
{{Main article|Poincaré recurrence theorem}}
The [[Poincaré recurrence theorem]] considers a theoretical microscopic description of an isolated physical system. This may be considered as a model of a thermodynamic system after a thermodynamic operation has removed an internal wall. The system will, after a sufficiently long time, return to a microscopically defined state very close to the initial one. The Poincaré recurrence time is the length of time elapsed until the return.
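A toy model makes the recurrence concrete. The following Python sketch is only an illustration, with [[Arnold's cat map]] on an n × n grid standing in for reversible, deterministic microscopic dynamics on a finite state space; it shows that every state recurs exactly, and that the recurrence time varies irregularly with the details of the system.

<syntaxhighlight lang="python">
import numpy as np

def cat_map_recurrence(n):
    """Number of iterations of Arnold's cat map (x, y) -> (2x + y, x + y) mod n
    after which every point of the n-by-n grid has returned to its starting position."""
    M = np.array([[2, 1], [1, 1]])           # the map is invertible (determinant 1)
    A, steps = M % n, 1
    while not np.array_equal(A, np.eye(2, dtype=int)):
        A, steps = (A @ M) % n, steps + 1
    return steps

for n in (5, 10, 50, 100, 250):
    print(n, cat_map_recurrence(n))          # the recurrence time jumps around irregularly with n
</syntaxhighlight>

For such a tiny state space the recurrence arrives quickly; the point of the comparison is only that recurrence is guaranteed for reversible dynamics on a bounded state space, while its timing depends sensitively on the details of the system.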
For a macroscopic thermodynamic system, the recurrence time is exceedingly long, likely longer than the life of the universe, and depends sensitively on the geometry of the wall that was removed by the thermodynamic operation. The recurrence theorem may be perceived as apparently contradicting the second law of thermodynamics. More obviously, however, it is simply a microscopic model of thermodynamic equilibrium in an isolated system formed by removal of a wall between two systems. For a typical thermodynamic system, the recurrence time is so large (many, many times longer than the lifetime of the universe) that, for all practical purposes, one cannot observe the recurrence. One might wish, nevertheless, to imagine that one could wait for the Poincaré recurrence, and then re-insert the wall that was removed by the thermodynamic operation. It is then evident that the appearance of irreversibility is due to the utter unpredictability of the Poincaré recurrence, given only that the initial state was one of thermodynamic equilibrium, as is the case in macroscopic thermodynamics. Even if one could wait for it, one has no practical possibility of picking the right instant at which to re-insert the wall. The Poincaré recurrence theorem provides a solution to Loschmidt's paradox. If an isolated thermodynamic system could be monitored over increasingly many multiples of the average Poincaré recurrence time, the thermodynamic behavior of the system would become invariant under time reversal.

[[File:James-clerk-maxwell3.jpg|thumb|upright|James Clerk Maxwell]]

===Maxwell's demon===
{{unreferenced section|date=August 2018}}
{{main article|Maxwell's demon}}
[[James Clerk Maxwell]] imagined one container divided into two parts, ''A'' and ''B''. Both parts are filled with the same [[gas]] at equal temperatures and placed next to each other, separated by a wall. Observing the [[molecule]]s on both sides, an imaginary [[demon]] guards a microscopic trapdoor in the wall. When a faster-than-average molecule from ''A'' flies towards the trapdoor, the demon opens it, and the molecule will fly from ''A'' to ''B''. The average [[speed]] of the molecules in ''B'' will have increased while in ''A'' they will have slowed down on average. Since average molecular speed corresponds to temperature, the temperature decreases in ''A'' and increases in ''B'', contrary to the second law of thermodynamics. One response to this apparent paradox was suggested in 1929 by [[Leó Szilárd]] and later by [[Léon Brillouin]]. Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy. Maxwell's 'demon' repeatedly alters the permeability of the wall between ''A'' and ''B''. It is therefore performing [[thermodynamic operation]]s on a microscopic scale, not just observing ordinary spontaneous or natural macroscopic thermodynamic processes.

==Quotations==
{{Wikiquote}}
{{quote|The law that entropy always increases holds, I think, the supreme position among the [[Laws of science|laws of Nature]]. If someone points out to you that your pet theory of the [[universe]] is in disagreement with [[Maxwell's equations]] – then so much the worse for Maxwell's equations. If it is found to be contradicted by observation – well, these experimentalists do bungle things sometimes.
But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.|Sir [[Arthur Stanley Eddington]], ''The Nature of the Physical World'' (1927)}} {{quote|There have been nearly as many formulations of the second law as there have been discussions of it.|Philosopher / Physicist [[Percy Williams Bridgman|P.W. Bridgman]], (1941)}} {{quote|Clausius is the author of the sibyllic utterance, "The energy of the universe is constant; the entropy of the universe tends to a maximum." The objectives of continuum thermomechanics stop far short of explaining the "universe", but within that theory we may easily derive an explicit statement in some ways reminiscent of Clausius, but referring only to a modest object: an isolated body of finite size.|[[Clifford Truesdell|Truesdell, C.]], Muncaster, R. G. (1980). ''Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas, Treated as a Branch of Rational Mechanics'', Academic Press, New York, {{ISBN|0-12-701350-4}}, p. 17.}} ==See also== {{colbegin|colwidth=20em}} *[[Zeroth law of thermodynamics]] *[[First law of thermodynamics]] *[[Third law of thermodynamics]] *[[Clausius–Duhem inequality]] *[[Fluctuation theorem]] *[[Heat death of the universe]] *[[History of thermodynamics]] *[[Jarzynski equality]] *[[Laws of thermodynamics]] *[[Maximum entropy thermodynamics]] *[[Quantum thermodynamics]] *[[Reflections on the Motive Power of Fire]] *[[Relativistic heat conduction]] *[[Thermal diode]] *[[Thermodynamic equilibrium]] {{colend}} ==References== {{reflist|20em}} ===Sources=== {{refbegin|50em}} * {{cite book | last=Adkins | first=C. J. | title=Equilibrium thermodynamics | publisher=Cambridge University Press | publication-place=Cambridge UK | year=1983 |edition=1st ed. 1968, 3rd | isbn=0-521-25445-0 | oclc=9132054 }} *[[Peter Atkins|Atkins, P.W.]], de Paula, J. (2006). ''Atkins' Physical Chemistry'', eighth edition, W.H. Freeman, New York, {{ISBN|978-0-7167-8759-4}}. *Attard, P. (2012). ''Non-equilibrium Thermodynamics and Statistical Mechanics: Foundations and Applications'', Oxford University Press, Oxford UK, {{ISBN|978-0-19-966276-0}}. *Baierlein, R. (1999). ''Thermal Physics'', Cambridge University Press, Cambridge UK, {{ISBN|0-521-59082-5}}. *Bailyn, M. (1994). ''A Survey of Thermodynamics'', American Institute of Physics, New York, {{ISBN|0-88318-797-3}}. *{{Cite book|title=Concepts in thermal physics|last1=Blundell|first1=Stephen J.|last2=Blundell|author-link1=Stephen Blundell|first2=Katherine M.|author-link2=Katherine Blundell|publisher=[[Oxford University Press]]|year=2010|isbn=9780199562107|edition=2nd|location=Oxford|oclc=607907330|doi=10.1093/acprof:oso/9780199562091.001.0001 |url=http://cds.cern.ch/record/1235139}} *[[Ludwig Boltzmann|Boltzmann, L.]] (1896/1964). ''Lectures on Gas Theory'', translated by S.G. Brush, University of California Press, Berkeley. *Borgnakke, C., Sonntag., R.E. (2009). ''Fundamentals of Thermodynamics'', seventh edition, Wiley, {{ISBN|978-0-470-04192-5}}. *Buchdahl, H.A. (1966). ''The Concepts of Classical Thermodynamics'', Cambridge University Press, Cambridge UK. *[[Percy Williams Bridgman|Bridgman, P.W.]] (1943). ''The Nature of Thermodynamics'', Harvard University Press, Cambridge MA. *[[Herbert Callen|Callen, H.B.]] (1960/1985). ''Thermodynamics and an Introduction to Thermostatistics'', (1st edition 1960) 2nd edition 1985, Wiley, New York, {{ISBN|0-471-86256-8}}. *{{cite journal|author=C. 
Carathéodory|author1-link=Constantin Carathéodory |title=Untersuchungen über die Grundlagen der Thermodynamik|year=1909|journal=Mathematische Annalen|volume=67|issue=3 |pages=355–386|url=http://gdz.sub.uni-goettingen.de/index.php?id=11&PPN=PPN235181684_0067&DMDID=DMDLOG_0033&L=1 |quote=Axiom II: In jeder beliebigen Umgebung eines willkürlich vorgeschriebenen Anfangszustandes gibt es Zustände, die durch adiabatische Zustandsänderungen nicht beliebig approximiert werden können. (p.363)|doi=10.1007/bf01450409|s2cid=118230148 }}. A translation may be found [http://neo-classical-physics.info/uploads/3/0/6/5/3065888/caratheodory_-_thermodynamics.pdf here]. Also a mostly reliable [https://books.google.com/books?id=xwBRAAAAMAAJ&q=Investigation+into+the+foundations translation is to be found] at Kestin, J. (1976). ''The Second Law of Thermodynamics'', Dowden, Hutchinson & Ross, Stroudsburg PA. *[[Nicolas Léonard Sadi Carnot|Carnot, S.]] (1824/1986). [http://www.worldcat.org/title/reflections-on-the-motive-power-of-fire-a-critical-edition-with-the-surviving-scientific-manuscripts-translated-and-edited-by-fox-robert/oclc/812944517&referer=brief_results ''Reflections on the motive power of fire''], Manchester University Press, Manchester UK, {{ISBN|0-7190-1741-6}}. [https://archive.org/stream/reflectionsonmot00carnrich#page/n7/mode/2up Also here.] *[[Sydney Chapman (mathematician)|Chapman, S.]], [[Thomas George Cowling|Cowling, T.G.]] (1939/1970). ''The Mathematical Theory of Non-uniform gases. An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases'', third edition 1970, Cambridge University Press, London. *{{cite journal|last=Clausius|first=R.|author1-link=Rudolf Clausius|title=Ueber Die Bewegende Kraft Der Wärme Und Die Gesetze, Welche Sich Daraus Für Die Wärmelehre Selbst Ableiten Lassen|journal=Annalen der Physik|year=1850|volume=79|issue=4|pages=368–397, 500–524|url=http://gallica.bnf.fr/ark:/12148/bpt6k15164w/f518.image|access-date=26 June 2012|doi=10.1002/andp.18501550403|bibcode = 1850AnP...155..500C |hdl=2027/uc1.$b242250|hdl-access=free}} Translated into English: {{cite journal|last=Clausius|first=R.|title=On the Moving Force of Heat, and the Laws regarding the Nature of Heat itself which are deducible therefrom|journal=London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science|date=July 1851| volume=2|series=4th|issue=VIII|pages=1–21; 102–119 |url=https://archive.org/stream/londonedinburghd02lond#page/1/mode/1up|access-date=26 June 2012 |doi=10.1080/14786445108646819}} * {{cite journal|last=Clausius|first=R.|author1-link=Rudolf Clausius|title=Über eine veränderte Form des zweiten Hauptsatzes der mechanischen Wärmetheorie|year=1854|journal=Annalen der Physik|volume=xciii|issue=12|pages=481–506|url=http://zfbb.thulb.uni-jena.de/servlets/MCRFileNodeServlet/jportal_derivate_00140956/18541691202_ftp.pdf|access-date=24 March 2014|doi=10.1002/andp.18541691202 |bibcode = 1854AnP...169..481C }} Translated into English: {{cite journal|last=Clausius|first=R.|title=On a Modified Form of the Second Fundamental Theorem in the Mechanical Theory of Heat|journal=London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science|date=July 1856| volume=2|series=4th|page=86 |url=https://www.biodiversitylibrary.org/item/20044#page/100/mode/1up|access-date=24 March 2014 }} Reprinted in: {{cite book|last=Clausius|first=R.|author1-link=Rudolf Clausius|title=The Mechanical Theory of Heat – with its Applications to the Steam Engine and to Physical 
Properties of Bodies|year=1867|publisher=John van Voorst|location=London|url=https://archive.org/details/mechanicaltheor04claugoog|quote=editions:PwR_Sbkwa8IC.|access-date=19 June 2012 }} *Denbigh, K. (1954/1981). ''The Principles of Chemical Equilibrium. With Applications in Chemistry and Chemical Engineering'', fourth edition, Cambridge University Press, Cambridge UK, {{ISBN|0-521-23682-7}}. *Eu, B.C. (2002). ''Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics'', Kluwer Academic Publishers, Dordrecht, {{ISBN|1-4020-0788-4}}. *[[Josiah Willard Gibbs|Gibbs, J.W.]] (1876/1878). On the equilibrium of heterogeneous substances, ''Trans. Conn. Acad.'', '''3''': 108–248, 343–524, reprinted in ''The Collected Works of J. Willard Gibbs, Ph.D, LL. D.'', edited by W.R. Longley, R.G. Van Name, Longmans, Green & Co., New York, 1928, volume 1, pp. 55–353. *Griem, H.R. (2005). ''Principles of Plasma Spectroscopy (Cambridge Monographs on Plasma Physics)'', Cambridge University Press, New York {{ISBN|0-521-61941-6}}. *Glansdorff, P., Prigogine, I. (1971). ''Thermodynamic Theory of Structure, Stability, and Fluctuations'', Wiley-Interscience, London, 1971, {{ISBN|0-471-30280-5}}. *{{cite book | last=Grandy | first=Walter T. | title=Entropy and the time evolution of macroscopic systems | publisher=Oxford University Press | publication-place=Oxford New York | year=2008 | isbn=978-0-19-954617-6 | oclc=190843367 |url=http://global.oup.com/academic/product/entropy-and-the-time-evolution-of-macroscopic-systems-9780199546176}} *Greven, A., Keller, G., Warnecke (editors) (2003). ''Entropy'', Princeton University Press, Princeton NJ, {{ISBN|0-691-11338-6}}. *[[Edward A. Guggenheim|Guggenheim, E.A.]] (1949). 'Statistical basis of thermodynamics', ''Research'', '''2''': 450–454. *[[Edward A. Guggenheim|Guggenheim, E.A.]] (1967). ''Thermodynamics. An Advanced Treatment for Chemists and Physicists'', fifth revised edition, North Holland, Amsterdam. *Gyarmati, I. (1967/1970) ''Non-equilibrium Thermodynamics. Field Theory and Variational Principles'', translated by E. Gyarmati and W.F. Heinz, Springer, New York. *[[Charles Kittel|Kittel, C.]], [[Herbert Kroemer|Kroemer, H.]] (1969/1980). ''Thermal Physics'', second edition, Freeman, San Francisco CA, {{ISBN|0-7167-1088-9}}. *Kondepudi, D., [[Ilya Prigogine|Prigogine, I.]] (1998). ''Modern Thermodynamics: From Heat Engines to Dissipative Structures'', John Wiley & Sons, Chichester, {{ISBN|0-471-97393-9}}. *Lebon, G., Jou, D., Casas-Vázquez, J. (2008). ''Understanding Non-equilibrium Thermodynamics: Foundations, Applications, Frontiers'', Springer-Verlag, Berlin, {{ISBN|978-3-540-74252-4}}. *{{cite journal|title=The Physics and Mathematics of the Second Law of Thermodynamics|last1=Lieb |first1=E. H. |last2=Yngvason |first2=J.|journal=Physics Reports|volume=310|issue=1|pages=1–96 |year=1999|doi=10.1016/S0370-1573(98)00082-9|arxiv = cond-mat/9708200 |bibcode = 1999PhR...310....1L|s2cid=119620408}} *Lieb, E.H., Yngvason, J. (2003). The Entropy of Classical Thermodynamics, pp. 147–195, Chapter 8 of ''Entropy'', Greven, A., Keller, G., Warnecke (editors) (2003). *{{cite book|first=F.|last=Mandl|title=Statistical physics|edition=second|year=1988|publisher=[[Wiley & Sons]]|isbn=978-0-471-91533-1}} *{{cite journal | last1 = Maxwell | first1 = J.C. | author-link = James Clerk Maxwell | year = 1867 | title = On the dynamical theory of gases | journal = Phil. Trans. R. Soc. Lond. 
| volume = 157 | pages = 49–88 | doi = 10.1098/rstl.1867.0004 | s2cid = 96568430 }} *[[Gottfried Wilhelm Leibniz Prize|Müller, I.]] (1985). ''Thermodynamics'', Pitman, London, {{ISBN|0-273-08577-8}}. *[[Gottfried Wilhelm Leibniz Prize|Müller, I.]] (2003). Entropy in Nonequilibrium, pp. 79–109, Chapter 5 of ''Entropy'', Greven, A., Keller, G., Warnecke (editors) (2003). *Münster, A. (1970), ''Classical Thermodynamics'', translated by E.S. Halberstadt, Wiley–Interscience, London, {{ISBN|0-471-62430-6}}. *[[Brian Pippard|Pippard, A.B.]] (1957/1966). ''Elements of Classical Thermodynamics for Advanced Students of Physics'', original publication 1957, reprint 1966, Cambridge University Press, Cambridge UK. *[[Max Planck|Planck, M.]] (1897/1903). [https://archive.org/stream/treatiseonthermo00planrich#page/100/mode/2up ''Treatise on Thermodynamics'', translated by A. Ogg, Longmans Green, London, p. 100.] *[[Max Planck|Planck. M.]] (1914). [https://archive.org/details/theoryofheatradi00planrich ''The Theory of Heat Radiation''], a translation by Masius, M. of the second German edition, P. Blakiston's Son & Co., Philadelphia. *[[Max Planck|Planck, M.]] (1926). Über die Begründung des zweiten Hauptsatzes der Thermodynamik, ''Sitzungsberichte der Preussischen Akademie der Wissenschaften: Physikalisch-mathematische Klasse'': 453–463. * Pokrovskii V.N. (2005) Extended thermodynamics in a discrete-system approach, Eur. J. Phys. vol. 26, 769–781. * {{Cite journal | doi=10.1155/2013/906136|title = A Derivation of the Main Relations of Nonequilibrium Thermodynamics| journal=ISRN Thermodynamics| volume=2013| pages=1–9|year = 2013|last1 = Pokrovskii|first1 = Vladimir N.|doi-access=free}} *Quinn, T.J. (1983). ''Temperature'', Academic Press, London, {{ISBN|0-12-569680-9}}. *{{cite book|first=Y.V.C.|last=Rao|title=An Introduction to thermodynamics|url=https://books.google.com/books?id=iYWiCXziWsEC&pg=PA213|date=2004|publisher=Universities Press|isbn=978-81-7371-461-0|page=213}} *Roberts, J.K., Miller, A.R. (1928/1960). ''Heat and Thermodynamics'', (first edition 1928), fifth edition, Blackie & Son Limited, Glasgow. *[[Erwin Schrödinger|Schrödinger, E.]] (1950). Irreversibility, ''Proc. R. Ir. Acad.'', '''A53''': 189–195. *[[Dirk ter Haar|ter Haar, D.]], [[Harald Wergeland|Wergeland, H.]] (1966). ''Elements of Thermodynamics'', Addison-Wesley Publishing, Reading MA. *{{cite journal|last=Thomson|first=W.|author-link=William Thomson, 1st Baron Kelvin|title=On the Dynamical Theory of Heat, with numerical results deduced from Mr Joule's equivalent of a Thermal Unit, and M. Regnault's Observations on Steam|journal=Transactions of the Royal Society of Edinburgh|year=1851|volume=XX|issue=part II|pages=261–268; 289–298|url=https://www.biodiversitylibrary.org/item/126047#page/295/mode/1up}} Also published in {{cite journal|last=Thomson|first=W.|title=On the Dynamical Theory of Heat, with numerical results deduced from Mr Joule's equivalent of a Thermal Unit, and M. Regnault's Observations on Steam|journal=Philos. Mag. |date=December 1852 |volume=IV |series=4 |issue=22 |page=13|url=https://archive.org/stream/londonedinburghp04maga#page/12/mode/2up |access-date=25 June 2012}} *[[William Thomson, 1st Baron Kelvin|Thomson, W.]] (1852). ''On the universal tendency in nature to the dissipation of mechanical energy'' Philosophical Magazine, Ser. 4, p. 304. *[[László Tisza|Tisza, L.]] (1966). ''Generalized Thermodynamics'', M.I.T Press, Cambridge MA. *[[Clifford Truesdell|Truesdell, C.]] (1980). 
''The Tragicomical History of Thermodynamics 1822–1854'', Springer, New York, {{ISBN|0-387-90403-4}}. *Uffink, J. (2001). Bluff your way in the second law of thermodynamics, ''Stud. Hist. Phil. Mod. Phys.'', '''32'''(3): 305–394. *Uffink, J. (2003). Irreversibility and the Second Law of Thermodynamics, Chapter 7 of ''Entropy'', Greven, A., Keller, G., Warnecke (editors) (2003), Princeton University Press, Princeton NJ, {{ISBN|0-691-11338-6}}. *[[George Uhlenbeck|Uhlenbeck, G.E.]], Ford, G.W. (1963). ''Lectures in Statistical Mechanics'', American Mathematical Society, Providence RI. *[[Mark Zemansky|Zemansky, M.W.]] (1968). ''Heat and Thermodynamics. An Intermediate Textbook'', fifth edition, McGraw-Hill Book Company, New York. {{refend}} ==Further reading== *Goldstein, Martin, and Inge F., 1993. ''The Refrigerator and the Universe''. Harvard Univ. Press. Chpts. 4–9 contain an introduction to the Second Law, one a bit less technical than this entry. {{ISBN|978-0-674-75324-2}} *Leff, Harvey S., and Rex, Andrew F. (eds.) 2003. ''Maxwell's Demon 2 : Entropy, classical and quantum information, computing''. Bristol UK; Philadelphia PA: [[Institute of Physics]]. {{ISBN|978-0-585-49237-7}} *{{Cite book | first = J.J. | last = Halliwell | title = Physical Origins of Time Asymmetry| publisher = Cambridge | year = 1994| isbn = 978-0-521-56837-1}}(technical). *{{cite book |title=Reflections on the Motive Power of Heat and on Machines Fitted to Develop That Power |last=Carnot |first=Sadi |editor=[[Robert Henry Thurston|Thurston, Robert Henry]] |year=1890 |publisher=J. Wiley & Sons |location=New York }} ([https://books.google.com/books?id=tgdJAAAAIAAJ full text of 1897 ed.]) ([http://www.history.rochester.edu/steam/carnot/1943/ html] {{Webarchive|url=https://web.archive.org/web/20070818073812/http://www.history.rochester.edu/steam/carnot/1943/ |date=2007-08-18 }}) *Stephen Jay Kline (1999). ''The Low-Down on Entropy and Interpretive Thermodynamics'', La Cañada, CA: DCW Industries. {{ISBN|1-928729-01-0}}. *{{cite book | last1 = Kostic | first1 = M | year = 2011 | title = Revisiting The Second Law of Energy Degradation and Entropy Generation: From Sadi Carnot's Ingenious Reasoning to Holistic Generalization | journal = AIP Conf. Proc. | volume = 1411 | issue = 1| pages = 327–350 | doi = 10.1063/1.3665247 | isbn = 978-0-7354-0985-9 | bibcode = 2011AIPC.1411..327K | series = AIP Conference Proceedings | citeseerx = 10.1.1.405.1945 }} also at [https://web.archive.org/web/20130420222450/http://www.kostic.niu.edu/2ndLaw/Revisiting%20The%20Second%20Law%20of%20Energy%20Degradation%20and%20Entropy%20Generation%20-%20From%20Carnot%20to%20Holistic%20Generalization-4.pdf]. ==External links== *[[Stanford Encyclopedia of Philosophy]]: "[http://plato.stanford.edu/entries/statphys-statmech/ Philosophy of Statistical Mechanics]" – by Lawrence Sklar. *[http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node30.html ''Second law of thermodynamics''] in the MIT Course [http://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/notes.html ''Unified Thermodynamics and Propulsion''] from Prof. Z. S. Spakovszky *[[E.T. Jaynes]], 1988, "[http://bayes.wustl.edu/etj/articles/ccarnot.pdf The evolution of Carnot's principle]," in G. J. Erickson and C. R. Smith (eds.)''Maximum-Entropy and Bayesian Methods in Science and Engineering'', Vol,.1: p. 267. *[http://neo-classical-physics.info/uploads/3/0/6/5/3065888/caratheodory_-_thermodynamics.pdf Caratheodory, C., "Examination of the foundations of thermodynamics," trans. by D. 
H. Delphenich] * [https://www.bbc.co.uk/programmes/p004y2bm The Second Law of Thermodynamics], BBC Radio 4 discussion with John Gribbin, Peter Atkins & Monica Grady (''In Our Time'', Dec. 16, 2004) * [http://mdpi.org/entropy/papers/e6010001.pdf "The Second Law Mystique"], Alexey Nikulov and Daniel Sheehan, ''[[Entropy (journal)|Entropy]]'', 2004 * [https://www.journals.uchicago.edu/doi/abs/10.1086/663835 The Journal of the International Society for the History of Philosophy of Science, 2012] {{DEFAULTSORT:Second Law Of Thermodynamics}} [[Category:Equations of physics]] [[Category:Laws of thermodynamics|2]] [[Category:Non-equilibrium thermodynamics]] [[Category:Philosophy of thermal and statistical physics]]