Characterization of Nonideal Solutions

J.M. Honig , in Thermodynamics (Third Edition), 2007

3.9.6 Entropy Changes in Chemical Reactions

Entropy changes accompanying a particular chemical reaction Σ_i ν_i A_i = 0 are handled by writing

(3.9.8) ΔS̃⁰_{T,RX} = Σ_i ν_i S̃⁰_{i,T}

The individual molar entropies under standard conditions of the various participating species are generally available in tabular form; these are determined as shown in Section 1.17. Alternatively, they are found via an empirical equation of state, as in Eqs. (1.13.10) and (1.13.12), followed by an integration; the temperature of the reaction is a parameter. As explained in Section 1.17, if nuclear effects may be excluded, or any frozen-in disorder remains undisturbed (or any nonequilibrated condition is not altered), one may set the entropy of all participants in the reaction, whether they be elements or compounds, equal to zero at T = 0. This then establishes an absolute scale for the tabulated entropies of the reagents and products at any temperature T. The reader is asked to set up strategies for determining ΔS̃_{T,RX} for reactions under other than standard conditions; see Exercise 3.9.7. As an example one may cite the reaction

(3.9.9) Zn(s) + (1/2) O2(g) → ZnO(s),

for which S̃⁰_298.15 (J/K mol) = 41.63, 205.03, and 43.64 and C̃⁰_P,298.15 (J/K mol) = 25.40, 29.35, and 40.25 for Zn, O2, and ZnO, respectively. Thus, for the reaction as written, ΔS̃⁰_298.15 = −41.63 − 102.52 + 43.64 = −100.51 J/K mol. Also, ignoring any change of heat capacity with temperature, we calculate at 398.15 K: ΔS̃⁰_398.15 = ΔS̃⁰_298.15 + ΔC̃⁰_P,298.15 ln(398.15/298.15) = −100.51 + (40.25 − 14.67 − 25.40) ln 1.335 ≈ −100.45 J/K mol, a very small change in the entropy over a 100 K interval.
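The two-step calculation above can be checked numerically. The following Python sketch (illustrative, not part of the original text) evaluates Eq. (3.9.8) with the tabulated values and then applies the constant-heat-capacity temperature correction:

```python
# Entropy of reaction for Zn(s) + 1/2 O2(g) -> ZnO(s) via Eq. (3.9.8),
# using the standard molar entropies and heat capacities quoted in the text.
import math

nu = {"Zn": -1.0, "O2": -0.5, "ZnO": 1.0}         # stoichiometric coefficients
S298 = {"Zn": 41.63, "O2": 205.03, "ZnO": 43.64}  # S~0 at 298.15 K, J/(K mol)
Cp = {"Zn": 25.40, "O2": 29.35, "ZnO": 40.25}     # C~P0 at 298.15 K, J/(K mol)

dS298 = sum(nu[s] * S298[s] for s in nu)          # ~ -100.51 J/(K mol)
dCp = sum(nu[s] * Cp[s] for s in nu)              # ~ +0.18 J/(K mol)

# Assume Cp is independent of T between 298.15 K and 398.15 K:
dS398 = dS298 + dCp * math.log(398.15 / 298.15)
```

Because ΔC̃_P is tiny for this reaction, the correction term barely shifts the result.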

URL: https://www.sciencedirect.com/science/article/pii/B9780123738776500058

Molecular Physics

Ruslan P. Ozerov , Anatoli A. Vorobyev , in Physics for Chemists, 2007

3.5.4 Reduced amount of heat: entropy

Write two expressions for the thermal efficiency of the ideal thermal engine: e_id = 1 − (Q1/Q2) and e_id = 1 − (T1/T2). Since these two expressions are equal, Q1/Q2 = T1/T2, or Q1/T1 = Q2/T2.

Consider the amount of heat Q1 obtained by the working body from the heater to be positive (Q1 > 0) and the amount of heat yielded by the working body to the cooler to be negative (Q2 < 0). Then the equation can be written as Q1/T1 = −(Q2/T2) or, for the Carnot cycle,

(3.5.12) Q1/T1 + Q2/T2 = 0.

The ratio of the amount of heat to the temperature is referred to as the reduced amount of heat. So, for the Carnot heat engine, the sum of the reduced amounts of heat is zero.

Since the Carnot cycle is reversible, any reversible cycle can be presented as a sequence of elementary (Carnot) cycles. Ten such cycles consisting of isotherms crossed by adiabats are depicted in Figure 3.20.

Figure 3.20. Carnot microcycles.

The larger the number of such microcycles used, the better the degree of approximation. For each microcycle, eq. (3.5.12) is valid. Therefore, for any reversible cycle, we can write

(3.5.13) Σ_{i=1}^{N} (ΔQ_{1i}/T_{1i} + ΔQ_{2i}/T_{2i}) ≈ 0.

Notice that the amount of heat ΔQ_{1i} obtained by the working body in the ith cycle is approximately equal to the amount of heat ΔQ_{2,i−1} yielded by the working body in the preceding cycle (i − 1). The precise equality is reached in the limit of an infinitely large number of infinitely thin Carnot microcycles. Then all the heat flows along the internal parts of the cycles compensate each other, and instead of a sum we can write an integral

(3.5.14) ∮ δQ/T = 0

(for reversible cycles), where δQ is the small amount of heat obtained or given off by the working body. This equation is referred to as the Clausius equation. Since the integral is taken over a closed contour, it can be taken from any point and in any direction (since the cycle is reversible) (Figure 3.21).

Figure 3.21. The cycle and the work done; entropy.

This allows one to present it as a sum of two integrals: ∫_{1a2} δQ/T + ∫_{2b1} δQ/T = 0. Thus the integral ∫_1^2 δQ/T between any two equilibrium states 1 and 2 does not depend on the form of the trajectory along which the reversible thermodynamic process proceeds. As the choice of points 1 and 2 is arbitrary, we can assert that the integral describes the change of a quantity (designated by the letter S) that is a function of a system's state. Suggested by Clausius, this quantity is referred to as entropy (measured in J/K units). For a reversible cycle, ΔS = 0.

From the expression obtained, it follows that for reversible processes ∫12δQ/T does not depend on the form of a line (trajectory) representing this process on the graph; it is defined only by the points describing initial and final equilibrium states. Thus, the elementary reduced amount of heat is the full differential of the S-function dependent only on the system's state, i.e.,

(3.5.15) d S = δ Q T

Therefore, the entropy change ΔS at the transition from one equilibrium state (1) to another equilibrium state (2) in a reversible process is determined by the equality

(3.5.16) ΔS = S2 − S1 = ∫_1^2 δQ/T.

Thus, entropy is only a function of an equilibrium state of a thermodynamic system. It does not depend, in this case, on particular processes leading the system into a given equilibrium state.

Entropy, as well as potential energy, can be defined only to some constant. This is a consequence of the fact that the formula (3.5.16) does not allow us to define the absolute entropy value but only its difference.

For the determination of the entropy change in quasi-equilibrium processes, one can use the property of its additivity, i.e.,

(3.5.17) ΔS = Σ_{i=1}^{N} ΔS_i

where ΔSi is the entropy change at the i-th quasi-equilibrium process. N is the number of such processes.

The entropy change will vary depending on the process if the initial and final positions of the thermodynamic system do not coincide. At quasi-equilibrium (sufficiently slow) heating of the system from temperature T 1 to temperature T 2, we obtain the following.

(1)

For an isochoric process: δQ = νC_vm dT (where C_vm is the molar heat capacity at constant volume):

ΔS_v = ∫_1^2 δQ/T = νC_vm ∫_{T1}^{T2} dT/T = νC_vm ln(T2/T1).

(2)

For an isobaric process: δQ = νC_pm dT (where C_pm is the molar heat capacity at constant pressure): ΔS_p = ∫_1^2 δQ/T = νC_pm ∫_{T1}^{T2} dT/T = νC_pm ln(T2/T1) (for a reversible process). Since C_pm > C_vm, then ΔS_p > ΔS_v.
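As a numerical illustration (the gas and temperatures below are assumed values, not from the text), the two heating entropies can be compared directly:

```python
# Entropy change on heating nu moles of an ideal gas from T1 to T2,
# isochorically vs isobarically (illustrative values: 1 mol monatomic gas).
import math

R = 8.314          # J/(K mol)
nu_mol = 1.0       # amount of substance, mol (assumed)
Cvm = 1.5 * R      # monatomic ideal gas molar heat capacity (assumed)
Cpm = Cvm + R      # Mayer's relation
T1, T2 = 300.0, 400.0

dS_v = nu_mol * Cvm * math.log(T2 / T1)
dS_p = nu_mol * Cpm * math.log(T2 / T1)
assert dS_p > dS_v   # since Cpm > Cvm, the isobaric change is larger
```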

If the thermodynamic system is an ideal gas in which simultaneously the quasi-equilibrium changes of several parameters take place, we can use the first thermodynamic law (3.4.3) in the form

δQ = νC_vm dT + p dV, so that δQ/T = νC_vm dT/T + νR dV/V.

Since for ideal gas

p dV = νRT dV/V

then,

ΔS = ∫_1^2 δQ/T = νC_vm ∫_{T1}^{T2} dT/T + νR ∫_{V1}^{V2} dV/V.

Executing the integration we obtain:

(3.5.18) ΔS = νC_vm ln(T2/T1) + νR ln(V2/V1).
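A short sketch (with assumed state values) evaluating Eq. (3.5.18), and checking that it reduces to νR ln(V2/V1) when the temperature is unchanged:

```python
# Eq. (3.5.18): entropy change of nu moles of ideal gas between two states.
import math

R = 8.314             # J/(K mol)
nu_mol = 1.0          # mol (assumed)
Cvm = 1.5 * R         # monatomic ideal gas (assumed)

def delta_S(T1, V1, T2, V2):
    return nu_mol * Cvm * math.log(T2 / T1) + nu_mol * R * math.log(V2 / V1)

dS = delta_S(300.0, 1.0e-3, 450.0, 2.0e-3)              # both T and V change
dS_isothermal = delta_S(300.0, 1.0e-3, 300.0, 2.0e-3)   # only nu R ln(V2/V1) survives
```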

URL: https://www.sciencedirect.com/science/article/pii/B9780444528308500052

THE SECOND LAW OF THERMODYNAMICS

George B. Arfken , ... Joseph Priest , in International Edition University Physics, 1984

24.5 Entropy

In this section we introduce a new physical property—entropy. The concept of entropy may not seem intuitively clear to you at first. This is partly because entropy is a new concept and partly because thermodynamics does not suggest any microscopic interpretation of quantities like temperature and entropy. We introduce entropy because it is a very useful thermodynamic property. For example, we can state the second law of thermodynamics in terms of entropy (Section 24.6).

In Chapter 21 we introduced the differential form of the first law of thermodynamics:

(24.14) d Q = d U + d W

The internal energy is a property of a physical system, and this makes it possible to express dU in terms of appropriate thermodynamic variables and their changes. The work dW can also be expressed in terms of thermodynamic variables and their changes. For example, for a fluid system undergoing a reversible change of volume, dW can be expressed in terms of the pressure (P) and the change of volume (dV),

(24.15) dW = P dV

Is it possible to express dQ in a similar fashion? Yes, provided that the heat exchange is reversible. For a reversible process dQ can be expressed in terms of the temperature (T) and the entropy change dS. The relation between dQ and dS is

(24.16) dQ = T dS

We can rewrite Eq. 24.16 as

(24.17) d S = d Q T

to stress that it is a definition of the entropy change of a system that absorbs heat dQ. Remember that Eqs. 24.16 and 24.17 are valid only for reversible processes. The total entropy change ΔS in a reversible process is obtained by integrating Eq. 24.17:

(24.18) ΔS = S_f − S_i = ∫_i^f dQ/T

where Sf and Si are the entropies of the final and initial states, respectively. A special case of Eq. 24.18 is noteworthy. If the process is isothermal, the right-hand side of Eq. 24.18 becomes

(24.19) ∫_i^f dQ/T = (1/T) ∫_i^f dQ = Q/T

where Q is the total heat absorbed by the system. Thus, for an isothermal process the entropy change is

(24.20) Δ S = Q T

The units of entropy and entropy change are joules per kelvin (J/K). As the following example demonstrates, entropy is a property of a system.

Example 3

Entropy Change for Carnot Cycle

If entropy is a property of a system it must be unchanged in any cyclic process. Let's check to see if entropy is unchanged in the Carnot cycle. Figure 24.5 shows the Carnot cycle for a gas-piston system. There is no entropy change along either of the adiabats (S = constant) because dQ = 0 and thus

dS = dQ/T = 0

Along the high-temperature isotherm (1 → 2), heat QH is absorbed. The entropy change is given by Eq. 24.20, with Q = QH and T = TH.

Entropy change along T_H isotherm = Q_H/T_H

Along the low-temperature isotherm (3 → 4), heat QL is ejected at T = TL. Equation 24.20 applies, with T = TL and Q = -QL. The minus sign enters because an amount of heat QL is ejected along the low-temperature isotherm.

Entropy change along T_L isotherm = −Q_L/T_L

The minus sign enters because the entropy decreases when heat is ejected. The total entropy change over the Carnot cycle is

ΔS_cycle = ∮ dQ/T = Q_H/T_H − Q_L/T_L

However, Eq. 24.9 relates Q_H/T_H and Q_L/T_L for the Carnot cycle,

(24.9) Q_H/T_H = Q_L/T_L

Therefore,

Q_H/T_H − Q_L/T_L = 0

and

(24.21) Δ S cycle = 0

The entropy change for the Carnot cycle is zero. This is because entropy is a property of a physical system. The entropy depends on the thermodynamic state and hence is unchanged in any cyclic process.

Entropy and Disorder

Eq. 24.18 provides a way to measure entropy changes, but does not explain what entropy itself measures. Entropy is important because it is useful, but it will help our understanding if we can relate entropy to some aspect of our day-to-day experience. Entropy is a measure of disorder. If the entropy of a system increases, then its microscopic structure becomes more disordered. The examples that follow should help you to understand the idea that entropy measures disorder.

Example 4

Entropy Change for Melting

Consider a 1-kg chunk of ice at its normal melting point. If we add the heat of fusion, 334 kJ, the ice melts. What is the change in entropy? Because this change of phase takes place at constant temperature and is reversible, we can use Eq. 24.20 to obtain

ΔS = S_liquid − S_solid = L_f/T_m

Taking L_f = 334 kJ, T_m = 273.15 K gives ΔS = 1.22 kJ/K.

The value of ΔS is positive showing that melting results in an entropy increase. This entropy increase is also in accord with the idea that entropy measures disorder. The molecular structure of the final state (liquid) is more disordered than the molecular structure of the initial state (solid). In a crystalline solid the molecules form an ordered, periodic structure. In a liquid only a short-range order is evident. By short-range order we mean that any given molecule is able to influence only the behavior of nearby molecules. Melting results in an entropy increase and molecular disordering. Entropy measures disorder.

The computed value, ΔS = 1.22 kJ/K obtained in Example 4 will mean more if we compare it with the entropy changes for other processes. To provide a basis for comparison we compute entropy changes for two other processes involving 1 kg of water: (a) raising its temperature isobarically to the boiling point and (b) vaporizing it.

Example 5

Isobaric Entropy Increase

If heat is added to a liquid at constant pressure, the liquid's temperature rises until it reaches the boiling point. The heat added in changing the temperature by an amount dT is

dQ = mC_p dT

where Cp is the specific heat at constant pressure. The corresponding entropy change is

dS = dQ/T = mC_p dT/T

For water Cp is nearly constant between the melting and boiling points. If Cp is treated as constant, the total entropy change between the melting and boiling points is

(24.22) ΔS = mC_p ∫_{T_m}^{T_b} dT/T = mC_p ln(T_b/T_m)

For 1 kg of water, mCp = 4.19 kJ/K and ln (373/273) ≃ 0.312. These data give

ΔS ≈ 1.31 kJ/K

This change in entropy is only slightly larger than the entropy change of melting. Raising the temperature of liquid water increases the molecular disorder by increasing the average molecular speed. The chance that a given molecule will influence the behavior of a nearby molecule decreases as their relative speeds increase—the molecules pass too swiftly to communicate effectively. Thus, as the temperature increases, molecular interactions diminish and disorder rises.

Once water is brought to its boiling point, it can be vaporized by adding still more heat. The entropy change in going from the liquid to the vapor phase is readily found by the same arguments that gave us ΔS = L_f/T in Example 4. In place of L_f we now have L_v, the heat of vaporization, and the temperature T is the boiling point, T_b. The entropy change is

(24.23) Δ S = S vapor S liquid = L v T

For 1 kg of water at 1 atm, Lv = 2260 kJ, Tb = 373 K, and ΔS ≃ 6.05 kJ/K. The entropy change for the liquid–vapor transition is roughly five times the entropy change for melting. What disruption at the molecular level has caused this entropy increase? We can answer this question in part by noting that at the normal boiling point, 1 kg of liquid water occupies a volume of about 10−3 m3. At the same pressure and temperature the vapor occupies a volume of approximately 1.7 m3—1700 times the volume of the liquid! The collection of water molecules, which started as a comparatively dense liquid, has spread out into a less dense vapor. The molecules in the vapor phase are greatly disorganized because they now have a larger volume in which to roam. This is somewhat like the disorder that would result if a swarm of bees left the confines of its hive and filled a room with a volume 1700 times that of the hive.
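The three entropy changes for 1 kg of water discussed above can be collected in one short sketch (numbers taken from Examples 4 and 5, with T_b rounded to 373 K):

```python
# Entropy changes for 1 kg of water: melting, heating, and vaporizing.
import math

Lf, Tm = 334.0, 273.15       # kJ and K, fusion (Example 4)
mCp = 4.19                   # kJ/K, liquid water, treated as constant (Example 5)
Lv, Tb = 2260.0, 373.0       # kJ and K, vaporization

dS_melt = Lf / Tm                    # ~ 1.22 kJ/K
dS_heat = mCp * math.log(Tb / Tm)    # ~ 1.31 kJ/K
dS_vap = Lv / Tb                     # ~ 6.06 kJ/K
```

The vaporization step dominates, roughly five times the melting entropy, as the text notes.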

Ideal Gas Entropy

An expression for the entropy of an ideal gas (Section 22.2) is readily derived and is very useful. We first obtain an expression for dS = dQ/T for an ideal gas. Using the first law we can write dQ = dU + P dV, obtaining for dS

(24.24) dS = dQ/T = (dU + P dV)/T

For an ideal gas

(24.25) dU = mC_v dT

and

(24.26) PV = nRT

The specific heat Cv is a constant. The expression for dS converts to

(24.27) dS = mC_v dT/T + nR dV/V

This relation can be integrated to give the entropy change in a process that carries the ideal gas from ( T1, V1 ) to (T2, V2 ). Thus,

∫_{S1}^{S2} dS = mC_v ∫_{T1}^{T2} dT/T + nR ∫_{V1}^{V2} dV/V

By carrying out the indicated integrations, we get

(24.28) S2 − S1 = mC_v ln(T2/T1) + nR ln(V2/V1)

Notice that the entropy change has been expressed in terms of the temperature and volume of the two states, showing that entropy is a property of a physical system. This equation is valid for any series of reversible processes in which the system starts at (T1, V1) and ends at (T2, V2). An example of such a process would be an isothermal volume change (V1 → V2 with T1 constant) followed by an isovolumic temperature change (T1 → T2 with V2 constant).
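Path independence can be verified directly. The sketch below (with arbitrary assumed values of mC_v and nR) applies Eq. 24.28 along the two reversible paths just described and confirms they give the same ΔS:

```python
# Eq. (24.28) evaluated along two different reversible paths between
# (T1, V1) and (T2, V2); entropy, a state function, must not care.
import math

mCv = 3.0    # J/K, illustrative value
nR = 2.0     # J/K, illustrative value
T1, V1 = 300.0, 1.0
T2, V2 = 500.0, 3.0

def dS(Ta, Va, Tb, Vb):
    return mCv * math.log(Tb / Ta) + nR * math.log(Vb / Va)

path_a = dS(T1, V1, T1, V2) + dS(T1, V2, T2, V2)  # expand isothermally, then heat
path_b = dS(T1, V1, T2, V1) + dS(T2, V1, T2, V2)  # heat at constant V, then expand
assert abs(path_a - path_b) < 1e-12               # same end states, same ΔS
```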

We also have stressed that the relation

(24.17) d S = d Q T

is valid only for reversible processes. The following discussion of the free expansion of a gas illustrates this fact, and the idea that entropy measures disorder.

Free Expansion of a Gas

Consider the experiment represented in Figure 24.11. A vessel with rigid insulated walls is divided by a partition. Initially, one side contains an ideal gas and the other side is a vacuum. Consider what happens when the partition ruptures, allowing the gas to expand and fill the entire container. The insulation prevents heat exchange. Thus, dQ = 0 for each stage of the expansion. The rigid walls guarantee that no work is done on the system; that is, there can be no change in the volume of the system and thus no P dV work by the surroundings. The process is called a free expansion because of the absence of any external work as gas moves into the evacuated region. We therefore have dQ = 0 and dW = 0 at each stage of the expansion. It follows from the first law that dU = 0; the internal energy of the gas does not change. For an ideal gas U = mC_vT; the internal energy depends on T alone. If U remains constant, so must the temperature; thus T2 = T1. The final volume of the gas V2 is clearly greater than the initial volume V1, so that ln(V2/V1) is a positive number. From Eq. 24.28 we then have

Figure 24.11. Initially, (a) one portion of the rigid insulated container is a vacuum and an ideal gas occupies the other section. After the partition ruptures, (b) the gas expands to fill the container. The insulation and rigid walls guarantee that the system is isolated

S2 − S1 = nR ln(V2/V1) > 0

That is, the final entropy exceeds the initial entropy. The gas has expanded but its temperature has not changed. The gas spreads out over a larger volume, resulting in further disorder. The predicted entropy increase is in agreement with the idea that entropy measures disorder. It all seems very straightforward, but is it? Let us retreat momentarily to the point where we prescribed that the container was insulated. We noted that the insulation guaranteed that dQ = 0 at each stage of the expansion. From

dS = dQ/T

it might seem that we should conclude that dS = 0 at each stage, and therefore that the entropy of the gas remains constant. But this is not so! The entropy does change. The expression dS = dQ/T does not apply to the free expansion, because the free expansion is an irreversible process. It cannot be reversed by any infinitesimal change in the surroundings, and as we said earlier, the relation dS = dQ/T holds only for reversible transformations.

The result

S2 − S1 = nR ln(V2/V1) > 0

obtained via Eq. 24.28 is valid here even though the process leading from state 1 to state 2 is irreversible. This is because entropy is a property of the thermodynamic state. The entropy change ( S2 – S1) is independent of the process that carries the system from one state to another.
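A minimal numerical version of the free-expansion argument (one mole assumed), using the end states alone:

```python
# Free expansion of 1 mol of ideal gas: dQ = 0 and dW = 0 throughout, so T is
# unchanged; yet Eq. (24.28) between the end states gives a positive dS,
# because dS = dQ/T applies only along reversible paths.
import math

nR = 8.314            # J/K for 1 mol
V1, V2 = 1.0, 2.0     # the gas doubles its volume into the vacuum
dS = nR * math.log(V2 / V1)   # ~ 5.76 J/K, despite dQ = 0
assert dS > 0
```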

Questions

5.

(a) Give an example of an irreversible process that occurs spontaneously in nature (other than the ones mentioned in the text). (b) Indicate how the process you chose increases disorder.

6.

A piece of steel absorbs 10 kJ of heat and undergoes a 10-C° increase in temperature. It then returns to its initial state by ejecting 10 kJ of heat. (a) What is the change in entropy for the overall process? (b) Is the process necessarily reversible?

URL: https://www.sciencedirect.com/science/article/pii/B9780120598588500297

The First and Second Laws of Thermodynamics

J. Bevan Ott , Juliana Boerio-Goates , in Chemical Thermodynamics: Principles and Applications, 2000

2.2c The Second Law Expressed in Terms of an Entropy Change

Equation (2.38) relates an entropy change to the flow of an infinitesimal quantity of heat in a reversible process. Earlier in this chapter, we showed that the flow of work δw is a minimum for the reversible process. Since w and q are related through the first law expression

dU = δq + δw

or

ΔU = q + w,

and dU or ΔU is independent of the path, q or δq must be a maximum for the reversible process. The conclusion that we reach when we compare a spontaneous or naturally occurring process with a reversible process is that the heat flows are related by

δq_nat < δq_rev.

When this relationship is substituted into equation (2.38), we get

(2.39) dS > δq_nat/T

We can generalize by combining equations (2.38) and (2.39) to obtain

(2.40) dS ≥ δq/T.

In equation (2.40), the equality applies to the reversible process and the inequality to the spontaneous or natural process.

For an adiabatic process, δq = 0, and equation (2.40) becomes

dS ≥ 0 (adiabatic process).

Integration gives

ΔS ≥ 0 (adiabatic process).

The universe is the pre-eminent example of an adiabatic process. That is, heat cannot flow in or out of the universe. The result is that the above equations apply to the universe and we can write

(2.41) ΔS ≥ 0 (universe).

Today, the Second Law, as applied to chemical systems, is firmly associated with the concept of entropy as expressed in the 1865 statement of Clausius and given mathematically by equation (2.41).

In summary, we have seen how the introduction of the idealized Carnot engine leads to the definition of the thermodynamic temperature, an equation for calculating an entropy change from the flow of heat in a reversible process, and to the mathematical formulation of the Second Law in terms of entropy changes. Furthermore, in the next chapter we will apply the Carnot cycle while using an ideal gas as the working fluid to show that the thermodynamic (Kelvin) temperature scale and the ideal gas (Absolute) temperature scale are the same. However, it is possible to obtain equations (2.38) and (2.41) and show the equivalence of the thermodynamic and Absolute temperature scales without relying upon such idealized devices. This approach, which relies upon a fundamental mathematical theorem known as the Carathéodory theorem, is a most intellectually stimulating and satisfying exercise that leads to a much deeper understanding of the Second Law and the significance of entropy. We now present this derivation and encourage the reader to follow it through, and in the process, gain a deeper appreciation for the Second Law of Thermodynamics.

URL: https://www.sciencedirect.com/science/article/pii/B9780125309905500031

Macroscopic Thermodynamics

Nils Dalarsson , ... Leonardo Golubović , in Introductory Statistical Thermodynamics, 2011

Solution

Let us first recall the result for the total entropy change (9.118) generalized to an arbitrary number of n steps, i.e.,

(9.121) ΔS(n) = mc_w Σ_{j=0}^{n−1} [ln(T_{j+1}/T_j) + T_j/T_{j+1} − 1].

As the number of steps becomes very large, the ratio Tj +1/Tj approaches unity, and we can use the mathematical approximation

(9.122) ln(1 + x) ≈ x, x → 0,

to obtain

(9.123) ln(T_{j+1}/T_j) = −ln(T_j/T_{j+1}) = −ln[1 + (T_j/T_{j+1} − 1)] ≈ −(T_j/T_{j+1} − 1) = −T_j/T_{j+1} + 1.

Substituting (9.123) into (9.121), we obtain

(9.124) lim_{n→∞} ΔS(n) = 0.

Thus, the entropy change for the entire system tends to zero, when the heating process is divided into a very large number of very small heating steps.
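This limit is easy to check numerically. The sketch below (with assumed temperatures and mc_w = 1) evaluates Eq. (9.121) for heating from 300 K to 600 K in n equal temperature steps:

```python
# Total entropy change of Eq. (9.121) for an n-step quasi-static heating.
import math

def dS_total(n, T_i=300.0, T_f=600.0, mcw=1.0):
    # Temperatures of the n+1 intermediate reservoirs, equally spaced.
    Ts = [T_i + (T_f - T_i) * j / n for j in range(n + 1)]
    return mcw * sum(math.log(Ts[j + 1] / Ts[j]) + Ts[j] / Ts[j + 1] - 1
                     for j in range(n))

one_step, many_steps = dS_total(1), dS_total(10000)   # many_steps is far smaller
```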

URL: https://www.sciencedirect.com/science/article/pii/B9780123849564000094

Magnetic Properties of Perovskite Manganites and Their Modifications

V. Markovich , ... H. Szymczak , in Handbook of Magnetic Materials, 2014

2.3.1 (La,Ca)MnO3

It was shown (Guo et al., 1997) that a magnetic entropy change larger than that of gadolinium is observed in polycrystalline La0.7Ca0.3MnO3 (or La2/3Ca1/3MnO3) manganites. This large magnetic entropy change is due to the abrupt reduction of magnetization and is related to a sharp volume change at T_C. Another mechanism describing the large magnetic entropy change in this group of manganites has an extrinsic character related to the method of sample preparation (Bebenin, 2011; Ulyanov et al., 2008; Szymczak et al., 2008, 2010a). This suggestion was confirmed (Park et al., 2011) by Mössbauer effect studies of La0.8Ca0.2MnO3 doped with Fe.

For this group of manganites, it is very difficult to determine whether the phase transition is of first or second order. Conventional magnetization experiments are not conclusive in this case. For example, measurements performed on La2/3Ca1/3MnO3 manganites using both thermal expansion and magnetic susceptibility methods indicate a second-order phase transition (Zhao et al., 1997). Experiments utilizing heat capacity and thermal expansion data simultaneously led to the same conclusion (Souza et al., 2005). But using the same experimental methods, Gordon et al. (2001) concluded that in La0.65Ca0.35MnO3 manganite, the ferromagnetic ordering is a thermodynamic first-order transition, broadened by a distribution in T_C. This result is in agreement with magnetization and specific heat data showing (Kim et al., 2002) that in La1−xCaxMnO3 a tricritical point exists at x = 0.4 that separates first-order (x < 0.4) from second-order (x > 0.4) transitions. Heffner et al. (1996) used a nonstandard experimental technique to understand the origin of the phase transition in La0.67Ca0.33MnO3. According to their zero-field muon spin relaxation and resistivity experiments, this phase transition is of second order. At the same time, unusual relaxational dynamics suggests the existence of a kind of unconventional glassy state in these manganites. To resolve the controversy surrounding the nature of the paramagnetic–ferromagnetic phase transition in La0.7Ca0.3MnO3, Loudon and Midgley (2006) used transmission electron microscopy. Figure 1.9 presents the nucleation and growth of the ferromagnetic phase in the paramagnetic one. The sample was cooled through its Curie temperature, and phase domains formed first at a grain boundary (running between the arrows in Fig. 1.9a) and then spread into the bulk of the sample. The observation of the coexistence of ferromagnetic and paramagnetic phases undoubtedly indicates a primarily first-order transition.
However, there is also a continuous loss of magnetization that precedes the phase transition. Another way to increase the magnetocaloric effect in La0.7Ca0.3MnO3 is to exploit the combined magnetic entropy change due to both magnetic field and hydrostatic pressure. This idea was confirmed by the first measurement of the barocaloric effect in La0.7Ca0.3MnO3 (Szymczak et al., 2010b).

Figure 1.9. Fresnel images taken from video as the specimen was cooled through its Curie temperature at a rate of 2   K/min starting at 243   K: (a) 0   s, (b) 7   s, (c) 13   s, (d) 20   s,(e) 27   s, (f) 33   s, (g) 40   s, (h) 47   s. A grain boundary runs between the arrows in (a). Magnetic domain walls appear as bright and dark lines.

Courtesy of Loudon and Midgley (2006), reproduced with permission from American Physical Society.

URL: https://www.sciencedirect.com/science/article/pii/B9780444632913000015

Open Systems

Robert F. Sekerka , in Thermal Physics, 2015

5.7 Entropy of Chemical Reaction

Before leaving this chapter, we show how the formalism developed for open systems can be used to treat chemically closed systems in which the mole numbers can vary by means of chemical reactions. Then we proceed to calculate the entropy due to a chemical reaction. See Chapter 12 for a more complete treatment of chemical reactions that includes heats of reaction and detailed conditions for equilibrium.

We begin with Eq. (5.10) and write

(5.120) d N i = d int N i + d ext N i ,

where dext N i denotes changes in N i due to exchanges of chemical species with the external environment and dint N i denotes changes due to chemical reactions internal to the system. For simplicity, we treat only one chemical reaction, which we write in the symbolic form

(5.121) Σ_i ν_i A_i = 0,

where A i is the symbol (such as C, CO, CO2, H, H2, etc.) of the chemical species i and ν i is its stoichiometric coefficient in the reaction. We regard ν i to be negative for reactants and positive for products. For example, reaction of carbon and oxygen to form carbon monoxide, namely

(5.122) C + (1/2) O2 → CO

could be written in the form of Eq. (5.121) with A 1 = C, A 2 =O2, A 3 = CO and ν 1 = −1, ν 2 = −1/2, ν 3 = 1. We can therefore write

(5.123) d int N i = ν i d Ñ ,

where Ñ is a progress variable that represents the extent to which the reaction has taken place. Equation (5.10) therefore becomes

(5.124) dU = T dS − p dV + Σ_{i=1}^{κ} μ_i ν_i dÑ + Σ_{i=1}^{κ} μ_i d_ext N_i.

A special case of Eq. (5.124) is a chemically closed system for which dext N i = 0, in which case it becomes

(5.125) dU = T dS − p dV + Σ_{i=1}^{κ} μ_i ν_i dÑ.

Equation (5.125) replaces Eq. (3.47) when there is a chemical reaction. Combining Eq. (5.125) with the first law d U = δ Q δ W and eliminating dU, we obtain

(5.126) δQ/T + (p dV − δW)/T − Σ_{i=1}^{κ} (μ_i ν_i/T) dÑ = dS.

Subtracting δQ/T s from both sides of Eq. (5.126) and applying the second law in the form of Eq. (3.4), we obtain

(5.127) δQ (1/T − 1/T_s) + (p dV − δW)/T − Σ_{i=1}^{κ} (μ_i ν_i/T) dÑ = dS − δQ/T_s ≥ 0,

where the inequality holds for natural irreversible processes and the equal sign holds for an idealized reversible process. Comparison with Eq. (3.52) reveals an additional term that can represent irreversible entropy production due to chemical reaction.
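The bookkeeping of Eqs. (5.121)–(5.123) is easy to sketch in code; the initial mole numbers below are illustrative assumptions, not data from the text:

```python
# Mole-number changes driven by the progress variable N~ (Eq. (5.123))
# for the reaction C + (1/2) O2 -> CO of Eq. (5.122).
nu = {"C": -1.0, "O2": -0.5, "CO": 1.0}   # stoichiometric coefficients
N = {"C": 2.0, "O2": 2.0, "CO": 0.0}      # illustrative initial moles (assumed)

dN_tilde = 0.5                            # the reaction advances by 0.5 mol
for species in N:
    N[species] += nu[species] * dN_tilde  # d_int N_i = nu_i dN~

# After the step: C = 1.5, O2 = 1.75, CO = 0.5
```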

If only quasistatic work is done, so that δW = p dV, and T = T_s, so there is no entropy production due to irreversible heat transfer, Eq. (5.127) becomes

(5.128) dS = δQ/T − Σ_{i=1}^{κ} (μ_i ν_i/T) dÑ ≥ δQ/T.

For a reversible process, the equal sign holds in Eq. (5.128) and Eq. (3.6) also holds, so dS = δQ/T, which would require the second term on the right-hand side of Eq. (5.128) to vanish. For dÑ ≠ 0, this would require Σ_{i=1}^{κ} μ_i ν_i = 0, which turns out to be the condition that the reaction is in equilibrium. For an irreversible process, the inequality sign in Eq. (5.128) holds, so

(5.129) dS − δQ/T = −Σ_{i=1}^{κ} (μ_i ν_i/T) dÑ > 0,

which represents entropy production due to an irreversible chemical reaction. In that case, Eq. (3.6) would no longer hold. Such a reaction will continue until equilibrium is reached or until at least one of the reactants in the system is used up, at which point dÑ = 0.
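The sign structure of Eq. (5.129) can be illustrated numerically. The chemical potentials below are hypothetical placeholder values, not tabulated data; the point is only that a spontaneous step (dÑ > 0) has Σ μ_i ν_i < 0 and produces entropy:

```python
# Illustrative check of Eq. (5.129) for C + (1/2) O2 -> CO.
T = 1000.0                                    # K (assumed)
mu = {"C": -15e3, "O2": -50e3, "CO": -120e3}  # J/mol, hypothetical values
nu = {"C": -1.0, "O2": -0.5, "CO": 1.0}       # stoichiometric coefficients
dN_tilde = 1e-3                               # small forward step, mol

affinity_term = sum(mu[s] * nu[s] for s in nu)   # sum of mu_i nu_i
dS_int = -affinity_term / T * dN_tilde           # entropy produced internally
assert affinity_term < 0 and dS_int > 0          # spontaneous forward reaction
```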

In their book Modern Thermodynamics, Kondepudi and Prigogine [16] break the entropy change dS into external and internal parts by writing dS = d_ext S + d_int S, where d_ext S = δQ/T and d_int S ≥ 0. The inequality applies to a natural irreversible process and the equality applies to an idealized reversible process. This leads to

(5.130) d_int S = −Σ_{i=1}^{κ} (μ_i ν_i/T) dÑ ≥ 0.

This interpretation is consistent with our more general Eqs. (5.127) and (5.128) in the special case of T s = T (no irreversible heat flow) and no irreversible work.

For a cyclic process,

(5.131) 0 = ∮ dS = ∮ d_ext S + ∮ d_int S,

which requires

(5.132) ∮ d_ext S = ∮ δQ/T = −∮ d_int S ≤ 0.

Equation (5.132) is in agreement with Eq. (3.15) for a cyclic process during which T = T r . When Eq. (5.130) holds, we also have

(5.133) ∮ d_int S = −∮ Σ_{i=1}^{κ} (μ_i ν_i/T) dÑ ≥ 0.

Since S depends on U, V, and Ñ for this system, these quantities must return to their original values for a cyclic process. This means that any chemical reaction that takes place during part of a cycle must be reversed during another part of the cycle. If the inequality holds in Eq. (5.133), the chemical reaction is irreversible and entropy is produced; this requires heat to be exchanged with the system in such a way that Eq. (5.132) holds, so an equal amount of entropy is extracted from the system.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128033043000053

Protein Folding

Maurice Eftink , Susan Pedigo , in Encyclopedia of Physical Science and Technology (Third Edition), 2003

IV.B Basic Thermodynamic Relationships

Table I gives some widely accepted relationships for describing the variation of ΔG°_un for a two-state N ⇌ U transition with temperature, chemical denaturant, pH, or pressure as the perturbation. One of the equations in Table I, when combined with those above and Eqs. (1–3), can be used to describe data as a function of the denaturing condition. The thermodynamic parameters appearing in the relationships in Table I are briefly described below.

TABLE I. Relationships Describing Two-State Transitions in Proteins

Temperature

(7a) ΔG_un(T) = ΔH°_un − T·ΔS°_un

(7b) ΔG_un(T) = ΔH°_o,un + ΔC_p(T − T_o) − T[ΔS°_o,un + ΔC_p·ln(T/T_o)]

where
  ΔH°_o,un is the enthalpy change at T = T_o.
  ΔS°_o,un is the entropy change at T = T_o.
  ΔC_p is the change in heat capacity upon unfolding.
Chemical Denaturants

(8) ΔG_un([d]) = ΔG°_o,un − m[d]

(linear extrapolation model)
where
  ΔG°_o,un is the free energy change in the absence of denaturant d.
  m = −δΔG_un/δ[d].
pH

(9) ΔG_un(pH) = ΔG°_o,un − RT·ln{[1 + ([H+]/K_a,U)]^n / [1 + ([H+]/K_a,N)]^n}

where
  ΔG°_o,un is the free energy change at neutral pH.
  K_a,U is the acid dissociation constant of a residue in the unfolded state.
  K_a,N is the acid dissociation constant of a residue in the native state.
Pressure

(10) ΔG_un(P) = ΔG°_o,un − ΔV_un(P_o − P)

where
  ΔV_un = V_U − V_N, the volume change for the N ⇌ U transition.
  P_o = reference pressure.

For a two-state transition, A ⇌ B (or N ⇌ U for the unfolding of a native, N, to an unfolded, U, state of a protein), the mole fractions of the N and U states are given as X_N = 1/Q and X_U = exp(−ΔG_un/RT)/Q, where Q = 1 + exp(−ΔG_un/RT) and the function for ΔG_un is taken from one of the relationships above. The average fluorescence signal is then F_calc = Σ_i X_i(F_i + x·δF_i/δx), where x is a generalized perturbant.
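A minimal sketch of these two-state relations, combining the partition function Q with the linear extrapolation model of Eq. (8). All parameter values (ΔG°, m, the relative signals F_N and F_U) are hypothetical:

```python
import math

# Two-state N <-> U populations vs denaturant, with
# dG_un = dG0 - m*[d] (linear extrapolation model, Eq. (8)).
# Parameter values below are invented for illustration.
R = 8.314            # J/(K mol)
T = 293.15           # K
dG0 = 20000.0        # J/mol, stability in the absence of denaturant
m = 5000.0           # J/(mol M), denaturant susceptibility
F_N, F_U = 1.0, 0.3  # assumed relative signals of the N and U states

def fractions(d):
    dG = dG0 - m * d
    Q = 1.0 + math.exp(-dG / (R * T))   # two-state partition function
    X_U = math.exp(-dG / (R * T)) / Q
    return 1.0 - X_U, X_U               # X_N, X_U

# signal-weighted average over the states, F_calc = sum_i X_i F_i
for d in (0.0, 4.0, 8.0):
    X_N, X_U = fractions(d)
    F_calc = X_N * F_N + X_U * F_U
    print(f"[d]={d:3.1f} M  X_U={X_U:.3f}  F={F_calc:.3f}")
```

With these numbers the transition midpoint falls at [d] = ΔG°/m = 4 M, where X_N = X_U = 0.5.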

1.

Thermal unfolding: ΔH°_un and ΔS°_un are the enthalpy and entropy changes for a two-state unfolding reaction. Both ΔH°_un and ΔS°_un may be temperature dependent when the heat capacity change, ΔC_p, has a nonzero value. In this case, Eq. (7b) in Table I (the Gibbs-Helmholtz equation) should be used, where ΔH°_o,un and ΔS°_o,un are values at some defined reference temperature, T_o (e.g., 0 or 20 °C). 6,7 The heat capacity change for unfolding of proteins is typically found to be positive and to be related to the increase in solvent exposure of apolar side chains upon unfolding; that is, a positive ΔC_p is a result of the hydrophobic effect. A consequence is that ΔG°_un(T) for unfolding of a protein will have a parabolic dependence on temperature and will show both high-temperature and low-temperature induced unfolding. 8
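The parabolic stability curve implied by Eq. (7b) can be made concrete with a short sketch; the protein parameters below are invented for illustration:

```python
import math

# Stability curve from the Gibbs-Helmholtz form, Eq. (7b):
#   dG_un(T) = dH0 + dCp*(T - T0) - T*(dS0 + dCp*ln(T/T0))
# The protein parameters below are hypothetical.
T0 = 293.15      # K, reference temperature
dH0 = 100e3      # J/mol, enthalpy change at T0
dS0 = 300.0      # J/(K mol), entropy change at T0
dCp = 8000.0     # J/(K mol), positive for protein unfolding

def dG_un(T):
    return dH0 + dCp * (T - T0) - T * (dS0 + dCp * math.log(T / T0))

# A positive dCp makes dG_un(T) parabolic: the protein is maximally
# stable at an intermediate temperature and dG_un crosses zero at both
# a cold- and a heat-denaturation temperature.
for T in (250.0, 282.0, 320.0):
    print(T, round(dG_un(T)))
```

For these numbers ΔG_un is positive near 282 K but negative at both 250 K and 320 K, i.e., the model shows cold as well as heat denaturation.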

2.

Denaturant-induced unfolding: The empirical relationship in Table I for chemical denaturation includes ΔG°_o,un, the free energy change for unfolding in the absence of denaturant, and m, the denaturant susceptibility parameter (= −δΔG_un/δ[d]), where [d] is the molar concentration of added chemical denaturant. 10 Though empirical, this equation appears to adequately describe the pattern of denaturant-induced unfolding of a number of proteins. The ΔG°_o,un value is a direct measure of the stability of a protein at the ambient solvent conditions, which can be moderate temperature and pH (e.g., 20 °C and pH 7). The m value also provides structural insight, as m values have been suggested to correlate with the change in solvent-accessible apolar surface area upon unfolding of a protein. 11 For example, a relatively large m value (i.e., a high susceptibility of the unfolding reaction to denaturant concentration) indicates a large change in the exposure of apolar side chains on unfolding, as might be the case for a protein with an extensive core of apolar side chains that become exposed upon denaturation.

3.

Acid-induced unfolding: The relationship for acid-induced unfolding assumes that there are n equivalent acid dissociating groups on a protein that all have the same pK a, U in the unfolded state and that they are all perturbed to have a pK a, N in the N state. If the pK a, N is more than 2   pH units lower than pK a, U , then the equation simplifies with the denominator of the right term going to unity. The simplest relationship for acid-induced unfolding includes ΔG o o, un , the free energy of unfolding at neutral pH; n, the number of perturbed acid dissociating residues; and their pK a, U in the unfolded state. Presumably, n should be an integer and pK a, U should be approximately equal to the values for such amino acids as glutamate, aspartate (e.g., pK a, U should be about 4 to 4.3) or histidine (e.g., pK a, U should be around 6.5).
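Equation (9) can likewise be evaluated numerically; the sketch below uses invented values for ΔG°, n, pK_a,U, and pK_a,N:

```python
import math

# pH dependence of stability, Eq. (9), with n equivalent titrating
# groups. All parameter values are hypothetical.
R, T = 8.314, 293.15      # J/(K mol), K
dG0 = 20e3                # J/mol at neutral pH
n = 3                     # number of perturbed residues
pKa_U, pKa_N = 4.0, 2.5   # unfolded / native pKa values (assumed)

def dG_un(pH):
    H = 10.0 ** (-pH)
    Ka_U, Ka_N = 10.0 ** (-pKa_U), 10.0 ** (-pKa_N)
    return dG0 - R * T * math.log(
        (1.0 + H / Ka_U) ** n / (1.0 + H / Ka_N) ** n)

print(dG_un(7.0))   # ~dG0: the groups are deprotonated in both states
print(dG_un(2.0))   # strongly reduced: protonation favors the U state
```

At neutral pH the correction term vanishes and ΔG_un ≈ ΔG°_o,un; near pH 2 the protonation term overwhelms ΔG° and the model protein unfolds.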

4.

Pressure-induced unfolding: In the relationship for pressure-induced unfolding of proteins, ΔG°_o,un is again the value of the free energy change at a reference pressure of 1 atmosphere, and ΔV_un = V_U − V_N is the difference in volume between the unfolded and native states. Pressure-induced unfolding studies require a specialized high-pressure cell. 12,13

5.

Dissociation/unfolding of oligomeric proteins: Oligomeric proteins are interesting as models for understanding intermolecular protein-protein interactions. A general question for oligomeric proteins, including the simplest dimeric (D) proteins, is whether the protein unfolds in a two-state manner, D    2 U, or whether there is an intermediate state, which might be either an altered dimeric state, D′, or a folded (or partially folded) monomer species, M. Models for these two situations are as follows:

(11a) D ⇌ D′ ⇌ 2U

(11b) D ⇌ 2M ⇌ 2U

For a D ⇌ 2U model, the relationships between the observed spectroscopic signal, S_exp; the mole fractions of dimer, X_D, and unfolded monomer, X_U; and the unfolding equilibrium constant (K_un = [U]²/[D]) are given by Eq. (5) and

(12) X_U = [(K_un² + 8K_un[P]0)^{1/2} − K_un]/(4[P]0);  X_D = 1 − X_U

where [P]0 is the total protein concentration (expressed as monomeric form), where S i is the relative signal of species i and where K un will depend on the perturbant as given by one of the above equations. That is, the transition should depend on the total subunit concentration, [P]0, and on any other perturbation axis.
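The mole-fraction expression of Eq. (12) can be checked numerically: solving the quadratic and substituting back should recover K_un = [U]²/[D]. The K_un and [P]0 values below are arbitrary:

```python
# Unfolded-monomer fraction for D <-> 2U, Eq. (12); values assumed.
def X_U(K_un, P0):
    # positive root of 2*P0*X^2 + K_un*X - K_un = 0
    return ((K_un ** 2 + 8.0 * K_un * P0) ** 0.5 - K_un) / (4.0 * P0)

K_un, P0 = 1e-6, 1e-5        # M (hypothetical values)
x = X_U(K_un, P0)
U = x * P0                   # monomer concentration
D = (1.0 - x) * P0 / 2.0     # dimer concentration ([P]0 counts monomers)
print(x, U * U / D)          # second number recovers K_un
```

Note that X_U grows on dilution (smaller [P]0), reflecting the concentration dependence of the D ⇌ 2U transition mentioned above.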

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B0122274105006141

Statistical Mechanics

W.A. Wassam Jr. , in Encyclopedia of Physical Science and Technology (Third Edition), 2003

V.C.3 Onsager's Linear Phenomenological Theory

As indicated earlier, linear nonequilibrium thermodynamics is based on the following postulates: (i) A Gibbsian expression for the entropy change dS(t) is valid for systems out of equilibrium. (ii) The entropy production is given by a bilinear form in the driving forces and flows. (iii) The flows can be expressed as linear combinations of the driving forces. (iv) The phenomenological coefficients satisfy Onsager's reciprocity relations.

In the proof of the reciprocity relations L jk   = L kj , Onsager wrote the linear phenomenological equations for the case of observables with a discrete index in the form

(337) dŌ_j(t + Δt | O′)/dt = ∑_k L_jk ∂S(O′)/∂O′_k,

where dŌ_j(t + Δt | O′)/dt is the phenomenological time derivative

(338) dŌ_j(t + Δt | O′)/dt = [Ō_j(t + Δt | O′) − Ō_j(t | O′)]/Δt,

with Ō_j(t + Δt | O′) representing the average value of the observable O_j at time t + Δt, given that the set of observables O possessed the values O′ at time t.

To proceed further, Onsager did not take the Gibbsian path. Instead, he adopted Boltzmann's definition of entropy and Einstein's theory of fluctuations. Nonetheless, Onsager was led to the following expression for the phenomenological coefficient L jk :

(339) L_jk = (1/k_BΔt)[⟨O_j(t + Δt)O_k(t)⟩ − ⟨O_j(t)O_k(t)⟩],

where the quantity ⟨O_j(t + Δt)O_k(t)⟩ is an average intended to represent the correlation between the events O_j(t + Δt) and O_k(t). In defining this average, Onsager did not take the Gibbsian path; instead, he adopted Boltzmann's definition of entropy and Einstein's theory of fluctuations, and he made an assumption reminiscent of a basic assumption of Brownian motion theory, namely that stochastic averaging is equivalent to ensemble averaging. Making use of this assumption and of arguments based on the time-reversal invariance of the microscopic equations of motion, Onsager concluded that Eq. (339) implies L_jk = L_kj.
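Onsager's conclusion can be illustrated with a toy calculation. For a two-variable Ornstein-Uhlenbeck model obeying detailed balance (a symmetric relaxation matrix, which is an assumption of this sketch, with invented numbers and k_B set to 1), coefficients built from time-lagged correlations as in Eq. (339) come out symmetric:

```python
import numpy as np

# Toy check of Onsager reciprocity, Eq. (339), for the linear model
# dx = -A x dt + dW with a symmetric relaxation matrix A (detailed
# balance). All numbers are invented; units with k_B = 1.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
dt = 0.01
# stationary covariance C solves A C + C A^T = 1; for symmetric A,
# C = A^{-1} / 2
C = np.linalg.inv(A) / 2.0
# time-lagged correlation matrix <x(t+dt) x(t)^T> = exp(-A dt) C,
# built here from the eigendecomposition of A
w, V = np.linalg.eigh(A)
expA = V @ np.diag(np.exp(-w * dt)) @ V.T
corr = expA @ C
kB = 1.0
L = (corr - C) / (kB * dt)   # discrete-difference form of Eq. (339)
print(np.allclose(L, L.T))   # reciprocity: L_jk = L_kj
```

Both exp(−AΔt) and C are functions of the same symmetric A, so they commute and the resulting L is symmetric, which is the content of the reciprocity relations under detailed balance.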

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B0122274105007274

Magnetocaloric effect in transition metal-based compounds

Luana Caron , in Handbook of Magnetic Materials, 2020

2 Thermodynamics of the magnetocaloric effect

The magnetocaloric effect is characterized by two quantities which can be inferred in a very straightforward manner from its definition: the isothermal entropy change and the adiabatic temperature change. In order to understand how a change in applied magnetic field gives rise to these changes it is easier to think in terms of order and disorder of the system, i.e., its entropy. In a magnetic material there are three main sources of entropy: the lattice, the magnetic moments and the electrons.

S_total(T, H) = S_mag(T, H) + S_lattice(T) + S_electronic(T)

where S is the entropy, T is the temperature, and H is the applied magnetic field.

Consider a magnetic material where the moments are disordered and that is held under adiabatic conditions. If a magnetic field large enough to align these moments is applied, this field will then decrease the magnetic contribution to the total entropy. Because the system is held under adiabatic conditions it cannot exchange heat with its surroundings (or mass for that matter). This means that the total entropy will have to remain constant which in turn requires the two other contributions (lattice and electronic) to compensate the change in magnetic entropy. Since the electronic contribution to the total entropy is (usually) much smaller than the other two, the lattice entropy will have to increase as will the temperature of the system.

The isothermal and isobaric entropy change can be derived by considering the Gibbs potential or free energy and the first law of thermodynamics. The Gibbs free energy is given by:

(1) G = U − TS + pV − MH

where U is the internal energy, p the pressure, V the volume and M the magnetization. The internal energy U = U(S, V, M) is a function of the extensive variables entropy S, volume V and magnetization M, while its derivative is given by the first law of thermodynamics:

(2) dU = T dS − p dV + H dM

where HdM is the work done on the material by the applied magnetic field H in order to change its magnetization M (see Chapter 2 in Coey, 2010) and temperature T, pressure p and magnetic field H are the intensive variables. From the first order derivative of the Gibbs free energy with respect to the intensive variables we obtain the extensive variables, which are the quantities either measured (V and M) or calculated (S) from experiments. The second order mixed derivatives of the Gibbs free energy yield the Maxwell relations, from which the most interesting is the one which relates the entropy change to the change in magnetization:

(3) (∂S/∂H)_{T,p} = (∂M/∂T)_{H,p}

Using this Maxwell relation, the entropy change due to the application of a magnetic field in an isothermal and isobaric process can be obtained:

(4) ΔS_{T,p}(T) = ∫_{H1}^{H2} (∂M/∂T)_{H,p} dH

From this equation it is clear that the maximum value of the entropy change will be around the temperature where ∂M/∂T has a maximum, i.e., the transition temperature (Fig. 1). Also, this equation in principle gives a simple way (in terms of experimental setup) to determine the entropy change from magnetization measurements.
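A sketch of how Eq. (4) is used in practice, with a phenomenological M(T, H) standing in for measured isofield magnetization curves (all parameters invented, not a material model):

```python
import numpy as np

# Numerical version of Eq. (4): dS(T) = integral over H of (dM/dT)_H.
# M(T, H) is a smooth FM-PM-like stand-in for measured data.
T = np.linspace(250.0, 350.0, 201)     # K
H = np.linspace(0.0, 2.0, 41)          # applied field, H1 = 0 to H2 = 2
Tc, w, a, M0 = 300.0, 5.0, 4.0, 100.0  # assumed transition parameters
TT, HH = np.meshgrid(T, H, indexing="ij")
M = M0 / (1.0 + np.exp((TT - Tc - a * HH) / w))  # field shifts the transition up

dMdT = np.gradient(M, T, axis=0)       # (dM/dT)_H on the grid
dH = H[1] - H[0]
# trapezoidal integration over H (kept manual for portability)
dS = (dMdT[:, :-1] + dMdT[:, 1:]).sum(axis=1) * dH / 2.0
print(T[np.argmin(dS)])                # |dS| peaks near the transition
```

Since ∂M/∂T is negative for a ferromagnet, ΔS comes out negative everywhere, with its largest magnitude near the transition temperature, as stated above.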


Fig. 1. Sketch representing the (A) temperature dependent magnetization at different magnetic fields for a FM-PM (second order) phase transition and (B) the total entropy under different applied fields indicating the isothermal magnetic entropy change and the adiabatic temperature change. Notice that ΔS M (T) and ΔT ad (T) have opposite signs.

The other experimental method used to determine the entropy change consists in measuring the specific heat at constant pressure C p , which is given by:

(5) C_p = (δQ/dT)_p

where δQ is the heat absorbed by the system. From the second law of thermodynamics we have:

(6) dS = δQ/T

which combined give:

(7) C_p = T(dS/dT)_p

Under isothermal and isobaric conditions:

(8) dS(T)_{H,p} = [C(T)_{H,p}/T] dT

(9) S(T)_{H,p} = ∫_0^T [C(T′)_{H,p}/T′] dT′ + S(0)_{H,p}

And, finally the entropy change is given by:

(10) ΔS_M(T)_{ΔH,p} = ∫_0^T {[C(T′)_{H2} − C(T′)_{H1}]_p/T′} dT′

where we assume that S(0) H,p is the same for the fields H 1 and H 2.
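A numerical sketch of Eq. (10), using schematic heat-capacity curves in place of measured data (a smooth background plus a transition anomaly that shifts up in temperature under field, with invented parameters):

```python
import numpy as np

# Numerical version of Eq. (10): integrate [C_H2(T') - C_H1(T')]/T'
# from the lowest measured temperature upward. The C(T) curves are
# schematic, not measurements.
T = np.linspace(1.0, 400.0, 800)   # K; start above 0 K so 1/T' stays finite
def C(T, Tpk):                     # J/(kg K), illustrative shape
    return 400.0 * T / (T + 100.0) + 50.0 * np.exp(-((T - Tpk) / 8.0) ** 2)

integrand = (C(T, 308.0) - C(T, 300.0)) / T   # anomaly sits 8 K higher at H2
dT = T[1] - T[0]
# cumulative trapezoidal integral, ΔS_M(T)
dS_M = np.concatenate(
    ([0.0], np.cumsum((integrand[:-1] + integrand[1:]) * dT / 2.0)))
print(T[np.argmin(dS_M)])          # ΔS_M dips most deeply near the transition
```

The resulting ΔS_M(T) is a negative dip centered between the two anomaly temperatures, the same qualitative shape obtained from magnetization data via Eq. (4).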

Having the entropy curves obtained from specific heat data, it is straightforward to calculate the adiabatic temperature change ΔT ad from the isentropic line connecting S(H 2, T) and S(H 1, T), as derived by Pecharsky and Gschneidner (1999):

(11) ΔT_ad(T) ≅ [T(S)_{H2} − T(S)_{H1}]_S

where, as for Eq. (10), it is assumed that the zero temperature entropies are the same for fields H 1 and H 2 and the error due to the unknown zero temperature entropy is neglected.
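The isentrope construction of Eq. (11) reduces to inverting and interpolating the S(T) curves. A sketch with schematic, monotonic entropy curves (assumed shapes, not data):

```python
import numpy as np

# Eq. (11): follow an isentrope between the S(T) curves at two fields.
# The entropy curves below are schematic; the field shifts the
# transition step up in temperature and lowers S near the transition.
T = np.linspace(250.0, 350.0, 1001)                 # K
S_H1 = 2.0 * T + 10.0 * np.tanh((T - 300.0) / 5.0)  # J/(kg K) at field H1
S_H2 = 2.0 * T + 10.0 * np.tanh((T - 308.0) / 5.0)  # at the higher field H2

# Invert T(S) at H2 by interpolation (S_H2 is monotonic in T), then
# evaluate the isentrope through each point of the H1 curve.
T_at_H2 = np.interp(S_H1, S_H2, T)
dT_ad = T_at_H2 - T
print(dT_ad.max())   # positive peak near the transition
```

As Fig. 1 indicates, ΔT_ad(T) obtained this way is positive where ΔS_M(T) is negative, peaking near the transition temperature.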

The adiabatic temperature change can also be obtained from a combination of specific heat and magnetization measurements:

(12) ΔT_ad = −∫_{H1}^{H2} (T/C_{H,p})(∂M/∂T)_{H,p} dH

which is obtained from the Maxwell relation and Eq. (7). Clearly, if the C_{H,p} curves are available, it is simpler to calculate the adiabatic temperature change using Eq. (11) directly. However, field-dependent heat-capacity measurement setups are still seldom found, and a common simplification is to assume that the specific heat is field independent. This allows the adiabatic temperature change to be calculated from zero-field C_p and magnetization measurements, two techniques far more widely available than field-dependent calorimetry.

The magnetocaloric effect can also be described from a microscopic view using statistical mechanics. A detailed description of the thermodynamics of the magnetocaloric effect using this approach can be found in the reference book by Tishin and Spichkin (2016).

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/S1567271920300032