Lecture 11: Fundamental equation, absolute S, third law

Topics covered: Fundamental equation, absolute S, third law

Instructor/speaker: Moungi Bawendi, Keith Nelson 

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR NELSON: All right, well, last time we finished our introduction to entropy, which is a difficult topic, but I hope we got to the point where we have some working understanding of what it physically represents for us, and also an ability to calculate it.

So at the end of the last lecture we went through a few examples where we calculated changes in entropy for simple processes, like heating and cooling something, or going through a phase transition, where the process is relatively simple because there's no temperature change.

While the ice is melting, for example, you're putting heat into it, but the temperature is staying at zero degrees Celsius. And we saw that to do these calculations, we need to define reversible paths. So it was extremely straightforward to calculate the entropy change of, say, ice melting at zero degrees Celsius, because there the process is reversible, since that's the melting temperature.

But if we wanted to calculate the change in entropy of ice melting once it had already been warmed to ten degrees above the melting point, to ten degrees Celsius, then in order to find a reversible path we had to say, OK, let's first cool it down to zero degrees Celsius. Then let's have it melt, and then let's warm the liquid back up to ten degrees Celsius. That way we could construct the sequence of reversible steps that would get from the same starting point to the same end point, and we could calculate the change in entropy through that sequence.

So now that we've got at least some experience doing calculations of delta S, and we're thinking a little bit about entropy, what I'd like to do is try to relate the state variables together in a useful way. And the immediate problem that I'd like to address is the fact that right now we have a kind of cumbersome expression for energy.

So we have u, and we look at du, right, it's dq plus dw, and I don't like those. They're path specific, and it would be nice to be able to calculate changes in energy in a way that doesn't depend on path. It's a state function, so in principle it sure ought to be possible. And yet so far, when we've actually gone through calculations of du, we've had to go and consider the path and get the heat and get the work and so forth. Then we found special cases, like an ideal gas, where the change is only a function of temperature, where we could write du as a function of state variables, but nothing general that really allows us to do the calculation under all circumstances.

So let's think about how to make this better. What I mean by that is you've seen some special examples a while back: the case where du was Cv dT minus Cv times the Joule coefficient, dV. But you still need to find those coefficients for each system. That isn't a general equation that tells us how energy changes in terms of only functions of state. What I'd really like is to be able to write du equals something, where that something can have T and p and whatever else I need. It can have S, H, and of course differentials of any of those quantities: dT or dp or dS, dH, you name it, but all state variables.
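
To put that spoken equation on the page, here is that earlier, system-specific form; η_J denotes the Joule coefficient, (∂T/∂V)_u:

```latex
% The earlier, system-specific expression for du:
du = C_V\,dT - C_V\,\eta_J\,dV,
\qquad
\eta_J \equiv \left(\frac{\partial T}{\partial V}\right)_u
```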

That's what I'd much rather have. Then, if I've got something like that and I want to find out how the energy changes as a function of volume, I can calculate du/dV with some selected variable held constant.

Right now, in a general sense, that's cumbersome. I've got to figure out how to do that for each particular case that I want to treat. So, let's see how we could construct such a thing.

Let's consider a reversible process. So I've got state one, and by some path I'll wind up at state two, and I'll write du is dq, which is reversible in this case, minus p dV. And from the second law, we know that we can write dq reversible as T dS; dq reversible over T is dS, the entropy. So we can write du is T dS minus p dV.

That's so important, we'll circle it with colored chalk. That's how important it is. It's a dramatic moment. So let's look at what we have here. Here's du, and over on this side we have T, we have S, we have p, and we have V. Suddenly and simply, it's only functions of state. Well, that was pretty easy. What that's telling us is that we can write u this way, and this is generally true. We got to this by considering a particular reversible process, but we know u is a state variable, right? So this result is going to be generally applicable.
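
In equation form, the derivation just described runs:

```latex
% First law along a reversible path, using dq_rev = T dS (second law)
% and dw_rev = -p dV (reversible work):
du = \delta q_{\mathrm{rev}} + \delta w_{\mathrm{rev}} = T\,dS - p\,dV
```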

And it tells us a couple of things, too. It tells us that in some sense the natural variables for u are these: it's a function of S and V. Those are natural variables in the sense that when u is written as a function of those variables, we only have state quantities on the right-hand side. Very, very valuable expression.

And of course, coming out of that, we can take derivatives, and at least for those particular variables, we can see that du/dS at constant V is T, and du/dV at constant S is minus p. All right, those fall right out. Now we can have a similar set of steps for H, for the enthalpy, so let's just look at that. So H, of course, is u plus pV, so dH is just du plus d(pV), and now there's our expression for du. We're going to use it this way. So it's T dS minus p dV plus p dV plus V dp, and of course those p dV terms are going to cancel. So we can write dH is T dS plus V dp. Also important enough that we'll highlight it a little bit.
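
Written out, the derivative relations and the enthalpy derivation are:

```latex
% Coefficients read off du = T dS - p dV, with u = u(S, V):
\left(\frac{\partial u}{\partial S}\right)_V = T,
\qquad
\left(\frac{\partial u}{\partial V}\right)_S = -p
% Enthalpy, H = u + pV:
dH = du + p\,dV + V\,dp = T\,dS + V\,dp
```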

So again, let's look at what we've got on the right-hand side here: T, S, V, p, only quantities that are functions of state. And of course we can take the corresponding derivatives. Let's also be explicit that this means we're writing H as a function of the variables S and p. And dH/dS at constant p is T, and dH/dp at constant S is V.

So now we've got a couple of really surprisingly simple expressions that we can use to describe u and H in terms of only state variables. All right? We also can go from these expressions to particularly useful expressions for the entropy as a function of temperature. So, from du is T dS minus p dV, we can rewrite this as dS is one over T du plus p over T dV.

And now we can go back to our writing of du as a function of T and V: du is Cv dT plus du/dV at constant T, dV. The reason we're doing that is that now we can rewrite dS as one over T times Cv dT plus a volume term, and that Cv dT piece is the only temperature dependence we're going to have.

The other part is going to be a function of volume: it's one over T times the quantity p plus du/dV at constant T, all multiplying dV. And what this says is that dS/dT at constant V is just Cv over T. Very useful, and not surprising because of the relation between heat at constant volume and Cv, since dS is just dq reversible over T. But this is telling us, in general, how the entropy changes with temperature at constant volume.

We can go through the exact same procedure with H to look at how entropy varies with temperature at constant pressure, and we'll get an exactly analogous set of steps. So dS/dT at constant pressure is Cp over T. OK?
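
The two temperature derivatives, side by side:

```latex
% How entropy changes with temperature, at constant V and at constant p:
\left(\frac{\partial S}{\partial T}\right)_V = \frac{C_V}{T},
\qquad
\left(\frac{\partial S}{\partial T}\right)_p = \frac{C_p}{T}
```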

Now I want to carry our discussion a little further and look at entropy more carefully, in particular how it varies with temperature. And here's what I really want to look at. We've talked about changes in u and changes in H, and we've done this under lots of circumstances at this point. And at various times I've emphasized, and I'm sure Professor Bawendi did too, that we can only define changes in those quantities. There's no absolute scale for energy or for enthalpy.

We can set the zero in a particular problem arbitrarily. So, for example, when we talked about thermochemistry, we defined heats of formation, and we said, well, the heat of formation of an element in its natural state at room temperature and pressure we'll call zero. We called it zero; if we had wanted to put some other number on it, some energy, we could have done that. We defined the zero.

So far, well, not just so far, that's always the way it will be for quantities like energy and enthalpy. Entropy is different, so let's just see how that works. Let's consider the entropy as a function of temperature and pressure. First we'll see how it varies with pressure. What we'll do is consider its variation with both pressure and temperature, and the objective is to say, all right, if I've got some substance at any arbitrary temperature and pressure, can I define and calculate an absolute number for the entropy? Not just a change in entropy, unlike the cases with delta u and delta H, but an absolute number that says, in absolute terms, the entropy of this substance at room temperature and pressure, or whatever temperature and pressure, is this amount.

For u or H, that's something I can only do by choosing a zero; here, I want to look for an absolute answer. All right, so let's start by looking at the pressure dependence. We're going to start with du is T dS minus p dV, so dS is du plus p dV, all over T.

Now let's look at T being constant. And now let's specify a little bit; I want to make this as tractable as possible, so let's go to an ideal gas. Then at constant temperature, du is equal to zero. So dS at constant temperature is just p over T, dV. And in the case of an ideal gas, that's nR dV over V.

And at constant temperature, that means that d(nRT), which is the same as d(pV), is equal to zero, but that's p dV plus V dp. So this says that the dV over V that I've got there is the same thing as minus dp over p. So I can write that dS at constant temperature is minus nR dp over p. That's great: it says that if I know the entropy at some particular pressure, I can calculate how it changes as a function of pressure.

If I know S at some standard pressure that we can define, then S at some arbitrary pressure is just S of p naught and T, minus the integral from p naught to p of nR dp over p. Which is to say, it's S of p naught and T minus nR log of p over p naught.
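
In symbols, the isothermal pressure dependence for the ideal gas is:

```latex
% Integrate dS = -nR dp/p at constant T from the standard pressure p° to p:
dS\big|_T = -nR\,\frac{dp}{p}
\quad\Longrightarrow\quad
S(p, T) = S(p^\circ, T) - nR \ln\frac{p}{p^\circ}
```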

Now, normally we'll define p naught as equal to one bar. And often you'll see this simply written as nR log p. I don't particularly like to do that, because then, formally speaking, we're writing something that has units inside the argument of a log. Of course it's understood, when you see that, and you're likely to see it in various places, that the quantity p is always supposed to be divided by one bar, and the units then are taken care of.

For one mole, we can write the molar quantity: S of p and T is S naught of T minus R log of p over p naught. All right, so that's our pressure dependence. What about that S naught of T? We still don't really have a formulation for calculating it, or defining it, or whatever we're going to do to allow us to know it.

Well, let's just consider the entropy as a function of temperature, starting all the way down at zero Kelvin and going up to whatever temperature we want to consider. Now, we certainly do know how to calculate delta S for all of that, because we've seen how to calculate delta S if you just heat something up, and we've seen how to calculate delta S when something undergoes a phase transition. Presumably, if we're starting at zero Kelvin, we're starting in a solid state. As we heat it up, depending on the material, it may melt at some temperature. If we keep heating it up, it'll boil at some temperature. But we know how to treat all of that, right?

So let's just consider something that undergoes that set of changes. We've got some substance A, a solid at zero Kelvin and one bar. Here's process one: it goes to A, a solid at the melting temperature and one bar. Process two: it turns into a liquid at the melting temperature and one bar. Process three: we heat it up some more, up to the boiling temperature at one bar. Process four: it evaporates, so now it's a gas at the boiling temperature and one bar. Finally, we heat it up some more, so now it's a gas at temperature T and one bar.

And if we wanted to, we can go further: we can make it a gas at temperature T and whatever pressure we want. That part we already know how to take care of. Well, let's look at what happens to S. The molar entropy at T and p, where we're going to finally end up, is S naught at zero Kelvin, where one bar is implied by the superscript.

And then we have delta S for step one, and delta S for step two, and so on. We can label the pressure change as step six, so we go all the way to delta S for step six. So we can do that: S of the material at T and p is S naught at zero Kelvin, plus, here's process one, we heat it up from zero Kelvin to the melting point: the integral of Cp of the solid, the molar heat capacity, divided by T, dT. We know how to calculate delta S for heating something up, right?

Plus, now we've got the heat of fusion to melt the stuff, so it's just delta H naught of fusion divided by Tm. We saw that last time. In other words, remember, we're just looking at q reversible over T to get delta S, and it's given by the heat of fusion. All right, then let's go from the melting point to the boiling point. So now it's Cp of the liquid, the molar heat capacity of the liquid, divided by T, dT. We're heating up the liquid.

And then there's vaporization: delta H of vaporization over Tb, the boiling point. Then we can go from the boiling point to our final temperature T; now it's the molar heat capacity of the gas over T, dT. And finally, minus R log of p over p naught. OK, so that's everything, and these are all things that we know how to do. Just about.
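
Collecting all six contributions into one expression:

```latex
% Absolute molar entropy assembled from the six steps (heat the solid,
% melt, heat the liquid, vaporize, heat the gas, change the pressure):
S(T, p) = S(0\,\mathrm{K})
 + \int_0^{T_m} \frac{C_p^{\mathrm{(s)}}}{T}\,dT
 + \frac{\Delta H_{\mathrm{fus}}}{T_m}
 + \int_{T_m}^{T_b} \frac{C_p^{\mathrm{(l)}}}{T}\,dT
 + \frac{\Delta H_{\mathrm{vap}}}{T_b}
 + \int_{T_b}^{T} \frac{C_p^{\mathrm{(g)}}}{T}\,dT
 - R \ln\frac{p}{p^\circ}
```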

OK, this first term we're going to have to think about, but all the changes we know how to calculate, right? So let's plot S as a function of temperature; I don't have pressure in here explicitly. Well, it's going to change as I warm up the solid, since we're really starting at zero Kelvin. This stuff is all positive, right, so the change in entropy is going to be positive. Entropy is going to increase as this happens, and then there's a jump right at some fixed temperature as the material melts.

So here is step one. Here is step two; this must be the melting temperature. And then there's another heating step. Well, strictly speaking, I'm going to run out of space here if I'm not careful, so I'm going to be a little more careful. One, two, three. I'm heating it up a little more; entropy is still increasing. So I've done this, I've done this, now I've heated up the liquid. Now I'm going to boil the liquid, so there's going to be some jump in entropy; this must be my boiling point. And then there's some further change in the gas, and that gets me to whatever final temperature I'm going to reach. Four and five, great.

So there is a monotonic increase in the entropy. OK, so we're there, except for this value. That one stinking little number: S naught at zero Kelvin. That's the only thing we don't know so far. So for this we need some additional input. We got input of the sort that we need in 1905 from Nernst. Nernst deduced that as you go down toward zero Kelvin, the change in entropy for any process gets smaller and smaller. It approaches zero.

Now, that actually was certainly an important advance, but it was superseded by such an important advance that I'm not even going to reward it by placing it on the blackboard. Forget highlight, color, forget it.

Because Planck, six years later, in 1911, deduced a stronger statement, which is extremely useful, and it's the following. What he showed is that as temperature approaches zero Kelvin, for a pure substance in its crystalline state, S is zero. A much stronger statement, right? Stronger than the idea that changes in S get very small as you approach zero Kelvin. He's saying we can make a statement about the absolute number: S goes to zero as temperature goes to zero, for a pure substance in its crystalline state.

So that is monumentally important. As T goes to zero Kelvin, S goes to zero for every pure substance in its, and I'll interject here, perfect crystalline state. That's really an amazing result. What it's saying is: I'm down at zero Kelvin, I've somehow cooled the material as much as I possibly could, and I've got my perfect crystal lattice. It could be an atomic crystal like this, or it could be molecules. But they're all exactly where they belong in their locations in the crystal, and the absolute entropy is something I can define, and it's zero.

So S equals zero for a perfect, pure crystal. OK, well, this comes out of a microscopic description of entropy that I briefly alluded to last lecture, and that we'll go into in more detail a few lectures hence. But the general result that I mentioned was that S is R over NA, Avogadro's number, times the log of omega, the number of microscopic states available to the system that I'm considering.

Now, normally, for a macroscopic system, I've got just an astronomical number of microscopic states. In a liquid, that could mean different little configurations of the molecules around each other; they're all different states. Huge numbers of possible states, and in a gas even more. But if I go to zero Kelvin and I've got a perfect crystal, every atom, every molecule is exactly in its place. How many possible different states is that? It's one. There aren't any more possible states.
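
The counting argument in symbols, where k_B = R/N_A is Boltzmann's constant:

```latex
% Statistical definition of entropy; a perfect crystal at 0 K has
% exactly one accessible microstate:
S = k_B \ln \Omega,
\qquad
\Omega = 1 \;\Longrightarrow\; S = k_B \ln 1 = 0
```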

I've localized every identical atom or molecule in its particular place, and that's it. And if I start worrying about the various things that would matter under ordinary conditions: at higher temperature, I'd have some molecules in excited vibrational levels, or maybe electronic levels.

Maybe if it's hydrogen, maybe everything isn't in the ground state, not all in the 1s orbital but in higher levels. Then there'd be lots of states available. Even if only one atom in the whole crystal is excited, well, there's one state that would have it be this atom, a different one would have it be that atom, and so forth. Already there'd be an enormous number of states. But at zero Kelvin there's no thermal excitation of any of that stuff. Things are in the lowest states, and they're in their proper positions. There's only one state for the whole system, and that's why the entropy is zero in a perfect crystal at zero Kelvin.

Now, of course, you can make measurements of entropy, so this result can be verified. But there are some things that would appear to violate it.

For example, let's take a carbon monoxide crystal, CO. So let's say this is a crystal lattice of diatomic molecules: carbon oxygen, carbon oxygen, carbon oxygen, all in their perfect places in the crystal lattice. Well, it turns out that when you form the crystal, every now and then -- you know, let's put it in color. Let's put it in the color that we'll use to signify something in some way evil.

No insult to people who like that color; I kind of like it, in fact. OK, so you're making the crystal: you started out maybe in the gas phase, you start cooling it, it starts crystallizing, gets colder and colder. But it's pretty easy for carbon monoxide molecules to flip orientation. Even in the crystalline state, even though the molecules' centers of mass are all where they belong, at ordinary temperatures they will still be able to rotate a bit.

So in the crystalline state, when it's originally formed, not at zero Kelvin, there's thermal energy around. These things can get knocked around, and the orientations can change. Now you start cooling it, and by and large they'll all go into the proper orientation; that's the lowest energy state. But there are all sorts of kinetics involved: it takes time for the flipping, depending on how slowly it was cooled, and so forth.

It may never happen, and then anything that's left in the other orientation is frozen in there. There's no thermal energy anymore; it can't find a way to reorient. That's it. All right, let's say we're down to zero Kelvin, and out of the whole crystal, a mole of molecules, one of them is in the wrong orientation. Now how many states like that would be possible? We'd have a mole of states, right? It could be this one, it could be that one. And really, of course, there's a whole distribution of them; they could be anywhere, and pairs of them could be, and it doesn't take long to get to really large numbers.

So the entropy won't be zero. Entropy of zero means a perfect crystal, perfectly ordered. But things like that notwithstanding, it's the same if you have a mixed crystal. In a sense, I've described a mixed crystal, where the mixture is carbon monoxide pointing this way and carbon monoxide pointing that way. But in a real mixed crystal with two different constituents, of course, you have all the possible configurations: they could be here and here and here, and then you could move one of them around, and so forth. There are zillions and zillions of states. But for a pure crystal in perfect form, you really have only one configuration, and your entropy is therefore zero.
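
As a quick worked number: if each of the N_A molecules in a mole of the CO crystal could freeze into either of two orientations independently, the residual entropy would be

```latex
% Two orientations per molecule, N_A molecules:
S = k_B \ln 2^{N_A} = R \ln 2 \approx 5.76\ \mathrm{J\,K^{-1}\,mol^{-1}}
```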

OK, so now we can go back and do this. And at least in principle, even for things that don't form perfect crystals, we could calculate the change in entropy going from the perfectly ordered crystal to something else with some degree of disorder, and keep going, and change the temperature, and do all these things.

So the real point is that this is extremely powerful, because given this, we really can calculate absolute numbers for the entropies of substances at ordinary temperatures, not just at zero Kelvin, using what is really a very straightforward procedure. And in fact these quantities are quite easy to measure.

You do calorimetry: you can measure those delta H's of fusion and of vaporization. You can measure the heat capacities with the sample in the calorimeter; you see how much heat is needed to raise the temperature a degree, and that gives you your heat capacity for the gas or the liquid or the solid.

So in fact it's extremely practical to make all those measurements, and you can easily find the values of the heat capacities and the delta H's tabulated for a huge number of substances. So this is, in fact, the practical procedure, the protocol, for calculating absolute entropies of all sorts of materials at ordinary temperatures and pressures. Very important.
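
As a minimal sketch of that protocol, here is how the six contributions might be combined numerically. Everything below, the helper functions and all the numbers for a hypothetical "substance A", is an invented placeholder, not tabulated data for any real material:

```python
import numpy as np

R = 8.314  # gas constant, J/(K*mol)

def entropy_step_heating(cp_func, T_start, T_end, n=1000):
    """Numerically integrate Cp(T)/T dT (trapezoid rule) for a heating step."""
    T = np.linspace(T_start, T_end, n)
    return np.trapz(cp_func(T) / T, T)

def entropy_step_transition(dH, T_transition):
    """Entropy of a reversible phase transition: q_rev/T = dH/T."""
    return dH / T_transition

# Hypothetical placeholder data for "substance A" (not a real material):
Tm, Tb, T_final = 150.0, 300.0, 400.0   # melting, boiling, final T in K
dH_fus, dH_vap = 6.0e3, 30.0e3          # J/mol
cp_solid  = lambda T: 1e-4 * T**3       # Debye-like T^3 at low T (toy model)
cp_liquid = lambda T: 75.0 + 0.0 * T    # J/(K*mol), constant (toy model)
cp_gas    = lambda T: 29.0 + 0.0 * T    # J/(K*mol), constant (toy model)

S = 0.0  # third law: S = 0 at 0 K for a perfect crystal
S += entropy_step_heating(cp_solid, 1e-6, Tm)    # 1: heat solid 0 -> Tm
S += entropy_step_transition(dH_fus, Tm)         # 2: melt at Tm
S += entropy_step_heating(cp_liquid, Tm, Tb)     # 3: heat liquid Tm -> Tb
S += entropy_step_transition(dH_vap, Tb)         # 4: vaporize at Tb
S += entropy_step_heating(cp_gas, Tb, T_final)   # 5: heat gas Tb -> T
S -= R * np.log(2.0)                             # 6: -R ln(p/p°), e.g. p = 2 bar

print(f"Absolute molar entropy: {S:.1f} J/(K*mol)")
```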

Now, one of the corollaries of this law is that in fact it's impossible to reduce the temperature of any substance, any system, all the way to absolute, exact, no-approximations zero Kelvin. And there are various ways you can see that this must be the case.

But here's one way to think about it. So let's just write that first: you can't get quite down to zero Kelvin. All right, let's consider a mole of an ideal gas, so p is RT over V. And let's start at T1 and V1, and now let's bring it down to some lower temperature T2 in some spontaneous process. We'll make it adiabatic, like an irreversible expansion. I just want to calculate what delta S would be in terms of T and V.

Well, du is T dS minus p dV, so dS is du over T plus p over T, dV. But we can also write du as Cv dT in this case, right? And p over T, that's R over V. So we can write dS is Cv dT over T plus R dV over V, which is Cv d(log T) plus R d(log V). And so delta S is Cv times log of T2 minus log of T1, if T2 is my final temperature, plus R times log of V2 minus log of V1. Or Cv log of T2 over T1 plus R log of V2 over V1.
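
So for one mole, between (T1, V1) and (T2, V2):

```latex
% Entropy change of one mole of ideal gas:
\Delta S = C_V \ln\frac{T_2}{T_1} + R \ln\frac{V_2}{V_1}
```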

Well, what does it mean when T2 is zero? I don't know what it means: that term turns into negative infinity. So we're going to write it again in our evil, or at least unattainable, color. What's going to happen? You could say, well, we can counteract it by having the other term go to plus infinity, right? Make the volume infinite; in other words, have the expansion be through an infinite volume. Of course that's impossible, and that would be the only way to counteract the divergence of this term.

In practice, you really cannot get to absolute zero, but it is possible to get extremely close; that's doable experimentally. It's possible to get down to microkelvin, even nanokelvin, temperatures. In fact, our MIT physicist Wolfgang Ketterle, by bringing atoms down to extremely low temperatures, was able to see them all reach the very lowest quantum state available to them. All sorts of unusual properties emerge in that kind of state, where the atoms behave coherently: you can make matter waves and see interference among them, because they're all in this lowest quantum state.

So it is possible to get extremely low temperatures, but never absolute zero. Here's another way to think about it. You could consider what happens to the absolute entropy, starting at T equals zero, right.

So we already saw that the first step will be the integral from zero up to some temperature of Cp of the solid over T, dT, if we start at zero Kelvin and warm up. Now, already we've got a problem: if the initial temperature really is zero, what happens? This diverges, right? Well, in fact, what that suggests is that as you approach zero Kelvin, the heat capacity also approaches zero.

So, does this integral go to infinity as T approaches zero Kelvin? Well, not if Cp of the solid approaches zero as T approaches zero Kelvin. And in fact we can measure heat capacities at very low temperatures, and what we find is that they do go to zero as you approach zero Kelvin.
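
This is consistent with the Debye behavior of non-metallic solids at low temperature, Cp ≈ aT³, which makes the entropy integral converge at its lower limit:

```latex
% With C_p = a T^3 at low T, the integrand C_p/T = a T^2 is finite at T = 0:
\int_0^T \frac{a\,T'^3}{T'}\,dT' = \frac{a\,T^3}{3}
\;\longrightarrow\; 0
\quad\text{as } T \to 0
```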

So in fact Cp approaches zero as T approaches zero Kelvin. Good! But one thing this says is, remember, dT is dq, the heat in, divided by Cp. And Cp is getting really, really small; it's going to zero as temperature goes to zero. Which means that dT is enormous: even the tiniest amount of heat input leads to a significant increase in the temperature.

So this is another way of understanding why it becomes impossible to lower the temperature of a system to absolute zero: because of any kind of contact, and I mean any kind. Let's say you've got the system in some cryostat. Of course the walls of the cryostat aren't at zero Kelvin, but somewhere in here, the sample is supposed to be at zero Kelvin.

You can say, OK, I won't make it contact the walls. You can try, but light, photons emitted from the walls, goes into the sample. You wouldn't think that heats something up very much, and it doesn't heat it up very much, but we're not talking about very much. It does heat it up by enough that you can't get to absolute zero. In other words, somehow the sample will be in contact with stuff around it that is not at absolute zero, and even that kind of radiative contact is enough, in fact, to keep it from reaching absolute zero.

So the point is, one way or another, you can see that it becomes impossible to keep pulling heat out of something and bring its temperature all the way down to zero, but again, it's possible to get very close.

All right, what we're going to do next time is take what we've seen so far and develop the conditions for reaching equilibrium. So in a general sense, we'll be able to tell which way processes go, left to themselves, as they move toward equilibrium.