Wednesday, January 30, 2008

Volume Up! Volume Down!

My friend and classmate, Aaron Thong (currently a teacher in ACJC teaching Chemistry!) asked a very interesting question, which came to him after a student posed one to him:

For a reaction of the form NaOH (aq) + HCl (aq), does the final volume of the system increase, decrease, or stay the same, provided that there is no loss of mass from the system?

For those taking Physical Chemistry this semester, this is a typical problem that makes use of partial molar quantities (if you haven't heard of it yet, how can you be a University Chemistry student? Haha.), and well, have a go at it!
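As a nudge in the right direction (just a sketch of the relevant definition, not the answer): the total volume of a mixture is built from the partial molar volumes of the species present,

V = \sum_i n_i \bar{V}_i, \qquad \bar{V}_i = \left( \frac{\partial V}{\partial n_i} \right)_{T,\,p,\,n_{j \neq i}}

and since the partial molar volumes of the ions and of water in solution are generally not equal to the molar volumes of the pure components, conservation of mass says nothing about conservation of volume.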

Kudos to Aaron!

Sunday, January 27, 2008

Canon Not In 'D'

Yup, I'm talking about a canonical ensemble here; my Physical Chemistry professor, Dr. Fan Wai Yip, completely left this concept out of my course last semester, even though it is essential to the ideas of partition functions, statistical weight and the entropy of systems.

So what exactly is a canonical ensemble? Let us consider what it means by the word ensemble first:

Above we have a collection of atoms - any simple collection of microscopic objects can be referred to as an ensemble. Typically, we consider an ensemble to be a macroscopic system. So now, what is a canonical ensemble? Well, look at this:


Well, a canonical ensemble is simply a collection of ensembles as above! It amounts to partitioning a huge, gigantic macroscopic system into smaller subsystems, each one still a macroscopic system by itself. What this means is that each subsystem, by itself, still obeys the laws of statistical mechanics perfectly well, and therefore, by calculating the properties of one subsystem, one can simply multiply that property by the number of subsystems (if it is an extensive quantity) to obtain the overall property or quantity.

And the reverse is true: if you can calculate a quantity for the entire canonical ensemble, then it follows that you can divide by the number of such ensembles to obtain the quantity for the small subsystem by itself.

We shall now use this idea to derive a very general expression for the entropy of a system in terms of the probabilities of a macrosystem being in a certain energy state r. Let us consider a very general case where we have a huge macrosystem that can be partitioned into n identical subsystems. The subsystems are in thermal contact with one another, interacting weakly, and each subsystem has the same probability of being found in any given energy state.

Let us then assume that each subsystem can exist in energy states 1, 2, 3, ..., r, ..., and let the probability of a subsystem being found in energy state r be p(r). Then for n such subsystems, the number of subsystems to be found in the energy state r is simply:
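Writing n_r for this number, that relation is just:

n_r = n \, p(r)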

With this, we then proceed to calculate the number of ways all of the subsystems can be partitioned within the huge system. What I mean to say is that we now proceed to determine the number of possible ways to distribute all of the subsystems into the various energy states 1, 2, 3, ..., r ... and we do this by determining the statistical weight of the entire canonical ensemble:
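This statistical weight is the usual multinomial count of the ways of sorting n subsystems into groups of n_1, n_2, ..., n_r, ... :

W = \frac{n!}{\prod_r n_r!}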

The corresponding entropy of the entire ensemble is then obtained from this statistical weight; if we use Stirling's approximation, and then substitute in the very first relation we stated at the start of this derivation, we obtain:
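Sketching the algebra (with k the Boltzmann constant, and using the fact that the n_r sum to n):

S_{\text{ens}} = k \ln W \approx k\Big( n \ln n - \sum_r n_r \ln n_r \Big) = -k\, n \sum_r p(r) \ln p(r)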

And the next step is to simply divide the entropy of this huge macrosystem by the number of subsystems in the ensemble, so that we obtain:
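That is, for a single subsystem,

S = \frac{S_{\text{ens}}}{n} = -k \sum_r p(r) \ln p(r)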

And there you have it, the entropy of a single macroscopic system entirely in terms of probabilities!

Saturday, January 26, 2008

Repeat After Me: Schottky, Not Schotty Defect!

What a lovely Saturday night for reminiscing; ever since learning about point defects in my Inorganic Chemistry module, I've never gotten the pronunciation right before. Can you believe it?

Schottky Defect

Amazing. I keep pronouncing it as 'Schotty' by insisting on the absence of a 'k'; I should really give the guy some credit! Haha.

In any case, this Schottky Defect is a prime example of what we call a point defect; basically defects occur in all solids, and this can be easily proven via a consideration of thermodynamic parameters. Instead of the usual ΔG = ΔH - TΔS treatment that we usually apply in Chemistry, let us consider a more top-down approach that is much more complete, albeit still lacking in details.

Now, at absolute zero, we know that according to the Third Law of Thermodynamics the atoms of a perfect crystalline solid are arranged perfectly in a regular crystal lattice, hence giving rise to zero entropy and zero disorder (we neglect any residual entropy in this discussion for simplification purposes!). Now, as the temperature increases, there will be a corresponding increase in thermal agitation that tends to produce defects in the crystalline structure - defects are basically misalignments or irregularities in an otherwise regular array of atoms in the solid.

When the temperature increases, the atoms go into a more frenzied state of vibration about their lattice positions, and sometimes an atom can actually be displaced from its lattice site and migrate to the surface of the solid. This results in a vacant lattice site, which is referred to as a Schottky defect. The diagram below shows a perfect ordering in two dimensions, and a Schottky defect occurring:

And that's basically it, a Schottky defect that forms due to disorganizing thermal motion. What we wish to do now is to determine how the number of Schottky defects varies with the absolute temperature for a crystal that is in thermal equilibrium at a temperature T. Well, let us put into place certain assumptions:

1) The energy associated with a Schottky defect is ε: in other words, we take an atom sitting within the lattice as our zero of energy, so that the energy of an atom on the surface of the solid, relative to an inner atom, is ε, which is the energy required to produce a defect.

2) For N atoms, let there be n defects, such that the total energy is nε: by saying this, we are saying that there are relatively few defects compared to the total number of atoms (n is much smaller than N). As such, as in the diagram below, all defects are well spaced from one another, so that each defect is surrounded by a regular array of atoms:


3) We have assumed that n is much smaller than N: in general, this is true at temperatures below 500 K, because the energy of a Schottky defect is roughly of the order of 1 eV, while the thermal energy at 300 K is around 1/40 eV. As such, we say that very few defects form at normal to moderate temperatures.

With these three assumptions, we can now say that:

Total energy of system, E(n) = nε

The next step is then to determine the statistical weight (or thermodynamic probability) of a typical macrostate of the solid crystal. Going by the above assumptions, consider a crystal composed of N atoms with n defects as our macrostate; the number of microstates (i.e. the number of ways to obtain such a configuration) is simply the number of ways one can select n lattice sites from N lattice sites:
The corresponding entropy term is then given by:
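In symbols (with k the Boltzmann constant), that count and the corresponding entropy are:

\Omega(n) = \binom{N}{n} = \frac{N!}{n!\,(N-n)!}, \qquad S(n) = k \ln \Omega(n)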

Now you might say: isn't this expression too easy? Using simple statistics for the statistical weight? And then for the entropy? Well, you're right, we have made some additional assumptions:

1) We have neglected surface effects: basically, there is an additional entropy term that we neglected, which corresponds to the number of ways you can arrange the atoms on the surface of the solid. But the reason for this is simple: for a typical amount of solid, say one mole, we have about 10^23 atoms present. Since the number of atoms on the surface is proportional to (10^23)^(2/3), we have roughly 10^15 to 10^16 sites on the surface of the solid. Comparing this number to the actual number of inner atoms (~10^23), we say that surface effects can be suitably neglected under normal conditions.

2) We have neglected vibrational effects: earlier, we mentioned that the atoms do vibrate about their equilibrium positions, and this in fact contributes to the entropy as well at higher temperatures.

However, having neglected these two effects doesn't mean we're calculating the wrong entropy - it just means that we're calculating an incomplete entropy term. The key point here is that surface effects, defects and vibrations each have their own entropy and energy term - we can then calculate their energies separately (same goes for their entropy) and then sum them up later on, and it is still correct. The basis for such a treatment is that each component is a subsystem and they all interact weakly. Because they interact weakly, they are essentially independent of one another, and thus we can separate out their thermodynamic variables.

This is reminiscent of having an isolated system being split into two or more subsystems, where each subsystem has its own characteristic temperature; it is these temperatures (and other variables) that must be equal to one another when thermal equilibrium ensues within the solid.

Now that the problems are out of the way, let us consider a mathematical simplification by application of Stirling's rule, which states that for large values of x:
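That is, for large x:

\ln x! \approx x \ln x - x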

In which case if we apply it to the entropy of the system, we now have:
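Written out (a sketch, using the Stirling form above and k for the Boltzmann constant):

S(n) \approx k \left[ N \ln N - n \ln n - (N-n)\ln(N-n) \right], \qquad \frac{\partial S}{\partial n} = k \ln\!\left(\frac{N-n}{n}\right)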

I've obtained the derivative of S with respect to n in anticipation that we'll need it later. Now, we'll make use of the basic definition of temperature in terms of entropy so that:
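In the usual form:

\frac{1}{T} = \frac{\partial S}{\partial E} = \frac{\partial S}{\partial n}\,\frac{dn}{dE}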

The right hand side is simply an extension of the middle term where I've used the chain rule; now that we've obtained the partial derivative of S with respect to n, let us obtain the other derivative:

Putting everything together we obtain the final expression:
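Since E(n) = nε, the other derivative is simply dn/dE = 1/ε, and so (a quick sketch):

\frac{1}{T} = \frac{k}{\varepsilon} \ln\!\left(\frac{N-n}{n}\right), \qquad \text{i.e.} \qquad \frac{n}{N-n} = e^{-\varepsilon/kT}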

Let us keep in mind that n << N, so that:
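to a very good approximation,

n \approx N\, e^{-\varepsilon/kT}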

And voila! This expression has the form of the much-celebrated Boltzmann distribution! An amazing thing, don't you think?

Just ruminate over it. :p

Friday, January 25, 2008

Certain Uncertainty

Ever wondered how the Heisenberg Uncertainty Principle works? I mean, after so many textbooks and explanations, have you ever wondered what the energy-time uncertainty relation really means? While the first term (ΔE) refers to an uncertainty in the energy of the system, the second term (Δt) refers to the natural lifetime of the system in that state. And how would you use it?
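In its usual form, the relation reads something like:

\Delta E \, \Delta t \gtrsim \frac{\hbar}{2}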

Well here’s a classic example that is not often used in textbooks – consider the ground state of a molecule, and given that the ground state is the lowest energy state of the molecule, then the ground state must correspondingly be the most stable energy state of the molecule. That is the most basic postulate in statistical mechanics, where the lowest energy state is the most stable state.

Now, if we are given a molecule in the ground state, and provided that there is no perturbation (in the form of energy given to it via radiation, heat, etc.), then we can be very sure that the molecule will always stay in the ground state. We therefore say that the molecule stays in the ground state for an indefinitely long time unless energy is given to it to excite it.

Now, this would just mean that the natural lifetime of the ground state is:

Yes, in other words, we say that the lifetime is infinitely long! So what does the Uncertainty Principle tell us? It tells us that:
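if Δt → ∞, then

\Delta E \gtrsim \frac{\hbar}{2\,\Delta t} \to 0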

Indeed, we conclude that the uncertainty in the energy of the ground state must be zero! That is, the ground state is exactly known with a well defined energy! Our first conclusion must therefore be: the ground state of any molecule must therefore be a state of definite energy.

With this first concluding statement in mind, let us consider a typical quantum mechanical transition between two energy levels:

Now, we know that the ground state has a definite energy (G), as explained earlier, but what about the excited state (E)? Does it also possess a definite energy? Now, let us consider a fact, an experimental fact: excited molecules always decay back to the ground state, either via thermal collisions or via re-radiation of electromagnetic radiation, giving rise to emission spectral lines.

This means that we must insist that the molecules don't have an infinite lifetime, because if they did, then the molecules, once excited, would never decay. Once again, we turn to the use of the Uncertainty Principle, where we see that:
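for a finite lifetime Δt of the excited state,

\Delta E \gtrsim \frac{\hbar}{2\,\Delta t} > 0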

This time round, the uncertainty in energy isn’t zero, but of a certain magnitude, depending on the lifetime of the excited state. Now, we should more properly depict the transition as:

Interesting! The transition is no longer as properly defined as it was earlier! This explains why spectroscopic graphs don’t possess infinitely sharp peaks, because this natural line broadening phenomenon takes place.

However, this equation has very often been misused, where the student very frequently mistakes the term ΔE for the energy gap between the excited state and the ground state, and simply substitutes this into the Uncertainty Principle to calculate the lifetime – notice that this isn’t correct, because the term ΔE more accurately refers to the blurring of the excited state, rather than the energy of transition itself.

Nevertheless, we can still say that as the energy of transition increases, the excited state is higher in energy, and thus less stable, which still translates to a smaller lifetime – it is for this reason that students fail to make the proper distinction between energy of transition, and energy of state.

Particulate This!

Everyone's heard of Feynman Diagrams (or maybe not! :p) but what exactly are these? To begin understanding such a pictorial diagram of quantum processes, one first needs an example, which is aptly shown below:
I haven't quite drawn the diagram in the conventional way (which leaves the axes unlabelled), because I want the meaning of the diagram to be explicit. In this diagram, the bottom left arrow depicts an electron moving to the right - notice that it points upwards (in time), meaning it is travelling forward in time, and it points to the right (in position), meaning it is travelling to the right in real life.

The bottom right arrow represents a positron (the anti-electron) - in this convention, all antiparticles are drawn with a reversed arrow, so while the electron has an arrow that points forward in time, the positron has an arrow that points backwards in time. You might ask: why so? Simply put, an antiparticle is just a particle travelling backwards in time! We'll have more on that later; for the time being, suffice it to say that this is the convention we're going to adopt.

You'll notice two dotted lines at the top, which correspond to two identical photons - you might notice that there are no arrows at all. Want to hazard a guess? It's because a photon is its own antiparticle, and hence no arrow is needed to distinguish between a photon and an anti-photon at all. Hey, wait a minute! Are you saying that photons can actually be travelling forwards and backwards in time? Perhaps, for light itself defines the very limits of time travel, and maybe only a photon can experience true time travel without being altered itself.

So how do I read this diagram? Easy, if we use the conventional wisdom of our real world, we say that an electron and a positron travel towards one another, collide, annihilate, and their mass energy is converted into electromagnetic radiation in the form of two photons that travel away from one another.

But in modern Physics, there is an alternative viewpoint: an electron moves from the left to the right, and at one moment in time, emits two photons. Emitting these two photons changes the electron's momentum and sends it travelling backwards in time, where it appears to us as a positron moving off to the right.

Well, that's all for this post - till when I'm feeling awake again!

Wednesday, January 23, 2008

Thermodynamics and Mechanics: Complementarity

Sometimes separate fields in Physics come together for a unified application in certain problems. Don't believe me? Just take a look at the following thought experiment:


We have two identical metal spheres (consider them perfectly spherical!) of radius r and mass m, in the Earth's gravitational field g - but one of them hangs from a thread of negligible thickness and size, while the other lies motionless on the ground (assume that the ground is rigid and doesn't absorb any heat from anything, including the ball). Now, each ball has a heat capacity c, which we can assume to be constant throughout this experiment (of course it won't be, but why torture ourselves here?).

The experimental procedure is as follows: we use a flame to impart E joules of energy to both balls and simply wait.

The question I have for you is: What is the final temperature of each ball? Will they be the same? If yes, why? If not, why not? [Hint: Of course the final temperatures won't be the same!]

Here are another two questions for the really psychedelic, hardcore Physics lover: Can you work out for me the difference in temperature? Can you also work out the difference in size? [Assume that the ball has a constant expansivity a]


Working this problem out just demonstrates how intricately thermodynamics is linked to mechanics; indeed, the First Law of Thermodynamics itself embodies the essence of one of the important principles in mechanics, the Principle of Conservation of Energy. :)

-------------------------------------------------

Well, it's been days, and no one has replied or commented or anything, so I guess it's time to go on and reveal more of the answer to fulfill my own needs and complete the post. This question is a very simple one, as long as one bothers to draw out the initial state and final state of the system - notice that I'm making use of one of the basic concepts in Thermodynamics, the fact that the initial and final states determine the change in any state function, and in this case, the temperature of the ball.
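Here is a sketch of that bookkeeping (assuming the expansion is small and the expansivity a is constant, so that each ball's radius grows by roughly raΔT): the hanging ball's centre of mass drops as it expands (its top is held fixed by the thread), while the resting ball's centre of mass rises (its bottom stays on the ground). The First Law for each ball then reads, schematically,

E = m c\, \Delta T_{\text{hanging}} - m g\, r a\, \Delta T_{\text{hanging}}, \qquad E = m c\, \Delta T_{\text{resting}} + m g\, r a\, \Delta T_{\text{resting}}

and comparing the two immediately tells you which ball ends up hotter, and by how much.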

Consider the two expansion processes:



What do you notice? I'll leave this post hanging here for another few days before someone comes along to comment on what happens. :p

Tuesday, January 22, 2008

Time Based Entropy

Call me silly, but I'm just very persistent in Thermodynamics, and hence the need to justify the point of my last post: namely, that entropy is just as much a function of time as it is a function of heat flow, for heat flow is also a function of time, for without time, how can heat flow?

However, let us first consider the importance of entropy - what exactly is this monster called entropy? Well, as all elementary definitions put it, entropy is the amount of disorder within a system. So if you have a cup of hot water and a cup of cold water, naturally the hot water will have its molecules sloshing around (alright, not quite literally) with kinetic motion, far more than the cold, miserable molecules of the cold water would. As such, by virtue of its internal molecular motion, we say that the hot water has more entropy of motion because of its higher temperature.

Yeah yeah, so it's disorderliness, so what? Sure enough, that's not all. Let us then dive into the Second Law of Thermodynamics again, in its simplest form: the entropy of the universe must always increase in an irreversible process. Ah, but why must entropy increase? This is always a difficult concept to explain, and thus let us have a thought experiment:

If I have a container with a partition separating hot and cold water, I say that the hot water is a region of higher entropy, and the cold water a region of lower entropy - but is this really the state of maximum entropy? No! Why? Because there exists a very obvious sense of order! Because the container can be exactly divided into an ordered region and a disordered region, and therefore, there is an intrinsic order associated with the system!

So how would I increase the entropy further? Easy, break that orderliness! So we break the partition and allow the regions to mix, producing lukewarm water, and thereby increase the entropy of the system to a maximum. Now, ask yourself - this is an irreversible process, is it not? You will never see a lukewarm glass of water separating itself into hot and cold regions spontaneously by itself! No way man! And the direction of increasing entropy tells us how this works.

Now, let us consider this again: why is entropy so important? Think about it: the hot water and cold water have internal energy U - this internal energy can be used to do work, because the hot water has thermal energy that can be used to generate electricity by perhaps, driving a thermocouple.

Now consider the hot and cold water mixing - the mixture still has internal energy U, right? And yet the amount of work it can do is less! When you use lukewarm water to drive a thermocouple, far less of that energy comes out as work, because the temperature isn't as high.

Weird! The amount of internal energy available is the same, and yet the amount of work that can be done is different! And this is explained by entropy. As entropy increases, what this means is that the energy contained within a system is more spread out.

Ask yourself again: what kind of energy is useful? Why, of course, it must be energy that is able to flow from one region to another, energy that can flow! If energy can't be transported or unable to flow, then we simply can't tap it or harness it! Imagine if the chemical energy from your food couldn't be moved from the food itself into your cells for you to utilize! You'd be unable to do anything with the food you ate!

Entropy causes a spreading of the energy into an equilibrium state, such that there is an even mix of energy (and mass) everywhere within the system - and in such an even distribution of energy, energy can't move anymore! Or rather, if energy moves in one direction, an equal amount will move in the opposite direction to compensate, and thus there is no net movement of energy observable. Work is the net movement of energy (a loose definition), and if energy can't even move, work can never be done. Of course work has a more rigorous definition, but oh well, it's sufficient for this point.

---------------------------------------------------------

Well, here comes the main point of this post, to prove (in a very non-rigorous manner) that entropy is also a function of time. Please bear with me as I plough you through some essential basics before that. Let us first take a look at how the change in entropy (dS) is mathematically defined in basic Thermodynamics:
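In the usual notation, that definition is:

dS = \frac{dQ_{\text{rev}}}{T}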


Simple enough, a change in entropy (dS) is caused by a reversible flow of heat (dQrev) in or out of the system, divided by the temperature of the system (T). If the flow of heat is into the system, then dQrev is positive; otherwise it's negative. The reason why it's defined like this can't be explained using any simple ideas, but suffice it to say (the mathematical derivation is very complex and time consuming) that heat is the flow of energy brought about by molecular motion, which therefore increases the disorderliness of the system. As such, we use the flow of heat as a measure of the change in entropy of the system.

The temperature is present in the equation because, obviously, at a very high temperature a small flow of heat wouldn't cause that significant a change in the entropy of the system. Thus in Physics one would say that the entropy change is a change weighted by the temperature of the system.

Let us now first prove an important theorem in Thermodynamics, the Clausius Inequality, which shows that no matter what, as long as you have a cycle, the entropy of the universe can never decrease - and must increase if anything irreversible happens along the way! Interesting, right, a proof! Now let's first start from the First Law of Thermodynamics, which states that:
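In the convention used here (dW being work done on the system), the First Law reads:

dU = dQ + dW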

This means the change in internal energy of a system is equal to the heat flow into/out of the system plus the work done on the system. Notice that these two quantities can be either reversible (rev) or not. We then make the distinction between reversible work done on the system and irreversible work done on the system:

That is, the reversible work done on the system is less than or equal to the irreversible work done on the system. The equality holds only when the work done is reversible, in which case a subscript rev is added to dW. And if we do some mathematical rearrangement, we see that:

Which must lead us to the conclusion that:
And therefore we have:

Which tells us that if we divide throughout by the absolute temperature of the system we obtain:

And recognising that the quantity on the left is simply the change in entropy, we write:
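Collecting the steps above into one line (since dU is the same however the change is carried out, dQrev + dWrev = dQ + dW, and dWrev ≤ dW then forces dQrev ≥ dQ):

dS = \frac{dQ_{\text{rev}}}{T} \geq \frac{dQ}{T}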

And we proceed to determine the total change in entropy when we have a cycle, by integrating over the cycle, which is denoted with a small circle on the integral sign to indicate a closed integral:

And recall that, by definition, a cycle is a process that brings a system from state A to some other state, and then back to state A. Since entropy is a state function, the system, being back at state A, will possess the same entropy as before, and thus the change in entropy of the system over the cycle must be zero:

Which concludes the derivation of the Clausius Inequality:
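namely, since the entropy change around any cycle vanishes while dS ≥ dQ/T at every step, we are left with:

\oint \frac{dQ}{T} \leq 0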

So what exactly does this inequality mean? Well, you must notice that if the entropy change of the system is zero, then we must agree that the change in entropy of the universe = the change in entropy of the surroundings, am I right? Now look at the equation above: it says that the integral of the heat flow (weighted by temperature) around a cycle can never be positive, and therefore, on balance, heat must flow out of the system into the surroundings.

Wait a minute, doesn't the heat flowing into the surroundings mean that the surroundings' entropy change must be positive? Hey that means that the entropy change of the universe increases right?

Correct! In any real cycle, we end up increasing the entropy of the universe - we can't go against this principle. This means that every time you turn on and use the engine in your car, you're killing the universe. :p

---------------------------------------------------------

As for time-based entropy, let us consider the change in entropy again:

Then using the chain rule in basic calculus, we have:

Which then rearranges into:
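which, spelled out, is just:

\frac{dS}{dt} = \frac{1}{T}\,\frac{dQ_{\text{rev}}}{dt}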
We have recognised that dQrev/dt simply refers to power, and we explicitly refer to power as a function of time by writing it as Prev(t). And from the previously worked out example, we have:

Which allows us to say:

So that we can conclude that:
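Sketching the conclusion (for the special case where the power Prev(t) delivers heat into the system):

S(t) = S(0) + \int_0^t \frac{P_{\text{rev}}(t')}{T(t')}\, dt'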
In other words, the entropy changes with time because heat must flow as a function of time! If there exists no time for heat to flow, there can be no change in entropy. This last equation took for granted that the power is always causing heat flow into the system, which is not necessarily true, but it's just a special case anyway. :p

Therefore, by knowing the mathematical form of P(t), we can simply integrate within the limits, and obtain entropy as a function of time. Which is actually easily done if we consider heat transmission via radiation. The law of heat transmission by radiation is summed up in the Stefan-Boltzmann Law of Radiation, where:
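In its standard form (with σ the Stefan-Boltzmann constant, A the radiating area, ε the emissivity, and T the absolute temperature):

P = \varepsilon \sigma A T^4

Feeding a P(t) of this form into the integral above is what lets you write out S explicitly as a function of time.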

And there you have it, you actually can express the entropy of a system as a function of time (notice in the above I omitted the constant of integration because I'm lazy, :p).


Tuesday, January 15, 2008

Entropy As A State Function

As with many other physical properties, entropy itself is a state function. What exactly does it mean for a quantity to be a state function, though? Easy - it simply means that the physical property in question depends solely on the state of the system, and doesn't depend on the system's past history or on how the system came into existence.

I'm not going to go into a lecture here, but I'm just going to give you one question to consider regarding entropy: if entropy is truly a state function, then no matter how the change takes place, regardless of how the system was prepared, we can ignore what the system went through and simply consider the initial and final states right?

Now the above is an absolute truth that applies to all state functions, but now let us consider this scenario (a gedanken, or thought experiment, as I'd like to call it):

I've a piece of alloy, and it's a special alloy, known as β-brass, made up of 50% Zinc and 50% Copper. My question is, if I have β-brass at room temperature and I cool it down slowly and reversibly to 10 K, and alternatively, I cool it down rapidly to 10 K, will the change in entropy be the same?

If not, why? If yes, under what conditions? Answering this question will allow you to understand that entropy is in fact, a special function of a special parameter of life.

-------------------------------------

So far, everyone (everyone being a miserable 2 people) has decided that the entropy change must of necessity be the same regardless of how the process is carried out, because entropy is a state function, and must therefore be independent of the path taken, for that is the very essence of a state function (mathematical rigour aside!).

However, I was waiting for an alternative view of this problem - for someone to say that everyone is stubbornly following the Laws of Thermodynamics, and to ask why nobody can see an obvious flaw.

I was hoping for someone to give this alternative point of view:

Dude, look at the situation this way man, under normal conditions at room temperature, the Zinc and Copper atoms are all in a very disordered array, with the two types of atoms randomly dispersed amongst one another. If you cool it down rapidly, then what happens is that you freeze the atomic arrangement in that disordered form! If you cool it down slowly, you allow rearrangements to take place such that hey, you actually obtain an ordered form, where each Copper atom is surrounded by eight Zinc atoms (and vice versa!). Clearly both methods of cooling will result in different end states, so how can the entropy ever be the same?

By the way, if you're thinking about other possible ways to reject this idea, forget it, I must be right - if you're thinking about volume, both methods result in the same volume, so this can't be the reason for the discrepancy. I've proven that the 2nd Law isn't always right, and must therefore be subject to certain conditions.

I so wanted someone to give this explanation, so I could burst his bubble. But oh well, turns out I'm the one offering this explanation for people to burst my bubble instead. :p

So the question is, what is wrong with his view? Figure it out, and you'll realise that entropy is a very important function of a very important parameter of our lives.

-------------------------------------

The key idea here is that entropy is also a function of time! When the person above said that rapid cooling makes a difference as opposed to slow cooling, he is right! When you cool the solid slowly, you allow time for the atoms to move into their fixed, orderly positions, so that they can assume a regular array by the time the solid hits 10 K. But if you cool it rapidly, the atoms may not have enough time to reach that fixed, orderly state, and are thus frozen into a seemingly disordered state.

I said seemingly because that disordered state isn't the actual final state of equilibrium; it is more accurately known as a metastable state. A metastable state is one whose relaxation time is long compared to the time over which the process governing the evolution of the state is being observed. That is to say, the disordered array of atoms is actually still undergoing a process, where the atoms are still moving throughout the solid (even though you can't observe it!). As such, you'll have to wait for the atoms to move into their final positions before you make the final measurements!

Hey, that's not fair! You might be thinking that I was giving you a trick question, but think about this: How long should you wait in both instances? If you want a reversible process, theoretically, you need to wait infinitely long. If you want the atoms to go to their positions at 10 K after being cooled rapidly, notice that because the temperature is so low, you'll also have to wait infinitely long. As such, in both instances, you have to wait infinitely long!

This shows that entropy is indeed a function of time - this is best shown using mathematical proofs, but just suffice it to say that entropy is the arrow of time.

And this is embodied in the Second Law of Thermodynamics, where it is said that entropy tends towards a maximum - pardon me, but how can anything tend towards a maximum unless it is developing within a process that is governed by time? It shows that entropy is a quantity that evolves with time. We have failed to see this because the original idea of entropy came from macroscopic observations, and not via atomic properties.

Oh well. I think this first post is pretty much a fiasco. :p