Wednesday, January 30, 2008
Volume Up! Volume Down!
For a reaction of the form NaOH (aq) + HCl (aq), does the final volume of the system increase, decrease, or stay the same, provided that there is no loss of mass from the system?
For those taking Physical Chemistry this semester, this is a typical problem that makes use of partial molar quantities (if you haven't heard of them yet, how can you be a University Chemistry student? Haha.) - so go on, have a go at it!
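As a gentle hint, recall how the partial molar volume of component i is usually defined:

V(i, partial molar) = (∂V/∂n(i)) at constant T, p and the amounts of all other components

so that the total volume of the mixture is V = Σ(i) n(i) V(i, partial molar). Whether the volume of the reacting mixture grows or shrinks then comes down to how the partial molar volumes of the species being consumed compare with those of the species being formed.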
Kudos to Aaron!
Sunday, January 27, 2008
Canon Not In 'D'
So what exactly is a canonical ensemble? Let us first consider what the word ensemble means:
Above we have a collection of atoms - any simple collection of microscopic objects can be referred to as an ensemble. Typically, we consider an ensemble to be a macroscopic system. So now, what is a canonical ensemble? Well, look at this:
Well, a canonical ensemble is simply a collection of ensembles like the one above! It amounts to partitioning a huge, gigantic macroscopic system into smaller subsystems, each one still a macrosystem in its own right. What this means is that each subsystem, by itself, still obeys the laws of statistical mechanics perfectly well, and therefore, having calculated a property of one subsystem, one can simply multiply it by the number of subsystems (if it is an extensive quantity) to obtain the overall property or quantity.
And the reverse is true: if you can calculate a quantity for the entire canonical ensemble, then it follows that you can divide by the number of subsystems to obtain the quantity for a single subsystem by itself.
We shall now use this idea to derive a very general expression for the entropy of a system in terms of the probabilities of a macrosystem being in a certain energy state r. Let us consider a very general case where we have a huge macrosystem that can be partitioned into n identical subsystems. The subsystems are in thermal contact with one another, interacting weakly, and each subsystem has the same probabilities of being found in the various energy states.
Let us then assume that each subsystem can exist in energy states 1, 2, 3, ..., r, ..., and we let the associated probability of a subsystem being found in energy state r be p(r). Then for n such subsystems, the number of subsystems to be found in the energy state r is simply:
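Presumably the omitted expression is just

n(r) = n p(r)

where n(r) is the number of subsystems occupying energy state r.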
With this, we then proceed to calculate the number of ways all of the subsystems can be partitioned within the huge system - that is, the number of possible ways to distribute all of the subsystems into the various energy states 1, 2, 3, ..., r, ... - and we do this by determining the statistical weight of the entire canonical ensemble. The corresponding entropy of the entire ensemble then follows from Boltzmann's relation; we apply Stirling's approximation and finally substitute in the very first relation we stated at the start of this derivation.
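Written out, the chain of omitted equations presumably runs along these lines (k being the Boltzmann constant):

Ω = n! / [ n(1)! n(2)! ... n(r)! ... ]

S(ensemble) = k ln Ω = k [ ln n! - Σ(r) ln n(r)! ]

and with Stirling's approximation, ln x! ≈ x ln x - x, together with n(r) = n p(r) and Σ(r) p(r) = 1:

S(ensemble) = k [ n ln n - Σ(r) n p(r) ln( n p(r) ) ] = -n k Σ(r) p(r) ln p(r)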
And the next step is to simply divide the entropy of this huge macrosystem by the number of subsystems in the ensemble, so that we obtain:
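which should be the familiar Gibbs form:

S = -k Σ(r) p(r) ln p(r)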
And there you have it, the entropy of a single macroscopic system entirely in terms of probabilities!

Saturday, January 26, 2008
Repeat After Me: Schottky, Not Schotty Defect!
In any case, this Schottky defect is a prime example of what we call a point defect; basically, defects occur in all solids, and this can be easily shown via a consideration of thermodynamic parameters. Instead of the usual ΔG = ΔH - TΔS treatment that we apply in Chemistry, let us consider a more top-down approach that is much more complete, albeit still lacking in detail.
Now, at absolute zero, we know that according to the Third Law of Thermodynamics the atoms of a perfect crystalline solid are arranged perfectly in a regular crystal lattice, hence giving rise to zero entropy and zero disorder (we neglect any residual entropy in this discussion for simplification purposes!). Now, as the temperature increases, there will be a corresponding increase in thermal agitation that tends to produce defects in the crystalline structure - defects are basically misalignments or irregularities in an otherwise regular array of atoms in the solid.
When the temperature increases, the atoms go into a more frenzied state of vibration about their lattice positions, and sometimes an atom can actually be displaced from its lattice site and migrate to the surface of the solid. This results in a vacant lattice site, which is referred to as a Schottky defect. The diagram below shows a perfect ordering in two dimensions, and a Schottky defect occurring:
And that's basically it - a Schottky defect forms due to disorganizing thermal motion. What we wish to do now is to determine how the number of Schottky defects varies with the absolute temperature for a crystal that is in thermal equilibrium at a temperature T. Well, let us put into place certain assumptions:
1) The energy associated with a Schottky defect is ε: in other words, we take the zero of energy to be that of an atom sitting within the lattice, so that an atom on the surface of the solid has energy ε relative to an inner atom - ε is therefore the energy required to produce one defect.
2) For N atoms, let there be n defects, such that the total energy is nε: in saying this, we are saying that there are relatively few defects compared to the total number of atoms (n is much less than N). As such, as in the diagram below, all defects are well spaced from one another, such that each defect is surrounded by a regular array of atoms:
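In one standard way of writing it (treating the n vacancies as distributed over the N lattice sites), the statistical weight and the corresponding entropy would be:

Ω = N! / [ n! (N - n)! ]

S = k ln Ω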
Now you might say: isn't this expression too easy? Using simple statistics for the statistical weight? And then for the entropy? Well, you're right, we have made some additional assumptions:
1) We have neglected surface effects: basically, there is an additional entropy term that we have neglected, which corresponds to the number of ways you can arrange the atoms on the surface of the solid. But the reason for this is simple: for a typical amount of solid, say one mole, we have about 10^23 atoms present. Since the number of atoms on the surface scales as the 2/3 power of this, we have roughly (10^23)^(2/3), i.e. around 10^15 to 10^16, sites on the surface of the solid. Comparing this number to the actual number of inner atoms (~10^23), we see that surface effects can be suitably neglected under normal conditions.
2) We have neglected vibrational effects: earlier, we mentioned that the atoms do vibrate about their equilibrium positions, and this in fact contributes to the entropy as well at higher temperatures.
However, having neglected these two effects doesn't mean we're calculating the wrong entropy - it just means that we're calculating an incomplete entropy term. The key point here is that surface effects, defects and vibrations each have their own entropy and energy term - we can then calculate their energies separately (same goes for their entropy) and then sum them up later on, and it is still correct. The basis for such a treatment is that each component is a subsystem and they all interact weakly. Because they interact weakly, they are essentially independent of one another, and thus we can separate out their thermodynamic variables.
This is reminiscent of having an isolated system being split into two or more subsystems, where each subsystem has its own characteristic temperature; it is these temperatures (and other variables) that must be equal to one another when thermal equilibrium ensues within the solid.
Now that the problems are out of the way, let us consider a mathematical simplification by application of Stirling's rule, which states that for large values of x:
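In the form we need here, that is:

ln x! ≈ x ln x - x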
In which case if we apply it to the entropy of the system, we now have:
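Given the statistical weight above, the omitted working is presumably something like:

S ≈ k [ N ln N - n ln n - (N - n) ln(N - n) ]

∂S/∂n = k ln[ (N - n)/n ]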
I've obtained the derivative of S with respect to n in anticipation that we'll need it later. Now, we'll make use of the basic definition of temperature in terms of entropy so that:
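That definition, with the chain rule applied to it, presumably appears here as:

1/T = ∂S/∂E = (∂S/∂n)(∂n/∂E)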
The right hand side is simply an extension of the middle term where I've used the chain rule; now that we've obtained the partial derivative of S with respect to n, let us obtain the other derivative:
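Since the total energy of the defects is E = nε, this second derivative is simply:

∂n/∂E = 1/ε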
Putting everything together we obtain the final expression:
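which presumably reads:

1/T = (k/ε) ln[ (N - n)/n ],   or equivalently   n/(N - n) = e^(-ε/kT)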
Let us keep in mind that n << N, so that N - n ≈ N, and the expression above simplifies to:
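namely (given the working above):

n/N ≈ e^(-ε/kT)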
And voila! This expression has the form of the much-celebrated Boltzmann distribution! An amazing thing, don't you think?
Just ruminate over it. :p
Friday, January 25, 2008
Certain Uncertainty
Well, here's a classic example that is not often used in textbooks - consider the ground state of a molecule: since the ground state is the lowest energy state of the molecule, it must correspondingly be the most stable state of the molecule. That is one of the most basic ideas in statistical mechanics: the lowest energy state is the most stable state.
Now, if we are given a molecule in the ground state, and provided that there is no perturbation (in the form of energy given to it via radiation, heat, etc.), then we can be very sure that the molecule will always stay in the ground state. We therefore say that the molecule stays in the ground state for an indefinitely long time unless energy is given to it to excite it.
Now, this would just mean that the natural lifetime of the ground state is:
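presumably just

τ → ∞

with τ denoting the lifetime of the state.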
Yes, in other words, we say that the lifetime is infinitely long! So what does the Uncertainty Principle tell us? It tells us that:
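Most likely in its energy-time form,

ΔE · Δt ≥ ℏ/2

where Δt is taken to be the lifetime of the state and ΔE the uncertainty in its energy.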
Indeed, we conclude that the uncertainty in the energy of the ground state must be zero! That is, the energy of the ground state is exactly known - it is well defined! Our first conclusion must therefore be: the ground state of any molecule is a state of definite energy.
With this first concluding statement in mind, let us consider a typical quantum mechanical transition between two energy levels:
Now, we know that the ground state (G) has a definite energy, as explained earlier, but what about the excited state (E)? Does it also possess a definite energy? Let us consider an experimental fact: excited molecules always decay back to the ground state, either via thermal collisions or by re-radiating electromagnetic radiation, giving rise to emission lines in spectra.
This means we must insist that excited molecules do not have an infinite lifetime, because if they did, then once excited, they would never decay. Once again, we turn to the Uncertainty Principle, where we see that:
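this time presumably rearranged as

ΔE ≈ ℏ/(2τ)

with τ the finite lifetime of the excited state.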
This time round, the uncertainty in energy isn’t zero, but of a certain magnitude, depending on the lifetime of the excited state. Now, we should more properly depict the transition as:
Interesting! The transition is no longer as properly defined as it was earlier! This explains why spectroscopic graphs don’t possess infinitely sharp peaks, because this natural line broadening phenomenon takes place.
Nevertheless, we can still say that as the energy of a transition increases, the excited state involved is higher in energy, and thus less stable, which translates to a shorter lifetime - and it is perhaps for this reason that students often fail to make the proper distinction between the energy of a transition and the energy of a state.
Particulate This!
The bottom right arrow represents a positron (the anti-electron) - in this convention, all anti-particles are drawn with a reversed arrow, so while the electron has an arrow that points forward in time, the positron has an arrow that points backwards in time. You might ask: why so? Simply put, an antiparticle is a particle travelling backwards in time! We'll have more on that later; for the time being, suffice it to say that this is the convention we're going to adopt.
You'll notice two dotted lines at the top, which correspond to two identical photons - you might notice that there are no arrows on them at all. Care to hazard a guess why? It's because a photon is its own anti-particle, and hence no arrow is needed to distinguish between a photon and an anti-photon at all. Hey, wait a minute! Are you saying that photons can actually be travelling forwards and backwards in time? Perhaps, for light itself defines the very limits of time travel, and maybe only a photon can experience true time travel without being altered itself.
So how do I read this diagram? Easy, if we use the conventional wisdom of our real world, we say that an electron and a positron travel towards one another, collide, annihilate, and their mass energy is converted into electromagnetic radiation in the form of two photons that travel away from one another.
But in modern Physics, there is an alternative viewpoint: an electron moves from the left to the right and, at one moment in time, emits two photons. Emitting these two photons changes the electron's momentum and sends it travelling backwards in time, so that it becomes what we observe as a positron, still heading towards the right but moving backwards in time.
Well, that's all for this post - till when I'm feeling awake again!
Wednesday, January 23, 2008
Thermodynamics and Mechanics: Complementarity
We have two identical metal spheres (consider them perfectly spherical!) of radius r and mass m, in Earth's gravitational field g - but one of them hangs from a thread of negligible thickness and mass, while the other lies motionless on the ground (assume that the ground is rigid and doesn't absorb any heat from anything, including the ball). Now, each ball has a heat capacity c, which we can assume to be constant throughout this experiment (of course it won't be, but why torture ourselves here?).
The experimental procedure is as follows: we use a flame to impart E joules of energy to both balls and simply wait.
The question I have for you is: what is the final temperature of each ball? Will they be the same? If yes, why? If not, why not? [Hint: Of course the final temperatures won't be the same!]
Here are another two questions for the really psychedelic hardcore Physics lover: can you work out the difference in temperature? Can you also work out the difference in size? [Assume that each ball has a constant expansivity a.]
Working this problem out just demonstrates how intricately thermodynamics is linked to mechanics; indeed, the First Law of Thermodynamics itself embodies the essence of one of the important principles in mechanics, the Principle of Conservation of Energy. :)
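For the curious, here is one way to set up the bookkeeping - a sketch only, and it rests on a few assumptions: the heat E goes entirely into the ball, c is the ball's total heat capacity, a is a linear expansivity so the radius grows by Δr = a r ΔT, the hanging ball's centre of mass drops by Δr as it expands downwards from its fixed point of suspension, and the resting ball's centre of mass rises by Δr as it expands upwards off the rigid ground:

hanging ball:  E + m g (a r ΔT1) = c ΔT1,   so   ΔT1 = E / (c - m g a r)

resting ball:  E = c ΔT2 + m g (a r ΔT2),   so   ΔT2 = E / (c + m g a r)

On this reading, the hanging ball ends up very slightly hotter, and correspondingly very slightly larger, since Δr = a r ΔT.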
-------------------------------------------------
Well, it's been days, and no one has replied or commented whatsoever, so I guess it's time to go on and reveal more of the answer, to fulfil my own needs and complete the post. This question is a very simple one, as long as one bothers to draw out the initial state and final state of the system - notice that I'm making use of one of the basic concepts in Thermodynamics: the fact that the initial and final states determine the change in any state function, in this case the temperature of the ball.
Consider the two expansion processes:
What do you notice? I'll leave this post hanging here for another few days before someone comes along to comment on what happens. :p
Tuesday, January 22, 2008
Time Based Entropy
However, let us first consider the importance of entropy - what exactly is this monster called entropy? Well, as all elementary definitions put it, entropy is the amount of disorder within a system. So if you have a cup of hot water and a cup of cold water, naturally the hot water will have its molecules sloshing around (alright, not quite literally) with kinetic motion, far more than the cold, miserable molecules of the cold water would. As such, by virtue of its internal molecular motion, we say that the hot water has more entropy of motion because of its higher temperature.
Yeah yeah, so it's disorderliness, so what? Sure enough, that's not all. Let us then dive into the Second Law of Thermodynamics again, in its simplest form: the entropy of the universe must always increase in an irreversible process. Ah, but why must entropy increase? This is always a difficult concept to explain, and thus let us have a thought experiment:
If I have a container with a partition separating hot and cold water, I say that the hot water is a region of higher entropy, and the cold water a region of lower entropy - but is this really the state of maximum entropy? No! Why? Because there exists a very obvious sense of order: the container can be exactly divided into an ordered (cold) region and a disordered (hot) region, and therefore there is an intrinsic order associated with the system as a whole!
So how would I increase the entropy further? Easy, break that orderliness! So we break the partition and allow the regions to mix, producing lukewarm water, and thereby increase the entropy of the system to a maximum. Now, ask yourself - this is an irreversible process, is it not? You will never see a lukewarm glass of water separating itself into hot and cold regions spontaneously by itself! No way man! And the direction of increasing entropy tells us how this works.
Now, let us consider this again: why is entropy so important? Think about it: the hot water and cold water have internal energy U - this internal energy can be used to do work, because the hot water has thermal energy that can be used to generate electricity by perhaps, driving a thermocouple.
Now consider the hot and cold water mixing - it still has internal energy U, right? And yet the amount of work it can do is less! When you use warm water to drive a thermocouple, not so much energy is produced as work, because the temperature isn't that high.
Weird! The amount of internal energy available is the same, and yet the amount of work that can be done is different! And this is explained by entropy. As entropy increases, the energy contained within a system becomes more spread out.
Ask yourself again: what kind of energy is useful? Why, of course, it must be energy that is able to flow from one region to another - energy that can flow! If energy can't be transported, or is unable to flow, then we simply can't tap it or harness it! Imagine if the chemical energy from your food couldn't be moved from the food itself into your cells for you to utilize! You'd be unable to do anything with the food you ate!
Entropy causes a spreading of the energy into an equilibrium state, such that there is an even mix of energy (and mass) everywhere within the system - and in such an even distribution, energy can't move any more! Or rather, if energy moves in one direction, an equal amount will move in the opposite direction to compensate, and thus there is no net movement of energy observable. Work is the net movement of energy - a loose definition - and if energy can't even move, work can never be done. Of course work has a more rigorous definition, but oh well, it's sufficient for this point.
Well, here comes the main point of this post: to prove (in a very non-rigorous manner) that entropy is also a function of time. Please bear with me as I plough through some essential basics before that. Let us first take a look at how the change in entropy (dS) is mathematically defined in basic Thermodynamics:
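namely the familiar

dS = dQrev / T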
Simple enough: a change in entropy (dS) is caused by a reversible flow of heat (dQrev) in or out of the system, divided by the temperature of the system (T). If the flow of heat is into the system, then dQrev is positive; otherwise it's negative. The reason why it's defined like this can't be explained using any simple ideas, but suffice it to say (the mathematical derivation is very complex and time consuming) that heat is the flow of energy brought about by molecular motion, which therefore increases the disorderliness of the system. As such, we use the flow of heat as a measure of the change in entropy of the system.
The temperature is present in the equation because, obviously, at a very high temperature a small flow of heat wouldn't cause that significant a change in the entropy of the system. Thus in Physics one would say that the entropy change is a heat flow weighted by the temperature of the system.
Let us now prove an important theorem in Thermodynamics, the Clausius Inequality, which shows that no matter what, as long as you run a cycle, the entropy of the universe cannot decrease - and for any real, irreversible cycle it must increase! Interesting, right - a proof! Let's start from the First Law of Thermodynamics, which states that:
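presumably written, in the convention where dW is the work done on the system, as

dU = dQ + dW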
This means the change in internal energy of a system is equal to the heat flowing into or out of the system plus the work done on the system. Notice that these two quantities can be either reversible (rev) or not. We then make the distinction between reversible work done on the system and irreversible work done on the system: that is, the reversible work done on the system is less than or equal to the irreversible work done on the system. The equality holds only when the work done is reversible, in which case a subscript rev is added to dW. And if we do some mathematical rearrangement, we see that:
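The omitted steps presumably look something like this (writing dWrev for the reversible work done on the system):

dWrev ≤ dW

dU = dQrev + dWrev = dQ + dW,   so that   dQrev - dQ = dW - dWrev ≥ 0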
Which must lead us to the conclusion that:
And therefore we have:
And recognising that the quantity on the left is simply the change in entropy, we write:
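Putting the last few statements into symbols, the omitted lines presumably read:

dQrev ≥ dQ,   hence   dQrev/T ≥ dQ/T,   hence   dS ≥ dQ/T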
And we proceed to determine the total change in entropy when we have a cycle by integrating over the cycle - the small circle on the integral sign denotes a closed (cyclic) integral:
And recall that, by definition, a cycle is a process that brings a system from state A to some other state, and then back to state A. If entropy is a state function, then the system, being back at state A, will possess the same entropy as before, and thus the change in entropy of the system over the cycle must be zero, since the final and initial entropies of the system are the same:
Which concludes the derivation of the Clausius Inequality:
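In symbols, these final steps are presumably:

∮ dS ≥ ∮ dQ/T   and   ∮ dS = 0,   so that   ∮ dQ/T ≤ 0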
So what exactly does this inequality mean? Well, you must notice that if the entropy change of the system is zero, then we must agree that the change in entropy of the universe equals the change in entropy of the surroundings, am I right? Now look at the equation above: it says that, summed around a cycle, the (temperature-weighted) heat flow into the system is negative or at most zero, and therefore, on balance, heat must flow out of the system into the surroundings.
Wait a minute - doesn't the heat flowing into the surroundings mean that the surroundings' entropy change must be positive? Hey, that means that the entropy of the universe increases, right?
Correct! In any real cycle, we end up increasing the entropy of the universe - we can't go against this principle. This means that every time you turn on and use the engine in your car, you're killing the universe. :p
---------------------------------------------------------
As for time-based entropy, let us consider the change in entropy again:
Then using the chain rule in basic calculus, we have:
Which then rearranges into:
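presumably arriving at

dS/dt = (1/T) (dQrev/dt)

starting from dS = dQrev/T.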
We have recognised that dQrev/dt simply refers to power, and we explicitly refer to the power as a function of time by writing it as Prev(t). And from the result we worked out previously, we have:
Which allows us to say:
So that we can conclude that:
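The last two steps presumably read something like:

dS/dt ≥ P(t)/T,   so that   S(t) - S(t0) ≥ ∫ [P(t')/T] dt'   (integrated from t0 to t)

with equality when the heat flow is reversible, i.e. when the power is Prev(t).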
In other words, the entropy changes with time because heat must flow as a function of time! If there exists no time for heat to flow, there can be no change in entropy. This last equation took for granted that the power is always causing heat flow into the system, which is not necessarily true, but it's just a special case anyway. :p
Therefore, by knowing the mathematical form of P(t), we can simply integrate within the limits and obtain the entropy as a function of time. This is actually easily done if we consider heat transmission via radiation. The law of heat transmission by radiation is summed up in the Stefan-Boltzmann Law of Radiation, where:
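the radiated power is (for an ideal black body of surface area A at temperature Ts, with σ the Stefan-Boltzmann constant):

P = σ A Ts^4

If, as a rough sketch, we treat both Ts and the temperature T of the receiving system as approximately constant over the time interval of interest, the integration is immediate:

ΔS(t) = ∫ [P/T] dt' = (σ A Ts^4 / T) t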
And there you have it, you actually can express the entropy of a system as a function of time (notice in the above I omitted the constant of integration because I'm lazy, :p).

Tuesday, January 15, 2008
Entropy As A State Function
I'm not going to go into a lecture here, but I'm just going to give you one question to consider regarding entropy: if entropy is truly a state function, then no matter how the change takes place, regardless of how the system was prepared, we can ignore what the system went through and simply consider the initial and final states, right?
Now, the above is an absolute truth that applies to all state functions, but let us consider this scenario (a Gedankenexperiment, or thought experiment, as I'd like to call it):
I've a piece of alloy, and it's a special alloy, known as β-brass, made up of 50% Zinc and 50% Copper. My question is, if I have β-brass at room temperature and I cool it down slowly and reversibly to 10 K, and alternatively, I cool it down rapidly to 10 K, will the change in entropy be the same?
If not, why? If yes, under what conditions? Answering this question will allow you to understand that entropy is, in fact, a special function of a special parameter of life.