Tuesday, January 20, 2009

Stay Still!

I think I’ll go into the idea of stationary states today; what exactly is a stationary state? Well, it’s essentially a state of a system whose energy remains constant over time. Defined more rigorously, it’s a state in which the expectation values of all observables remain constant over time.

For instance, let us take a look at the particle-in-a-box wavefunctions (as functions of time and position):

Ψ_n(x, t) = √(2/L) sin(nπx/L) e^(−iE_n t/ℏ),   with E_n = n²π²ℏ²/(2mL²),   n = 1, 2, 3, …

So let’s consider the probability density function of finding the particle within the box:

|Ψ_n(x, t)|² = Ψ_n*(x, t) Ψ_n(x, t) = (2/L) sin²(nπx/L)

Notice that the phase factors (the exponential time factors) cancel out when the wavefunction is multiplied by its complex conjugate! That is, the probability density isn’t a function of time at all! It’s solely a function of position, and thus doesn’t vary with time.

How about the expectation value of the position? Well let’s take a look:

⟨x⟩ = ∫_0^L Ψ_n*(x, t) x Ψ_n(x, t) dx = (2/L) ∫_0^L x sin²(nπx/L) dx = L/2

Hey! It turns out that the expectation value of the position doesn’t depend on time either! So, whenever the system exists purely in a single energy eigenstate, that state is a stationary state!

So what isn’t a stationary state? Well, let’s look at linear combinations of eigenstates; we know that any linear combination of eigenstates (properly normalized, of course) is still a solution of the time-dependent Schrödinger equation, so let’s try an equal superposition of the ground state and the first excited state (I’ve normalized it for all of you already):

Ψ(x, t) = (1/√2) [Ψ₁(x, t) + Ψ₂(x, t)] = (1/√L) [sin(πx/L) e^(−iE₁t/ℏ) + sin(2πx/L) e^(−iE₂t/ℏ)]

Let us now evaluate the probability density function:

|Ψ(x, t)|² = (1/L) [sin²(πx/L) + sin²(2πx/L) + 2 sin(πx/L) sin(2πx/L) cos((E₂ − E₁)t/ℏ)]

Notice that now the probability density function is a function of time as well! The time phase factors no longer cancel out; a cross term survives. It turns out that for any linear combination of distinct energy eigenstates, the state is no longer a stationary state, and observables like position and momentum will not have time-constant expectation values. (The energy expectation value is the one exception: it stays constant, since the Hamiltonian itself doesn’t change with time.)

If you’ve heard physics professors go, “It’s all because of the cross terms!” this is what it means. Haha.
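If you’d like to see the cross term at work with actual numbers, here’s a quick NumPy sketch (a toy calculation of my own, in natural units with ℏ = m = L = 1): the single eigenstate gives the same probability density at every instant, while the superposition visibly changes with time.

import numpy as np

L_box, hbar, m = 1.0, 1.0, 1.0

def psi(n, x):
    # Spatial part of the n-th particle-in-a-box eigenstate
    return np.sqrt(2.0 / L_box) * np.sin(n * np.pi * x / L_box)

def E(n):
    # Energy of the n-th eigenstate
    return (n * np.pi * hbar) ** 2 / (2.0 * m * L_box ** 2)

x = np.linspace(0.0, L_box, 5)
for t in (0.0, 0.5, 1.0):
    eigen = np.abs(psi(1, x) * np.exp(-1j * E(1) * t / hbar)) ** 2
    superpos = np.abs((psi(1, x) * np.exp(-1j * E(1) * t / hbar)
                       + psi(2, x) * np.exp(-1j * E(2) * t / hbar)) / np.sqrt(2)) ** 2
    print(t, eigen.round(3), superpos.round(3))   # eigen never changes; superpos does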

Postulato Negativito!

So I was looking, and I was just a little bit amused:

"So... there are two types of fundamental postulates in science. The first type is the self-evident kind, like how heat flows from hot to cold regions. The second type is one that is more subtle, and needs to be unfolded in a series of arguments before you get the point of it all."

And why am I laughing?

"Unfortunately, in Quantum Mechanics, the fundamental postulates all belong to the second type."

Lol!

Monday, January 19, 2009

Time Evolution

So if you look at it this way, then the time evolution of the wavefunction of a system is nothing more than an evolution governed by the energy of the system:

iℏ ∂Ψ(x, t)/∂t = Ĥ Ψ(x, t)

Then, separating out only the time-dependent part of the wavefunction:

Ψ(x, t) = φ(x) τ(t) = φ(x) e^(−iEt/ℏ)

It would appear that the time evolution of a system lies only in the changing of the phase of the system!

Strange. Weird. I still don't get what this means, haha.
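Operationally, at least, here's one way to see it: the phase factor always has modulus 1, so it can never show up in any probability. A tiny NumPy sketch with made-up numbers (my own illustration, not a derivation):

import numpy as np

hbar, E = 1.0, 2.0                        # made-up energy, natural units
phi = np.array([0.1, 0.5, 0.3])           # sample values of a spatial wavefunction

for t in (0.0, 1.0, 7.0):
    tau = np.exp(-1j * E * t / hbar)      # the time-evolution phase factor
    print(abs(tau), np.abs(phi * tau) ** 2)   # |tau| is always 1, so |Psi|^2 never moves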

Sunday, January 18, 2009

Double Pendulum Revisited

Remember this post: http://wulidancing.blogspot.com/2008/07/double-pendulum.html?

Well, it turns out that you don't really need any high-level mathematics for this question, and I'm just wondering how I didn't figure it out at the time, haha. So here's the whole set-up again: one pendulum bob hanging from a fixed pivot, with a second bob hanging from the first.


It'd be useful to take the equilibrium position (both bobs hanging straight down) as the zero of gravitational potential energy. Say both rods have length l and both bobs have mass m, and measure the angle θ₁ of the upper rod and θ₂ of the lower rod from the vertical:

V(θ₁ = 0, θ₂ = 0) = 0

The trick to solving this question is to lift each pendulum up one after the other; so let's swing the upper rod out to an angle θ₁ and see what happens to the potential energy:

V = 2mgl(1 − cos θ₁)

Notice that both pendulum bobs are raised by the same height, l(1 − cos θ₁), which explains the coefficient of '2'. Now let's swing the lower rod out to an angle θ₂ and see what happens:

V = 2mgl(1 − cos θ₁) + mgl(1 − cos θ₂)

Notice that we've just added another term to the potential energy, and this time the coefficient is '1' because only one bob is raised.

Quite easy right? Haha.
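If you want to play with the result, here's a small Python sketch of the potential energy as a function of the two angles (assuming, as above, equal bob masses m and equal rod lengths l; the function name and default values are my own):

import numpy as np

def V(theta1, theta2, m=1.0, l=1.0, g=9.81):
    # Potential energy measured from the equilibrium (both rods vertical):
    # tilting the upper rod lifts both bobs, tilting the lower rod lifts only one.
    return 2 * m * g * l * (1 - np.cos(theta1)) + m * g * l * (1 - np.cos(theta2))

print(V(0.0, 0.0))   # 0 at equilibrium
print(V(0.2, 0.0))   # upper rod displaced: the '2' term
print(V(0.2, 0.3))   # lower rod displaced too: one more term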

Saturday, January 17, 2009

Lagrangian in Action

As a follow up to my previous post, let's see some Lagrangian Mechanics in action, with the use of a very easy example - the simple pendulum! So let us consider the following set-up:


For your convenience, here are the quantities to take note of: the bob has mass m, the string has length l, the angular displacement from the vertical is θ, the speed of the bob is lθ̇, and its height above the lowest point is l(1 − cos θ). With all of these in place, we can now write down the Lagrangian immediately:

L = T − V = ½ m l² θ̇² − m g l (1 − cos θ)

Notice that I'm no longer using the x coordinate! This is part of what makes the Lagrangian so versatile: it can be expressed in terms of any generalised coordinate and still work. So, given this, let us work out the Euler-Lagrange equation:

∂L/∂θ = −m g l sin θ   and   d/dt(∂L/∂θ̇) = d/dt(m l² θ̇) = m l² θ̈

And we see that if we equate the two derivatives, we obtain the equation of motion:

m l² θ̈ = −m g l sin θ   ⟹   θ̈ = −(g/l) sin θ

Notice that we didn't even have to go about resolving out the various forces, like tension or weight or whatever! It's really easy with Lagrangian Mechanics!

I believe your teacher might have told you to "only make small oscillations when setting up the pendulum", the rationale being that "small oscillations result in simple harmonic motion." But how?

Easy: for small angular displacements we can use sin θ ≈ θ, and our equation of motion automatically reduces to:

θ̈ ≈ −(g/l) θ

Voila! Notice that the second derivative with respect to time of the angular displacement (i.e. the angular acceleration) is now proportional to the negative of the angular displacement! This is the very definition of simple harmonic motion!

Haha. Easy right? Lagrangian Mechanics really simplifies a lot of things. :)
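To convince yourself numerically, here's a short Python sketch of my own (with an arbitrary l = 1 m) that integrates the full equation of motion θ̈ = −(g/l) sin θ and compares the result with the small-angle SHM prediction θ(t) = θ₀ cos(√(g/l) t):

import numpy as np

g, l = 9.81, 1.0
theta0 = 0.1                      # small initial angle in radians, released from rest
dt, steps = 1e-3, 5000            # integrate for 5 seconds

theta, omega = theta0, 0.0
for _ in range(steps):
    omega += -(g / l) * np.sin(theta) * dt   # full (nonlinear) equation of motion
    theta += omega * dt

t = steps * dt
shm = theta0 * np.cos(np.sqrt(g / l) * t)    # small-angle prediction
print(theta, shm)                 # the two agree closely while theta0 stays small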

Friday, January 16, 2009

1, 2, 3... Action!

Last night I was getting down to understanding the Lagrangian as it is used in Classical Mechanics, which is something I’ve been putting off for quite a while now; but taking a closer look at it, it doesn’t seem as tough as I thought it’d be! Well well, so how does one start?

Well, a proper derivation proved too tedious for me to want to type it out (I tried deriving it for my friend Shaun, but I gave up halfway because it’s really too long and I was lazy, haha!) so here’s a really general sketch instead, which needs some prior knowledge of calculus of variations.

But I think all of you are smart. So I’m going to go ahead with this general derivation. Haha.

Alright, let’s start with the following equation, known as the Euler-Lagrange Equation:

d/dx(∂F/∂y′) − ∂F/∂y = 0

As you can see, this is simply a differential equation involving a function F of three variables: x, y and y′ (the derivative of y with respect to x).

Now, for any function F(x, y, y′) that fulfills this condition, we can immediately say that the following integral, taken over an interval with fixed endpoints, is stationary (an extremum):

I = ∫_{x₁}^{x₂} F(x, y, y′) dx

I won’t be explaining why this is the case, because it took me quite some pages to type out and I don’t think I’m in the mood to type everything out for this blog post, haha. So please do accept this for the time being!

So what does this have to do with the Lagrangian function in Classical Mechanics? Oh yeah, what is the Lagrangian?

Well, it’s simply this function right here:

L = T − V

Where T is the total kinetic energy and V is the total potential energy of the system. Notice also that:

T = ½ m v² = ½ m ẋ²

So we now rewrite the Lagrangian as:

L(x, ẋ) = ½ m ẋ² − V(x)

Now, let us consider the following derivatives of the Lagrangian:
∂L/∂x = −∂V(x)/∂x

Now, for all conservative potentials, which we are all familiar with, we know that:

F(x) = −∂V(x)/∂x

That is, the negative of the gradient of the potential energy is the force acting on the system at that position (JC students might remember this from basic Gravitational Theory). So let’s keep this in mind first! Now onto another derivative of the Lagrangian:

∂L/∂ẋ = m ẋ

Of course, we know that:

d/dt(∂L/∂ẋ) = d/dt(m ẋ) = m ẍ

Which is just mass times acceleration! Let us now put everything together:

d/dt(∂L/∂ẋ) − ∂L/∂x = m ẍ + ∂V(x)/∂x = m ẍ − F(x) = 0

Now let us compare the Lagrangian differential equation with the Euler-Lagrange equation:

d/dx(∂F/∂y′) − ∂F/∂y = 0   ⟷   d/dt(∂L/∂ẋ) − ∂L/∂x = 0
(with x ↔ t, y ↔ x(t), y′ ↔ ẋ, and the function F ↔ the Lagrangian L)

Hey! Doesn’t this mean that the equation of motion implies that the following integral is stationary (in fact, typically a minimum):

S = ∫_{t₁}^{t₂} L(x, ẋ) dt

This integral is known as the action; notice that Newton’s Second Law is actually an Euler-Lagrange equation when it is written in terms of the Lagrangian, which in turn means that the above integral, the action, is stationary (typically a minimum).

This came to be known as what is now called the Principle of Least Action.

And that’s about as watered-down as I can make it for all of you, haha.
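If you have SymPy lying around, you can let the computer grind through the same derivation symbolically. This is just a sketch of my own (a single particle in one dimension, with an arbitrary potential V); the point is that the Euler-Lagrange equation spits out Newton's Second Law:

import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
x = sp.Function('x')(t)           # position as a function of time
xdot = sp.Derivative(x, t)
V = sp.Function('V')              # arbitrary potential V(x)

L = sp.Rational(1, 2) * m * xdot**2 - V(x)    # the Lagrangian L = T - V

# Euler-Lagrange equation: d/dt(dL/dxdot) - dL/dx = 0
EL = sp.diff(sp.diff(L, xdot), t) - sp.diff(L, x)
print(sp.simplify(EL))            # m*x''(t) + dV/dx; setting it to zero gives F = m*a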

Sunday, January 11, 2009

Separate Them!

In this post I would like to highlight a nice mathematical technique known as separation of variables, which reduces a more complicated partial differential equation to simpler ordinary differential equations. To illustrate this technique, I shall be using the time-dependent Schrödinger wave equation in one dimension, which is as follows:

iℏ ∂Ψ(x, t)/∂t = Ĥ Ψ(x, t)

This equation tells us that the time evolution of the wavefunction depends on how the Hamiltonian operator acts on the wavefunction. That is, the time evolution depends, in some manner, on the energy of the system.

With that, we shall now consider the Hamiltonian operator acting on the wavefunction:

Ĥ Ψ(x, t) = [−(ℏ²/2m) ∂²/∂x² + V(x, t)] Ψ(x, t)

However, in most cases the potential is independent of time (say, an electron moving about a fixed centre of charge); we indicate this by re-writing:

V(x, t) → V(x)

Notice now that the potential function only depends on the position coordinate, and not time. Now, we can write out the full time-dependent wave equation as:

iℏ ∂Ψ(x, t)/∂t = −(ℏ²/2m) ∂²Ψ(x, t)/∂x² + V(x) Ψ(x, t)

In quantum mechanics, it turns out that the overall time- and position-dependent wavefunction can be factored into a product of a time-only part and a position-only part. That is, we can now write:

Ψ(x, t) = φ(x) τ(t)

This assumption (generally valid when the potential is not a function of time) leads to the technique known as separation of variables, where you effectively factor out one variable from a function. And if we make the substitution, we see that:

iℏ φ(x) dτ(t)/dt = −(ℏ²/2m) τ(t) d²φ(x)/dx² + V(x) φ(x) τ(t)

Dividing throughout by φ(x)τ(t):

iℏ (1/τ(t)) dτ(t)/dt = −(ℏ²/2m) (1/φ(x)) d²φ(x)/dx² + V(x)

Alright! Now on the left we have a function depending solely on time, and on the right a function depending solely on position! Notice that we have separated the variables! So what’s so good about this you say? Consider this:

‘If I vary x only, then only the right hand side of this equation could change, since the left hand side doesn’t depend on position. However, the two sides must remain equal, so the right hand side can’t actually change either! This means that both sides are equal to a constant!’

With this revelation, can you fashion a guess as to what this constant might be?

If you said energy, you are absolutely right! We can now equate both sides to the energy of the system:

iℏ (1/τ(t)) dτ(t)/dt = E   and   −(ℏ²/2m) (1/φ(x)) d²φ(x)/dx² + V(x) = E

Also notice that the equations are no longer partial differential equations, but ordinary differential equations! We’ve made life simpler! Hurrah!

Let us look at the time-dependent wavefunction:

iℏ dτ(t)/dt = E τ(t)   ⟹   τ(t) = e^(−iEt/ℏ)

Voila! The time-evolution depends on the energy of the system as shown! Now, you might be wondering about the position-dependent wavefunction, so let me rearrange it as shown:

−(ℏ²/2m) d²φ(x)/dx² + V(x) φ(x) = E φ(x)

Hark! Isn’t this the usual time-independent Schrodinger wave equation that we always see? Haha.
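As a sanity check, here's a small SymPy sketch of my own: take the particle-in-a-box ground state (V = 0 inside the box) with its e^(−iEt/ℏ) time factor, and verify that the product φ(x)τ(t) really does satisfy the full time-dependent equation.

import sympy as sp

x, t = sp.symbols('x t', real=True)
L, hbar, m = sp.symbols('L hbar m', positive=True)

E = sp.pi**2 * hbar**2 / (2 * m * L**2)           # ground-state energy of the box
phi = sp.sqrt(2 / L) * sp.sin(sp.pi * x / L)      # position part phi(x)
tau = sp.exp(-sp.I * E * t / hbar)                # time part tau(t)
Psi = phi * tau                                   # the separated product

lhs = sp.I * hbar * sp.diff(Psi, t)               # i*hbar dPsi/dt
rhs = -hbar**2 / (2 * m) * sp.diff(Psi, x, 2)     # Hamiltonian acting on Psi (V = 0)
print(sp.simplify(lhs - rhs))                     # prints 0: the equation is satisfied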

Friday, January 2, 2009

QM!

After a long break, and with Quantum Chemistry looming in the near future of the coming semester, let us go into a short (and I do mean ‘short’) discourse on the structure of Quantum Mechanics.

So, what is the principal difference between the so-called Classical Physics and Quantum Physics? Is it… that quantities are quantized and no longer continuous? Well, actually, no: even in the classical realm plenty of things come in discrete lumps (matter is made of atoms, after all), just in portions so small that they seem continuous to the human senses! So what is the main difference that makes a quantum mechanical treatment ‘quantum’?

First off, you should be acquainted with the term ‘observable’, which in general refers to any dynamical quantity that can be observed in real life; that is, properties such as momentum, position, weight, mass, energy and so on. By the word ‘observe’, we mean that the property belongs to a system: the system possesses that quantity, and what we measure depends on the state of the system.

In all of Physics, we are concerned with two types of quantities: ‘parameters’ and observables. So what is a ‘parameter’? Time, for instance, is a parameter: you can measure time, no doubt, but time doesn’t belong to a system. A system evolves with time, but it doesn’t possess time, so to speak. That is to say, the measure of time doesn’t depend on the state of any system; it marches on very much by itself. In that case, time isn’t an observable!

Alright, so with the definition of an observable laid out, we can now state the fundamental difference:

‘In classical physics, observables are represented by functions
but
in quantum physics, observables are represented by operators.’

So what is this difference? Now… a function, by itself, means something. For instance, if I tell you that the position of, say, my Chemistry professor depends on time as such:

x(t) = t²

I’m pretty sure you’d be tracing out a parabola in your head. Or maybe not. But the point is, in classical physics, the observable takes on a pretty much, well, for lack of a better word, ‘observable’ form. But if I tell you now that, say, the momentum of my Chemistry professor is represented by an operator as such:

p̂ = −iℏ d/dx

Then, what exactly does this mean? The derivative isn’t operating on anything at all! How can I say that the momentum is equal to this derivative – the derivative of what? So it seems like, hey, quantum observables aren’t that easy to observe, huh?

Therein lies the first magic of quantum physics that all science students should be aware of: without any measurement, you can’t say anything about any system. Which makes sense: if you don’t allow the operator to ‘operate’ (i.e. measure) on a system, then no information can be obtained, because an operator needs to operate on a system before it can extract any information!

So that’s why in quantum physics, there needs to be something called the ‘wavefunction’ – essentially, this wavefunction contains all information about a system. If you want to find out something about this system, easy! Just use the appropriate operator on the wavefunction, and it’ll give you a numerical value about just the thing you want to know about the system.
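Here's a tiny SymPy illustration of my own of what 'letting the operator operate' looks like: apply the momentum operator −iℏ d/dx to a plane wave e^(ikx), and out pops the momentum ℏk.

import sympy as sp

x = sp.symbols('x', real=True)
hbar, k = sp.symbols('hbar k', positive=True)

psi = sp.exp(sp.I * k * x)                  # a plane-wave wavefunction

def p_hat(wavefunction):
    # Momentum operator: -i*hbar d/dx. On its own it's just an instruction;
    # it only yields a number once it acts on a wavefunction.
    return -sp.I * hbar * sp.diff(wavefunction, x)

print(sp.simplify(p_hat(psi) / psi))        # prints hbar*k, the momentum of the wave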

I guess you should be aware of the Hamiltonian; the Hamiltonian is the ‘total energy operator’, meaning, when you apply the Hamiltonian to a wavefunction, you obtain the total energy of the system. And of course, we see this most often in the much-celebrated Schrodinger wave equation:

Ĥ Ψ = E Ψ

That is, if I wish to determine the total energy of a system, I just apply the Hamiltonian to the wavefunction of the system, and bam! I get the energy ‘E’!

Alright, I’m tired, and it’s 2:37 in the morning, and I think I’ll be taking some of these thoughts to bed. Haha.