SCE3337 Quantum Mechanics III R.T. Sang


SCE3337 Quantum Mechanics III (Quantum Mechanics and Quantum Optics)

Teaching Team
Dr Robert Sang can be found in Science II, Level 0, Room 0.15. Non-personal contact is possible via email at [email protected] or by telephone on 3875 3848. Lecture notes and problem sheets can be found on the web page: http://www.sct.gu.edu.au/~sctsang It is your responsibility to ensure that you are up to date with them; they should be downloaded prior to each lecture.

1.0 Introduction
This course is divided into two parts. The first part introduces operators and techniques that are applied on a regular basis when dealing with problems that require Quantum Mechanics. These include:

• New notation (bra/ket)
• Operator techniques
• Perturbation theory
• Selection rules for single-photon dipole-allowed transitions

The second half of the course deals with Quantum Optics, which includes:

• The interaction of light with matter
• Optical processes such as absorption and spontaneous/stimulated emission
• Lasers
• The two-level-atom treatment of strong interactions between laser light and atoms

Course Structure, Tutorials and Examinations There will be eight lectures that deal with the development of the tools of Quantum Mechanics which will also involve three tutorials. There will be a mid-semester 1 hour examination which has a weighting of 50% of the total marks for the course. Following the exam will be a further nine lectures in quantum optics which will also have two tutorials. There will be a final examination at the conclusion of the course which will cover all aspects of the course. Useful Texts Modern Physics and Quantum Mechanics- Anderson Quantum Mechanics- Cohen-Tannoudji, Diu, Laloë The Quantum Theory of Light- Loudon Optical Resonance and Two Level Atoms-Allen and Eberly


2.0 Basics of One-Particle Wave Function Space
The quantum state of a particle is defined at any given instant by a wavefunction ψ(r,t). Recall from QMII that |ψ(r,t)|² d³r represents the probability of finding, at time t, the particle in the volume d³r = dx dy dz about the point r. Therefore the total probability of finding the particle over all space is equal to 1; mathematically this is expressed as:



∫ |ψ(r,t)|² d³r = 1

where the integration extends over all space. The above integral must converge (the sum of the probabilities must yield 1); functions of this type are called square-integrable functions, and mathematicians call this set of functions L². In QM this set of functions is too wide in scope: since |ψ(r,t)|² has an actual physical meaning, we can only keep functions ψ(r,t) which are everywhere defined (we can't have particles disappearing!), continuous and differentiable. It is also possible to define wavefunctions that have a bounded domain; this allows us to be certain that a particle can be found in a finite region of space. For example, we can define a space inside the laboratory in which our particle can be found. We shall call this group of functions that satisfy our conditions the F set of wavefunctions, which is a subset of the L² set. This statistical interpretation of Quantum Mechanics is due to Born, Heisenberg and Bohr.

2.1 Structure of the Wave Function Space F

a) F as a Vector Space
It is easy to show that F satisfies all the criteria of a vector space. For example, if ψ₁(r) and ψ₂(r) exist in F, then a linear combination of the two should yield another function that belongs to F:

ψ(r) = λ₁ψ₁(r) + λ₂ψ₂(r)

where λ₁ and λ₂ are two arbitrary complex numbers. In order to show that ψ(r) belongs to the wavefunction space F we need to demonstrate that it is square integrable. Squaring ψ(r) gives

|ψ(r)|² = |λ₁|²|ψ₁(r)|² + |λ₂|²|ψ₂(r)|² + λ₁*λ₂ ψ₁(r)*ψ₂(r) + λ₁λ₂* ψ₁(r)ψ₂(r)*

The last two terms of this expression have the same modulus, which has as an upper limit

|λ₁||λ₂| [ |ψ₁(r)|² + |ψ₂(r)|² ]

|ψ(r)|² is therefore equal to or smaller than a function whose integral converges; since ψ₁(r) and ψ₂(r) are square integrable, ψ(r) must also be square integrable and hence exists in the vector space F.


b) The Scalar Product
The scalar product is defined as

(φ,ψ) = ∫ φ*(r) ψ(r) d³r

The integral always converges if φ(r) and ψ(r) belong to F. The scalar product has the following properties:

(φ,ψ) = (ψ,φ)*
(φ, λ₁ψ₁ + λ₂ψ₂) = λ₁(φ,ψ₁) + λ₂(φ,ψ₂)
(λ₁φ₁ + λ₂φ₂, ψ) = λ₁*(φ₁,ψ) + λ₂*(φ₂,ψ)

The scalar product is said to be linear in its second argument (second relationship) and antilinear in its first argument (third relationship). If (φ,ψ) = 0 then the two functions are said to be orthogonal.

(ψ,ψ) = ∫ |ψ(r,t)|² d³r

This number is always real and positive, and can only equal zero if ψ(r) = 0. √(ψ,ψ) is the norm of ψ(r).

c) Linear Operators
An operator is something that transforms a wavefunction into a new wavefunction: ψ'(r) = Aψ(r). An operator A is said to be linear if it produces the following transformation:

A[λ₁ψ₁(r) + λ₂ψ₂(r)] = λ₁Aψ₁(r) + λ₂Aψ₂(r)

Examples of Linear Operators
The X operator multiplies a function by x: Xψ(x,y,z) = xψ(x,y,z).
The differential operator Dx differentiates with respect to x: Dxψ(x,y,z) = ∂ψ(x,y,z)/∂x.


d) Products of Operators
Let A and B be two linear operators. The product AB is defined as

(AB)ψ(r) = A[Bψ(r)]

We say that B operates on ψ(r), producing a new function φ(r) = Bψ(r); A then operates on the new function φ(r). In general AB ≠ BA, i.e. they do not commute. The commutator of the operators A and B is written [A,B] and has the following definition:

[A,B] = AB − BA

If [A,B] = 0 then the operators are said to commute.

Example: [X, Dx], acting on an arbitrary function ψ(r):

[X, Dx]ψ(r) = XDxψ(r) − DxXψ(r)
            = x ∂ψ(r)/∂x − ∂/∂x [xψ(r)]
            = x ∂ψ(r)/∂x − ψ(r) − x ∂ψ(r)/∂x
            = −ψ(r)

Hence [X, Dx] = −1. Since the commutator is non-zero, X does not commute with Dx.

Exercise 1: Evaluate the commutator [Dx, X²], where the operator X² multiplies by x². Is it equal to [X², Dx]?
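As a quick check of this operator algebra, the commutator can be verified symbolically. The following is a minimal sketch using SymPy; the names x, psi, X and Dx are illustrative only and not part of the course notation.

```python
import sympy as sp

x = sp.symbols('x')
psi = sp.Function('psi')(x)   # arbitrary test function

X  = lambda f: x * f          # the X operator: multiply by x
Dx = lambda f: sp.diff(f, x)  # the D_x operator: differentiate w.r.t. x

# [X, Dx] psi = X(Dx psi) - Dx(X psi)  ->  -psi(x), i.e. [X, Dx] = -1
print(sp.simplify(X(Dx(psi)) - Dx(X(psi))))
# The same pattern (with X2 = lambda f: x**2 * f) can be used to check Exercise 1.
```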


e) Discrete Orthonormal Bases in F
Consider a countable set of functions in F which are labelled by a discrete index i (i = 1, 2, 3, ..., n, ...): u₁(r), u₂(r), ..., uᵢ(r), ..., all of which exist in F. The set of functions {uᵢ(r)} is said to be orthonormal if

(uᵢ(r), uⱼ(r)) = ∫ uᵢ(r)* uⱼ(r) d³r = δᵢⱼ

where δᵢⱼ is the Kronecker delta, which takes the values δᵢⱼ = 1 when i = j and δᵢⱼ = 0 when i ≠ j. This set of functions is said to constitute a basis if every function ψ(r) that exists in F can be expanded in one and only one way in terms of the uᵢ(r):

ψ(r) = Σᵢ cᵢ uᵢ(r)

The components of the wave function in the {uᵢ(r)} basis may be found by multiplying both sides of this equation by uⱼ(r)* and integrating over all space. Proof:

(uⱼ(r), ψ(r)) = (uⱼ(r), Σᵢ cᵢ uᵢ(r))
             = Σᵢ cᵢ (uⱼ(r), uᵢ(r))
             = Σᵢ cᵢ ∫ uⱼ(r)* uᵢ(r) d³r   (applying orthonormality)
             = Σᵢ cᵢ δⱼᵢ
             = cⱼ

⇒  cⱼ = (uⱼ(r), ψ(r)) = ∫ uⱼ(r)* ψ(r) d³r

The coefficient cⱼ of the wave function ψ(r) on uⱼ(r) is equal to the scalar product of ψ(r) with uⱼ(r). Once the basis set {uᵢ(r)} is chosen, it is completely equivalent to express the wave function ψ(r) in terms of the cᵢ coefficients with respect to their basis functions. The set of numbers cⱼ is said to represent ψ(r) in the {uᵢ(r)} basis.
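The projection recipe cⱼ = (uⱼ, ψ) is easy to illustrate numerically. The sketch below assumes a one-dimensional grid and the first few normalised Hermite–Gauss functions as the orthonormal set; it recovers the expansion coefficients of a test function by discretised scalar products. All variable names and parameters are illustrative choices, not part of the notes.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def u(n, x):
    """Normalised Hermite-Gauss function u_n(x), an orthonormal set on the real line."""
    c = np.zeros(n + 1); c[n] = 1.0
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return norm * hermval(x, c) * np.exp(-x**2 / 2)

# Build a test function from known coefficients, then recover them via c_j = (u_j, psi)
c_true = [0.5, 0.0, 0.8, 0.1]
psi = sum(c * u(n, x) for n, c in enumerate(c_true))
c_rec = [np.sum(u(n, x) * psi) * dx for n in range(4)]   # discretised scalar products
print(np.round(c_rec, 6))                                # ~ [0.5, 0.0, 0.8, 0.1]
```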


f) Expression for the Scalar Product in Terms of the Components
Let φ(r) and ψ(r) be two wave functions which have the following expansions in the basis {uᵢ(r)}:

φ(r) = Σᵢ bᵢ uᵢ(r)
ψ(r) = Σⱼ cⱼ uⱼ(r)

The scalar product can then be calculated:

(φ,ψ) = (Σᵢ bᵢ uᵢ(r), Σⱼ cⱼ uⱼ(r))
      = Σᵢⱼ bᵢ* cⱼ (uᵢ(r), uⱼ(r))
      = Σᵢⱼ bᵢ* cⱼ δᵢⱼ
      = Σᵢ bᵢ* cᵢ

Therefore

(ψ,ψ) = Σᵢ |cᵢ|²

The scalar product of two wave functions can be expressed in terms of the components of the functions in the basis {ui(r)}.


3.0 Dirac Notation and the Postulates of Quantum Mechanics
We now look at a new notation which is simpler to write down than the notation used until now (it saves us having to write all the messy integrals in QM). This notation is called Dirac Notation, as it was first introduced by Dirac. We will assume that each quantum state can be represented by a state vector which belongs to an abstract space we will call E, the state space of the particle. E is a subspace of Hilbert Space.

Postulate 1: The quantum state of any physical system is characterised by a state vector, belonging to a space E which is the state space of the system.

3.1 Dirac Notation
Any element or vector of E space is called a ket; therefore by postulate 1 we can define any wavefunction which describes a state α by a ket:

ψα ≡ |α>   (a ket vector)

The complex conjugate defines the bra vector, ψα* ≡ <α|. An operator Q acting on one of its characteristic states satisfies

Q|α> = qα|α>

This is known as an eigenvalue equation, where qα is called the eigenvalue (which in general will be complex) and |α> is known as an eigenfunction. Note that operators act on kets from the left, Q|i> = qᵢ|i>, and on bras from the right, <i|Q. Recall the orthogonality condition for normalised wavefunctions from lecture 1:

∫ ψᵢ* ψⱼ dτ = δᵢⱼ

In bra/ket notation this is expressed as

∫ ψᵢ* ψⱼ dτ = <i|j> = δᵢⱼ

Another way to think of this is that there is no overlap of the wavefunctions anywhere in space, hence the dot product must yield zero. Therefore no eigenfunction depends on any other eigenfunction for its definition, and as such is independent.

Postulate 4: The complete orthonormal set of eigenfunctions {|j>} can be used to represent or expand any general wavefunction of the same Hilbert subspace:

|ψ> = Σⱼ cⱼ |j>

and

<ψ|ψ> = Σᵢⱼ cᵢ* cⱼ δᵢⱼ = Σᵢ cᵢ* cᵢ = Σᵢ |cᵢ|² = 1

As we would expect for a normalised wavefunction, the sum of the squared moduli of the coefficients must be unity, since probability must be conserved.


Postulate 5: The expectation value of an operator Q is described in this notation by <ψ|Q|ψ>:

<Q> = ∫ ψ* Q ψ d³r / ∫ ψ* ψ d³r = <ψ|Q|ψ> / <ψ|ψ>

    = ∫ ψ* Q ψ d³r = <ψ|Q|ψ>   for normalised wavefunctions

The expectation value of an operator corresponds to the average of that operator over all space and time. For example, if the operator were X, the position operator, then its expectation value corresponds to the average value of the position of the particle over all space and time. Now from postulate 4 we can rewrite |ψ> as

|ψ> = Σⱼ cⱼ |j>

Q|ψ> = Q Σⱼ cⱼ |j> = Σⱼ cⱼ Q|j>

Using postulate 2, Q|j> = qⱼ|j>, so

Q|ψ> = Σⱼ cⱼ qⱼ |j>

<ψ|Q|ψ> = (Σᵢ cᵢ* <i|) Σⱼ cⱼ qⱼ |j>
        = Σᵢ Σⱼ cᵢ* cⱼ qⱼ <i|j>
        = Σᵢ Σⱼ cᵢ* cⱼ qⱼ δᵢⱼ
        = Σᵢ |cᵢ|² qᵢ

This equation yields a weighted average of the possible eigenvalues of Q. The probability of a single measurement yielding qᵢ is therefore given by |cᵢ|²; equivalently, the probability of finding the system in the state |i> is |cᵢ|².

Postulate 6: The adjoint of an operator Q is Q†, defined by the relationship

<i|Q†|j> = <j|Q|i>*

An operator is said to be Hermitian if Q = Q†.
• All Hermitian operators have real eigenvalues.
• All measurable observables (which are real quantities) have Hermitian operators.
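Both the weighted-average rule <Q> = Σᵢ|cᵢ|²qᵢ and the reality of Hermitian eigenvalues can be checked numerically. A minimal sketch, assuming a small Hermitian matrix as Q and a normalised coefficient vector c in its eigenbasis; the numbers are arbitrary.

```python
import numpy as np

Q = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])        # a small Hermitian operator
q, U = np.linalg.eigh(Q)                 # real eigenvalues q_i, orthonormal eigenkets |i>
print(q)                                 # real, as required for a Hermitian operator

c = np.array([0.6, 0.8], dtype=complex)  # normalised coefficients in the eigenbasis
psi = U @ c                              # |psi> = sum_i c_i |i>

print(np.real(np.conj(psi) @ Q @ psi))   # <psi|Q|psi>
print(np.sum(np.abs(c)**2 * q))          # sum_i |c_i|^2 q_i  -- the same number
```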


Proof: Given that Q|i> = qᵢ|i> and that Q = Q†, we need to show that the qᵢ are real. First,

<i|Q|i> = qᵢ<i|i> = qᵢ

From the definition of the adjoint,

<i|Q†|i> = <i|Q|i>* = (qᵢ)* = qᵢ*

Now the operator Q has been defined to be Hermitian, i.e. <i|Q|i> = <i|Q†|i>. Recall that for a complex number q, q = Re(q) + i Im(q) and q* = Re(q) − i Im(q). Equating the two outcomes above reveals that the only way that qᵢ* = qᵢ is if they have no imaginary part, so qᵢ must be real.

3.2 The Projection Operator
Suppose that we have a complete orthonormal set of wavefunctions {|a>}. We define the Projection Operator# as

Pa = |a><a|

The wavefunction |ψ>, as given by postulate 4, may be written as

|ψ> = Σᵢ cᵢ |i>

Operating with the projection operator on the wavefunction yields

Pa|ψ> = Σᵢ cᵢ |a><a|i> = Σᵢ cᵢ |a> δai = ca|a>

Thus the projection operator projects a wavefunction onto a particular basis function. Summing over all projection operators in the basis {|a>} gives

Σa Pa|ψ> = Σa ca|a>

The term Σa ca|a> is just the wavefunction written in the {|a>} basis set, hence

Σa Pa|ψ> = |ψ> = I|ψ>

where I is the Identity Operator, which maps a wavefunction back onto itself. The complete sum of projection operators therefore yields

Σa Pa = I   or   Σa |a><a| = I

# Note the difference between the projection operator |a><a| and the scalar product <a|a>, which is just a number.
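The completeness relation Σa|a><a| = I is easy to verify numerically for a finite orthonormal basis. A minimal sketch, assuming the eigenvectors of a random Hermitian matrix as the set {|a>}; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2                 # any Hermitian matrix
_, vecs = np.linalg.eigh(H)              # its eigenvectors form an orthonormal basis {|a>}

P_sum = sum(np.outer(vecs[:, a], vecs[:, a].conj()) for a in range(4))
print(np.allclose(P_sum, np.eye(4)))     # True: sum_a |a><a| = I

psi = vecs @ np.array([0.1, 0.2, 0.3, 0.4])      # |psi> with components c_a
P0 = np.outer(vecs[:, 0], vecs[:, 0].conj())     # projector onto |a=0>
print(np.allclose(P0 @ psi, 0.1 * vecs[:, 0]))   # True: P_a|psi> = c_a|a>
```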


Lecture 2

3.3 Matrix Representation of Operators
We will now look at an alternative representation of operators in quantum mechanics which was developed by Heisenberg. Assume that we have an operator Q that we allow to act on the wavefunction |ψa>:

|ψb> = Q|ψa>

This is a typical quantum mechanical process where we say that Q acts on |ψa> to produce a new wavefunction |ψb>. We now apply postulate 4 and let |ψb> and |ψa> be represented by superpositions of basis functions such that

|ψb> = Σk bk |k>
|ψa> = Σj aj |j>

The connection between the a and b coefficients can be found since

|ψb> = Σk bk |k> = Q Σj aj |j> = Σj aj Q|j>

We now multiply the expression by the bra vector <i|. For the LHS we obtain

<i|ψb> = Σk bk <i|k> = Σk bk δik = bi

For the RHS we obtain

<i|Q|ψa> = <i|Q Σj aj |j> = Σj aj <i|Q|j> = Σj aj Qij

Equating the LHS and RHS reveals

bi = Σj aj Qij

Qij is called the matrix element of the operator Q and is given by Qij = <i|Q|j>. It is easy to see how the process defined by the expression |ψb> = Q|ψa> can be represented by the following matrix equation:


 b1   Q11  b2   Q21  b3   Q31 . = . .   .    . .    b n   Qn1

Q12 Q22 Q32 . . . Qn2

Q13 Q23 Q33 . . . Qn3

. . . . . . .

. . . . . . .

. . . . . . .

Q1n   a 1  Q2n   a 2  Q3n   a 3  .  .  .  .    . .   Q nn  a n 

Wavefunction kets are represented by column vectors, whereas their complex conjugates (bras) are represented by row vectors (b1*, b2*, b3*, ..., bn*).

3.4 Matrix Inversions
A very useful property of the matrix representation is that it allows matrix inversion: if |ψb> = Q|ψa> then |ψa> = Q⁻¹|ψb>, where Q⁻¹ is the inverse of the matrix Q. This expression is valid provided that the determinant of the matrix, |Q|, is non-zero, i.e. Q is non-singular. Recall from first year maths that QQ⁻¹ = I, where I is the Identity Matrix (in QM we call it the Identity Operator since I|ψb> = |ψb>). This matrix is given by

    [ 1  0  0  ...  0 ]
    [ 0  1  0  ...  0 ]
I = [ 0  0  1  ...  0 ]
    [ .  .  .  ...  . ]
    [ 0  0  0  ...  1 ]

If Q is non-singular, a very simple technique that can be used to find the inverse is the cofactor technique:*

Q⁻¹ = (1/|Q|) Cᵀ

* It is also possible to use Gaussian elimination or row reduction techniques to find the inverse.


Cᵀ is the transpose¹ of the cofactor matrix, whose elements are the cofactors of the original matrix Q. To find the inverse, one simply finds the cofactor matrix, takes the transpose (exchanging rows and columns), and then multiplies each matrix element by 1/|Q|. The cofactor matrix is defined as

Cij = (−1)^(i+j) |qij|

where |qij| is the minor of the element Qij: a scalar given by the (N−1) × (N−1) determinant that remains when the ith row and jth column are struck out of the original matrix.

Example: Consider the general matrix A below:

    [ a  0  -b ]
A = [ 0  1   0 ]
    [ b  c   a ]

The cofactors of the matrix A are

First Row:
C11 = (−1)^(1+1) |1 0; c a| = a,   C12 = (−1)^(1+2) |0 0; b a| = 0,   C13 = (−1)^(1+3) |0 1; b c| = −b

Second Row:
C21 = (−1)^(2+1) |0 −b; c a| = −bc,   C22 = (−1)^(2+2) |a −b; b a| = a² + b²,   C23 = (−1)^(2+3) |a 0; b c| = −ac

Third Row:
C31 = (−1)^(3+1) |0 −b; 1 0| = b,   C32 = (−1)^(3+2) |a −b; 0 0| = 0,   C33 = (−1)^(3+3) |a 0; 0 1| = a

The cofactor matrix is therefore

    [  a      0      -b  ]
C = [ -bc   a²+b²   -ac  ]
    [  b      0       a  ]

Now to calculate the inverse matrix A⁻¹ we need the transpose of the cofactor matrix:

     [  a    -bc    b ]
Cᵀ = [  0   a²+b²   0 ]
     [ -b    -ac    a ]

¹ To find the transpose just swap the rows with the columns, i.e. Cᵀij = Cji.


The determinant of the matrix A is

|A| = a |1 0; c a| − 0 |0 0; b a| + (−b) |0 1; b c| = a² + b²

The inverse matrix is therefore

                    [  a    -bc    b ]
A⁻¹ = 1/(a² + b²) · [  0   a²+b²   0 ]
                    [ -b    -ac    a ]

Exercise: Confirm that this is indeed the inverse of A.

3.5 The Matrix Form of the Eigenvalue Equation
Consider the eigenvalue equation below:

Q|ψ> = λ|ψ>

Expanding |ψ> with respect to the complete orthonormal basis set of functions {|j>} yields

|ψ> = Σⱼ cⱼ |j>

Substitution into the eigenvalue equation yields

Σⱼ cⱼ Q|j> = λ Σⱼ cⱼ |j>

Multiplication by the bra vector <i| gives

Σⱼ cⱼ <i|Q|j> = λ Σⱼ cⱼ <i|j>
Σⱼ cⱼ Qij = λ Σⱼ cⱼ δij
Σⱼ cⱼ (Qij − λδij) = 0

The only non-trivial solution of this expression (cⱼ ≠ 0) is when the determinant

|Qij − λδij| = 0

This is known as the secular equation.


             | Q11−λ   Q12     Q13    ... |
|Qij−λδij| = | Q21    Q22−λ   Q23    ... | = 0
             | Q31    Q32     Q33−λ  ... |
             |  ...    ...     ...    ... |

The determinant yields a polynomial of order N, where N is the dimension of the matrix. The roots of the polynomial are the eigenvalues of the matrix Q. Note that if Q were a diagonal matrix then the secular equation would be

             | Q11−λ    0       0     ... |
|Qij−λδij| = |  0     Q22−λ    0     ... | = (Q11−λ)(Q22−λ)(Q33−λ)...(QNN−λ) = 0
             |  0       0     Q33−λ  ... |
             |  ...    ...     ...    ... |

Hence the eigenvalues of a diagonal matrix are just equal to the diagonal elements Q11, Q22, ..., QNN. It can be shown that there is a transformation, called a unitary transformation, such that any Hermitian matrix may be written in diagonal form. This is a very useful property when Q is very large, as it enables one to find the eigenvalues of a large system very simply using a computer program.
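In practice this is exactly what numerical libraries do. A minimal sketch, assuming numpy, which diagonalises the 3×3 operator used in the worked example that follows and returns its eigenvalues and orthonormal eigenvectors directly:

```python
import numpy as np

# Matrix representation of the operator Q from the example below
Q = np.array([[1.0, 0.0, -2.0],
              [0.0, 0.0,  0.0],
              [-2.0, 0.0, 4.0]])

# For a Hermitian (here real symmetric) matrix, eigh solves |Q - lambda I| = 0
vals, vecs = np.linalg.eigh(Q)
print(np.round(vals, 10))                     # [0, 0, 5]
print(np.allclose(vecs.T @ vecs, np.eye(3)))  # True: orthonormal eigenvectors
print(np.allclose(Q @ vecs, vecs * vals))     # True: Q|a> = lambda|a>
```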

Example: We now solve an example using the secular equation. Consider the following matrix representation of an operator Q:

    [ 1   0  -2 ]
Q = [ 0   0   0 ]
    [ -2  0   4 ]

Find the eigenvalues and hence the eigenvectors for this operator.

Step 1: Write down the eigenvalue equation Q|a> = λ|a>. The matrix representation of this eigenvalue equation is

Σⱼ₌₁³ cⱼ (Qij − λδij) = 0

In matrix notation the secular equation is given by


| 1−λ   0    −2  |
|  0   −λ    0   | = 0
| −2    0   4−λ  |

Step 2: Solve the determinant to find the eigenvalues.

⇒ (1−λ) |−λ 0; 0 4−λ| − (0) |0 0; −2 4−λ| + (−2) |0 −λ; −2 0| = 0
⇒ (1−λ){−λ(4−λ)} − 0 + 4λ = 0
⇒ (1−λ){−4λ + λ²} + 4λ = 0
⇒ −4λ + λ² + 4λ² − λ³ + 4λ = 0
⇒ λ(5λ − λ²) = 0
⇒ λ²(5 − λ) = 0

This equation is satisfied when λ = 0, 0, 5, and hence the eigenvalues are λ₁ = 0, λ₂ = 0, λ₃ = 5.

Step 3: We now determine the three eigenfunctions, which must be orthogonal. Going back to the eigenvalue equation and substituting in the value for λ₁:

Σⱼ₌₁³ cⱼ (Qij − λ₁δij) = Σⱼ₌₁³ cⱼ (Qij − 0·δij) = Σⱼ₌₁³ cⱼ Qij = 0

    [ 1   0  -2 ] [ c1 ]
⇒   [ 0   0   0 ] [ c2 ] = 0
    [ -2  0   4 ] [ c3 ]

Multiplication of this expression gives

c1 − 2c3 = 0
0 = 0
−2c1 + 4c3 = 0

The middle equation specifies that c2 is arbitrary, since it is undefined. Accordingly we require a value for c2, and for simplicity we choose c2 = 0. We also find that c1 = 2c3; we choose c3 = 1 and hence c1 = 2.


Hence the eigenvector for the eigenvalue λ₁ is

|a1> = c3 (2, 0, 1)ᵀ

At this point c3 is arbitrary, but since it is usual to normalise the eigenfunctions we can solve for this scalar using the normalisation condition:

|c1|² + |c2|² + |c3|² = (2c3)² + c3² = 5c3² = 1   ⇒   c3 = 1/√5

Therefore the normalised eigenfunction is

|a1> = (1/√5) (2, 0, 1)ᵀ

Now for the second eigenvalue λ₂ = 0 we would obtain the same result from the secular equation, i.e. c2 would be arbitrary and c1 = 2c3. But recall that the solutions for each eigenvalue must have eigenvectors that are orthogonal, hence |a2> must be orthogonal to |a1>. This is simple to evaluate since

|a2> = (c1, c2, c3)ᵀ = (2c3, c2, c3)ᵀ   (using the fact that c1 = 2c3)

The orthogonality condition specifies that <a1|a2> = 0:

(1/√5)(2, 0, 1)·(2c3, c2, c3)ᵀ = (1/√5)(4c3 + c3) = √5 c3 = 0   ⇒   c3 = 0

For the normalised eigenfunction, c2 can be determined from the normalisation condition:

|c1|² + |c2|² + |c3|² = c2² = 1   ⇒   c2 = 1

Thus

|a2> = (0, 1, 0)ᵀ

Now we determine the final eigenfunction, for the eigenvalue λ₃ = 5. Applying the identical procedure as before,

    [ 1−5   0    −2  ] [ c1 ]        [ −4   0   −2 ] [ c1 ]
    [  0   0−5    0  ] [ c2 ] = 0  ⇒ [  0  −5    0 ] [ c2 ] = 0
    [ −2    0   4−5  ] [ c3 ]        [ −2   0   −1 ] [ c3 ]

This yields

−4c1 − 2c3 = 0
−5c2 = 0
−2c1 − c3 = 0

Thus c1 = −½c3 and c2 = 0. The final eigenfunction in its general form (rescaling the arbitrary constant so that the components are whole numbers) is

|a3> = c3 (1, 0, −2)ᵀ

Applying normalisation gives

|c1|² + |c2|² + |c3|² = c3² + (−2c3)² = 5c3² = 1   ⇒   c3 = 1/√5

|a3> = (1/√5) (1, 0, −2)ᵀ

Exercise: Check that the eigenvector |a3> is orthogonal to |a1> and |a2>.

Lecture 3

4.0 The Simple Harmonic Oscillator
[Figure: a mass m attached to a spring, displaced from its equilibrium position x = 0 and subject to the restoring force F = -kx.]

Consider the simple harmonic oscillator in the figure above. When the mass m is pulled back and released it undergoes Simple Harmonic Motion (SHM), which is described by the equation

m d²x/dt² + kx = 0

k is known as the stiffness of the spring, or the spring constant for the restoring force shown in the figure. This equation is well known and can be rewritten in the following form:

d²x/dt² = −ω²x

where ω = √(k/m) is the resonant frequency of the SHO. The equation of motion has the solution (this is very simple: just try the trial solution e^(qt) and substitute into the equation above)

x = x₀ cos(ωt)

The simple harmonic oscillator plays a large role in many areas of both classical and quantum physics. It provides a model for systems as diverse as:

• Atoms
• Molecules
• Electromagnetic radiation
• Atom traps

The classical equations of motion for a SHO are given by

T = px²/2m    (Kinetic Energy)
V = ½kx²      (Potential Energy)

Recall that the Hamiltonian is

H = T + V = px²/2m + ½kx²


We now use the principle of correspondence to determine the quantum mechanical Hamiltonian for the SHO. By direct analogy we replace the linear momentum px by the operator

px → −iħ ∂/∂x

Thus

H = −(ħ²/2m) ∂²/∂x² + ½mω²x²

Let |ψ> be an eigenfunction of the operator H. Recall that the Hamiltonian operating on a wavefunction yields energy eigenvalues, and we now seek solutions to the eigenvalue equation H|ψ> = E|ψ>. Substituting our quantum mechanical expression for the Hamiltonian gives

( −(ħ²/2m) ∂²/∂x² + ½mω²x² )|ψ> = E|ψ>

⇒  −(ħ²/2m) ∂²/∂x² |ψ> + ( ½mω²x² − E )|ψ> = 0

For convenience we multiply through by 2/(ħω), which yields

−(ħ/mω) ∂²/∂x² |ψ> + ( (mω/ħ)x² − 2E/(ħω) )|ψ> = 0

This equation may be solved by a power technique developed by Dirac which is quite often used in solving equations involving non-commuting operators. It also forms the basis of much advanced theoretical work in quantum mechanics.

Let α = mω/ħ. Then the equation above can be recast as

−(1/α) ∂²/∂x² |ψ> + αx² |ψ> = (2E/ħω) |ψ>

We now make the substitutions

q = √α x        ⇒  q² = αx²
p = (−i/√α) ∂/∂x  ⇒  p² = −(1/α) ∂²/∂x²

Then the SE can be rewritten as

(p² + q²)|ψ> = ε|ψ>

where ε is a dimensionless energy term given by ε = 2E/(ħω).


4.1 The Quantum Mechanical Properties of p and q
It is instructive to consider the properties of the operators p and q. The commutator of p and q is equal to −i.

Proof: Let |ψ> be an arbitrary wavefunction; then

[p,q]|ψ> = pq|ψ> − qp|ψ>
         = (−i/√α ∂/∂x)(√α x)|ψ> − (√α x)(−i/√α ∂/∂x)|ψ>
         = −i{ |ψ> + x ∂/∂x |ψ> } + i x ∂/∂x |ψ>
         = −i|ψ>

Therefore [p,q] = −i.

4.2 Annihilation and Creation Operators
Since p and q do not commute, we cannot simply factor the expression p² + q² into (p+iq)(p−iq) in the eigenvalue equation. Instead consider the following linear forms:

(q+ip)(q−ip) = q² + ipq − iqp + p² = q² + i(pq−qp) + p² = q² + i[p,q] + p² = q² + p² + i(−i) = q² + p² + 1

(q−ip)(q+ip) = p² + iqp − ipq + q² = p² + i(qp−pq) + q² = p² + i[q,p] + q² = q² + p² + i(i) = q² + p² − 1

The addition of these two expressions yields

(q+ip)(q−ip) + (q−ip)(q+ip) = 2(p² + q²)

⇒  p² + q² = ½{(q+ip)(q−ip) + (q−ip)(q+ip)}

We have now transformed our operator form of the eigenvalue equation into an equation that contains linear terms of the operators p and q. This effectively transforms a second order differential equation into two consecutive first order differential equations. We now introduce two new operators, the raising and lowering operators:

a = (1/√2)(q + ip)     the lowering operator, commonly called the annihilation operator

a† = (1/√2)(q − ip)    the raising operator, also called the creation operator


The Hamiltonian expression can now be written in terms of these operators:

p² + q² = ½{(q+ip)(q−ip) + (q−ip)(q+ip)} = aa† + a†a

Hence the eigenvalue equation is given by

(aa† + a†a)|ψ> = ε|ψ>

We can further rewrite the operators p and q in terms of the operators a and a†:

a + a† = (1/√2)(q+ip) + (1/√2)(q−ip) = √2 q   ⇒   q = (1/√2)(a + a†)

a − a† = (1/√2)(q+ip) − (1/√2)(q−ip) = √2 ip  ⇒   p = −(i/√2)(a − a†)

The commutation relation of the raising and lowering operators is

[a,a†] = aa† − a†a = ½{(q+ip)(q−ip) − (q−ip)(q+ip)} = ½{(q²+p²+1) − (q²+p²−1)} = 1

⇒  aa† = 1 + a†a   and   a†a = aa† − 1     (Important result!)

It is also useful to note that, using the commutation relation, we can rewrite the Hamiltonian in terms of these operators as

H = aa† + a†a = 1 + 2a†a = 2aa† − 1

The product of the two operators, a†a, is called the number operator n̂; the reason for this will become clearer later.
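These operator identities can be checked numerically using truncated matrix representations of a and a†. The sketch below assumes the standard matrix elements a|n> = √n|n−1> (the normalisation factors derived later in the course / Problem Sheet 2); the truncation size N is arbitrary, and the identities hold exactly except at the truncation edge.

```python
import numpy as np

N = 8                                       # truncate the number basis |0>,...,|N-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # a|n> = sqrt(n)|n-1>
adag = a.T                                  # creation operator

# [a, a+] = 1, exact away from the truncation edge
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))     # True

# Dimensionless Hamiltonian p^2 + q^2 = a a+ + a+ a = 1 + 2 a+ a
H = a @ adag + adag @ a
print(np.round(np.linalg.eigvalsh(H[:-1, :-1]), 6))   # [1, 3, 5, ...] = 2n + 1
```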


4 . 3 Eigenvectors and Eigenvalues of a† a and a a† : Using the results derived in the last page we can now evaluate our eigenvalue equation. H| > = ( p 2 + q 2 ) | > = | > ⇒

(aa † + a† a) |

>= |

>

we now substitute for a a † by using our commutation relation a a† = 1 + a† a ⇒

(1+ a† a + a †a)|



2a† a |

>= |



2a† a |

>= ( − 1)|



a †a |

>= nˆ |

>= | > −1|

>=

> >

> ( − 1) | 2

>

Hence the number operator nˆ has the same eigenfunctions as the Hamiltonian. The eigenvalue of the number operator nˆ = a† a is given by

( − 1) 2

Consider a further operation in which we operate with a† a: ( − 1) | > 2 ( − 1) (aa † − 1)| >= | > 2 ( − 1) aa † | >= | > + 1| > 2 ⇒

a †a |

>=

aa † |

>=

( + 1) | 2

>

Once again we have shown that the operator aa† has the same eigenfunctions as the Hamiltonian. The eigenvalue of aa† is given by

( + 1) 2


4 . 4 Eigenvectors and Eigenvalues of the Annihilation and Creation Operators a and a† : We now consider the application of the raising and lowering operators to the Hamiltonian eigenvalue equation. Let |n> be an eigenstate of the Hamiltonian which has the eigenvalue n . As a consequence the eigenstate |n> is also and eigenstate of the operators a† a and aa† as proved earlier. The eigenvalue equations is then H|n> =

n

|n>



(aa † + a† a) | n >=



(1+ 2a † a)| n >=



(a † + 2a † a †a)| n >= ε n a† | n >

n

n

|n>

| n > we now act on this equation with a† .

we want to factor out a† the reason will become apparent further along, to accomplish this we use the commutation relation and replace a † a with aa † − 1 ⇒

(a † + 2a † {aa † − 1})| n >=



(1+ 2a † a − 2)a† | n >=



(1+ 2a † a)a† | n >= ( n a† + 2a† )| n >



(1+ 2a † a)[a † | n >] = (

n

n

n

a† | n >

a† | n >

+ 2)[a † | n >]

We now define |n+1> = a† |n>, then the above expression is reduced to (1+ 2a † a)| n + 1 >= (

n



(aa † + a† a) | n + 1 >= (



H | n + 1 >=

n+1

+ 2)| n + 1> n

+ 2) | n + 1 >

| n + 1 >= (

n

+ 2)| n + 1 >

This is exactly the same form of the energy eigenvalue equation given above. This shows that given a state |n> of dimensionless energy n there exists another state which has two units of energy more than this state. The upper state has been defined as |n+1>. One can repeat this procedure which yields a ladder of levels increase to infinity. From this we can conclude that

The operator a† INCREASES the energy and the eigenstate of the system


We now consider the action of the operator a using the same technique: aH|n> = n a|n> ⇒

a(aa † + a † a)| n >=



a(1+ 2a† a)| n >=

n



(a + 2aa † a)| n >=

n



(a + 2(1+ a †a)a)| n >=

n

a| n >



(a + 2a + 2a† aa)| n >=

n

a| n >



{(1 + 2a † a)a + 2a}| n >=



(1+ 2a † a)[a | n >] = (

n

a|n >

a |n > a|n >

n

n

a| n >

− 2)[a | n >]

We now define a|n> = |n-1> H|n-1> =

n-1

|n-1> = ( n -2)|n-1>

This is the same as the eigenvalue equation. This equation demonstrates that given a state |n> of dimensionless energy n there exists a state which has 2 units of lower energy than this state. The lower state is defined as |n-1>.

The operator a DECREASES the energy and lowers the eigenstate of the system Note that the operators a and a† do not have eigenfunctions defined by |n> unlike a† a and aa† which had the same eigenfunctions as H.


Lecture 4 4 . 5 Higher Order Eigenfunctions of H Recall from section 4.3 aa † | n >=

(

a † a | n >=

(

+ 1) | n > and 2

(1)

− 1) |n> 2

(2)

n

n

Summing equations (1) and (2) yields:

(aa



( + 1) ( n − 1)  + a† a) | n >=  n + |n >  2 2 



( + 1) ( n − 1)  H | n >=  n + |n >  2 2 



H | n >=

n

|n>

(3)

Which is what we expect i.e. we recover the original eigenvalue equation. If now operate with a† on equation (1) we can determine a higher order eigenfunction of H: a † a[a† | n >] =

(

n

+ 1) † [a | n >] 2

(4)

Recall from last lecture a † n = n + 1 , hence a † a | n + 1 >=

(

n

+ 1) | n + 1> 2

Replacing a† a with aa† -1 (commutation relation) in equation (4) reveals (aa † − 1)[a † | n >] =

(

n

+ 1) † [a | n >] 2 (

+ 1) † [a | n >] 2



aa † [a† | n >] − 1[a † | n >] =



aa † [a† | n >] =

(

n

+ 1) † [a | n >] + 1[a† | n >] 2



aa † [a† | n >] =

(

n

+ 3) † [a | n >] 2

n

We now sum equations (4) and (5) giving

(5)


(aa



2 ( + 3) ( n + 1) † + a† a)[a† | n >] =  n + [a | n >]  2 2 

H[a† | n >] = (

+ 2)[a † | n > ]

n

(6)

Operating with a† on equation (5) gives a † aa † [a † | n >] = (



a † a(a† )2 | n >=



a † a[(a† )2 | n >] =



a † a | n + 2 >=

(

n

(

n

n

+ 3) † † a [a | n >] 2

+ 3) † 2 (a ) | n > 2

(

n

+ 3) † 2 [(a ) | n >] 2

(7)

+ 3) |n+2> 2

We now replace a† a in equation (7) by aa† -1 :

(aa



− 1)[(a† ) 2 | n >] =

(

n

+ 3) † 2 [(a ) | n >] 2



aa † [(a† )2 | n >]− 1[(a† )2 | n >] =



aa † [(a† )2 | n >] =

(

n

(

n

+ 3) † 2 [(a ) | n >] 2

+ 5) † 2 [(a ) | n >] 2

(8)

Summing equations (7) and (8) gives;

(a a + aa )[(a ) †



† 2

( + 3) ( n + 5)  † 2 | n >] =  n + [(a ) | n >]  2 2 

Therefore H[(a† )2 | n >] = (

n

+ 4)[(a † ) 2 | n >]

(9)

If we do this procedure m times then equations (3), (6) and (9) allow us by induction to deduce the general result: H(a † ) m | n >= (p 2 + q 2 )(a† )m | n >= { n + 2m}[(a† )m | n >] = {

n

+ 2m}| n + m >

I t i s e a s y t o s e e w h y a† is called the raising operator as each successive application a higher energy is obtained.


One can show that successive operations by a decreases the energy and as such it is called the lowering operator. One can show by induction the general result: H(a)m | n >= ( p2 + q 2 )(a)m | n >= {

n

− 2m}[(a)m | n >] = {

n

− 2m}| n − m >

4.6 Summary of Wavefunctions, Operators and Eigenvalues for the Simple Harmonic Oscillator

Wavefunction           Eigenvalue of H   Eigenvalue of aa†   Eigenvalue of a†a   Energy
|n>                    εn                (εn + 1)/2          (εn − 1)/2          En
a†|n> = |n+1>          εn + 2            (εn + 3)/2          (εn + 1)/2          En + ħω
(a†)^m|n> = |n+m>      εn + 2m           (εn + 2m + 1)/2     (εn + 2m − 1)/2     En + mħω
a|n> = |n−1>           εn − 2            (εn − 1)/2          (εn − 3)/2          En − ħω
(a)^m|n> = |n−m>       εn − 2m           (εn − (2m−1))/2     (εn − (2m+1))/2     En − mħω

4 . 7 Lower Energy Boundary Conditions and State Normalisation: It is clear that physically we must have a lower limit when applying the lowering operator since we have postulated that the energy of the simple harmonic oscillator can not be negative. Thus there exist a lowest positive energy state which we define to be the ground state. This means that the measurement of energy, or the expectation value of this energy after a measurement has been obtained must be non-negative. The expectation value is given by:
= H = p2 + q2 =

2

= = + Hence for


0

+

0

Now we define the wavefunction |n> = |0> as the ground state (the state with lowest energy). Clearly acting on this wavefunction with the lowering operator must result in a condition of no energy (otherwise the state |0> would not be the lowest state). Hence a|0> = 0 If we now act on this state with the creation operator gives

4


a† a|0> = 0. a† =0

(1)

but recall that a † a | n >=

n

−1 |n> 2

Hence for the ground state a † a | 0 >=

0

−1 |0> 2

(2)

So assuming that |0> exists, for equation (2) to be consistent with equation (1) requires that a † a | 0 >= ⇒

0

0

−1 | 0 >= 0 2

=1

Recall from last lecture that

n

was dimensionless energy term given by

n

=

2En putting n = 0: h

0

=

2E0 h

2E0 h



1=



E0 =

1 h 2

This says that the lowest energy state is non-zero which is strikingly different to the classical SHO which has a lowest energy state equal to zero.

This lowest energy quantum mechanical state is defined as the Zero Point Energy

5


Using the result for the ground state energy plus the results summarised in the table (section 4.6), it is possible to find a general form for the energy eigenvalues:

E0 = ½ħω
E1 = ħω + ½ħω = (1 + ½)ħω
E2 = 2ħω + ½ħω = (2 + ½)ħω

Therefore, by induction,

En = (n + ½)ħω    where n = 0, 1, 2, ...
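This spectrum can be cross-checked by diagonalising the SHO Hamiltonian directly on a position grid. A minimal sketch using a finite-difference approximation in dimensionless units (ħ = m = ω = 1, so the expected energies are n + ½); the grid parameters are arbitrary choices.

```python
import numpy as np

# Dimensionless units hbar = m = omega = 1, so H = -(1/2) d^2/dx^2 + (1/2) x^2
N = 1000
x = np.linspace(-8, 8, N)
dx = x[1] - x[0]

lap = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N) + np.diag(np.ones(N - 1), 1)) / dx**2
H = -0.5 * lap + 0.5 * np.diag(x**2)

print(np.round(np.linalg.eigvalsh(H)[:5], 4))   # ~ [0.5, 1.5, 2.5, 3.5, 4.5] = n + 1/2
```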

This enables us to represent the energy states of the quantum SHO diagrammatically:

[Figure: energy-level ladder of the quantum SHO. The operator a† steps up the ladder and a steps down; the levels are E0 = ħω/2, E1 = 3ħω/2, ..., En = (n + ½)ħω, En+1 = (n + 3/2)ħω, with E = 0 lying below the ground state.]

Note that the dimensionless energy term is given by

εn = 2En/(ħω) = (2/ħω)(n + ½)ħω = 2n + 1

The meaning of the operator a†a as the number operator n̂ is now clear, since

a†a|n> = n̂|n> = ((εn − 1)/2)|n>

Substitution of the value of εn from above gives

n̂|n> = ((2n + 1 − 1)/2)|n> = n|n>

Thus n̂|n> = n|n>, i.e. n labels the nth energy state of the harmonic oscillator.

6


4.8 Normalisation of Eigenfunctions
It should be noted that the wavefunctions derived so far for the quantum mechanical SHO, namely |n+1> = a†|n>, have not been normalised. To achieve this we require that <n|n> = <n+1|n+1> = 1. As a result we need to introduce a numerical factor into the expressions derived above to account for the normalisation. Therefore we should write the expressions as

a|n> = Bn−1 |n−1>
a†|n> = Bn+1 |n+1>

where the Bm are coefficients that are yet to be determined. As an example we will determine the normalised ground state wavefunction. Recall

a = (1/√2)(q + ip)

Operating on the ground state gives

a|ψ0> = (1/√2)(q + ip)|ψ0>        (1)

Also recall that q = √α x and p = (−i/√α) ∂/∂x, so that x = q/√α and therefore

p = (−i/√α) ∂/∂x = −i ∂/∂q

Substituting back into (1) yields

a|ψ0> = (1/√2)(q + ∂/∂q)|ψ0> = 0

⇒  (q + ∂/∂q)ψ0 = 0
⇒  ∂ψ0/∂q = −q ψ0
⇒  ∫ dψ0/ψ0 = −∫ q dq
⇒  ln ψ0 = −q²/2 + c0

Thus

ψ0 = C0 e^(−q²/2)

where C0 is an arbitrary integration constant which can be determined by normalising the wavefunction, <0|0> = 1:

<0|0> = ∫ ψ0* ψ0 dq = |C0|² ∫ e^(−q²) dq = |C0|² √π = 1

Yielding for the normalisation constant

C0 = (1/π)^(1/4)

Hence the normalised ground state wavefunction is

ψ0 = (1/π)^(1/4) e^(−q²/2)

Problem Sheet 2 will be used to determine the normalisation coefficients for an arbitrary wavefunction |n>.
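The ground state result is easy to verify symbolically. A minimal sketch with SymPy, working in the dimensionless variable q:

```python
import sympy as sp

q = sp.symbols('q', real=True)
psi0 = (1 / sp.pi) ** sp.Rational(1, 4) * sp.exp(-q**2 / 2)

print(sp.integrate(psi0**2, (q, -sp.oo, sp.oo)))   # 1  -> <0|0> = 1
print(sp.simplify(q * psi0 + sp.diff(psi0, q)))    # 0  -> (q + d/dq) psi_0 = 0, i.e. a|0> = 0
```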

1


Lecture 5

5.0 Approximation Techniques in Quantum Mechanics
In a quantum mechanical treatment of the physical world the determination of physical processes is governed by the Schrödinger equation. The equation may be time independent or time dependent:

H|ψ(r)> = E|ψ(r)>                 (Time Independent S.E.)

H|Ψ(r,t)> = iħ ∂/∂t |Ψ(r,t)>      (Time Dependent S.E.)

We have already solved the Schrödinger equation for the time independent case of the simple harmonic oscillator. In principle, a physical system is described by either of these equations, depending on whether we are interested in time dependent or time independent behaviour. As it turns out, nature is not simple and there are few exactly solvable problems. Some examples of exact solutions are:

• The harmonic oscillator
• Bound states of a particle in a square box
• The hydrogen atom

Actually, even the hydrogen atom is not exactly solvable in reality, even though the wavefunction can be written down exactly for this system: small perturbations such as spin-orbit effects have to be considered to account for real experimental observations. If a system consists of more than one particle then interactions between the particles also have to be taken into consideration. As an example, the He atom has two electrons that not only interact with the ionic core but also interact with each other. It is impossible to solve such a three-body problem exactly. Early quantum physicists did not, however, completely give up on solving these problems: they made allowances for difficult problems by introducing approximation techniques.

2


5.1 Time Independent Perturbation Theory
The basic idea of time independent perturbation theory is as follows. Suppose we have a system which has a Hamiltonian H0 and we apply a small perturbation, h, to the system such that the system Hamiltonian is

H = H0 + h

Here H0 has a much greater influence over the system than h does. We also assume that the unperturbed Schrödinger equation can be solved exactly:

H0|i> = Ei|i>

|i> are the associated eigenkets of the Hamiltonian H0 and Ei are the corresponding eigenvalues. The eigenkets form a complete orthonormal set, as we have shown in previous lectures. Then any ket vector can be written as a linear superposition of the eigenkets |i>, with

<i|j> = δij   and   |n> = Σi ani |i>

For convenience we rewrite the system Hamiltonian as

H = H0 + λh

where λ is a free parameter defined in the interval 0 ≤ λ ≤ 1. This allows us to turn the perturbation on and off.

We now want to solve the following Schrödinger equation:

H|i'> = Ei'|i'>

where |i'> are the eigenkets of the perturbed system, and as such are not the same eigenkets as |i>; likewise the eigenvalues Ei ≠ Ei'. We further assume that the sets {|i'>} and {|i>} are non-degenerate (i.e. their eigenvalues are unique). This point will be important in the following discussion, as it allows us to get around the problem of a division by zero that will come up later. Perturbation theory covering degenerate states will be covered later in the course. We also require that in the limit λ → 0,

lim |i'> → |i>   and   lim Ei' → Ei

We now let the new eigenkets and eigenvalues be represented by the following power series:

|i'> = |i> + λ|i1> + λ²|i2> + ... + λⁿ|in> + ...
Ei'  = Ei + λEi1 + λ²Ei2 + ... + λⁿEin + ...

Thus |i'> = |i> and Ei' = Ei if λ = 0.

3


It is assumed that successive terms of this power series gets smaller and as a result the series converges. Substitution of the new eigenkets and eigenvalues into H | i' >= Ei' | i' > gives; (H0 + h)| i' >= Ei' | i' > Substitution of the power series for |i'> and Ei' yield (H0 + h){| i > + | i1 > +

(

= Ei + Ei1 +



2

(

2

| i2 > +....+

Ei2 + .....+

(H0 + h){| i > + | i1 > + − Ei + Ei1 +

2

Ei2 + .....+

Grouping terms in powers of

2

n

)

| in > +..}

Ein + ... {| i > + | i1 > +

| i2 > +....+ n

n

n

)

2

| i2 > + ....+

n

| in > +..}

n

| in > +..} = 0

| in > +..}

Ein + ... {| i > + | i1 > +

2

| i2 > +....+

:

1st− Order 0 −Order 474444444 8 644 4 7444 8 6444444 {H0 | i > − Ei | i >} + H0 | i1 > +h | i > −Ei | i1 > − Ei1 | i > +

{

{

}

2 H0 | i2 > + h | i1 > − Ei2 | i > −Ei1 | i1 > −Ei | i2 > + 144 4444444 424444444444 3

}

3

{...} + ......= 0

2nd − Order

For the above equation to be valid each of the terms in brackets must separately equate to zero. The first and second terms give us the 0th order and 1st order terms respectively:

H0 | i >= Ei | i >

Zero Order

H0 | i1 > +h | i >= Ei | i1 > + Ei1 | i >

First Order

H0 | i2 > +h | i1 >= Ei2 | i > +Ei1 | i1 > + Ei | i2 >

Second Order

Ei1 is defined as the first order energy correction.

4


5.2 First Order Time Independent Perturbation Theory

5.2a First Order Energy Correction
The first order equation can be solved by noting that the eigenket |i1> can be expressed as a linear superposition of the unperturbed eigenkets:

|i1> = Σj a1j |j>

Substitution into the first order term gives

H0 (Σj a1j |j>) + h|i> = Ei (Σj a1j |j>) + Ei1|i>

We now multiply through by the bra vector <k|:

Σj a1j <k|H0|j> + <k|h|i> = Ei Σj a1j <k|j> + Ei1 <k|i>

Recall that H0|j> = Ej|j>, thus

Σj Ej a1j <k|j> + <k|h|i> = Ei Σj a1j δkj + Ei1 δki

Evaluating the summations gives

Ek a1k + <k|h|i> = Ei a1k + Ei1 δki        (1)

Now for k = i we have

Ei a1i + <i|h|i> = Ei a1i + Ei1
⇒  Ei1 = <i|h|i>

The first order energy correction term is therefore given by the matrix element of the perturbation taken between the unperturbed eigenkets |i>.

5


5.2b First Order Eigenvector Correction
We can now evaluate the perturbed ket vector |i'> to first order. The first order correction to the eigenket |i> is given by

|i'> = |i> + λ|i1>

Recall that |i1> can be written as a linear superposition of basis states, |i1> = Σj a1j |j>. Substituting back into the first order expression yields

|i'> = |i> + λ Σj a1j |j>        (2)

The coefficients a1j are evaluated from equation (1) derived in the previous section; for the case k ≠ i (the non-degenerate case), δki = 0 and

Ek a1k + <k|h|i> = Ei a1k
⇒  <k|h|i> = (Ei − Ek) a1k
⇒  a1k = <k|h|i> / (Ei − Ek)

k is a dummy index, so we may relabel it j. Substitution of this expression into equation (2) gives

|i'> = |i> + λ Σ(j≠i) [ <j|h|i> / (Ei − Ej) ] |j>

We now set λ = 1, giving the corrected eigenket to first order:

|i'> = |i> + Σ(j≠i) [ <j|h|i> / (Ei − Ej) ] |j>

It is valid to call this effect an interference as both the amplitudes and the phases of the matrix elements play a role.

6


5 . 3 Second Order Time Independent Perturbation Theory The extension to second order is relatively straight forward by following the same procedure as for the first order corrections. 5.3a Second Order Energy Correction 2

We start with the

terms in our perturbation expansion:

H0 | i2 > +h | i1 >= Ei2 | i > +Ei1 | i1 > + Ei | i2 > We now expand |i1 > and |i2 >: | i1 >= ∑ a1 j | j > and | i2 >= ∑ a2 m | m > j

m

Substituting into the above equation gives         H0  ∑ a2 m | m >  + h ∑ a1 j | j >  = Ei  ∑ a2 m | m >  + Ei 2 | i > + Ei1  ∑ a1 j | j >  m   m   j   j  We now multiply by the bra vector + ∑ a1 j < k | h | j >= Ei ∑ a2 m < k | m > + Ei1 ∑ a1 j < k | j > +Ei 2 < k | i >



∑a

2m

m

j

2m

Em

km

m



m

j

+ ∑ a1 j = Ei ∑ a2 m j

km

m

a2 k Ek + ∑ a1 j = Ei a2 k + Ei1 a1 k + Ei2

+ Ei1 ∑ a1 j

kj

+ Ei2

ki

j

ki

j



a2 k ( Ek − Ei ) + ∑ a1 j −Ei1 a1 k = Ei2

ki

j

Now for the case when k=i we get Ei2 = ∑ a1 j − Ei1 a1 i j

Recall that the first energy correction term is given by Ei1 =< i | h | i > thus Ei2 = ∑ a1 j −a1 i < i | h | i > j

This can be rewritten as Ei2 = ∑ a1 j +a1i < i | h | i > − a1i < i | h | i > j≠i

= ∑ a1 j j≠i

7


Recall that a1k = <k|h|i> / (Ei − Ek). Substitution into the above equation yields

Ei2 = Σ(j≠i) <j|h|i><i|h|j> / (Ei − Ej)

Therefore the second order correction to the energy is

Ei2 = Σ(j≠i) |<j|h|i>|² / (Ei − Ej)

The second order correction to the eigenket is messy to derive (see problem sheet 3) but straightforward using the same procedure as the first order correction. It is given by

|i'> = |i> + λ|i1> + λ²|i2>

Setting λ = 1 and expanding |i1> and |i2> in basis sets,

|i'> = |i> + Σj a1j |j> + Σm a2m |m>

Therefore

|i'> = |i> + Σ(j≠i) [ <j|h|i> / (Ei − Ej) ] |j>
           + Σ(m≠i) [ Σ(j≠i) <m|h|j><j|h|i> / ((Ei − Em)(Ei − Ej)) − <i|h|i><m|h|i> / (Ei − Em)² ] |m>

Exercise: Derive this expression for the corrected eigenvector to second order. Beyond second order, perturbation theory is seldom used as it becomes very messy quickly.
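The second-order energy formula can also be checked against exact diagonalisation, reusing the same kind of toy matrix as above (a sketch; the numbers are arbitrary):

```python
import numpy as np

E = np.array([1.0, 2.0, 4.0])
H0 = np.diag(E)
h = 0.05 * np.array([[0.2, 1.0, 0.5],
                     [1.0, -0.1, 1.0],
                     [0.5, 1.0, 0.3]])

i = 0
E1 = h[i, i]                                                        # <i|h|i>
E2 = sum(h[j, i]**2 / (E[i] - E[j]) for j in range(3) if j != i)    # second order sum

exact = np.linalg.eigvalsh(H0 + h)[0]
print(E[i] + E1, E[i] + E1 + E2, exact)   # E2 moves the estimate closer to the exact value
```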

8


5.4 The Zeeman Effect: An Application of Time Independent Perturbation Theory
You may recall from second year that if we apply an external uniform magnetic field B to an atom with a magnetic moment, it will experience a perturbation. The magnetic moment is given by

μJ = −gJ (μB/ħ) J

If the magnetic field is weak, the perturbation is h = −μJ • B. If B is taken in the Z direction then J • B = JZ B, hence the first order energy shift is given by

Ei1 = gJ (μB/ħ) B <JmJ| JZ |JmJ>

JZ is an operator of which |JmJ> is an eigenvector, with eigenvalue mJħ:

Ei1 = gJ (μB/ħ) B mJ ħ <JmJ|JmJ>
⇒  Ei1 = gJ μB B mJ

As an example applied to a real atom, consider the 6¹S0 → 6³P1 transition in Hg. If we have no perturbing field then we get the following energy level structure:

[Figure: with no perturbing field the 6³P1 level is threefold degenerate (mJ = -1, 0, 1) and lies an energy Ei above the 6¹S0 ground state; the transition corresponds to 253.7 nm radiation.]

9


If we now turn on a weak magnetic field we can calculate the perturbation of the energy levels of the 6¹S0 and 6³P1 states due to the field. To do this we need to determine the Landé g factor, which is given by

gJ = 1 + [J(J+1) + S(S+1) − L(L+1)] / [2J(J+1)]

For the 6¹S0 state gJ = 1, and for the 6³P1 state gJ = 3/2. Now in the ground state the only mJ value is 0, hence this level is not perturbed, since the perturbed energy is proportional to mJ. In the case of the 6³P1 state the degenerate energy levels will be split into three non-degenerate states separated by

Ei1 = (3/2) μB B

Thus the perturbed energy levels of the 6³P1 state will be Ei' = Ei + Ei1. This is represented in the energy level diagram below:

[Figure: with the B-field on, the 6³P1 level splits into three equally spaced levels mJ = 1, 0, -1 separated by Ei1, centred an energy Ei above the unshifted 6¹S0 level; the 253.7 nm transition splits accordingly.]

It should be noted that we have used non-degenerate perturbation theory even though the 6³P1 energy levels were degenerate in mJ. In this special case it is OK to do so, as the operator JZ has a definite value whether there is a perturbation or not; as a result there is little mixing of these states.
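A small numerical illustration of these numbers (a sketch; the 1 tesla field strength is an arbitrary choice, and the constants are the usual CODATA values):

```python
mu_B = 9.274e-24        # Bohr magneton (J/T)
h_pl = 6.626e-34        # Planck constant (J s)

def lande_g(J, S, L):
    return 1 + (J*(J+1) + S*(S+1) - L*(L+1)) / (2*J*(J+1))

gJ = lande_g(J=1, S=1, L=1)      # 6 3P1 state: gJ = 3/2
B = 1.0                          # tesla (arbitrary example field)
for mJ in (-1, 0, 1):
    dE = gJ * mu_B * B * mJ      # E_i1 = gJ mu_B B mJ
    print(mJ, dE, dE / h_pl / 1e9, "GHz")   # about +/- 21 GHz per tesla
```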

1


Lecture 6

6.0 Perturbation Theory for Degenerate States
In previous lectures we assumed that there was no degeneracy among the perturbed states. However, in practice it is often the case that several states have the same energy, that is, they are degenerate. The effect of a perturbation is usually to lift the degeneracy in the first order correction (this was seen in the Zeeman effect example last lecture). There are problems with the application of time independent perturbation theory if only part of the degeneracy is lifted in the first order correction. This is seen when one tries to apply second order corrections to degenerate states. Recall the first order relation

Ek a1k + <k|h|i> = Ei a1k + Ei1 δki

Now if Ek = Ei and δki = 0, then

<k|h|i> = 0

Therefore the second order energy correction term yields

Ei2 = Σ(j≠i) |<j|h|i>|² / (Ei − Ej) = 0/0

This is of course undefined, which means that there is a procedural problem with this technique. As such we need to develop a method that removes this problem. Consider two states |i> and |k> that are nearly degenerate, with all other states well removed from these states. We also suppose that the matrix element <k|h|i> ≠ 0. The first order correction to the ket |i> is given by

|i'> = |i> + Σ(j≠i) [ <j|h|i> / (Ei − Ej) ] |j>

Running the summation over j reveals that when j = k this term dominates, since the denominator Ei − Ek is small and the term becomes large; therefore

|i'> ≈ |i> + [ <k|h|i> / (Ei − Ek) ] |k>

A similar expression can be found for the perturbed state |k>:

|k'> ≈ |k> + [ <i|h|k> / (Ek − Ei) ] |i>

2


In the case of degenerate energy levels |i> and |k> we would expect that we could write them as; | i' >= Cii | i > + Cik | k > | k' >= Cki | i > + Ckk | k > For a number of degenerate states we could write | n' >=

∑C

nj

|j>

j

We now substitute in to the eigenvalue equation (H0 + h)| n' >= E'n | n' > : (H0 + h)∑ Cnj | j >= E'n ∑ Cnj | j > j

j

We now write the perturbed energy term as E'n = En + unperturbed energy plus the correction term; (H0 + h)∑ Cnj | j >= ( En +

)∑ Cnj | j >

j



j

∑C

H0 | j > + ∑ Cnj h | j >= En ∑ Cnj | j > + ∑ Cnj | j >

∑C

E j | j > + ∑ Cnj h | j >= En ∑ Cnj | j > + ∑ Cnj | j >

nj

j



which is just the sum of the

nj

j

j

j

j

j

j

j

Since the states are degenerate Ej=En then

∑C

nj

j

h | j >= ∑ Cnj | j > j

Multiplying by the bra vector = ∑ Cnj < k | j >

∑C

< k | h | j >= ∑ Cnj

nj

j



nj

j



j

∑C [< k | h | j > − nj

j

j

kj

kj

]= 0

This equation has a solution when the determinant < k | h | j > − cf: The matrix form of the eigenvalue equation

kj

=0

3


Example Consider two degenerate states |1> and |2> h11 − h21 ⇒

h12 h22 −

=0

(h11 − )(h22 − ) − h12h21



2

(h11 + h22 ) + h11h22 − h12 h21



Solving for yields =

1 (h + h22 ) ± 2  11

( h11 − h22 )

2

− 4h12 h21  

Exercise: Show this Therefore this system with two degenerate levels, the correction term takes on two values. As an example let h11 = h22 = A and h12 = h21 = B , then from the above equation = A ± B . We can determine the coefficients Cnj by reconsidering the equation

∑C [< k | h | j > − nj

kj

j

 A−   B

] = 0 then

B   C1   0    =   using the solution =A+B we get A −   C2   0

 − B B   C1   0    =    B − B  C2   0 ⇒

− BC1 + BC2 = 0 BC1 − BC2 = 0



C1 = C2



|

1 >= C1   1

From normalisation

∑C

i

2

= 1 thus

i

 C1  2 2   (C1 C2 ) = C1 + C2 = 1  C2  For C1 = C2 then C12 + C12 = 1 ⇒ C1 =

1 2

4


Therefore we obtain for the wavefunction |ψ>:

|ψ> = (1/√2)(|1> + |2>)    called a symmetric wavefunction

Similarly, for ε = A − B we find the antisymmetric wavefunction

|ψ> = (1/√2)(|1> − |2>)
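A quick numerical check of this two-level example (a sketch, with arbitrary values for A and B):

```python
import numpy as np

A, B = 2.0, 0.3
h = np.array([[A, B],
              [B, A]])            # two degenerate levels coupled by the perturbation

eps, vecs = np.linalg.eigh(h)
print(eps)                        # [A - B, A + B]
print(np.round(vecs, 4))          # columns ~ (|1> - |2>)/sqrt(2) and (|1> + |2>)/sqrt(2)
```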

6.1 Example Two: The DC Stark Shift of the n=2 Level of Hydrogen
As an example of one of the uses of degenerate perturbation theory we will calculate the Stark splitting of the n=2 level of hydrogen by a constant DC electric field E, which we will assume points in the Z direction. The n=2 level has four degenerate levels, which all have the form ψnlm = Rnl(r) Θlm(θ) Φm(φ):

ψ200  = [1/(4√(2π))] a0^(-3/2) (2 − r/a0) exp(−r/2a0)
ψ210  = [1/(4√(2π))] a0^(-3/2) (r/a0) exp(−r/2a0) cos θ
ψ21±1 = [1/(8√π)] a0^(-3/2) (r/a0) exp(−r/2a0) sin θ e^(±iφ)

where (x,y,z) = (r sinθ cosφ, r sinθ sinφ, r cosθ) are the usual spherical coordinates. The perturbation Hamiltonian is given by the dot product between the electric dipole of the atom and the electric field vector E: h = −E • D. D is the dipole moment of the atom and is given by D = qd, where q is the charge (in our case q = −e) and d = z k̂ = r cosθ k̂. Hence our perturbation operator h is given by

h = −E k̂ • (−e r cosθ k̂) = eE r cosθ

kj

=0 where

5


We can now apply parity (i.e. is an even or odd function of the spatial coordinate?) arguments to eliminate some terms in the perturbation Hamiltonian, since each wavefunction |2lml> has a definite parity. For example the matrix element < 2l' ml' | z |2 lml > must have odd parity since changing sign of the coordinate makes z -z. The diagonal matrix elements are integrals over products of either: • even function x odd function x even function = odd function or eg: z2 x z x z4 =z7 (odd) • odd function x odd function x odd function = odd function eg: z x z x z=z3 (odd) Recall that the definite integral over symmetric limits of an odd function = 0 (c.f. a

∫ xdx = 0 ) therefore the

diagonal matrix elements must be equal to zero since the

−a

integral < 2lml | z | 2lml > is over symmetric limits. As a consequence only the off-diagonal elements are non zero. The parity of the wavefunction is defined by the factor (-1)l. Therefore is follows that for non-zero matrix elements: l and l' can't be both even or both odd Furthermore ml=ml' for a non-zero integral. This is seen when considering the form of the wavefunction above, where the m component is explicit in the function Φ m ( ) = e im . The matrix elements take the form:  < 2l' ml' | r cos | 2lml >= ∫ Rnl* (r)r 3 Rnl' dr∫ Θ*lm l ( )sin Θl' ml ' d  r

{∫ .......drd } ∫ e

2  − iml iml ' ∫ e e d  0

2

=

−i(m l − ml ' )

d

0

always 644=0, 47 444 8

)4 d3 − i ∫ sin(m − m ) d {∫ .......drd } ∫ cos(m 144 4− 2m 44 2

=

2

l

0

l'

=1⇒ ml = ml ' =0⇒ ml ≠ ml '

l

0

So the integral over the wavefunction = 0 unless ml=ml'. Using these results the only non-zero matrix elements are: = = eE < 200 | r cos |210 > . The integral is given by < 200| r cos | 210 >=−3a0 Exercise: See problem sheet 3 .

l'

6


We notice that there can be no first order correction since the diagonal terms are zero and as such the degeneracy persists. We therefore have a degenerate perturbation so we need to solve < k | h | j > − kj = 0 : |200> |210> |211> |21-1> | 200 > − −3a0 eE 0 0 | 210 > −3a0 eE − 0 0 |211 > 0 0 − 0 |21 − 1 > 0 0 0 − The solution to the determinant is given by 2

(

− [3a0 eE]

2

2

)=0

Exercise: Show this There are four solutions to this equation 1,2 =±3a0 eE, degeneracy is partially lifted by the electric field.

3,4

=0. Therefore the

We now evaluate the perturbed wavefunctions which require solutions to the equation:  −  −3a0 eE   0   0

−3a0 eE − 0 0

0 0 − 0

0 0 0 −

  C1    C2    = 0   C3      C4 

Substituting in ε=+3a0 eE we get  −3a0 eE  −3a0 eE   0   0

−3a0 eE −3a0 eE 0 0

0 0   C1  0 0   C2    = 0 −3a0 eE 0   C3    0 −3a0eE  C4 



1 1 −3a0 eE 0  0

1 1 0 0



1 1  0  0

0  C1  0  C2     = 0 multiplying out gives 0  C3    1  C4 

1 1 0 0

0 0 1 0

0 0 1 0

0  C1  0  C2    = 0 0  C3    1  C4 

C1 + C2 = 0 C1 + C2 = 0 C3 = 0 C4 = 0 Hence C1 =-C2 . Applying normalisation gives

7


2

C1 + C2 = 1 so C1 =

1 1 and C2 = − 2 2

Therefore the perturbed wavefunction with energy eigenvalue =+3a0 eE is |

1

>=

1 1 | 200 > − | 210 > 2 2

Similarly one can show using the same procedure that for =-3a0 eE |

2

>=

1 1 | 200 > + | 210 > 2 2

The perturbation has therefore coherently mixes these two states. For the eigenvalues =0, the matrix element =0 from the previous page. Recall the expansion for two nearly degenerate states |i> and |k> we had | i' >≈| i > +

< k |h|i > |k > ( Ei − Ek )

Thus if the matrix element is zero clearly we can have no mixing between the two states and the states are unperturbed since | i' >≈| i > . The eigenvalues and eigenvectors for the n=2 Stark shifted states of hydrogen are given by =+3a0 eE =-3a0 eE =0 =0

1 | 200 > − 2 1 | 2 >= | 200 > + 2 | 3 >=|211 > | 4 >=| 21− 1 > |

1

>=

1 | 210 > 2 1 | 210 > 2
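As a quick numerical cross-check (an illustrative sketch, not part of the original notes), one can diagonalise the 4×4 perturbation matrix directly. Units are chosen so that 3a0eE = 1, so the expected eigenvalues are ±1, 0, 0.

    # Minimal numerical check of the n=2 Stark diagonalisation.
    # Basis ordering: {|200>, |210>, |211>, |21-1>}; units chosen so 3*a0*e*E = 1.
    import numpy as np

    W = 1.0  # stands for 3*a0*e*E (assumed unit value, illustration only)
    h = np.array([[0.0, -W, 0.0, 0.0],
                  [-W, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 0.0]])

    vals, vecs = np.linalg.eigh(h)   # eigenvalues returned in ascending order
    print(vals)                      # expect [-1, 0, 0, +1], i.e. -3a0eE, 0, 0, +3a0eE
    print(vecs[:, 0])                # expect (|200> + |210>)/sqrt(2), up to an overall sign
    print(vecs[:, -1])               # expect (|200> - |210>)/sqrt(2), up to an overall sign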


[Energy level diagram: before the perturbation the four n=2 states |200>, |210>, |211> and |21−1> are degenerate at energy E. With the perturbing electric field applied, |ψ1> = (1/√2)|200> − (1/√2)|210> is shifted up by 3a0eE, |ψ2> = (1/√2)|200> + (1/√2)|210> is shifted down by 3a0eE, and |211>, |21−1> remain at E.]


Lecture 7

7.0 Time Dependent Perturbation Theory

In the lectures so far we have covered perturbations to the Hamiltonian that are time independent. We now develop an approximation technique that allows us to handle perturbations that have a time dependence. We write the time dependent Hamiltonian as

H(t) = H0 + h(t)

where H0 is the time independent unperturbed Hamiltonian and h(t) is a time dependent perturbation, assumed to be small compared with H0.

The unperturbed eigenfunctions are given as before:

H0 |φi> = Ei |φi>

This equation leads to stationary states (time independent) with eigenvalues Ei. It should be noted that this equation is formally a solution of the Schrödinger equation when the potential is time independent. When considering time dependent phenomena it is more correct to work with the wavefunction |Ψi> associated with these stationary states. This wavefunction can be evaluated from the full Schrödinger equation, which determines the wavefunctions and their time dependence. We begin with

H |Ψi(r,t)> = iℏ ∂/∂t |Ψi(r,t)>

where the Hamiltonian is given by

H = −(ℏ²/2m) ∇r² + V(r)

Thus

−(ℏ²/2m) ∇r² |Ψi(r,t)> + V(r)|Ψi(r,t)> = iℏ ∂/∂t |Ψi(r,t)>


We now try separation of variables by splitting the wavefunction into a time independent and a time dependent part such that

|Ψi(r,t)> = |φi(r)> |χi(t)>

Substitution into the Schrödinger equation gives

−(ℏ²/2m) ∇r² |φi(r)>|χi(t)> + V(r)|φi(r)>|χi(t)> = iℏ |φi(r)> ∂/∂t |χi(t)>

⇒ |χi(t)> [ −(ℏ²/2m) ∇r² |φi(r)> + V(r)|φi(r)> ] = iℏ |φi(r)> ∂/∂t |χi(t)>

We now multiply both sides of the equation by 1/( |φi(r)>|χi(t)> ), which gives

(1/|φi(r)>) [ −(ℏ²/2m) ∇r² |φi(r)> + V(r)|φi(r)> ] = (1/|χi(t)>) iℏ ∂/∂t |χi(t)>

Since the left-hand side depends only on |φi(r)> and the right-hand side only on |χi(t)>, both sides of the equation must equal the same constant C:

(1/|φi(r)>) [ −(ℏ²/2m) ∇r² |φi(r)> + V(r)|φi(r)> ] = C = (1/|χi(t)>) iℏ ∂/∂t |χi(t)>

Consider the LHS of the equation:

−(ℏ²/2m) ∇r² |φi(r)> + V(r)|φi(r)> = C |φi(r)>

⇒ H0 |φi(r)> = C |φi(r)>

We have recovered the time-independent Schrödinger equation, which is just the usual eigenvalue equation, hence C = Ei.


Considering now the RHS of the separated Schrödinger equation,

(1/|χi(t)>) iℏ ∂/∂t |χi(t)> = C

⇒ iℏ ∂/∂t |χi(t)> = Ei |χi(t)>

⇒ ∂/∂t |χi(t)> = (−i/ℏ) Ei |χi(t)>

⇒ d|χi(t)> / |χi(t)> = (−i/ℏ) Ei dt

Integrating reveals

∫ d|χi(t)> / |χi(t)> = (−i/ℏ) Ei ∫ dt

⇒ ln( |χi(t)> ) = (−i/ℏ) Ei t + C'

⇒ |χi(t)> = A e^{−i Ei t/ℏ}

The constant A is found from the normalisation condition. The wavefunction for the stationary states (time independent potential) is therefore given by

|Ψi(r,t)> = |φi(r)> |χi(t)> = |φi(r)> e^{−i Ei t/ℏ}

Now consider the wavefunctions of the full Hamiltonian H(t). Since the |Ψi> form a complete orthonormal set, we can expand these wavefunctions as

|Ψ'(r,t)> = Σ_n a_n(t) |Ψn(r,t)>

where the coefficients a_n(t) are time dependent.


Substitution into the Schrödinger equation gives

H |Ψ'(r,t)> = iℏ ∂/∂t |Ψ'(r,t)>

⇒ Σ_n a_n(t) ( H0 + h(t) ) |Ψn(r,t)> = iℏ ∂/∂t [ Σ_n a_n(t) |Ψn(r,t)> ]

⇒ Σ_n a_n(t) ( H0 + h(t) ) |Ψn(r,t)> = iℏ Σ_n ȧ_n(t) |Ψn(r,t)> + iℏ Σ_n a_n(t) ∂/∂t |Ψn(r,t)>

⇒ Σ_n a_n(t) [ H0 |Ψn(r,t)> − iℏ ∂/∂t |Ψn(r,t)> ] = iℏ Σ_n ȧ_n(t) |Ψn(r,t)> − Σ_n a_n(t) h(t) |Ψn(r,t)>

The term in square brackets is zero for the unperturbed wavefunctions, since H0 |Ψn(r,t)> = iℏ ∂/∂t |Ψn(r,t)>. Hence

Σ_n a_n(t) h(t) |Ψn(r,t)> − iℏ Σ_n ȧ_n(t) |Ψn(r,t)> = 0

We now multiply by the bra vector <Ψk(r,t)|:

Σ_n a_n(t) <Ψk(r,t)| h(t) |Ψn(r,t)> − iℏ Σ_n ȧ_n(t) <Ψk(r,t)|Ψn(r,t)> = 0

We now separate the spatial and time dependence in the wavefunctions, so that the bra and ket vectors above are

<Ψk(r,t)| = e^{i Ek t/ℏ} <ψk(r)|
|Ψn(r,t)> = e^{−i En t/ℏ} |ψn(r)>

Substitution into the equation above reveals

Σ_n a_n(t) <ψk(r)| h(t) |ψn(r)> e^{i(Ek−En)t/ℏ} − iℏ Σ_n ȧ_n(t) e^{i(Ek−En)t/ℏ} δ_kn = 0

⇒ Σ_n a_n(t) <ψk(r)| h(t) |ψn(r)> e^{i(Ek−En)t/ℏ} = iℏ ȧ_k(t)

We now let Ek − En = ℏω_kn, therefore

Σ_n a_n(t) h_kn(t) e^{i ω_kn t} = iℏ (d/dt) a_k(t),  where h_kn(t) = <ψk(r)| h(t) |ψn(r)>
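These coupled amplitude equations can also be integrated numerically without approximation. The sketch below (a minimal illustration, not part of the original notes) does this for a two-level system driven by a harmonic perturbation h12(t) = V cos(ωt), in units with ℏ = 1; the level spacing, coupling strength and drive frequency are assumed example values, and the diagonal matrix elements are taken to be zero.

    # Numerically integrate i*da_k/dt = sum_n h_kn(t) * exp(i*w_kn*t) * a_n(t)  (hbar = 1)
    # for a two-level system with h12 = h21 = V*cos(w_drive*t) and h11 = h22 = 0.
    import numpy as np
    from scipy.integrate import solve_ivp

    w21 = 1.0        # transition frequency (E2 - E1)/hbar, assumed value
    V = 0.05         # coupling strength, assumed value
    w_drive = 1.0    # drive frequency (chosen on resonance here)

    def h(t):
        return V * np.cos(w_drive * t)

    def rhs(t, a):
        a1, a2 = a[0] + 1j * a[1], a[2] + 1j * a[3]
        da1 = -1j * h(t) * np.exp(-1j * w21 * t) * a2   # note w_12 = -w_21
        da2 = -1j * h(t) * np.exp(+1j * w21 * t) * a1
        return [da1.real, da1.imag, da2.real, da2.imag]

    # start entirely in the lower state: a1(0) = 1, a2(0) = 0
    sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0, 0.0, 0.0], max_step=0.01)
    p2 = sol.y[2] ** 2 + sol.y[3] ** 2    # upper-state population |a2(t)|^2
    print(p2.max())                       # approaches 1 on resonance (Rabi flopping)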



This is a set of coupled first order differential equations, one for each a_k(t), which determine the a_n(t) coefficients. These equations cannot in general be solved exactly, since the derivative of each coefficient is coupled to all of the a_n(t) coefficients. Notice that if the perturbation is zero then ȧ_k(t) = 0, so a_k(t) must be a constant. This suggests that, provided the perturbation is small, the coefficients change slowly. As a first approximation we therefore assume that the coefficients a_n(t) on the left-hand side of the equation are constant. Suppose at t = 0 the system is in some state |Ψj(r, t=0)>; this requires a_j(0) = 1 and a_n(0) = 0 for n ≠ j, i.e. the system is entirely in the state j at the start. At some time t later we will have

iℏ (d/dt) a_j(t) = Σ_n a_n(t) h_jn(t) e^{i ω_jn t}

After t = 0, since the perturbation is weak, a_j(t) ≈ 1 still and all the other coefficients a_n(t) ≈ 0, so only the n = j term contributes to the sum. The right-hand side therefore reduces to

RHS = a_j(t) h_jj(t) e^{i ω_jj t} ≈ h_jj(t)    (since a_j(t) ≈ 1 and ω_jj = 0)

Therefore

iℏ (d/dt) a_j(t) = h_jj(t)

⇒ (d/dt) a_j(t) = (−i/ℏ) h_jj(t)

Integrating gives

∫_0^t (d/dt') a_j(t') dt' = (−i/ℏ) ∫_0^t h_jj(t') dt'

⇒ a_j(t) − a_j(0) = (−i/ℏ) ∫_0^t h_jj(t') dt'

⇒ a_j(t) = a_j(0) − (i/ℏ) ∫_0^t h_jj(t') dt'

⇒ a_j(t) = 1 − (i/ℏ) ∫_0^t h_jj(t') dt'


The coefficients other than a_j(t) are given by

iℏ (d/dt) a_k(t) = Σ_n a_n(t) h_kn(t) e^{i ω_kn t}

But since all a_n(t) ≈ 0 except a_j(t) ≈ 1, only one term in the sum on the RHS contributes significantly:

iℏ (d/dt) a_k(t) ≈ a_j(t) h_kj(t) e^{i ω_kj t} ≈ h_kj(t) e^{i ω_kj t}

⇒ (d/dt) a_k(t) ≈ (−i/ℏ) h_kj(t) e^{i ω_kj t}

Integrating this expression:

∫_0^t (d/dt') a_k(t') dt' ≈ (−i/ℏ) ∫_0^t h_kj(t') e^{i ω_kj t'} dt'

⇒ a_k(t) − a_k(0) ≈ (−i/ℏ) ∫_0^t h_kj(t') e^{i ω_kj t'} dt',  with a_k(0) ≈ 0

⇒ a_k(t) = (−i/ℏ) ∫_0^t h_kj(t') e^{i ω_kj t'} dt'


7.1 Example 1: Constant Perturbation

As an example we consider a step function perturbation which is turned on at t = 0 and remains constant thereafter. The matrix elements of the perturbation are therefore constants and can be taken outside the integrals:

a_j(t) = 1 − (i/ℏ) h_jj t

The other coefficients are

a_k(t) = (−i/ℏ) h_kj ∫_0^t e^{i ω_kj t'} dt'

⇒ a_k(t) = (−i/ℏ) h_kj [ e^{i ω_kj t'} / (i ω_kj) ]_0^t

⇒ a_k(t) = −( h_kj / ℏ ω_kj ) [ e^{i ω_kj t} − 1 ]

The probability of finding the system in the state |Ψk>, with the system starting at time t = 0 in state |Ψj>, is |a_k(t)|². Then for k ≠ j we find

|a_k(t)|² = ( h_kj / ℏ ω_kj )² [ e^{i ω_kj t} − 1 ] [ e^{−i ω_kj t} − 1 ]

= ( h_kj / ℏ ω_kj )² [ 2 − e^{i ω_kj t} − e^{−i ω_kj t} ]

= ( h_kj / ℏ ω_kj )² [ 2 − 2cos(ω_kj t) ]

Now cos(2A) = 1 − 2sin²(A), so cos(ω_kj t) = 1 − 2sin²(ω_kj t/2), and therefore

|a_k(t)|² = ( 4 h_kj² / ℏ² ω_kj² ) sin²( ω_kj t / 2 )

Thus the probability of finding the system in the state |Ψk> oscillates at an angular frequency corresponding to the transition frequency, as shown in the figure below.

[Figure: the normalised transition probability, plotted over the horizontal range −6 to +6.]
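A short numerical illustration of this formula (with assumed example values, not taken from the notes): the transition probability for a constant perturbation oscillates in time at ω_kj and is strongly suppressed when ω_kj is large.

    # Transition probability for a constant (step) perturbation, in units with hbar = 1.
    import numpy as np

    def prob_k(t, hkj, wkj):
        # |a_k(t)|^2 = 4*hkj^2/(hbar*wkj)^2 * sin^2(wkj*t/2)
        return 4.0 * hkj**2 / wkj**2 * np.sin(wkj * t / 2.0) ** 2

    hkj = 0.1                        # assumed matrix element
    t = np.linspace(0.0, 20.0, 5)
    for wkj in (0.5, 1.0, 5.0):      # assumed transition frequencies
        print(wkj, prob_k(t, hkj, wkj))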


Lecture 8

7.2 The Harmonic Perturbation and Fermi's First Golden Rule

We now look at one of the more important time dependent perturbations, the oscillatory perturbation. These perturbations are important because nearly all matter-electromagnetic radiation interactions can be thought of as an oscillatory perturbation produced by an electric or a magnetic field. We have already seen the effect of DC electric and magnetic fields; we now consider the effect of AC fields.

Consider a time dependent perturbation Hamiltonian given by

h(t) = V cos(ωt)

This type of perturbation could be due to the interaction of a monochromatic electromagnetic wave from a laser with an atom. The exact details of the interaction are contained in the V term. If there were more than one frequency involved, the perturbation would be a Fourier series over all of the frequencies present. The above Hamiltonian may be written as

h(t) = V cos(ωt) = V ( e^{iωt} + e^{−iωt} ) / 2

Substituting into our expression for the amplitudes a_k(t) gives

a_k(t) = (−i/ℏ) ∫_0^t h_kj(t') e^{i ω_kj t'} dt'

⇒ a_k(t) = −( i V_kj / 2ℏ ) ∫_0^t ( e^{iωt'} + e^{−iωt'} ) e^{i ω_kj t'} dt'

⇒ a_k(t) = −( i V_kj / 2ℏ ) ∫_0^t [ e^{i(ω_kj + ω)t'} + e^{i(ω_kj − ω)t'} ] dt'

⇒ a_k(t) = −( i V_kj / 2ℏ ) [ e^{i(ω_kj + ω)t'} / i(ω_kj + ω) + e^{i(ω_kj − ω)t'} / i(ω_kj − ω) ]_0^t

⇒ a_k(t) = −( i V_kj / 2ℏ ) [ ( e^{i(ω_kj + ω)t} − 1 ) / i(ω_kj + ω) + ( e^{i(ω_kj − ω)t} − 1 ) / i(ω_kj − ω) ]

We notice that if kj ≈ then the term with the negative sign dominates all the other terms. This process corresponds to absorption of a photon from the perturbing field with energy h kj exciting the system from the lower state | Ψj > to the higher energy state | Ψk > , the transition has the frequency:

2

SCE3337: QMIII R.T. Sang

Ek − E j = h

kj

=

The other term dominates when kj ≈ − , this case corresponds to stimulated emission whereby the incident photon from the perturbing field stimulates the atom to make a downward transition. One can see that if we are exactly on resonance i.e. kj = 0 or kj = − then we get a singularity and as such becomes undefined. This concept will 0 be discussed further into the course when a finite lifetime for an excited state is introduced. In short, the terms in the denominator have a complex term added which stops it going zero. Clearly for there to be an upward or downward transition = kj , this means that the photon of the field must be approximately equal to the transition frequency. Consider the case for near resonant absorption In this case the second term in the brackets dominates such that Vkj  e i(  ak (t) ≈ − 2h  ( ⇒

kj −

kj

)t

− 1  − ) 

Vkj  e i ( ∆ )t − 1  ak (t) = −   2h  ( ∆ )  V = − kj e 2h

∆  i t  2 

∆  i  ∆2  t −i −e  2 e  (∆ ) 

 t 

   

∆  i ∆  t −i  Vkj 2i i ∆2 t  e  2  − e  2 =− e  2h ∆ 2i 

iVkj t i ∆2 t  ∆ t  =− e sin 2h ∆ t 2  2   ∆ t  iVkj i  ∆2  t  sin 2   =− te  ∆ t  2h     2

 t 

   

3

SCE3337: QMIII R.T. Sang

Therefore the probability of being in the state | Ψk > is:   ∆ t  Vkj 2 2  sin  2   2 ak (t) = 2 t  ∆ t  4h     2 One can show that a similar expression can be found for the stimulated emission term. You sin x  2 may recognise that the factor in the brackets has the form  which is identical to the x  intensity profile of the single slit diffraction pattern as shown below: 2

1

0.8

0.6

0.4

0.2 x -8

-6

-4

-2

0

2

4

6

8

0

For all practical purposes, the transition probability is zero unless − ≤

∆ t ≤ 2

Multiplying through by 2 h gives: 2h∆ t ≤ 2h ⇒ −h ≤ ∆Et ≤ h 2

−2h ≤

Note that h is Planck's constant. This sets an upper limit on the energy difference multiplied by the time t which has elapsed since the field was turned on: ∆Et ≤ h This is just the uncertainty principle! In effect it says that the longer that the perturbation is on, the more nearly ω =ω kj .

4

SCE3337: QMIII R.T. Sang

This is an interesting result, since it says that for near resonant absorption or emission the probability of the process is proportional to the time squared, however experimentally it is observed that the probability is directly proportional to the time. The reason for this difference is due to the fact that even under ideal conditions there is an intrinsic width to the energy levels for an excited atom (due to the uncertainty principle) which causes the probability to be proportional to t and not t2 . It is assumed that even for a perfectly monochromatic source of excitation with a single frequency such that ∆ = − kj means that will have a range of values. Then to get the probability of getting a transition from state | Ψj > to | Ψk > is given by the integration over

(large range): 2   ∆ t  Vkj 2 2 ∞  sin 2   W(t) = 2 t ∫  ∆ t  d(∆ ) 4h ∆ =−∞     2

Taking t to be fixed we let X =

∆ t 2 , hence d(∆ ) = dX and substituting into the above 2 t

equation gives Vkj 2 2 2 ∞  sin( X )  2 W(t) = 2 t dX 4h t X∫−∞  X  1442443 ⇒

 Vkj 2  W(t) =  2  t  2h 

This result is known as Fermi's First Golden Rule.

End of Part A of the Course

1

SCE3337: QMIII R.T. Sang

Lecture 9 8 . 0 Selection Rules The selection rules allow us to determine which optical transitions are allowed and which are forbidden. They can be determined by considering the matrix element of the electric dipole operator. For optical transitions, the electric field dominates so we may write V = −E• D where D is dipole operator and is given by D=-er. Thus the matrix element is Vkj = eE < Ψk (r,t) | r | Ψj (r,t) > The radial matrix element can be expanded in terms of spherical polar coordinates, since the relation between the Cartesian coordinates and the spherical polar coordinates are defined by θ z

r y

φ x

x = r sin cos y = r sin sin z = r cos Taking just the x component we have ∞


= ∫ ∫ ∫ 0 0 0

* k

(r)r sin cos

j

(r)r 2 sin drd d

2

SCE3337: QMIII R.T. Sang

Where we have separated the time dependence of the wavefunctions. We now separate the wavefunction as done previously: j

(r) = Rn j l j (r )Θ lj m j ( )Φm j ( )

Then ∞


= ∫ R

r Rn j l j dr∫ Θ

* 3 nk l k

0

2 * lk mk

sin Θl j m j d 2

0

∫Φ

* mk

cos Φ m j d

0

8 . 1 The (m) Selection Rule Evaluating the integral in gives: 2

∫Φ

2 * mk

cos Φ m j d =

0

∫e

−imk

im j

cos e

d

0

1 = 2 =

1 2

2

∫e

i( m j −m k )

[e

i

+ e − i ]d

0

2

∫e

i( m j −m k +1)

i ( m j − mk −1 )

+e

d

0

This integral is equal to zero unless either (m j − mk + 1) = 0 or (m j − mk − 1) = 0 . The matrix element < k (r,t)| x | j (r,t) > is therefore zero unless m j − mk = ∆m = ±1. We find that we get the same result if we try to find the matrix element of the y component. For the z component the matrix element < k (r,t)| z | j (r,t) > for the dependent part of the integral is 2

∫Φ

2 * mk

Φmj d =

0

∫e 0

2 −im k

im j

e

d =

∫e

i( m j −m k )

d 14 4244 3 0

=0 if m j ≠m k

Therefore the only non-zero matrix element for the z component is for m=0. Thus the selection rules for the magnetic projection quantum numbers for dipole allowed transitions are: ∆m = 0,±1

3

SCE3337: QMIII R.T. Sang

8 . 2 Spin Selection Rule Consider the spin of the electron. The total wavefunction may be written as the product of the spatial and spin terms: =

nlml

s

The electric dipole operator does not act on the spin wavefunction (it does with magnetic fields, but very weakly with time varying electromagnetic waves) so we can write:
= e ∫

* k

* sk

(r)

(r)d = e ∫

* k

r

(r)r

j

(r) j

(r)d

sj

d sk s j

Therefore ∆s = 0

8.3 J and L Selection Rules

We can use the parity of the wavefunction to determine the l selection rule. Consider the effect of changing the sign of the x, y, z coordinates of the one-electron atom wavefunctions. In spherical polar coordinates, changing the sign is equivalent to

r → r,  θ → π − θ,  φ → π + φ

Thus, under the parity transformation,

ψ_{nlml}(r, θ, φ) → ψ_{nlml}(r, π−θ, π+φ) = (−1)^l ψ_{nlml}(r, θ, φ)

The parity of the wavefunction is therefore determined by whether l is even or odd. This result applies to all bound or unbound eigenfunctions of any spherically symmetric potential. Now consider the matrix element

<ψk| er |ψj> = e ∫ ψk*(r) r ψj(r) dτ

We have shown previously that the integration over r, which is an odd function, requires that the wavefunctions <ψk(r)| and |ψj(r)> have opposite parity, so as to yield a symmetric definite integral of an even function (see the section on the DC Stark shift in hydrogen). If <ψk(r)| has even parity then lk must be even; if |ψj(r)> has odd parity then lj must be odd. Clearly then lk − lj = Δl = ±1, ±3, ±5, etc., and Δl = ±2, ±4, ±6, ... must be excluded. Thus l can only change by odd integer values.
4

SCE3337: QMIII R.T. Sang

To establish the possible range of values of l, conservation of angular momentum of the atom before and after it emits a photon must be considered. It is found that if an atom is in an excited state with quantum numbers j=1 and m j=1 and it decays to a ground state with j=0, mj=0 then if the photon which is emitted travels along the z direction it is Left Hand Circularly (LHC) Polarised and carries away one unit of angular momentum h . Clearly, since the atom started off with one unit of angular momentum h , the atom after emitting the photon has zero angular momentum since the photon has carried away one unit of the angular momentum. Consider the case when the atom starts of in the j=1 and mj=-1 states and decays via a photon to the j=0, mj=0 then if the photon which is emitted travels along the z direction it is Right Hand Circularly (RHC) polarised and carries away one unit of angular momentum - h . The maximum amount of angular momentum that a photon can carry is h , so this means that the total change in angular momentum is ∆j = ± 1. We can see that photons can have two angular momentum states, it is also possible to form a superposition state, the wavefunction of which is given by: 1 2

LHC

+

1 2

RHC

This corresponds to linearly polarised light. As such an atom emitting such a photon must exhibit no change in angular momentum, thus ∆j = 0 Hence the selection rules for the total angular momentum j are ∆j = ± 1,0 Clearly if ∆j = ± 1,0 then only ∆l = ±1 are allowed for single photon emission since s=0. Thus the selection rule for l is: ∆l = ±1 Also note that j=0 j=0 transitions are not possible since no angular momentum will be transferred. The single photon dipole allowed selection rules are summarised as follows: ∆j = ± 1,0 ∆l = ±1 ∆ml = ±1,0 ∆s = 0 You should memorise these selection rules!

1

QMIII: SCE3337 R.T. Sang

Lecture 10

9.0 The Einstein A and B Coefficients

The basic model of the interaction between electromagnetic radiation and atoms can be treated by a model first introduced by Albert Einstein. This theory was phenomenological in nature and makes no explicit use of quantum mechanics, except that the energy levels of the atoms are assumed to be quantised and it is convenient to regard the electromagnetic field as photons. Suppose that we have N identical atoms in a gas and each atom has only two energy levels, E1 and E2 (we label the states |1> and |2>), with E1 < E2, as shown below.

-hω

B 21W

B 12W Absorbtion

|2> A21

Spontaneous Emission

|1> N1 E g 1 1

Stimulated Emission

We also have the transition frequency defined by h = E2 − E1 which is the difference in energy between the two energy levels. We also assume that the two energy levels have N 1 and N 2 numbers of atoms each with degeneracies of g1 and g2 respectively. There are three basic radiative processes that can occur: • Absorption Suppose that the energy density of the radiation passing through the gas is W( ) where W( ) is the average energy density, that is the energy per unit volume per unit bandwidth. An atom in the lower energy state |1> can make an up going transition to the higher energy state |2> by absorbing a photon of energy h = E2 − E1 . We assume in this case that the transition of this type is proportional to the energy density with a constant of proportionality of B 12 . The upward transition probability is B12 W . • Stimulated Emission If an atom is in the state |2> and another photon that is resonant with the transition passes this atom, the presence of this photon can induce the atom to emit an identical type of photon which will then take the atom back to state |1>. This rate is proportional to the energy density of the incident radiation, with a constant of proportionality of B 21 . Thus the stimulated emission probability from state |2> to |1> is B21 W .

QMIII: SCE3337 R.T. Sang

2

• Spontaneous Emission Consider an atom in the higher energy state |2>. There is a finite probability given by A 21 that the atom will pass from this state to the state |1>. In doing so it will emit a photon which has a random direction and polarisation with energy hω (hence the term spontaneous emission). Actually this process was a major problem for quantum theory, the explanation of which had to wait until the advent of Quantum Electrodynamics (QED). This process can be explained as follows: the states |2> and |1> are eigenstates of the Hamilitonian and for an isolated system, once it is in this energy eigenstate, if it is unperturbed should remain there forever. QED relies on the basis that even when there are no photons around, the electromagnetic field still has a zero point energy (recall your lectures on the SHO!) and it is this field which induces the atom to decay. The so called vacuum fluctuations are an infinite virtual supply of zero point energy photons which cover all of the frequency range of the EM spectrum, which induces the atom to emit photons (like stimulated emission). You might actually think this is a crazy idea since that means the vacuum field has an infinite energy and this still remains one of the unsolved problems of QED. But there have been experiments such as the Casmir Effect which shows conclusively that this zero point energy exists. 9.1 Rate Equations We now consider the influence of these three processes on the energy level populations, N 1 and N 2 . The total number of atoms is given by N which is the sum of the populations: N = N1 + N2 The rate of change of population of the two energy levels is: dN1 dN =− 2 dt dt Now the rate of change of the population in |1> is: dN1 = (rate at which atoms enter the lower state) - (rate at which they leave) dt = (Stimulated Emission Rate + Spontaneous emission rate) - (Absorption Rate) = N2 B21 W + N2 A21 − N1 B12 W This rate equation holds for the general case of electromagnetic radiation interacting with these two level atoms. Now consider the special case of thermal equilibrium. In this case, the population levels are constant, thus our rate equation becomes: N2 B21 W + N2 A21 − N1 B12 W = 0 Hence − N2 B21 W + N1 B12 W = N2 A21

3

QMIII: SCE3337 R.T. Sang



( N1 B12 − N2 B21 )W = N2 A21



W=

A21  N1   N B12 − B21 2

For thermal equilibrium with no external radiation field on the gas, the relative number of atoms in various energy states is given by the ratio of the Maxwell-Boltzman distributions for each level: E  − 1   kT 

N1 g1e g1  kT  = = e E  − 2 N2 g2  kT  g2 e h

Substitution into the above equation reveals: W=

A21  g1  g e

h     kT 

2

 B12 − B21  

This can be rearranged to give W=

1  g1 B12  g A e 2 21

h     kT 



B21  A21 

Now this expression for the energy density must be consistent with the Planck's law for the radiative energy distribution of a body in thermal equilibrium which is given by : Wd

=

h

3

d

c   e

2 3

h     kT 

 − 1 

For single



W=

1 h c   kT   e − 1 h 3   2 3

Equating the our derived expression for the energy density with the Planck relation gives: 1  g1 B12  g A e 2 21

h     kT 

B  − 21  A21 

=

1 h c   kT   e − 1 h 3   2 3

Hence matching coefficients yields 2 3 B21 c = A21 h 3



B21 =

2 3

c

h

3

A21

4

QMIII: SCE3337 R.T. Sang

Also 2 3 g1 B12 c = g2 A21 h 3

But B21 =

2 3

c

h

3

A21 ⇒ A21 =

h

3 2 3

c

B21

Substitution into our expression yields B12 g2 = B21 g1 As shown the three Einstein coefficients are inter-related. It can be seen that without introducing the stimulated emission process, consistency between the Planck formula and the Einstein expression could not be achieved. Furthermore it must be stressed that although the relationship between the Einstein coefficients have been derived for the thermal equilibrium case, they hold generally since the coefficients are independent of the magnitude of the energy density or the temperature. It should also be noted that for thermal equilibrium, the radiative energy density W is distributed isotropically in space. This is of course not so with a light beam. The relationships do remain valid in systems such as a gas or a fluid in which the atoms or molecules have random orientations so that the interaction within the gas as a whole is isotropic. However in solids, the constituent atoms or molecules may be locked in a common orientation. In this case, the bulk material may have quite anisotropic optical properties. One point of interest is the ratio of power emitted in the spontaneous emission process compared to the stimulated process: A21 W = Ratio of spontaneous to stimulated emission B21 We need to look in detail at the form of the energy density W which is dependent on the frequency of the radiation. Recall for the case of thermal equilibrium that the energy density is given by Planck's Law for a frequency interval of → + d is: Wd

=

h

3

c   e

2 3

d h     kT 

 − 1  h ≈ 1, the corresponding k BT 50µm which is in the infrared part of the spectrum.

For room temperature at approximately 300K then the ratio frequency is around 6x1012 Hz with

5

QMIII: SCE3337 R.T. Sang

h 1 then the exponential k BT in the term for the energy density dominates and as such the energy density is small thus In the case where frequencies are larger than 6x1012 Hz i.e.

A21 >> B21 W As such for frequencies that are greater than 6x10 12 Hz, the thermally induced spontaneous emission rate is much larger than the stimulated process.

9.2 The Quantum Theory of the Einstein B Coefficient So far in this section of the course, we have treated the Einstein A and B coefficients as parameters that are experimentally determined. But, in fact, the absorption and emission of the radiation can be calculated by using Time Dependent Perturbation Theory. Consider now an idealised atom consisting of two energy levels |1> and |2> with transition frequency ω12 . We now apply a harmonic perturbation on this system with radiation that has frequency ω and is near resonant with the transition frequency as shown below. |2> E 2 -hω

ω12

|1> E 1 The transition probability is given by Fermi's first Golden Rule (see section 7.2 of the lecture notes):  Vkj2  W(t) =  2  t  2h 

6

QMIII: SCE3337 R.T. Sang

Recall that in the harmonic perturbation case the perturbation had the form h = V cos t . This term more accurately should be written as h = V cos( t + k • r) which means that there is a spatial dependence to the perturbation (without this term we have just assumed that the perturbation is at the origin). This perturbation Hamiltonian describes the interaction between an atom and the electric field of the light. Consider the electromagnetic wave to be polarised along the x direction and propagates along the z direction. The atom consists of a nucleus and is surrounded by n electrons, over the dimensions of the atom, which is in the order of 10-10 m. For optical frequencies > 1015 Hz then k • r =kz with E0 parallel to the x-direction. Substituting this back into Fermi's First Golden Rule gives the transition probability of going from state |1> to state |2> is < 1| Dx | 2 > t 2h 2 2

W12 (t) = E0

2

Now the energy density of an electromagnetic wave is given by W=

1 2

0

E0

2

Hence the probability of transition can be related to the energy density such that;  W12 (t) =  

 2  < 1| Dx | 2 > Wt 0h  2

It follows therefore if we divide this equation by t we will get the transition rate:   2 W12 (t)/ t =  2  < 1| Dx | 2 > W  0h 

7

QMIII: SCE3337 R.T. Sang

We can equate this to the induced transition rate ( B12 W ):  B12 W =  

 2  < 1| Dx | 2 > W 0h  2

Thus the Einstein B coefficient is  B12 =  

 2  < 1| Dx | 2 > 0h  2

for a single atomic system. What about a gas or molecules or atoms? In this case all of the electric dipoles will be orientated in different directions at any particular time in space in random directions. Given that θ is the angle between the dipole and the E field then < Dx >=< D > cos 2

we need the average value of < 1| Dx | 2 > 2

< 1| Dx | 2 > = < 1| D |2 > cos2 2

Thus we need the average value of cos 2 is given by cos 2 =

integrated over the solid angle of a sphere which

1 3

Exercise: Show this Hence the Einstein B coefficient is B12 =

3 0h

2

< 1| D | 2 >

2

The B 21 coefficient can be found by the relationship derived earlier: B12 g2 = B21 g1



B21 =

g1 B g2 12



B21 =

g1 2 2 < 1| D | 2 > g2 3 0 h

8

QMIII: SCE3337 R.T. Sang

This approach does not yield the Einstein A coefficient directly but we can get this from the relationship between the A and B coefficients derived using the rate equation model: A21 =

h

3 2 3

c

B21

Therefore upon substitution into the relationship for the A coefficient reveals; A21 =

3 g1 g2 3 hc 3

< 1| D | 2 > 0

2

1

QMIII: SCE3337 R.T. Sang

Lecture 11 1 0 . 0 Optical Excitation of Atoms Suppose that we consider atoms contained in a thin slice of gas perpendicular to an incident light beam, so that the intensity of the incident light changes by a negligible amount as shown below:

Let the beam be switched on at time t=0 and all atoms are in their ground state. We want to find the number of atoms in their excited states at some time t later. Recall the rate equation for a two level system (lecture 10) was given by dN1 dN = − 2 = N2 A + ( N2 − N1 )BW dt dt If the total population is N then N = N1 + N2 ⇒ N1 = N − N2 Substitution into the above differential equation reveals





dN2 = N2 A + (2 N2 − N )BW dt



dN2 = N2 ( A + 2BW ) − NBW dt

This differential equation can be solved by separating the variables. −

dN 2 = dt N2 ( A + 2BW ) − NBW

let c1 = A + 2BW c2 = NBW

2

QMIII: SCE3337 R.T. Sang

Integrating yields N2

−∫ 0

t

dN2 = dt N2 c1 − c2 ∫0

recall the standard integral

dx

1

∫ ax + b = a ln (ax + b)

Hence N

 1  2 t − ln ( N c − c ) = [t ]0  2 1 2   c1 0 ln ( N2 c1 − c2 ) − ln(−c2 ) = −c1 t N c −c  ln  2 1 2  = −c1 t  −c2  N c − c  −c t −  2 1 2  = e( 1 )  c2  Therefore N2 =

{

}

c2 −c t 1 − e( 1 ) c1

Substitution for c2 and c1 gives N 2 as N2 =

{

NBW − A + 2BW t ) 1−e ( A + 2BW

}

Graphically the solution looks like: Excited State Population Vs Time 0.8 0.7

N2

Population

0.6 0.5 0.4 0.3 0.2 0.1 0 0

20

40

60 time

80

100

3

QMIII: SCE3337 R.T. Sang

It is interesting to look at this population for different ranges of the exponential in this expression. For ( A + 2BW )t > BW as such from the above equation we deduce that N2 > A (stimulated emission processes are much greater that the spontaneous processes) using this condition on our steady state solution gives N2 =

NBW N = 2BW 2

The interpretation of this expression is that for powerful excitation, the limiting population in the excited state is half of the total population, so no matter how many more extra photons we put into the system the population in the excited state can not exceed 50% of the total population. The effect is called Saturation and can be seen in the graph above. It is simple to explain this effect as in the high light intensity case the simulated emission effects balance the absorption effects (where we have assumed that spontaneous emission is negligible in high intensity fields).
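A short numerical sketch of this behaviour (with assumed parameter values, not from the notes): the excited-state population rises with time constant 1/(A + 2BW) and saturates at NBW/(A + 2BW), which approaches N/2 for strong excitation.

    # Excited-state population for the two-level rate equation dN2/dt = NBW - N2*(A + 2BW).
    import numpy as np

    N, A, BW = 1.0, 1.0, 20.0           # total population, A coefficient, B*W (assumed values)
    t = np.linspace(0.0, 5.0 / A, 6)
    N2 = N * BW / (A + 2.0 * BW) * (1.0 - np.exp(-(A + 2.0 * BW) * t))
    print(N2)                           # saturates at N*BW/(A + 2BW), close to N/2 since BW >> A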

4

QMIII: SCE3337 R.T. Sang

If we now turned the incident light beam off, the excited atoms will return to their ground state via spontaneous emission of photons. Then our rate equation for the excited population becomes with W set to zero dN2 = N2 ( A + 2BW ) − NBW = N2 A dt dN 2 = − N2 A dt

− ⇒

This equation is separable and easily solved with the boundary condition that at time t=0 there are N20 steady state atoms in the excited state then N

t

dN ∫N 0 N22 = ∫0 −Adt 2 Therefore N2 = N20 e − At A plot of this function is shown below: N2 Population Decay Curve 0.6 0.5

N2

Population

0.4 0.3 0.2 0.1 0 0

20

40

60

80

100

time

The radiation emitted in this process is called fluorescent radiation and the measurement of this provides another experimental technique of measuring the Einstein A coefficient. The reciprocal of A gives the average lifetime for the state r: r

=

1 A

5

QMIII: SCE3337 R.T. Sang

1 0 . 1 Simple Optical Processes You have already encountered in your electromagnetism courses, the macroscopic theory of absorption and dispersion of electromagnetic radiation passing through a medium. In this lecture we will establish the connection between the Einstein A and B coefficients and this macroscopic theory. This is important as we need to be able to make the quantum theory converge with the classical macroscopic theory on a large scale. Let us first re-visit some of the aspects of the macroscopic theory. Consider the picture below: Incident Light GAS

Transmitted Light

We have a light beam incident on a gas and this medium we consider as a dielectric material where the polarisability is proportional to the applied electric field: P=

E

0

where P is the polarisability of the medium and χ is the susceptibility. For an isotropic dielectric, the refractive index is related to the susceptibility by n2 = 1 + For a plane, monochromatic wave passing through a medium, the electric field vector is given by E(z,t) = E0 e− i(

t − nkz )

This can be rewritten as E(z,t) = E0 e− i(

t − kz )

• e i( n −1 ) kz = E0 (z,t)e i( n−1 )kz

The effect of the medium on the electromagnetic wave is given by the second exponential term. For small susceptibilities i.e. >absorption rate or the stimulated emission rate then the quantity i.e. the intensity of the light is weak then, BI B = 2 W and can make downward transitions to a number of states |j>. The lifetime of the state |k> is related to various Einstein A coefficients by the following. If k is the lifetime, the downward transition probability is 1

= sum of the emission probabilities =

∑A

kj

j

k

This is the transition probability per unit time. Notice that we are only considering the spontaneous emission rate, not the stimulated emission rate. An estimate of the natural width can be obtained from the Uncertainty Principle relating energy and time: ∆E∆t ≈ h Consider a two level atom labelled |k> and |j>. State |k> is higher in energy than |j> as shown below. We can determine the natural spread of frequencies of the radiation emitted from state |k> to |i> by the following: . ∆Ek

E k |k> hω

Ej

|j>

If the lifetime of the state |k> is lifetime of the state: ∆t = k ⇒ ∆E k = h ⇒ h∆ ⇒∆

k

=

=h 1 k

k

∆E

j

then the uncertainty in the time t is characterised by the

2

SCE3337: QMIII R.T. Sang

where the width of the line is the sum of the two energy level widths involved: ∆

=

[∆E

j

+ ∆E k

]

h

Often the lower state is the ground state which has an infinitely long lifetime provided that the atom is not subject to a perturbation. The frequency spread is then only that for the upper state only. Obviously the longer the lifetime of the state, the more narrow the energy and hence frequency spread. This is useful to know but it doesn't tell us anything about the spectra profile (or the line shape of intensity Vs frequency) of the emitted radiation. We can gain some further insight into this shape of the spectral profile by considering a classical model of an excited atom which was due to Lorentz. electron

Plane Polarised EM Wave

z

E ω

x

0

y

k

We assume that the excited atom is represented by a classical dipole oscillator consisting of an electron vibrating up and down in simple harmonic motion as shown above. Suppose that the frequency of oscillation is 0 which corresponds to the transition frequency. We also let the atom be subject to electromagnetic radiation which is propagating in the x direction which is plane polarised in the z direction (i.e. the electric field vector points in the z direction) with oscillation frequency such that: E = E0 e − i

t

The electric field will induce the electron to oscillate in the direction of the electric field (up and down in the z direction) due to the force caused by the interaction between the charge and the electric field. This is known as the radiation reaction force and is given by F = qE Therefore F = (−e)E 0e − i

t

(1)

Recall that the equation of motion for a simple harmonic oscillator is given by m

d 2x + kx = 0 dt 2

3

SCE3337: QMIII R.T. Sang

 d2 x m 2 +  dt where

0

=

2 0

 x = 0 

(2)

k m

If we now drive the simple harmonic oscillator with the radiation above we can equate equations (1) and (2):  d2 x m 2 +  dt

2 0

 x = (−e)E0 e −i 

t

Since the electron is oscillating, it is an accelerating charge and as such will radiate energy which corresponds to the spontaneous emission process which acts as a damping process. This means that we need to take into account a damping factor into any equations that we use to describe the process of an atom interacting with a oscillating electromagnetic wave. Taking this into consideration we can incorporate this into our equation of motion above yielding:  d2 x dx m 2 +Γ +  dt dt

2 0

 x  = (−e)E 0e − i 

t

The solution to this equation is given by

x = Ae



Γ t 2 −i ' t

e

eE0 −i t e m 2 2 −i Γ 0 −



where '=

2 0

Γ −   2

2

The first term is a transient term which decays off exponentially with time and represents spontaneous emission. The second term is a steady state term which oscillates at the driving frequency of the atom. Consider for the moment the transient term. This term oscillates at almost exactly the 2 resonant frequency. The power radiated is proportional to x . From classical dipole theory I(t) ∝ e −Γ t that is the damping constant is associated with the spontaneous emission process. What is the distribution of the emitted energy or the intensity distribution as a function of frequency for this transient oscillation? We can use Fourier Analysis to gain some insight into this. For example, we have the time dependence of the oscillator: f(t)=0

t excited state

P=Mv2

After P = Mv1 + hk P = Mv1

E2 − E1 = hω 0

P = hk

|1> ground state Atom Excited state

Atom Ground State

By conservation of momentum Mv2 = Mv1 + P ⇒

Mv2 = Mv1 + hk

(1)

By conservation of energy E2 +

1 1 Mv 2 2 = E1 + Mv12 + h 2 2

(2)

Let ω0 be the frequency of the light emitted if the atom initially had zero velocity before and after the collision, i.e. h

0

= E2 − E1

(3)

Manipulating equation (2) gives

( E2 − E1 ) − h

=

1 M(v1 2 − v 2 2 ) 2

(4)

We can eliminate E2 , E 1 by equation (3) and From equation (1) we can eliminate v 1 v1 = v 2 − Thus

hk M

(5)

8

SCE3337: QMIII R.T. Sang

v1 • v1 = v2 2 −

2h h2 k 2 v2 • k + 2 M M

(6)

Hence using equations (3) and (6) we can rewrite equation (4) as h ⇒

h

1  2 2h h 2k 2  M v 2 − v2 • k + 2 − v2 2   2  M M

0

−h

=

0

−h

= −hv 2 • k +

h 2k 2 2M

Let us define the direction of the emitted photon to be in the z direction. Then 0 −

=−

v2 cos h 2 + c 2Mc 2

where we have used k =

and θ is the angle between the direction of the atom's velocity c and the emission direction of the photon. For the present case we will look at the maximum effect which occurs for θ=0o then our expression reduces to: 0

=

 1− v 2 + h 0   c 2Mc 2 

We can now look at the relative orders of magnitude of the three terms on the RHS of this equation. For a typical atom at room temperature with a transition at optical frequencies we have the following parameters: v 2 ≈ 103 ms −1 , ω ≈ 1015 Rads −1 , M ≈ 10−27 kg. v2 ≈ 10−5 c h ≈ 10 −9 2 2Mc As such the third term in the equation on the RHS is negligible and can be ignored reducing the expression for the transition frequency as 0

=

 1− v 2   c

Making ω the subject of the equation gives =

 v2  0  1− c

−1

Taylor expanding the bracket term to first order about the zero point [ i.e. (1− x)−1 ≈ 1 + x ] reveals

9

SCE3337: QMIII R.T. Sang

=

0

 1+ v 2   c

Therefore the frequency of the emitted radiation experiences a conventional Doppler shift which is determined by the velocity component of the atom in the direction of emission. Exercise: Show that the frequency correction to second order is =

 v 2  v2  2  +  0  1+  c  c  

The velocity of atoms in a gas at temperature T is given by the Maxwellian velocity distribution. The probability that an atom in a gas at temperature T has a z component velocity between v z and v z+dvz is P(v z )dv z =

M −( e 2

Mv z 2 )

dv z

where =

1 kB T

Using our expression for the Doppler shifted frequency we can write the z component of the velocity as vz =

c( −

0

)

⇒ dv z =

0

c

d

0

Substitution into the Maxwellian velocity distribution gives

P( )d

 c M =  e  0 2

  c( − − M   0 

0

2     

)

d

defining ∆ = 2 0  c

2 2k B T = 2 0  M c M

Our expression for the distribution can be rewritten as

P( )d

=

 2 ( − −   ∆ 

 2  e ∆ 

0

2     

)

d

10

SCE3337: QMIII R.T. Sang

This expression for the frequency distribution is known as a Gaussian lineshape. The peak of the Gaussian is at ω=ω0 and the line has an intensity profile given by = I0 e

I( )d

 2 ( − −   ∆ 

0

2     

)

d

This is plotted below for a normalised distribution as a function of ω-ω0 . Gaussian Distribution 1

Intensity (Arb. Units)

0.8

0.6

0.4 FWHM = ∆ ln(2)

0.2

0 −1.5

−1

−0.5

0

0.5

1

1.5

ω−ω 0

You can show that it is given by FWHM = ∆ ln(2) = 2

0

c

2kB T ln(2) M

≈ 7.16x10−7

0

per

Kelvin amu

Where amu stands for atomic mass unit = 1.66x10-27 kg. As an example of the size of the broadening due to this effect consider a helium atom with amu =4 at 300K. In this case FWHM

300 = 6.2x10 −6 4 0 Typical values for optical frequencies ω 0 ≈ 1015 Rads−1 then the FWHM is approximately 1010 Rads -1. Comparing this ration to the natural line width shows that only the atomic transitions with the shortest lifetimes of 10-9 sec would a natural line width approaching that of the Doppler width hence this effect, in general is more dominant than the natural linewidth broadening. = 7.16x10−7

1

SCE3337: QMIII R.T. Sang

Lecture 15 1 4 . 0 Collision Broadening So far we have considered two broadening mechanism that are responsible for the width and the shape of spectral lines. We now look at a third mechanism for line broadening which is due to the collisions of atoms which is called collision broadening. This process can be quite complicated and we will only treat this in a rudimentary fashion. Up to now we have treated atoms in a gas as if they to do interact but in any real gas of atoms, atoms will be subject to the interaction forces of nearest neighbour atoms, ions or molecules. This will of course perturb the state of any radiating atoms and as such will lead to a broadening of the lineshape which is quite often larger than the natural linewidth. The increase in the linewidth is a function of the density of the perturbing species and is therefore also known as pressure broadening.

Energy

Consider the influence of a single perturber at a distance r as shown in the figure below:

Unperturbed Frequency

|k> hω ki

Perturbed Frequency

|i>

h(ω k + ∆ω ki )

r Distance between atoms If ∆Vk (r) and ∆Vi (r ) are the changes in the energy levels of the states |k> and |i> respectively for a two level transition, then the instantaneous change to the transition frequency is given by k



i

=∆

ki

=  

∆Vk (r) − ∆Vi (r )   h 

2

SCE3337: QMIII R.T. Sang

1 4 . 1 Interatomic Forces It is often possible to represent the long range interaction between an excited atom and a perturber by a potential of the form: ∆Vk (r) =

Cnk rn

where Cnk is a constant which depends on the excited level involved as well as the perturbing species. n is an integer an corresponds to various forces some of the well know cases are: • n=2: This applies to the case of hydrogen and hydrogenic ions in the electric fields produced by other ions or electrons. These effectively give rise to a Stark shift of the energy levels which depends linearly on the field strength. • n=4: The describes the Stark broadening in helium and other systems where the splitting is a quadratic function of the electric field of the perturbers. • n=3: This applies to the case of the resonance dipole-dipole interaction. This interaction is finite only when the excited atom interacts with an identical atom in the ground state and when a strong allowed dipole transition, usually the resonance line of the atom, connects the two levels. • n=6: This is the usual long range attractive van der Waals dipole-dipole interaction which always exists between any two atoms. • The first two cases are only important when dealing with highly ionised gases, for unionised gases, the most important interactions are the dipole-dipole interactions as well as the van der Waals interaction. It should be noted that we have not discussed short range forces as their effects are difficult to calculate and can not be expressed by any general formula. 1 4 . 2 The Lorentz Treatment of Collision Broadening Recall from Fourier transform theory that we used in the last lecture that the shape of the line profile at a frequency separation of = 0 - from the line centre is determined by the 1 radiation emitted during the time interval t, where ∆t ≈ . If we imagine the atom ∆ emitting radiation until perturbed by a strong collision, then the time of interest will be the mean time between collisions, Tc. This must be compared with the duration of one collision, tc. We consider a collision regime where the duration of the collision time is less than the change in frequency from the central frequency of a transition i.e. tc
and |g> with energies Ee and Eg respectively with E e > E g as shown below

The atom is now subject to the interaction with a monochromatic optical oscillating electric field which has a frequency that is near resonant with the transition. The atomic unperturbed Hamiltonian is given by Ha with the unperturbed energy eigenvalues given by the eigenvalue equation: Ha | g >= − Ha | e >=

h 2

h 2

0

0

|g>

|e>

2

SCE3337: QMIII R.T. Sang

The matrix elements of Ha are then h 0 < j | Ha | i >=  2  0

  h  −  2 0 0

Recall from previous lectures that the Interaction Hamiltonian for an atom interacting with an electric field is given by h(t) = −d • E(t ) where in this case the electric field is given by E(t ) = E cos

L

t

Assume that the atomic dipole moment is aligned parallel to E(t) then h(t) = −dE cos

L

t

The matrix elements of h(t) are found by taking the expectation of h(t): < j | h(t)| i >= − < j | dE | i > cos

L

t

E does not act on the atomic states hence < j | h(t)| i >= −E cos

L

= −deg E cos

t< j|d|i> L

t

where deg is the matrix element given by deg =< j | d | i > Because |g> and |e> are optically connected, their parity must be opposite (see Lecture 9) such that < g | h(t)| g >=< e | h(t)| e >= 0 Thus only off diagonal elements are non-zero < j | h(t)| i >= −deg E cos

L

t (non zero for i j)

Hence we can write the matrix for deg : 0 deg =< j | d | i >=  d

d  0

3

SCE3337: QMIII R.T. Sang

1 matrices 2 which are known as the Pauli spin matrices. For a electron with total spin vector S then the components of the vector are given by: It is possible to represent both the atomic and interaction Hamiltonians by spin

where

Sx =

h  0 1 1  = 2  1 0 2

Sy =

h 0  2i

Sz =

h 1 0  1  = 2  0 −1 2

,

x

y

and

−i 1 = 0 2

z

x

y

z

are the Pauli spin matrices .

Going back to the matrix form of the interaction Hamiltonian it is possible to rewrite it in terms of the matrix S x: < j | h(t)| i >= −deg E cos = −E cos = −E cos

L

L

t

0 t d

d  0

0 td  L 1

1  0

dE h 0 cos Lt  h 2 1 = −2Ω cos LtS x = −2

where

1  0

is defined as the Rabi frequency, such that Ω=

dE h

The Rabi frequency gives the strength of coupling between the light field and the atom. We now consider rewriting the atomic Hamiltonian Ha in terms of the Pauli spin matrices, recall that: h 0 < j | Ha | i >=  2  0 

  h   − 2 0 h 1 0  = 0   2  0 −1 = 0 Sz 0

4

SCE3337: QMIII R.T. Sang

Since we can put the Hamiltonians into these forms containing the Pauli spin matrices the 1 spin representation is therefore suitable for describing the interaction of monochromatic 2 light with a two-level atom. From second year you will have learnt that an electron in the presence of a magnetic field will undergo Larmor precession. It is now possible to develop the same analogy for the precession of our pseudo spin vector S about a pseudo magnetic field in the following way: Let us reconsider how spins precess in a magnetic field. This precession is known as the Larmor precession. The magnetic moment of an electron is given by: s

= −gs

S=− S

B

h

where is the gyromagnetic ratio with

s

= −gs

B

h

S = − S.

Recall that a magnetic dipole in the presence of a magnetic field B will experience a torque (since the dipole will tend to want to align with the B field) which is given by Γ=

×B

The action of this torque is shown in the figure below:

From this figure we can see that the hence we may write: Γ=

is identical to the change in S , with respect to time,

∆S dS = ∆t dt

From the figure above we can see that the vector dS is just the arc length which is defined as dl=rd thus in terms or our vectors dS = S sin d

5

SCE3337: QMIII R.T. Sang

Hence the torque is given by Γ= where

dS d = Ssin = Ssin dt dt

(1)

is the precessional frequency. The torque is also given by Γ=

S

×B= − S×B

(2)

Equating equations (1) and (2) enables the precessional frequency to be defined such that S sin = − S×B ⇒ Ssin = − SBsin ⇒

= B

The interaction energy of the magnetic moment in the magnetic field is given by ∆E = −

•B

S

If we now confine B to the z direction only we can write this as ∆E = − (

)

S z

Bz

From the definition of µS this can be rewritten as ∆E = − (− Sz )Bz = BzSz This is just related to the precessional frequency found above hence ∆E = Sz Hence the atomic Hamiltonian can be re-defined as Ha =

S = B0 Sz

0 z

This tell us that we can represent the Larmor precession of S , the pseudo spin vector about a pseudo magnetic field B 0 which is parallel to the z direction.

6

SCE3337: QMIII R.T. Sang

Recall that the interaction Hamiltonian was given by < j | h(t)| i >= −2Ωcos

L

tS x

We can rewrite this as hij (t) = B1Sx where B1 = −

2Ω

cos

L

t

The motion of the pseudo spin vector is quite complex as it precesses about B 0 and B 1 . It is often convenient to re-express the pseudo magnetic field B 1 as B1 = −



(e

Lt

+ e−

)

Lt

To simplify our model, we now transform to a frame that rotates at the laser frequency about OZ such that B1 → B1' where B1' = −



(e

Lt

+ e−

Lt

)e



Lt

=−



(1+ e

−2

Lt

L

)

We now make the rotating wave approximation (RWA) which is an approximation that allows us to discard high frequency terms (i.e. those rotating at twice the laser frequency). The neglected high frequency terms give rise to the Bloch-Siegert shift. The other term that is left is a constant in the (-)x direction and the magnitude is given by B1' =



The precessional frequency 0 in this rotating frame is reduced to 0 − L = ∆ , where the is defined as the detuning between the atomic transition frequency and the laser frequency. Summarising what we have done so far: The new x',y',z', rotating frame we have the following parameters: B0 ' =



B1' = − Beff = Beff =



( B0 ' )

2

( B0 ' )

+ ( B1' ) 2

2

+ ( B1 ' ) = ∆2 +Ω 2 2

7

SCE3337: QMIII R.T. Sang

Where B eff is just the vector sum of B 1 ' and B 0 ' We now define the effective precessional frequency Ω eff = Beff , thus from the above definitions: Ω eff = Ω2 + ∆2 We can now investigate the precession of the pseudo spin vector S about B eff. Recall that the magnetic moment is S

=− S

From previous arguments we were able to demonstrate that the torque was Γ=

dS = dt

S

× Beff

Substitution for µS gives Γ = − S × Beff Recall the vector identity: A × B = −B × A therefore Γ = Beff × S = Ωeff × S We can write

eff

and S in terms of components in the x', y', z' axis such that

Ω eff = (−Ω,0,∆) S = (Sx ,Sy ,Sz ) dS =Ω eff × S dt In component form this becomes:  dSx ,  dt

dSy , dt

ˆi dSz  = Ω dt  Sx

ˆj 0 Sy

Explicitly multiplying this out yields dSx ˆ = i (−∆Sy ) dt dSy ˆ = j(ΩSz − ∆ Sx ) dt dSz ˆ = k( ΩSy ) dt

kˆ ∆ Sz

SCE3337: QMIII R.T. Sang

It is usual to redefine the components of the pseudo spin vector as u 1 ⇒ dSx = du 2 2 v 1 Sy = ⇒ dSy = dv 2 2 w 1 Sz = ⇒ dSz = dw 2 2 Sx =

Thus the component equations become: du = −∆v dt dv = ∆u +Ωw dt dw =−Ωv dt These three equations are known as the Optical Bloch Equations.
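The optical Bloch equations are straightforward to integrate numerically. The sketch below (assumed parameter values, no damping included, not part of the original notes) shows Rabi flopping of the inversion w for an atom that starts in the ground state.

    # Integrate the undamped optical Bloch equations:
    # du/dt = -Delta*v,  dv/dt = Delta*u + Omega*w,  dw/dt = -Omega*v
    import numpy as np
    from scipy.integrate import solve_ivp

    Omega, Delta = 1.0, 0.0          # Rabi frequency and detuning (assumed values, on resonance)

    def bloch(t, y):
        u, v, w = y
        return [-Delta * v, Delta * u + Omega * w, -Omega * v]

    # start in the ground state: u = v = 0, w = -1
    sol = solve_ivp(bloch, (0.0, 4.0 * np.pi / Omega), [0.0, 0.0, -1.0], max_step=0.01)
    print(sol.y[2, -1])              # on resonance w(t) = -cos(Omega*t); full inversion at Omega*t = pi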

8

1

SCE3337: QMIII R.T. Sang

Lecture 17 1 6 . 0 Interpretation of the Bloch Vector Components Recall from early lectures that the state or the wavefunction of an atom may be represented as a linear superposition of basis states: >=

|

n

∑c | i > i =1

i

where cn are coefficients and {|i>} are a complete set of n orthonormal states. In the case of the two level atom this sum is very simple and has only two terms which can be written as >= b | e > + a | g >

|

where a and b are coefficients and are in general complex (see Lecture 1). For a normalised wavefunction recall that the sum square of the coefficients is unity: a + b =1 2

2

The states in the matrix formalism are represented by the eigenvectors:  1 | e >=    0  0 | g >=    1 1 6 . 1 The significance of the w or S z com ponent Now recall from the last lecture that we defined the w vector in terms of the S z component of the pseudo spin vector: Sz =

w ⇒ w = 2 Sz 2

The wavefunction can be written in terms of the column vectors as |

 0  1  b >= a  + b  =    1  0  a

The eigenket is the complex conjugate of this:
]

We now write the dipole operator in terms of its real and imaginary components such that, d = dr + id i Substitution back into the expectation of the dipole reveals,
=

1 Re[(dr + idi )(< u > + i < v > )] h

Multiplying this out and taking only the real terms yields,
=

1 [ d < u > − di < v >] h r

This shows that u and -v represent the in-phase and in-quadrature (right angle) components of the atomic dipole with the electric field {since in the Bloch equations (see last lecture) the terms involving u are +u. and the v terms are -v. }. The third Bloch equation that involves the time derivative of w, shows that the only dependence on the other Bloch component vectors is v. Since this term only involves the change of population and is coupled to the electric field that is producing the energy changes we can conclude that v must be the absorptive component of the dipole and as such u must be related to the dispersive part of the dipole. u describes dispersive effects and v describes absorption effects. It is also possible to show that

(

u2 + v 2 + w 2 = a + b 2

)

2 2

=1

4

SCE3337: QMIII R.T. Sang

1 6 . 3 The Block Sphere We can define a new space where w, u and v are the unit vectors in this space and the block vector is defined with respect to these unit vectors. The Bloch vector S rotates about the dS =Ω eff × S . eff vector since dt It is useful to note a couple of important cases: (1) = 0: In this case the Bloch vector is confined to the w-v plane since Ωeff only has a component in the u direction then eff points along this direction and as such the Bloch vector precesses about the u axis and can not move into the u-v plane. If the atom is fully in the ground state then w=-1, in the case where the atom is in the excited state w=1. The direction of positive rotation is from the -w axis to the -v axis, the reason for this choice of direction will be come clear in the next section. Physically we see that for an oscillating electric field the population oscillates forever between the ground and excited state while the electric field is on (a result that we saw in time dependent perturbation theory). Of course in reality we must include spontaneous emission which would dampen the amplitude of the oscillations. The Bloch vector for the zero detuning case is shown below: w

[Figure: Bloch sphere for zero detuning. Ω_eff points along the u axis and the Bloch vector S, with u(t)² + v(t)² + w(t)² = 1, precesses in the w-v plane between w = +1 (excited state) and w = −1 (ground state).]


(2) ∆ ≠ 0: In this case the Bloch vector may rotate out of the w-v plane and into the u-v plane, since Ω_eff is no longer constrained to the u axis: there is also a component of magnitude ∆ along the w axis (see figure below). The magnitude of the Bloch vector must remain normalised (u² + v² + w² = 1); to accommodate this condition the magnitudes of the w and v components must be reduced, and as such complete population inversion is not possible. The phase of the inversion w, that is the position of the maxima and minima of w, will also be different, since S precesses differently to the zero-detuning case. The Bloch sphere for an arbitrary non-zero detuning is shown below:

[Figure: Bloch vector for non-zero detuning; Ω_eff now has a component of magnitude ∆ along the w axis as well as its component along u, so S precesses about a tilted axis.]
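The precession picture can be checked directly. In the torque-equation form dS/dt = Ω_eff × S, the choice Ω_eff = (−Ω, 0, ∆) is an assumed sign convention, chosen here so that the cross product reproduces the component equations quoted earlier; a minimal sketch:

```python
import numpy as np

Omega, Delta = 1.7, 0.6                       # illustrative values
Omega_eff = np.array([-Omega, 0.0, Delta])    # assumed sign convention

def torque_form(s):
    """dS/dt = Omega_eff x S."""
    return np.cross(Omega_eff, s)

def component_form(s):
    """The optical Bloch equations written out by component."""
    u, v, w = s
    return np.array([-Delta * v, Delta * u + Omega * w, -Omega * v])

s = np.array([0.3, -0.4, np.sqrt(1 - 0.3**2 - 0.4**2)])   # any point on the Bloch sphere
print(np.allclose(torque_form(s), component_form(s)))      # -> True
```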


16.4 Rabi Solution to the Bloch Equations

The simplest solution of the Bloch equations is obtained for the on-resonance case, ∆ = 0, for which the equations reduce to

du/dt = 0
dv/dt = Ωw
dw/dt = −Ωv

The first equation says that u(t) is a constant; for an atom that starts with no coherence, u(0) = 0 and hence u(t) = 0. We will assume that Ω is a constant (this is known as the actual Rabi solution; in the general case Ω = Ω(t) there is no analytic solution to the Bloch equations). Taking the time derivative of the third equation gives

d²w/dt² = −Ω dv/dt

and using the second equation this can be rewritten as

d²w/dt² + Ω²w = 0

This is a homogeneous, linear, second-order differential equation; we try the solution

w = e^(λt)

Substitution of this solution into the second-order differential equation yields

λ² e^(λt) + Ω² e^(λt) = 0

which is solved for

λ² + Ω² = 0

Therefore λ = ±iΩ, and hence we have the solution

w = A e^(iΩt) + B e^(−iΩt)

This can also be represented as the sum of two trigonometric functions:


w(t) = A cos(Ωt) + B sin(Ωt)

We can use boundary conditions to find the coefficients A and B. The argument Ωt is the angle that the Bloch vector makes with respect to the w axis in the w-v plane, which we will call θ, so that

w = A cos θ + B sin θ

Let us assume that at θ = 0 the atoms have an initial inversion w0; the Bloch vector is then completely aligned along the w axis, and hence A = w0. When θ = π/2, w = 0, and hence B = 0. Therefore the solution for w is

w(θ) = w0 cos θ

The solution for v can be found following the same procedure:

v(θ) = w0 sin θ

If all of the atoms are in the ground state at θ = 0 (and hence t = 0), then w0 = −1, and the solution for w as a function of Ωt is shown below.









[Figure: Rabi oscillations of the inversion for zero detuning, w = −cos(Ωt); w oscillates between −1 and +1 as a function of Ωt (horizontal axis in degrees, 0–1600°).]


When θ = π/2, v(θ) = −1; hence a positive θ corresponds to a rotation about the u axis from the −w axis to the −v axis. We can see from this graph that Ω, the Rabi frequency, determines the rate at which transitions are coherently induced between the two atomic levels; the frequency at which the light field induces these transitions is known as the Rabi flopping frequency.
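As an aside (not in the original notes), the zero-detuning curve above can be reproduced in a few lines of Python; the analytic form w = −cos(Ωt) is compared with a direct numerical integration of the on-resonance Bloch equations, with an assumed value of Ω:

```python
import numpy as np
from scipy.integrate import solve_ivp

Omega = 1.0                          # illustrative Rabi frequency (rad per unit time)

def obe_resonant(t, s):
    """On-resonance (Delta = 0) optical Bloch equations."""
    u, v, w = s
    return [0.0, Omega * w, -Omega * v]

t = np.linspace(0.0, 4 * np.pi / Omega, 500)       # a few Rabi cycles
sol = solve_ivp(obe_resonant, (t[0], t[-1]), [0.0, 0.0, -1.0], t_eval=t, rtol=1e-9)

w_numeric = sol.y[2]
w_analytic = -np.cos(Omega * t)                    # w(theta) = w0 cos(theta), w0 = -1
print("max deviation:", np.max(np.abs(w_numeric - w_analytic)))
```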

16.5 π and π/2 pulses

The Rabi frequency gives the rate at which transitions are coherently induced between the two atomic levels. An atom initially in the ground state has w = −1; if at time t the angle Ωt = π, then w = +1, that is, the population is entirely in the upper state. Since we have chosen Ω to be a constant, only the time varies, which allows us to view this excitation process as coherent light in the form of a square pulse (as shown below) interacting with the two-level atom.

[Figure: a square pulse of Rabi frequency Ω(t), switched on between times t1 and t2; the area under the curve is called the pulse area.]

If we have the condition θ = Ω(t2 − t1) = π, this interaction process will just invert a ground-state atom. This type of pulse is termed a π pulse. If we kept applying n π pulses the population would be inverted n times, so for odd n the atom ends up inverted. This property has become particularly useful in atomic physics experiments. In the case where the pulse area is equal to π/2, w = 0 and the atom will be left in a superposition state of |e> and |g>.

The effect of these pulses is shown in the figure below.


[Figure: Bloch-sphere representation of a π pulse (θ = π), which rotates the Bloch vector from the ground state (w = −1) to the excited state (w = +1), and of a π/2 pulse (θ = π/2), which rotates it from the ground state to a superposition state with w = 0 (v = −1).]
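A small illustration (assumed, not from the notes) of the pulse-area idea: for a square pulse the area θ = ∫Ω(t) dt fixes the final inversion of a ground-state atom through w = w0 cos θ = −cos θ.

```python
import numpy as np

def pulse_area(Omega_of_t, t):
    """Pulse area theta = integral of Omega(t) dt, via a simple trapezoidal sum."""
    dt = np.diff(t)
    return float(np.sum(0.5 * (Omega_of_t[:-1] + Omega_of_t[1:]) * dt))

Omega0 = 2 * np.pi * 1.0e6                    # illustrative Rabi frequency (rad/s)

for label, duration in [("pi pulse", np.pi / Omega0), ("pi/2 pulse", np.pi / (2 * Omega0))]:
    t = np.linspace(0.0, duration, 1001)
    Omega = np.full_like(t, Omega0)           # square pulse: constant Omega while on
    theta = pulse_area(Omega, t)
    w_final = -np.cos(theta)                  # w = w0 cos(theta), w0 = -1 (ground state)
    print(f"{label}: area = {theta:.4f} rad, final inversion w = {w_final:+.2f}")
```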

The general case of the Rabi solution (i.e. ∆ ≠ 0 and Ω(t) = Ω constant) for w is

w(t) = −u0 [∆Ω/(∆² + Ω²)] [1 − cos(√(∆² + Ω²) t)]
       − v0 [Ω/√(∆² + Ω²)] sin(√(∆² + Ω²) t)
       + w0 [∆² + Ω² cos(√(∆² + Ω²) t)] / (∆² + Ω²)

This solution is somewhat tedious to derive, but it is useful for demonstrating the effect of different detunings on the inversion. Here u0, v0 and w0 are the initial values of the Bloch vector components. It is easy to check that we recover the zero-detuning case if we set ∆ = 0, u0 = v0 = 0 and w0 = −1. The figure below shows the effect of different detunings on w as a function of Ωt.









[Figure: the inversion w as a function of Ωt (degrees, 0–1600°) for detunings ∆ = 0, ∆ = Ω and ∆ = 2Ω; increasing detuning reduces the oscillation amplitude.]
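The sketch below (illustrative; it assumes matplotlib is available) evaluates this general solution for the three detunings plotted above and confirms that the ∆ = 0 case reduces to w = −cos(Ωt).

```python
import numpy as np
import matplotlib.pyplot as plt

def inversion(t, Omega, Delta, u0=0.0, v0=0.0, w0=-1.0):
    """General (undamped) Rabi solution for w(t) with initial values u0, v0, w0."""
    W2 = Delta**2 + Omega**2            # square of the generalised Rabi frequency
    W = np.sqrt(W2)
    return (-u0 * Delta * Omega / W2 * (1 - np.cos(W * t))
            - v0 * Omega / W * np.sin(W * t)
            + w0 * (Delta**2 + Omega**2 * np.cos(W * t)) / W2)

Omega = 1.0
t = np.linspace(0.0, 1600 * np.pi / 180 / Omega, 1000)   # Omega*t from 0 to 1600 degrees

for Delta, style in [(0.0, "-"), (Omega, "--"), (2 * Omega, ":")]:
    plt.plot(np.degrees(Omega * t), inversion(t, Omega, Delta), style,
             label=f"Delta = {Delta:g}" if Delta else "Delta = 0")

# Sanity check: the Delta = 0 curve reduces to -cos(Omega t)
assert np.allclose(inversion(t, Omega, 0.0), -np.cos(Omega * t))

plt.xlabel("Omega t (degrees)")
plt.ylabel("w")
plt.legend()
plt.show()
```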


16.6 Phenomenological Decay Constants

The Bloch equations in the form we have investigated so far could not be used for a real atom, since no decay representing spontaneous emission has been incorporated in the equations. The spontaneous emission process has implications for the coherence, or phase, of the system: because the process is random it introduces a loss of phase and hence a loss of coherence between the induced dipole of the atom and the electric field of the light. We now introduce two damping constants, labelled T1 and T2, that modify the optical Bloch equations in the following way:

du/dt = −∆v − u/T2
dv/dt = ∆u + Ωw − v/T2
dw/dt = −Ωv − (w − weq)/T1

The damping constants T1 and T2 are termed the longitudinal and transverse decay constants respectively. T1 appears only in the population equation and hence is related to spontaneous emission; T2 appears only in the u and v equations and hence describes the dephasing between the atomic dipole and the electric field.

T1 can be determined by the following approach. Since T1 is related to the population only, w must decay exponentially. Assume that we have applied a π pulse to the system and the light field is then instantaneously turned off; the Bloch equation for the inversion becomes

dw/dt = −(w − weq)/T1

The solution to this is trivial:

w(t) = (w0 − weq) e^(−t/T1) + weq

where w0 is the initial inversion. We are looking at times after the π pulse has been applied, hence w0 = 1. Once the light field has been turned off the population decays into the ground state, so weq = −1 (the equilibrium population is entirely in the ground state), and therefore

w(t) = 2 e^(−t/T1) − 1

Recall from earlier lectures that an excited-state population decays as e^(−Γt); we therefore deduce that

1/T1 = Γ

where Γ = 1/lifetime is the Einstein A coefficient.
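A minimal sketch (with an assumed value of Γ) comparing this analytic free decay with a direct numerical integration of dw/dt = −(w − weq)/T1:

```python
import numpy as np
from scipy.integrate import solve_ivp

Gamma = 1.0                 # illustrative decay rate, so T1 = 1/Gamma
T1 = 1.0 / Gamma
w0, weq = 1.0, -1.0         # just after a pi pulse; equilibrium is the ground state

sol = solve_ivp(lambda t, w: [-(w[0] - weq) / T1], (0.0, 5 * T1), [w0],
                dense_output=True, rtol=1e-9)

t = np.linspace(0.0, 5 * T1, 200)
w_numeric = sol.sol(t)[0]
w_analytic = (w0 - weq) * np.exp(-t / T1) + weq
print("max deviation:", np.max(np.abs(w_numeric - w_analytic)))
```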


The coherences u and v depend on terms of the form a*b, but b ∝ e^(−Γt/2), while a, the ground-state amplitude, does not decay; hence

a*b ∝ e^(−Γt/2)

and therefore

T2 = 2/Γ

The solution to the Bloch equations with damping in the rotating frame can be found using a Laplace transform technique. It was first obtained by Torrey for the magnetic resonance case, and as a result the solutions are often referred to as the Torrey solution. The general solution to the equations is given by

x(t) = A e^(−at) + [B cos(st) + (C/s) sin(st)] e^(−bt) + D

where x(t) represents any of the components w(t), u(t) and v(t). The constants a, b and s are found from the secular equation of the Laplace-transformed equations and won't be quoted here, as the solution takes some time to derive (for those interested, the book "Molecules and Radiation" by J. Steinfeld outlines the solution). The constants A, B, C and D are found via the boundary conditions. Let us look at one well-known case, ∆ = 0, for which the constants of the general solution are

a = 1/T2 = Γ/2

b = (1/2)(1/T1 + 1/T2) = (1/2)(Γ + Γ/2) = 3Γ/4

s = √(Ω² − (1/4)(1/T1 + 1/T2)²) = √(Ω² − (3Γ/4)²)

Assuming that we start with ground-state atoms at t = 0, the solutions (in the strong-driving limit Ω >> Γ) are

u = 0
v = −e^(−3Γt/4) sin(Ω′t)
w = −e^(−3Γt/4) cos(Ω′t)

where

Ω′ = √(Ω² − (3Γ/4)²)


A graphical solution for the inversion in the strong-driving limit Ω >> Γ (here Ω = 400Γ) is shown below.

[Figure: damped Rabi oscillations of the inversion w versus Ωt (0–1600) for Ω = 400Γ; the oscillation amplitude decays as e^(−3Γt/4) towards w ≈ 0.]
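The damped curve above can be reproduced numerically. The sketch below integrates the damped optical Bloch equations for ∆ = 0 with T1 = 1/Γ and T2 = 2/Γ, and compares the result with the quoted analytic form, which is a good approximation in the strong-driving limit Ω >> Γ; the parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

Gamma = 1.0
Omega = 400 * Gamma                 # strong driving, as in the figure above
T1, T2 = 1.0 / Gamma, 2.0 / Gamma
weq = -1.0                          # equilibrium: all population in the ground state

def damped_obe(t, s):
    """Optical Bloch equations with phenomenological damping, Delta = 0."""
    u, v, w = s
    return [-u / T2,
            Omega * w - v / T2,
            -Omega * v - (w - weq) / T1]

t_end = 1600.0 / Omega              # Omega*t up to 1600, as in the figure
sol = solve_ivp(damped_obe, (0.0, t_end), [0.0, 0.0, -1.0],
                dense_output=True, rtol=1e-9, atol=1e-12)

t = np.linspace(0.0, t_end, 2000)
w_numeric = sol.sol(t)[2]

# Quoted analytic form (a good approximation when Omega >> Gamma)
Omega_p = np.sqrt(Omega**2 - (3 * Gamma / 4)**2)
w_quoted = -np.exp(-3 * Gamma * t / 4) * np.cos(Omega_p * t)

print("max |numeric - quoted| =", np.max(np.abs(w_numeric - w_quoted)))
```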

The population oscillates sinusoidally because the field and the dipole are in phase (a coherent interaction), but the oscillation has an exponential decay associated with it. It is clear from this that the damping due to the spontaneous emission process suppresses the oscillation of the population: spontaneous emission dephases the atomic dipole with respect to the driving electric field, and the oscillations are damped at an exponential rate. At long times the field and the dipole are no longer coherent, so we expect the oscillations to die off completely. Since the driving field is strong, at long times the population is driven to its steady-state distribution, which for a two-level atom under strong light interaction is half in the ground state and half in the excited state (see Lectures 9 and 10). The oscillatory behaviour seen at early times is known as Optical Nutation and has been observed experimentally at Griffith University in the Laser Atomic Physics Laboratory.