    • Thread Starter
    In equation 5.8 in this document

    http://www.damtp.cam.ac.uk/user/tong/qft/qft.pdf

    I am trying to derive this Hamiltonian. I find

H = \pi \dot{\psi} - L = i \psi^\dagger \dot{\psi} - \bar{\psi} ( i \gamma^\mu \partial_\mu - m ) \psi = i \bar{\psi} \gamma^0 \partial_0 \psi - i \bar{\psi} \gamma^\mu \partial_\mu \psi + m \bar{\psi} \psi = \bar{\psi} ( i \gamma^i \partial_i + m ) \psi

so I get the minus sign the other way around, because of the definition of the dot product of 4-vectors: \gamma^\mu \partial_\mu = \gamma^0 \partial_0 - \gamma^i \partial_i.

So does anyone else think this is a typo? I'm sure it isn't, since he uses it on the following pages!

    Thanks.
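(For reference: the sign being compared against is the one printed in eq. 5.8, where the spatial-derivative term appears as \bar{\psi} ( -i \gamma^i \partial_i + m ) \psi rather than with the + i \gamma^i \partial_i obtained above.)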
    (Original post by latentcorpse)
    In equation 5.8 in this document

    http://www.damtp.cam.ac.uk/user/tong/qft/qft.pdf

    I am trying to derive this Hamiltonian. I find

H = \pi \dot{\psi} - L = i \psi^\dagger \dot{\psi} - \bar{\psi} ( i \gamma^\mu \partial_\mu - m ) \psi = i \bar{\psi} \gamma^0 \partial_0 \psi - i \bar{\psi} \gamma^\mu \partial_\mu \psi + m \bar{\psi} \psi = \bar{\psi} ( i \gamma^i \partial_i + m ) \psi

so I get the minus sign the other way around, because of the definition of the dot product of 4-vectors: \gamma^\mu \partial_\mu = \gamma^0 \partial_0 - \gamma^i \partial_i.

So does anyone else think this is a typo? I'm sure it isn't, since he uses it on the following pages!

    Thanks.
It's not a typo; it's in Peskin and Schroeder too (although the notes are almost exactly the same).

    I'm pretty sure \gamma^\mu \partial_\mu = \gamma^0 \partial_0 + \gamma^i \partial_i, although it's a while since I did this.  \gamma^\mu is not a 4-vector.
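For what it's worth, here's a sketch of how the calculation above comes out if the contraction really is \gamma^\mu \partial_\mu = \gamma^0 \partial_0 + \gamma^i \partial_i (plain summation over the repeated index, with no extra signs):

H = i \psi^\dagger \dot{\psi} - \bar{\psi} ( i \gamma^0 \partial_0 + i \gamma^i \partial_i - m ) \psi = i \bar{\psi} \gamma^0 \partial_0 \psi - i \bar{\psi} \gamma^0 \partial_0 \psi - i \bar{\psi} \gamma^i \partial_i \psi + m \bar{\psi} \psi = \bar{\psi} ( -i \gamma^i \partial_i + m ) \psi

using i \psi^\dagger \dot{\psi} = i \bar{\psi} \gamma^0 \partial_0 \psi (since \bar{\psi} = \psi^\dagger \gamma^0 and (\gamma^0)^2 = 1), so the time-derivative terms cancel and the spatial term comes out with the minus sign of eq. 5.8.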
    • Thread Starter
    (Original post by TableChair)
It's not a typo; it's in Peskin and Schroeder too (although the notes are almost exactly the same).

    I'm pretty sure \gamma^\mu \partial_\mu = \gamma^0 \partial_0 + \gamma^i \partial_i, although it's a while since I did this.  \gamma^\mu is not a 4-vector.
    So someone else told me that

\gamma^\mu \partial_\mu = \gamma^0 \partial_0 + \gamma^i \partial_i = \gamma^0 \partial^0 - \gamma^i \partial^i
i.e. a repeated index, one up and one down, just tells us to take the product and add; the minus sign only appears when we express everything in terms of all-up or all-down indices, because then we have to raise or lower with a metric of Lorentzian signature.

I'm fairly sure \gamma^\mu is still a 4-vector though. What's your justification for it not being one? I mean, it's a vector with 4 components, right? Those components just happen to be matrices.
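To spell out that bookkeeping with \eta_{\mu\nu} = diag(1,-1,-1,-1): for any two objects a^\mu and b^\mu,

a^\mu b_\mu = a^0 b_0 + a^i b_i , \qquad b_0 = \eta_{0\nu} b^\nu = b^0 , \qquad b_i = \eta_{i\nu} b^\nu = -b^i ,

so a^\mu b_\mu = a^0 b^0 - a^i b^i. The summation itself never introduces signs; they only enter through the components of \eta when an index is raised or lowered.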
    (Original post by latentcorpse)
    So someone else told me that

\gamma^\mu \partial_\mu = \gamma^0 \partial_0 + \gamma^i \partial_i = \gamma^0 \partial^0 - \gamma^i \partial^i
i.e. a repeated index, one up and one down, just tells us to take the product and add; the minus sign only appears when we express everything in terms of all-up or all-down indices, because then we have to raise or lower with a metric of Lorentzian signature.

I'm fairly sure \gamma^\mu is still a 4-vector though. What's your justification for it not being one? I mean, it's a vector with 4 components, right? Those components just happen to be matrices.
Well, you can sometimes treat it like one, but a 4-vector is a vector in 4-dimensional real (Minkowski) space, and \gamma^\mu is not. I'm pretty sure it's not, anyway: as I remember it, the whole point of the term \gamma^\mu \partial_\mu in the Dirac equation was that if \gamma^\mu were a 4-vector, then it wouldn't be Lorentz invariant. I think there are some subtleties to it though; I'll have a poke around on wiki to see if I can find anything relevant.

Edit: Found this: http://en.wikipedia.org/wiki/Gamma_matrices , scroll down to 'Physical structure'.

This time last year I hated the Dirac equation; it took me ages to get used to the algebra surrounding it, and now I've forgotten it all again.
    • Thread Starter
    (Original post by TableChair)
Well, you can sometimes treat it like one, but a 4-vector is a vector in 4-dimensional real (Minkowski) space, and \gamma^\mu is not. I'm pretty sure it's not, anyway: as I remember it, the whole point of the term \gamma^\mu \partial_\mu in the Dirac equation was that if \gamma^\mu were a 4-vector, then it wouldn't be Lorentz invariant. I think there are some subtleties to it though; I'll have a poke around on wiki to see if I can find anything relevant.

Edit: Found this: http://en.wikipedia.org/wiki/Gamma_matrices , scroll down to 'Physical structure'.

This time last year I hated the Dirac equation; it took me ages to get used to the algebra surrounding it, and now I've forgotten it all again.
Lol. I'm now confused as to the whole definition of 4-vector products!

    So we had \gamma^\mu \partial_\mu = \gamma^0 \partial_0 + \gamma^i \partial_i

But say we have two arbitrary 4-vectors in the space, momentum and position. Do we define p \cdot x = p^\mu x_\mu = p^0 x_0 + p^i x_i = p^0 x^0 - p^i x^i = p_0 x_0 - p_i x_i?
I say this because on p. 108, in the paragraph just above eqn 5.9, he writes that \vec{p} \cdot \vec{x} = \sum_i p^i x^i = -p^i x_i

    So I'm a bit confused!!!
    (Original post by latentcorpse)
Lol. I'm now confused as to the whole definition of 4-vector products!

    So we had \gamma^\mu \partial_\mu = \gamma^0 \partial_0 + \gamma^i \partial_i

But say we have two arbitrary 4-vectors in the space, momentum and position. Do we define p \cdot x = p^\mu x_\mu = p^0 x_0 + p^i x_i = p^0 x^0 - p^i x^i = p_0 x_0 - p_i x_i?
I say this because on p. 108, in the paragraph just above eqn 5.9, he writes that \vec{p} \cdot \vec{x} = \sum_i p^i x^i = -p^i x_i

    So I'm a bit confused!!!
    It's defined here: http://en.wikipedia.org/wiki/Four-vector

All he's done there is lower one of the indices; given they're all spatial, all the terms pick up a minus sign.
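Explicitly, with the same (+,-,-,-) signature: lowering a spatial index just flips its sign, x_i = \eta_{i\nu} x^\nu = -x^i, so

\vec{p} \cdot \vec{x} = \sum_i p^i x^i = \sum_i p^i ( -x_i ) = - p^i x_i ,

which is exactly the statement above eqn 5.9.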
    • Thread Starter
    (Original post by TableChair)
    It's defined here: http://en.wikipedia.org/wiki/Four-vector

All he's done there is lower one of the indices; given they're all spatial, all the terms pick up a minus sign.
OK, so I now accept that the definition of the scalar product is with all +'s when we have one index up and the other down, but when we want them all up or all down we get -'s on the spatial terms.

However, this \vec{p} \cdot \vec{x}: why is it not equal to p^i x_i? That's how I would have defined it, but clearly that's wrong!
    (Original post by latentcorpse)
OK, so I now accept that the definition of the scalar product is with all +'s when we have one index up and the other down, but when we want them all up or all down we get -'s on the spatial terms.

However, this \vec{p} \cdot \vec{x}: why is it not equal to p^i x_i? That's how I would have defined it, but clearly that's wrong!
Uh, according to the wiki, one up and one down is + - - -, and I'm pretty sure that's right. That's why I was saying you can't necessarily treat \gamma^\mu as a 4-vector. To be honest, I'm a little confused as to why as well; it just seems that way. The really annoying thing is that I swear I didn't have a problem with this last year, and now I can't remember anything.

Because it's not a 4-vector: if you were dotting two vectors in normal 3D Euclidean space, then that's how you would do it.

Edit: Maybe it's because \partial_\mu is conjugate (can't think of a better word) to x^\mu , so even though it transforms like x_\mu , the minus signs work the other way around.
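That 'conjugate' idea can be made precise (a sketch, same conventions as the notes): \partial_\mu is shorthand for \partial / \partial x^\mu , and

\partial_\mu x^\nu = \delta_\mu^{\ \nu} ,

so differentiating with respect to the upper-index coordinate x^\mu produces an object that naturally carries a lower index. That's why \partial_\mu = ( \partial_0 , \vec{\nabla} ) has no minus signs in its components, while the raised version \partial^\mu = \eta^{\mu\nu} \partial_\nu = ( \partial_0 , -\vec{\nabla} ) does, and why \gamma^\mu \partial_\mu = \gamma^0 \partial_0 + \gamma^i \partial_i comes out with a plus sign.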
    (Original post by latentcorpse)
OK, so I now accept that the definition of the scalar product is with all +'s when we have one index up and the other down, but when we want them all up or all down we get -'s on the spatial terms.

However, this \vec{p} \cdot \vec{x}: why is it not equal to p^i x_i? That's how I would have defined it, but clearly that's wrong!
    Here's my interpretation, if it helps:

    \bold{p} \cdot \bold{x} = \eta_{\mu \nu} p^{\mu}x^{\nu} = p^0 x^0 - p^1 x^1 ...

    Since \eta_{\mu \nu} is diag(1,-1,-1,-1).

If we choose \bold{p} \cdot \bold{x} = p^0 x^0 - \vec{p} \cdot \vec{x} as the definition of \vec{p} \cdot \vec{x}, it follows that \vec{p} \cdot \vec{x} = \sum_{i = 1,2,3} p^i x^i. And lowering a spatial index gives the minus sign (still summing from 1 to 3).

    Essentially, defining \vec{p} \cdot \vec{x} by \bold{p} \cdot \bold{x} = p^0 x^0 - \vec{p} \cdot \vec{x} means the minus sign follows.
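If it helps to see those signs concretely, here's a quick numerical check of that bookkeeping in numpy (just a sketch with made-up components; nothing here is specific to the notes):

import numpy as np

# Minkowski metric with signature (+, -, -, -)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Arbitrary 4-vectors with upper indices, p^mu and x^mu (made-up numbers)
p_up = np.array([5.0, 1.0, 2.0, 3.0])
x_up = np.array([4.0, 0.5, -1.0, 2.0])

# Lowering an index, x_mu = eta_{mu nu} x^nu, just flips the sign of the spatial components
x_down = eta @ x_up

# The invariant eta_{mu nu} p^mu x^nu equals p^mu x_mu: one index up, one down, all plus signs
inner = p_up @ eta @ x_up
print(np.isclose(inner, np.dot(p_up, x_down)))            # True

# The 3D dot product uses the upper spatial components ...
euclid = np.dot(p_up[1:], x_up[1:])
# ... and equals minus the mixed-index spatial sum, since x_i = -x^i
print(np.isclose(euclid, -np.dot(p_up[1:], x_down[1:])))  # True

# And the defining relation above: p.x = p^0 x^0 - vec(p).vec(x)
print(np.isclose(inner, p_up[0] * x_up[0] - euclid))      # True

The last check is just \bold{p} \cdot \bold{x} = p^0 x^0 - \vec{p} \cdot \vec{x} with numbers in it.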
    (Original post by Scipio90)
    Here's my interpretation, if it helps:

    \bold{p} \cdot \bold{x} = \eta_{\mu \nu} p^{\mu}x^{\nu} = p^0 x^0 - p^1 x^1 ...

    Since \eta_{\mu \nu} is diag(1,-1,-1,-1).

If we choose \bold{p} \cdot \bold{x} = p^0 x^0 - \vec{p} \cdot \vec{x} as the definition of \vec{p} \cdot \vec{x}, it follows that \vec{p} \cdot \vec{x} = \sum_{i = 1,2,3} p^i x^i. And lowering a spatial index gives the minus sign (still summing from 1 to 3).

    Essentially, defining \vec{p} \cdot \vec{x} by \bold{p} \cdot \bold{x} = p^0 x^0 - \vec{p} \cdot \vec{x} means the minus sign follows.
    Isn't that a bit circular? It's the same as defining \vec{p} \cdot \vec{x} = \sum_{i = 1,2,3} p^i x^i.

I think the definition is such that if you did it in 3D Euclidean space you'd get the same result.
    • Thread Starter
    (Original post by Scipio90)
    Here's my interpretation, if it helps:

    \bold{p} \cdot \bold{x} = \eta_{\mu \nu} p^{\mu}x^{\nu} = p^0 x^0 - p^1 x^1 ...

    Since \eta_{\mu \nu} is diag(1,-1,-1,-1).

If we choose \bold{p} \cdot \bold{x} = p^0 x^0 - \vec{p} \cdot \vec{x} as the definition of \vec{p} \cdot \vec{x}, it follows that \vec{p} \cdot \vec{x} = \sum_{i = 1,2,3} p^i x^i. And lowering a spatial index gives the minus sign (still summing from 1 to 3).

    Essentially, defining \vec{p} \cdot \vec{x} by \bold{p} \cdot \bold{x} = p^0 x^0 - \vec{p} \cdot \vec{x} means the minus sign follows.
OK, this seems acceptable to me.

Would you then agree that p \cdot x = p^0 x_0 + p^1 x_1 + p^2 x_2 + p^3 x_3?
    • Thread Starter
    (Original post by TableChair)
Uh, according to the wiki, one up and one down is + - - -, and I'm pretty sure that's right. That's why I was saying you can't necessarily treat \gamma^\mu as a 4-vector. To be honest, I'm a little confused as to why as well; it just seems that way. The really annoying thing is that I swear I didn't have a problem with this last year, and now I can't remember anything.

Because it's not a 4-vector: if you were dotting two vectors in normal 3D Euclidean space, then that's how you would do it.

Edit: Maybe it's because \partial_\mu is conjugate (can't think of a better word) to x^\mu , so even though it transforms like x_\mu , the minus signs work the other way around.
I also found this:
http://en.wikipedia.org/wiki/Summation_convention
which seems to suggest that one up and one down is with all pluses.

    However, as you were saying, I also found in some of my old notes that

    \partial_\mu = ( \frac{1}{c} \frac{\partial}{\partial t} , \vec{\nabla} )
    but
    \partial^\mu = ( \frac{1}{c} \frac{\partial}{\partial t} , -\vec{\nabla} )
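As a consistency check on those components (a sketch, keeping the factors of c): raising the index with \eta^{\mu\nu} = diag(1,-1,-1,-1) is what flips the sign of the spatial part, and contracting the two gives the wave operator,

\partial_\mu \partial^\mu = \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \nabla^2 ,

so with one index up and one down you still just sum the products; the minus signs live entirely in the components of \partial^\mu.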
    (Original post by TableChair)
    Isn't that a bit circular? It's the same as defining \vec{p} \cdot \vec{x} = \sum_{i = 1,2,3} p^i x^i.

I think the definition is such that if you did it in 3D Euclidean space you'd get the same result.
    Kind of, but I think it's supposed to be. (I'm expressing myself quite badly...)

I think the idea is that in classical mechanics we have \vec{p} \cdot \vec{x} = p^1 x^1 + ... , since it's the upstairs things that reduce to Newtonian quantities.

Then, when we construct the SR invariant \bold{p} \cdot \bold{x} using the specific form of the Minkowski metric, we notice that it has the classical dot product inside it.
    (Original post by latentcorpse)
I also found this:
http://en.wikipedia.org/wiki/Summation_convention
which seems to suggest that one up and one down is with all pluses.

    However, as you were saying, I also found in some of my old notes that

    \partial_\mu = ( \frac{1}{c} \frac{\partial}{\partial t} , \vec{\nabla} )
    but
    \partial^\mu = ( \frac{1}{c} \frac{\partial}{\partial t} , -\vec{\nabla} )
But they haven't raised one of the indices there, which would then make it + - - -.

Yeah, I think that's why it's + + + + for the original question: you swap it when you differentiate. I've forgotten way too much.
    • Thread Starter
    (Original post by TableChair)
But they haven't raised one of the indices there, which would then make it + - - -.

Yeah, I think that's why it's + + + + for the original question: you swap it when you differentiate. I've forgotten way too much.
I don't understand which bit you're talking about here.

Where haven't they raised an index?

So it's + + + + from the wiki page?
    (Original post by latentcorpse)
I don't understand which bit you're talking about here.

Where haven't they raised an index?

So it's + + + + from the wiki page?
All they've done is write out explicitly what the summation convention is, i.e. that you sum over the repeated indices. The inner product is \bold{p} \cdot \bold{x} = \eta_{\mu \nu} p^{\mu}x^{\nu} = p^0 x^0 - p^1 x^1 ... . There is no longer a mixture of raised and lowered indices, so it's + - - -.

Edit: I agree that p \cdot x = p^0 x_0 + p^1 x_1 + p^2 x_2 + p^3 x_3; I'm just saying that if you raise the indices of the x quantities, it's + - - -.
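Putting all the equivalent forms side by side (same (+,-,-,-) conventions throughout):

p \cdot x = \eta_{\mu\nu} p^\mu x^\nu = p^\mu x_\mu = p_\mu x^\mu = p^0 x_0 + p^1 x_1 + p^2 x_2 + p^3 x_3 = p^0 x^0 - \vec{p} \cdot \vec{x}

The summation convention (one index up, one down) gives all plus signs; writing everything with upper indices brings in the + - - - from the metric. They're the same number.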
    (Original post by Scipio90)
    Kind of, but I think it's supposed to be. (I'm expressing myself quite badly...)

I think the idea is that in classical mechanics we have \vec{p} \cdot \vec{x} = p^1 x^1 + ... , since it's the upstairs things that reduce to Newtonian quantities.

Then, when we construct the SR invariant \bold{p} \cdot \bold{x} using the specific form of the Minkowski metric, we notice that it has the classical dot product inside it.
OK, I see what you're saying, but surely the definition comes from Newtonian mechanics, which is why it's written that way round. It's all semantics really, though.
 
 
 