
Tensors


Reply 80
Original post by latentcorpse
oops. sorry. we want to show n_{[a;b} n_{c]} = 0 given that n_a is orthogonal to the surface \beta(x) = 0.


Hmmm. I don't see any obvious reason why that's true, so it's probably just a matter of working out the algebra. I'm going to guess, by analogy with the Euclidean case, that we have n_a = f \beta_{;a} for some scalar function f. Then n_{a;b} = f_{;b} \beta_{;a} + f \beta_{;ab}, so n_{a;b} n_c = f \beta_{;a} f_{;b} \beta_{;c} + f^2 \beta_{;ab} \beta_{;c}. Perhaps something magic happens and everything cancels?

But actually, I think you've omitted a condition. I suspect you'll have to assume that g^{ab} n_a n_b = 1 or something like that in order to do this.
Original post by Zhen Lin
Hmmm. I don't see any obvious reason why that's true, so it's probably just a matter of working out the algebra. I'm going to guess, by analogy with the Euclidean case, that we have n_a = f \beta_{;a} for some scalar function f. Then n_{a;b} = f_{;b} \beta_{;a} + f \beta_{;ab}, so n_{a;b} n_c = f \beta_{;a} f_{;b} \beta_{;c} + f^2 \beta_{;ab} \beta_{;c}. Perhaps something magic happens and everything cancels?

But actually, I think you've omitted a condition. I suspect you'll have to assume that g^{ab} n_a n_b = 1 or something like that in order to do this.


Well, here is the original question:
http://www.maths.cam.ac.uk/postgrad/mathiii/pastpapers/2007/Paper61.pdf
I don't see any extra conditions, though?

Why did you take n_a = f \beta_{;a}?
I've been reading through the notes and there's something about half way down p105 that I think might have something to do with this question but I'm not sure.


And one other thing about integration on manifolds. Why is there a \sqrt{-g} in (344) when on the earlier pages where he defines integration on a manifold, he simply uses \sqrt{|g|}? I guess above (344), he does say "define", so should I just take it as it is without worrying about where it came from, or do you happen to know?
Reply 82
Original post by latentcorpse
Well, here is the original question:
http://www.maths.cam.ac.uk/postgrad/mathiii/pastpapers/2007/Paper61.pdf
I don't see any extra conditions, though?

Why did you take n_a = f \beta_{;a}?


In the Euclidean case, \beta_{;a} gives you the normal covector to the surface, since a vector T^a is tangent to the hypersurface iff \beta_{;a} T^a = 0. This should be familiar from multivariable calculus. Taking a literal reading of the question, I could take any multiple of that and still get a normal covector.

And one other thing about integration on manifolds. Why is there a \sqrt{-g} in (344) when on the earlier pages where he defines integration on a manifold, he simply uses \sqrt{|g|}? I guess above (344), he does say "define", so should I just take it as it is without worrying about where it came from, or do you happen to know?


For pseudo-Riemannian manifolds with signature (-,+,+,+) or (+,-,-,-), the determinant of the metric is negative. So \sqrt{-g} = \sqrt{|g|}.

\sqrt{|g|} gives you the scale factor needed to get a hypervolume that corresponds to intuition, i.e. if you have a small hypercube with side lengths \epsilon, then as \epsilon \to 0, its hypervolume \sim \epsilon^n. If you want to see an example to convince you why we need it, recall that for spherical polar coordinates for Euclidean 3-space, the volume form is r^2 \sin\theta \, dr \wedge d\theta \wedge d\phi, and indeed \sqrt{g} = r^2 \sin\theta. If you're still not convinced, remember that we get r^2 \sin\theta by computing the Jacobian determinant. It turns out that if you start from Cartesian coordinates, the determinant of the metric is exactly the square of the Jacobian determinant. (In fact, the metric is the "square" of the Jacobian matrix, in a sense.)

But yeah, it's basically by definition, but it's not completely unmotivated.
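If you want to check the Jacobian claim explicitly, here's a quick sympy sketch for spherical polars (just the standard coordinate map, nothing specific to the notes):

import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)

# Standard map from spherical polars to Cartesian coordinates on Euclidean 3-space
x = r * sp.sin(theta) * sp.cos(phi)
y = r * sp.sin(theta) * sp.sin(phi)
z = r * sp.cos(theta)

coords = [r, theta, phi]
J = sp.Matrix([x, y, z]).jacobian(coords)

# Pull back the Euclidean metric: g_ij = (J^T J)_ij
g = sp.simplify(J.T * J)

print(g)                                  # diag(1, r**2, r**2*sin(theta)**2)
print(sp.simplify(J.det()))               # r**2*sin(theta), the usual Jacobian factor
print(sp.simplify(g.det() - J.det()**2))  # 0, i.e. det(g) is the square of the Jacobian determinant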
Original post by Zhen Lin
In the Euclidean case, \beta_{;a} gives you the normal covector to the surface, since a vector T^a is tangent to the hypersurface iff \beta_{;a} T^a = 0. This should be familiar from multivariable calculus. Taking a literal reading of the question, I could take any multiple of that and still get a normal covector.


Where does \beta_{;a} T^a = 0 come from though?
Reply 84
Original post by latentcorpse
Where does \beta_{;a} T^a = 0 come from though?


Think about the definition of a level surface and what it means for a vector to be tangent to a level surface. This should be covered in any good multivariable calculus textbook.
Original post by Zhen Lin
Think about the definition of a level surface and what it means for a vector to be tangent to a level surface. This should be covered in any good multivariable calculus textbook.


Yeah so it's basically like saying that their inner product is 0 so they must be orthogonal, and if \beta_{;a} is normal to the surface then T^a must be tangent to the surface.

But why is n_a = \beta_{;a}?
Reply 86
Original post by latentcorpse
Yeah so it's basically like saying that their inner product is 0 so they must be orthogonal, and if \beta_{;a} is normal to the surface then T^a must be tangent to the surface.

But why is n_a = \beta_{;a}?


Did I say it was? In all likelihood it's probably n_a = \frac{\beta_{;a}}{\sqrt{g^{bc} \beta_{;b} \beta_{;c}}}. But try it with n_a = f \beta_{;a} for arbitrary f first and see where you get.
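For what it's worth, here is how I'd expect the cancellation to go, using only that \beta_{;ab} is symmetric in a and b for a torsion-free connection. With n_a = f \beta_{;a},

n_{[a;b} n_{c]} = f \beta_{;[a} f_{;b} \beta_{;c]} + f^2 \beta_{;[ab} \beta_{;c]}

and the first term dies under the antisymmetrisation because \beta_{;a} \beta_{;c} is symmetric in a and c, while the second dies because \beta_{;ab} is symmetric in a and b. If I haven't slipped up, that kills everything for arbitrary f.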
Original post by Zhen Lin
Did I say it was? In all likelihood it's probably n_a = \frac{\beta_{;a}}{\sqrt{g^{bc} \beta_{;b} \beta_{;c}}}. But try it with n_a = f \beta_{;a} for arbitrary f first and see where you get.


Ok yeah but what I don't get is why \beta_{;a} is normal to the surface?


And if you could just confirm that for a vector field \xi^a and a scalar field \Phi, the Lie derivative is as follows:

L_\xi \Phi = \xi(\Phi) since L_X f = X(f) by defn
= \xi^\mu \frac{\partial}{\partial x^\mu}(\Phi) = \xi^\mu \Phi_{,\mu} by expanding in a coordinate basis
= \xi^\mu \nabla_\mu \Phi since covariant and partial derivatives are the same on scalar fields by definition.
This is now a scalar equation and so must be true in any basis, so we can write it in abstract indices as
L_\xi \Phi = \xi^a \nabla_a \Phi (this was the result I was after; I was just hoping you could confirm that my method is correct?)

Thanks!
Reply 88
Original post by latentcorpse
Ok yeah but what I don't get is why \beta_{;a} is normal to the surface?


If you accept that \beta_{;a} T^a = 0 is a necessary and sufficient condition for T^a to be tangent to the level surface, then this follows automatically. If you don't see why that's true, then go think about what the expression \beta_{;a} T^a means. (It's a directional derivative. When is a directional derivative zero?)
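Spelled out, in case it helps: if x^a(\lambda) is any curve lying in the level surface, then \beta(x(\lambda)) is constant along it, so

0 = \frac{d}{d\lambda} \beta(x(\lambda)) = \beta_{;a} \frac{dx^a}{d\lambda} = \beta_{;a} T^a

where T^a is the tangent vector to the curve. So the tangent vectors to the surface are exactly the ones annihilated by \beta_{;a}, which is what it means for \beta_{;a} to be normal.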

And if you could just confirm that for a vector field \xi^a and a scalar field \Phi, the Lie derivative is as follows:

L_\xi \Phi = \xi(\Phi) since L_X f = X(f) by defn
= \xi^\mu \frac{\partial}{\partial x^\mu}(\Phi) = \xi^\mu \Phi_{,\mu} by expanding in a coordinate basis
= \xi^\mu \nabla_\mu \Phi since covariant and partial derivatives are the same on scalar fields by definition.
This is now a scalar equation and so must be true in any basis, so we can write it in abstract indices as
L_\xi \Phi = \xi^a \nabla_a \Phi (this was the result I was after; I was just hoping you could confirm that my method is correct?)


That works. But I would just skip from \xi(\Phi) to \nabla_\xi \Phi and then write \xi^a \nabla_a \Phi. These are all basically different ways of writing the same thing, by definition.
Original post by Zhen Lin
If you accept that \beta_{;a} T^a = 0 is a necessary and sufficient condition for T^a to be tangent to the level surface, then this follows automatically. If you don't see why that's true, then go think about what the expression \beta_{;a} T^a means. (It's a directional derivative. When is a directional derivative zero?)



That works. But I would just skip from \xi(\Phi) to \nabla_\xi \Phi and then write \xi^a \nabla_a \Phi. These are all basically different ways of writing the same thing, by definition.


I'm having a bit of trouble in the example on p114.
Now I know that a cylinder is an intrinsically flat object but I want to prove this, i.e. find K = 0.

So I assume I use (382)? Is there a faster way than to work out all the Christoffel symbols and then compute the Riemann tensor and substitute back into (382)? Surely there must be?
I mean, he just writes on the 3rd line of the example "the metric is flat" like it's totally obvious!
Reply 90
Original post by latentcorpse
I'm having a bit of trouble in the example on p114.
Now I know that a cylinder is an intrinsically flat object but I want to prove this, i.e. find K = 0.

So I assume I use (382)? Is there a faster way than to work out all the Christoffel symbols and then compute the Riemann tensor and substitute back into (382)? Surely there must be?
I mean, he just writes on the 3rd line of the example "the metric is flat" like it's totally obvious!


Sure, you can do that if you want. It's tedious. The usual way of doing it is to observe that there is an isometric (i.e. metric-preserving) diffeomorphism from (an open subset of) flat space to (an open subset of) the cylinder. Isometries preserve the various geometric invariants (e.g. curvature) so you conclude K = 0 on (that open subset of) the cylinder.

In other words, the fact that you can pick up a sheet of paper and roll it into a cylinder without introducing crinkles can be turned into a rigorous proof that the cylinder is flat. (Similarly, the cone is flat.)
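If you did want to grind it out for the cylinder, it's actually quick, because in the obvious coordinates the induced metric has constant components, so every Christoffel symbol vanishes and with them the Riemann tensor. A small sympy sketch, taking a cylinder of radius a in Euclidean 3-space parametrised by (\phi, z) (my choice of parametrisation, not necessarily the one in the notes):

import sympy as sp

a, phi, z = sp.symbols('a phi z', positive=True)

# Embed a cylinder of radius a in Euclidean 3-space, parametrised by (phi, z)
X = sp.Matrix([a * sp.cos(phi), a * sp.sin(phi), z])

coords = [phi, z]
J = X.jacobian(coords)

# Induced metric (first fundamental form): constant components diag(a**2, 1)
g = sp.simplify(J.T * J)
ginv = g.inv()
n = len(coords)

# Christoffel symbols Gamma^k_{ij} = (1/2) g^{kl} (g_{li,j} + g_{lj,i} - g_{ij,l});
# every term is a derivative of a constant, so they all vanish,
# and with them the Riemann tensor and the Gaussian curvature K.
Gamma = [[[sp.simplify(sum(ginv[k, l] * (sp.diff(g[l, i], coords[j])
                                         + sp.diff(g[l, j], coords[i])
                                         - sp.diff(g[i, j], coords[l]))
                           for l in range(n)) / 2)
           for j in range(n)] for i in range(n)] for k in range(n)]

print(g)       # Matrix([[a**2, 0], [0, 1]])
print(Gamma)   # [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]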
Original post by Zhen Lin
Sure, you can do that if you want. It's tedious. The usual way of doing it is to observe that there is an isometric (i.e. metric-preserving) diffeomorphism from (an open subset of) flat space to (an open subset of) the cylinder. Isometries preserve the various geometric invariants (e.g. curvature) so you conclude K = 0 on (that open subset of) the cylinder.

In other words, the fact that you can pick up a sheet of paper and roll it into a cylinder without introducing crinkles can be turned into a rigorous proof that the cylinder is flat. (Similarly, the cone is flat.)


Excellent.

On p115, in (394) he writes that (\iota \circ \tilde{\Lambda})^*(\eta) = \tilde{\Lambda}^*(\iota^*(\eta)).
Why is that?

And at the end of (394) he has \iota^*(\Lambda^*(\eta)) = \iota_*(\eta).
Surely this is a typo and it should be \iota^*(\eta)?

And in the remark at the top of p116, he says an n-dimensional manifold is maximally symmetric if it has n(n+1)/2 linearly independent Killing vector fields, and he claims that this 5-dimensional de Sitter spacetime is maximally symmetric even though it only has 10 Killing vector fields (but from the formula it should have 15)?
Reply 92
Original post by latentcorpse
Excellent.

On p115, in (394) he writes that (\iota \circ \tilde{\Lambda})^*(\eta) = \tilde{\Lambda}^*(\iota^*(\eta)).
Why is that?


By definition of pullback, I would think. Pullback is basically precomposition, i.e. f^*g is essentially g \circ f. But there are some complications and subtleties here because a diffeomorphism induces pullbacks on many different types of objects. (I'm too tired to check what's actually going on here.)
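At the level of functions, say, it's just functoriality of pullback: (f \circ g)^* = g^* \circ f^*, because

(f \circ g)^* \eta = \eta \circ (f \circ g) = (\eta \circ f) \circ g = g^*(f^* \eta),

and the same identity holds for the pullback on forms of any degree.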

And at the end of (394) he has \iota^*(\Lambda^*(\eta)) = \iota_*(\eta).
Surely this is a typo and it should be \iota^*(\eta)?


Probably.

And in the remark at the top of p116, he says an n-dimensional manifold is maximally symmetric if it has n(n+1)/2 linearly independent Killing vector fields, and he claims that this 5-dimensional de Sitter spacetime is maximally symmetric even though it only has 10 Killing vector fields (but from the formula it should have 15)?


I presume this de Sitter spacetime is the submanifold M he talks about in the preceding section. M is 4-dimensional. Don't be confused by the fact that it's embedded in a 5-dimensional manifold. (Analogy: the 2-sphere S^2 can be embedded in \mathbb{R}^3 as the boundary of the 3-dimensional unit ball. But S^2 is nonetheless a 2-dimensional surface.)
Original post by Zhen Lin

I presume this de Sitter spacetime is the submanifold M he talks about in the preceding section. M is 4-dimensional. Don't be confused by the fact that it's embedded in a 5-dimensional manifold. (Analogy: the 2-sphere S^2 can be embedded in \mathbb{R}^3 as the boundary of the 3-dimensional unit ball. But S^2 is nonetheless a 2-dimensional surface.)


So I guess this is kind of like saying that if we specify 4 of \{ x^0, x^1, x^2, x^3, x^4 \} then the fifth is given by eqn (393), and hence there are 4 independent parameters for this de Sitter spacetime - meaning it is four-dimensional. Correct?

However, he says (again in that remark) that "just like 4d Minkowski spacetime, it is maximally symmetric". This suggests 4d Minkowski spacetime is maximally symmetric. From the formula, there will be 4*5/2 = 10 linearly independent Killing vector fields. However, surely this time we'd be using a 4x4 antisymmetric matrix and so there would only be 6 independent parameters. Clearly 10 is not equal to 6! So I'm confused as to what the other four are: could it be that we get four from the fact that the Minkowski metric doesn't depend on any of t, x, y or z, and then the other 6 come from the Lorentz transformations?


And just above (397), he says that the de Sitter metric (see eqn (396)) is a metric of constant curvature with K = \frac{1}{L^2}. Do you know how to show that this is the value of K? I was going to use (382) but was wondering if there was a better way than having to calculate the Riemann tensor?
Reply 94
Original post by latentcorpse
So I guess this is kind of like saying that if we specify 4 of \{ x^0, x^1, x^2, x^3, x^4 \} then the fifth is given by eqn (393), and hence there are 4 independent parameters for this de Sitter spacetime - meaning it is four-dimensional. Correct?


Yes. Of course, this is a claim with some technical content in it and should be proven. (Recall that the definition of the dimension of a manifold requires you to check facts about the tangent spaces.)

However, he says (again in that remark) that "just like 4d Minkowski spacetime, it is maximally symmetric". This suggests 4d Minkowski spacetime is maximally symmetric. From the formula, there will be 4*5/2 = 10 linearly independent Killing vector fields. However, surely this time we'd be using a 4x4 antisymmetric matrix and so there would only be 6 independent parameters. Clearly 10 is not equal to 6! So I'm confused as to what the other four are: could it be that we get four from the fact that the Minkowski metric doesn't depend on any of t, x, y or z, and then the other 6 come from the Lorentz transformations?


Yes, you forgot about translation symmetry.
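Explicitly (if I've remembered the signs right), in standard coordinates (t, x, y, z) the ten can be taken to be

\partial_t, \partial_x, \partial_y, \partial_z (four translations),
x \partial_y - y \partial_x, y \partial_z - z \partial_y, z \partial_x - x \partial_z (three rotations),
t \partial_x + x \partial_t, t \partial_y + y \partial_t, t \partial_z + z \partial_t (three boosts),

which matches the 4 + 6 = 10 counting.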

And just above (397), he says that the de Sitter metric (see eqn (396)) is a metric of constant curvature with K = \frac{1}{L^2}. Do you know how to show that this is the value of K? I was going to use (382) but was wondering if there was a better way than having to calculate the Riemann tensor?


It's well-known that a hyperboloid is a surface of constant curvature, like a sphere, but I don't know how that can be proven.
Original post by Zhen Lin
Yes. Of course, this is a claim with some technical content in it and should be proven. (Recall that the definition of the dimension of a manifold requires you to check facts about the tangent spaces.)

I don't think this definition was in our notes. What would I have to check?

Original post by Zhen Lin

Yes, you forgot about translation symmetry.

So the Killing vectors correspond to four translational symmetries and 6 Lorentz symmetries then?
Reply 96
Original post by latentcorpse
I don't think this definition was in our notes. What would I have to check?


Nevermind then. (The point is that the dimension of the tangent space is equal to the dimension of the manifold.)

So the Killing vectors correspond to four translational symmetries and 6 Lorentz symmetries then?


I believe so.
Original post by Zhen Lin
Nevermind then. (The point is that the dimension of the tangent space is equal to the dimension of the manifold.)



I believe so.


Hey. Sorry to "re-open" this thread but I was wondering if you could clarify something in my notes for another class which also uses differential geometry:

The orbits (integral curves) of a vector field \xi are the curves x(\lambda) to which \xi is everywhere tangent:

\xi^\mu(x)|_{x=x(\lambda)} = f(\lambda) \dot{x}^\mu where \dot{x}^\mu = \frac{dx^\mu}{d\lambda}

Then it says this statement is equivalent to

\xi(x(\lambda)) = f(\lambda) \frac{d}{d\lambda}

Ok well the first statement kind of makes sense - it's saying that \xi is in the same direction as the tangent vector, right?

I don't see how this is equivalent to the 2nd one though, because if the second one were true and we were to use the fact that \xi = \xi^\mu \frac{d}{dx^\mu} then \xi^\mu = f(\lambda)

I'm 99% certain that what I just wrote in that last sentence is wrong because it would never be \frac{d}{dx^\mu}, it would always be \frac{d}{d\lambda}, but then I don't understand, and was hoping you could sort out how we go from the 2nd statement to the 1st or vice versa? Thanks!
Reply 98
Original post by latentcorpse
Hey. Sorry to "re-open" this thread but I was wondering if you could clarify something in my notes for another class which also uses differential geometry:

The orbits (integral curves) of a vector field \xi are the curves x(\lambda) to which \xi is everywhere tangent:

\xi^\mu(x)|_{x=x(\lambda)} = f(\lambda) \dot{x}^\mu where \dot{x}^\mu = \frac{dx^\mu}{d\lambda}

I'm sorry, but this notation is incomprehensible. I can't even begin to guess what is meant here.
Original post by Zhen Lin
I'm sorry, but this notation is incomprehensible. I can't even begin to guess what is meant here.


I was wondering if you could take a look at eqn (148) in those notes. How does he get that from the geodesic eqn? Shouldn't there be a contribution from the \frac{d^2x}{d\tau^2} term?

Thanks.
