    • Thread Starter
    The book I'm reading has considered the series \sum_{n=1}^{\infty} (-1)^{n+1} \frac{1}{n} and has shown that while it converges, the sum can take any value if the order of the terms is rearranged.
    For example: 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-... = \ln 2 \textit{ but } 1+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}-\frac{1}{2}+\frac{1}{9}+... = \ln 4
    I don't really understand this, as addition of positive or negative terms is commutative, so how does it matter in what order you add the terms? Surely if you sum the terms to infinity all terms are accounted for, regardless of how far along the list they come?

    (Original post by Gaz031)
    The book I'm reading has considered the series \sum_{n=1}^{\infty} (-1)^{n+1} \frac{1}{n} and has shown that while it converges, the sum can take any value if the order of the terms is rearranged.
    For example: 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\frac{1}{5}-... = \ln 2 \textit{ but } 1+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}-\frac{1}{2}+\frac{1}{9}+... = \ln 4
    I don't really understand this, as addition of positive or negative terms is commutative, so how does it matter in what order you add the terms? Surely if you sum the terms to infinity all terms are accounted for, regardless of how far along the list they come?
    This is a theorem I think you will get to prove in the second Analysis course. When you're dealing with adding infinitely many terms, the notion of commutativity doesn't fully carry over. I don't know why, and I would be glad to see a proof of this.

    It's like asking how two different types of infinity can be different in size if they are both infinity. The answer is that it can occur: the set of irrationals has greater cardinality than the set of rationals.

    1 + 1/3 + 1/5 + ... = infinity
    1/2 + 1/4 + 1/6 + ... = infinity

    After you've added a million terms, in whatever order, there is still infinitely much to add and infinitely much to subtract. So it's not surprising that the order in which you take the remaining terms is important.

    --

    Another example: 1, -1, 1, -1, 1, -1, ...

    If you add up in the natural order, the partial sums oscillate between 0 and 1.

    But you could take two ones, then a minus one, then two ones, and so on. The partial sums then tend to infinity.
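    As a rough numerical sketch of this point (a small Python example, not from the thread itself), the same ±1 terms taken in two different orders give partial sums that behave completely differently:

    # Partial sums of 1, -1, 1, -1, ... in the natural order:
    # they oscillate between 1 and 0 forever.
    terms = [1 if k % 2 == 0 else -1 for k in range(12)]
    running, natural_sums = 0, []
    for t in terms:
        running += t
        natural_sums.append(running)
    print(natural_sums)      # [1, 0, 1, 0, 1, 0, ...]

    # The same terms taken as "two ones, then a minus one" each time:
    # the partial sums now drift upwards without bound.
    rearranged = [1, 1, -1] * 4
    running, rearranged_sums = 0, []
    for t in rearranged:
        running += t
        rearranged_sums.append(running)
    print(rearranged_sums)   # [1, 2, 1, 2, 3, 2, 3, 4, 3, 4, 5, 4] - and it keeps growing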

    Two things:

    One is that you can rearrange an absolutely convergent series however you want and you'll end up with the same answer.

    Two is that Riemann showed that a convergent, but not absolutely convergent, series can be rearranged to converge to anything. Note that the positive terms add to infinity and the negative terms to minus infinity. So you can dip into the positives to get roughly what you need limit-wise, then into the negatives, then back into the positives, etc., to get whatever limit you need.

    Generally, in the example you gave, if you take p positives then q negatives each time you get a limit of ln(2√(p/q)), which I'll prove for you tomorrow (when I have more time) if you wish.
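    A rough numerical check of that claim (a Python sketch; the function name rearranged_sum and the number of blocks are arbitrary choices for illustration):

    import math

    def rearranged_sum(p, q, blocks):
        """Sum 1 - 1/2 + 1/3 - 1/4 + ... rearranged as p positive terms,
        then q negative terms, repeated `blocks` times."""
        total = 0.0
        pos_used, neg_used = 0, 0
        for _ in range(blocks):
            for _ in range(p):
                pos_used += 1
                total += 1.0 / (2 * pos_used - 1)   # positives: 1, 1/3, 1/5, ...
            for _ in range(q):
                neg_used += 1
                total -= 1.0 / (2 * neg_used)       # negatives: 1/2, 1/4, 1/6, ...
        return total

    for p, q in [(1, 1), (4, 1), (1, 4)]:
        print(p, q, rearranged_sum(p, q, 100_000), math.log(2 * math.sqrt(p / q)))
    # (1,1) gives ln 2, (4,1) gives ln 4, (1,4) gives ln 1 = 0, matching the formula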
    • Thread Starter
    Firstly, thanks for the replies.

    This is a theorem I think you will get to prove in the second Analysis course. When you're dealing with adding infinitely many terms, the notion of commutativity doesn't fully carry over. I don't know why, and I would be glad to see a proof of this.
    Are there many other properties that don't hold for infinite summations?

    It's like asking how two different types of infinity can be different in size if they are both infinity. The answer is that it can occur: the set of irrationals has greater cardinality than the set of rationals.
    I understand why we can have different sized infinities in some contexts but surely that doesn't apply here, as we are supposedly adding all possible terms in the series.
    My definition of convergence of \sum_{n=1}^{\infty} a_{n} is that the partial sums s_{n} tend to a finite limit as n \rightarrow \infty, where 'a finite limit' implies the limit is unique, so we only have one sum. Is changing the order of the terms changing the actual series?

    1 + 1/3 + 1/5 + ... = infinity
    1/2 + 1/4 + 1/6 + ... = infinity

    After you've added a million terms, in whatever order, there is still infinitely much to add and infinitely much to subtract. So it's not surprising that the order in which you take the remaining terms is important.
    I thought (though this isn't a proper definition) that \sum_{n=1}^{\infty} a_{n} adds terms until a_{n} \approx 0 and so you'd keep adding the negative terms even after the positive terms were approximately zero.

    Another example: 1, -1, 1, -1, 1, -1, ...
    That's a nice clear example, but I wouldn't call that series convergent as \lim_{n \rightarrow \infty} a_{n} \neq 0.

    One is that you can rearrange an absolutely convergent series however you want and you'll end up with the same answer.
    I'm just about to move onto that.

    Two is that Riemann showed that a convergent, but not absolutely convergent, series can be rearranged to converge to anything. Note that the positive terms add to infinity and the negative terms to minus infinity. So you can dip into the positives to get roughly what you need limit-wise, then into the negatives, then back into the positives, etc., to get whatever limit you need.
    But if you're taking out only what you need then surely you aren't summing every term?

    Generally, in the example you gave, if you take p positives then q negatives each time you get a limit of ln(2√(p/q)), which I'll prove for you tomorrow (when I have more time) if you wish.
    That would be interesting if you have time.

    (Original post by Gaz031)
    Are there many other properties that don't hold for infinite summations?
    Commutativity and associativity of the real numbers guarantee that however a finite sum is calculated, the same answer will be attained.

    However many (finitely many) times the rules

    a+b=b+a
    (a+b)+c = a+(b+c)

    are applied to the terms of an infinite sum, there are some rearrangements of the terms, like taking all the positives to the front and the negatives to the end, that can't be achieved.

    (Original post by Gaz031)
    But if you're taking out only what you need then surely you aren't summing every term?
    We're looking at the series with terms

    1, -1/2, 1/3, -1/4, 1/5, ...

    which if taken in that order sum to log2.

    Let's say I wished to rearrange the series in such a way that it sums to

    √2 = 1.414...

    Then I would start like this.

    Term (in order)    Cumulative sum
    1                  1
    1/3                1.3333
    1/5                1.5333   <now gone too far - take next negative>
    -1/2               1.0333   <too low - take next positive>
    1/7                1.1762
    1/9                1.2873
    1/11               1.3782
    1/13               1.4551   <too big again - take next negative>
    -1/4               1.2051   <too low - take next positive>

    etc

    But note in the list

    1 + 1/3 + 1/5 - 1/2 + 1/7 + 1/9 + 1/11 + 1/13 - 1/4 + ...

    I'm gonna have all the terms - and I hope you can see that if I keep going like that I will eventually converge on √2
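    The dipping strategy above is easy to mechanise; a rough Python sketch of it follows (the target √2 and the 10,000-term cut-off are just for illustration):

    import math

    target = math.sqrt(2)
    total = 0.0
    next_odd, next_even = 1, 2   # next unused positive term is 1/next_odd,
                                 # next unused negative term is -1/next_even
    rearrangement = []

    for _ in range(10_000):
        if total <= target:          # at or below the target: take the next positive
            term = 1.0 / next_odd
            next_odd += 2
        else:                        # above the target: take the next negative
            term = -1.0 / next_even
            next_even += 2
        total += term
        rearrangement.append(term)

    print(rearrangement[:9])   # 1, 1/3, 1/5, -1/2, 1/7, 1/9, 1/11, 1/13, -1/4, as in the table
    print(total)               # close to 1.41421..., and every term is eventually used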

    (Original post by Gaz031)
    That would be interesting if you have time.
    Let a_n = 1 + 1/2 + 1/3 + ... + 1/n - log n

    Then (I will show this later if you wish) it is the case that a_n tends to Euler's constant (denoted gamma or C - so let's use C).

    We wish to sum the terms

    1, -1/2, 1/3, -1/4, ...

    where we are taking p positives then q negatives at a time. Let s_n denote the sum to n terms of this series.

    Note

    s_(p+q) = (1+1/3+...+1/(2p-1)) - (1/2+1/4+...+1/(2q))

    More generally

    s_[k(p+q)] = (1 + 1/3 + ... + 1/(2kp-1)) - (1/2 + 1/4 + ... + 1/(2kq)) =

    (1 + 1/2 + 1/3 + 1/4 + ... + 1/(2kp))
    - (1/2 + 1/4 + ... + 1/(2kp))
    - (1/2 + 1/4 + ... + 1/(2kq)) =

    a_(2kp) + log(2kp)
    - 1/2 [a_(kp) + log(kp)]
    - 1/2 [a_(kq) + log(kq)] =

    (a_(2kp) - 1/2 a_(kp) - 1/2 a_(kq))
    + log(2√(p/q))

    since log(2kp) - 1/2 log(kp) - 1/2 log(kq) = log(2p/(√p √q)) = log(2√(p/q)), the factors of k cancelling.

    now letting k tend to infinity

    (C-C/2-C/2) + log(2√(p/q)) = log(2√(p/q))

    So s_[k(p+q)] -> log(2√(p/q))

    Then s_n -> log(2√(p/q)) as n->∞

    since a general s_n differs from the last s_[k(p+q)] by at most p+q terms, each of which tends to zero.

    Note that in the examples you first gave you had p = q = 1 and p = 4, q = 1, which agree with this formula.
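    (A quick numerical check of the two ingredients in Python, in case it helps: that a_n really does settle down to Euler's constant, and that the formula reproduces the ln 4 example. The value of C is hard-coded, since Python's math module doesn't provide it.)

    import math

    C = 0.5772156649015329          # Euler's constant, hard-coded

    def a(n):
        """a_n = 1 + 1/2 + ... + 1/n - log n, which tends to Euler's constant C."""
        return sum(1.0 / k for k in range(1, n + 1)) - math.log(n)

    for n in (10, 1_000, 100_000):
        print(n, a(n), a(n) - C)            # the error shrinks roughly like 1/(2n)

    print(math.log(2 * math.sqrt(4 / 1)))   # log(2√(p/q)) with p=4, q=1: ln 4 ≈ 1.3863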
    • Thread Starter
    Let's say I wished to rearrange the series in such a way that it sums to

    √2 = 1.414...

    Then I would start like this.

    Term (in order)    Cumulative sum
    1                  1
    1/3                1.3333
    1/5                1.5333   <now gone too far - take next negative>
    -1/2               1.0333   <too low - take next positive>
    1/7                1.1762
    1/9                1.2873
    1/11               1.3782
    1/13               1.4551   <too big again - take next negative>
    -1/4               1.2051   <too low - take next positive>

    etc

    But note in the list

    1 + 1/3 + 1/5 - 1/2 + 1/7 + 1/9 + 1/11 + 1/13 - 1/4 + ...

    I'm gonna have all the terms - and I hope you can see that if I keep going like that I will eventually converge on √2
    I can see what you mean but surely the positive terms are going to zero much faster than the negative terms, so very far down the line the negative terms will start overcancelling the positives?

    (Original post by Gaz031)
    I can see what you mean but surely the positive terms are going to zero much faster than the negative terms, so very far down the line the negative terms will start overcancelling the positives?
    By themselves the positives add to infinity and the negatives to minus infinity.

    So you can keep dipping into the positives or negatives to get back over or under root 2 - you never run out of either. Even if at points it takes a hundred or a million positives to get back over √2 we know it will eventually happen.

    And because we always take the next unused positive or the next unused negative, all the terms of the series will appear. We'll be going through the positives faster than the negatives as we're aiming to converge to a positive number, but all terms will eventually be included.
    • Thread Starter
    (Original post by RichE)
    Let a_n = 1 + 1/2 + 1/3 + ... + 1/n - log n

    Then (I will show this later if you wish) it is the case that a_n tends to Euler's constant (denoted gamma or C - so let's use C).

    We wish to sum the terms

    1, -1/2, 1/3, -1/4, ...

    where we are taking p positives then q negatives at a time. Let s_n denote the sum to n terms of this series.

    Note

    s_(p+q) = (1+1/3+...+1/(2p-1)) - (1/2+1/4+...+1/(2q))

    More generally

    s_[k(p+q)] = (1 + 1/3 + ... + 1/(2kp-1)) - (1/2 + 1/4 + ... + 1/(2kq)) =

    (1 + 1/2 + 1/3 + 1/4 + ... + 1/(2kp))
    - (1/2 + 1/4 + ... + 1/(2kp))
    - (1/2 + 1/4 + ... + 1/(2kq)) =

    a_(2kp) + log(2kp)
    - 1/2 [a_(kp) + log(kp)]
    - 1/2 [a_(kq) + log(kq)] =

    (a_(2kp) - 1/2 a_(kp) - 1/2 a_(kq))
    + log(2√(p/q))

    since log(2kp) - 1/2 log(kp) - 1/2 log(kq) = log(2p/(√p √q)) = log(2√(p/q)), the factors of k cancelling.

    now letting k tend to infinity

    (C-C/2-C/2) + log(2√(p/q)) = log(2√(p/q))

    So s_[k(p+q)] -> log(2√(p/q))

    Then s_n -> log(2√(p/q)) as n->∞

    since a general s_n differs from the last s_[k(p+q)] by at most p+q terms, each of which tends to zero.

    Note that in the examples you first gave you had p = q = 1 and p = 4, q = 1, which agree with this formula.
    Thanks for the post, this sheds some light. I know about Euler's constant.
    I see that with your expression s_{n} \rightarrow log\left( 2\sqrt{\frac{p}{q}} \right) you'd be able to approximate any real by taking certain values of p and q, so I now know how this works but am just getting my head around it.
    I assume that log means to base e in this context?
    Perhaps I should surrender what I know from basic algebra more readily.
    I think i'll record your post for future reference.
    • Thread Starter
    (Original post by RichE)
    By themselves the positives add to infinity and the negatives to minus infinity.

    So you can keep dipping into the positives or negatives to get back over or under root 2 - you never run out of either. Even if at points it takes a hundred or a million positives to get back over √2 we know it will eventually happen.

    And because we always take the next unused positive or the next unused negative, all the terms of the series will appear. We'll be going through the positives faster than the negatives as we're aiming to converge to a positive number, but all terms will eventually be included.
    Ah, I see. I almost forgot that \sum \frac{1}{n} itself isn't convergent, so I shouldn't really think of collections of terms as going to zero, because the terms can always be grouped to make up a given value.
    When we say our series converges in this case, do we mean that as n \rightarrow \infty our chosen 'blocks' of numbers tend to zero (and so the value stops changing)?

    (Original post by Gaz031)
    Thanks for the post, this sheds some light. I know about Euler's constant.
    I see that with your expression s_{n} \rightarrow log\left( 2\sqrt{\frac{p}{q}} \right) you'd be able to approximate any real by taking certain values of p and q, so I now know how this works but am just getting my head around it.
    I assume that log means to base e in this context?
    Perhaps I should surrender what I know from basic algebra more readily.
    I think i'll record your post for future reference.
    Well, I haven't shown that the limit can be any real number - though I have shown that the attainable limits come arbitrarily close to any number. For example, I haven't shown the limit could be root 2 (though it could be, by my earlier comments).

    Yes I write log for ln

    Don't quite get the basic algebra surrender comment :confused: Infinite sums are about convergence not really about algebra.

    I'm more than happy to explain further if you have other questions or some points are still confusing

    (Original post by Gaz031)
    When we say our series converges in this case, do we mean that as n \rightarrow \infty our chosen 'blocks' of numbers tend to zero (and so the value stops changing)?
    When we say our series converges we mean it in the usual sense of series converging.

    If you keep track of the cumulative sums (known as partial sums), this gives a sequence of reals that tends to the limit.

    And if you follow my algorithm of dipping into the positives or the negatives at each stage depending on whether the partial sum is below or above root 2 you will have a series which converges to root2.

    It doesn't really have anything to do with the blocks tending to zero. It means that the errors between the partial sums and the limit tend to zero.

    An important fact with series is that if the series converges then the nth term tends to zero, but the converse does not hold - e.g. the harmonic series.
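    A tiny numerical illustration of that last fact (Python; the cut-offs are arbitrary): the terms 1/n of the harmonic series tend to zero, yet its partial sums keep growing like log n and never settle on a limit.

    import math

    partial = 0.0
    for n in range(1, 1_000_001):
        partial += 1.0 / n
        if n in (10, 1_000, 1_000_000):
            # nth term -> 0, but the partial sum tracks log n + 0.577... and keeps growing
            print(n, 1.0 / n, partial, math.log(n))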
    • Thread Starter
    (Original post by RichE)
    Well, I haven't shown that the limit can be any real number - though I have shown that the attainable limits come arbitrarily close to any number. For example, I haven't shown the limit could be root 2 (though it could be, by my earlier comments).
    If by 'arbitrarily close' you mean that for any \epsilon > 0 we have |s_{n}-\sqrt{2}| < \epsilon for all n > N, then perhaps it does converge to \sqrt{2}.

    Don't quite get the basic algebra surrender comment :confused: Infinite sums are about convergence not really about algebra.
    Well, I probably meant intuitive thoughts really, i.e. that we don't really have commutativity. I'm probably wrong in doing so, but I seem to think of everything in which we apply operations to 'terms' as having some sort of algebra in it.

    An important fact with series is that if the series converges then the nth term tends to zero, but the converse does not hold - e.g. the harmonic series.
    Yes, it was made explicit that \lim_{n \rightarrow \infty} a_{n} \neq 0 means divergence, but not the converse.

    I'm more than happy to explain further if you have other questions or some points are still confusing
    I pretty much understand this concept now, but thanks for the offer. I think the key thing to understand was why the lack of convergence of \sum \frac{1}{n} means a collection of terms adding up to any required amount can always be found, so we never 'exhaust' our list of positives. The proof helped to make it clearer too.

    Thanks to those who posted for your patience.

    (Original post by Gaz031)
    If by 'arbitrarily close' you mean that for any \epsilon > 0 we have |s_{n}-\sqrt{2}| < \epsilon for all n > N, then perhaps it does converge to \sqrt{2}.
    Well, if s_n denotes the sum of the first n terms in the series I was algorithmically constructing, then yes, it is the case that

    for all e>0 there exists N such that for all n>N |s_n - √2| < e

    but I'm sure you knew that as the defn of convergence.

    My point was a different one - that I hadn't shown in the p+q grouping part that the limit could be anything, only that the limits I had attained were spread "densely" amongst the real numbers.

    But it wasn't an important point.
    • Thread Starter
    Are you referring to how p and q in log\left( 2\sqrt{\frac{p}{q}} \right) can only be integers, and thus we can only get certain sums?
    Could you make p and q vary as you progress through the series so that you could obtain the other sums that way?

    Do you use your holidays for what they're meant for :rolleyes: ...sleeping...resting...sleeping some more?!?!?

    STOP STUDYING:eek:, lol

    (Original post by Gaz031)
    Are you referring to how p and q in log\left( 2\sqrt{\frac{p}{q}} \right) can only be integers, and thus we can only get certain sums?
    Could you make p and q vary as you progress through the series so that you could obtain the other sums that way?
    Yes that's essentially what I did with the root 2 case. That would extend generally - I just used root 2 to help demonstrate a specific example.

    If you wished the limit to be L (any real number) you could use the same idea of dipping into the positives or negatives depending on whether the partial sum was currently below or above L.

    Riemann showed (it's not actually that difficult) that any L can be attained from rearranging any series that is convergent but not absolutely convergent.

    For an absolutely convergent series the positives will add to some finite limit, and similarly the negatives. So it isn't possible to keep dipping into an infinite sum of positives or negatives as we did earlier. In the absolutely convergent case, even taking all the positives at once, the effect would be finite.
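    For contrast, a rough Python sketch of the absolutely convergent case (the example series sum (-1)^(n+1)/n^2, whose value is pi^2/12, is chosen just for illustration and isn't from the thread): even the extreme 'all positives first' reordering of the first 200,000 terms barely moves the answer.

    import math

    N = 200_000
    terms = [(-1) ** (n + 1) / n**2 for n in range(1, N + 1)]

    natural_order = sum(terms)                       # 1 - 1/4 + 1/9 - 1/16 + ...
    positives_first = (sum(t for t in terms if t > 0)
                       + sum(t for t in terms if t < 0))

    print(natural_order, positives_first, math.pi ** 2 / 12)
    # all three agree to several decimal places: for an absolutely convergent
    # series the order of summation doesn't change the limit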

    (Original post by Phil23)
    Do you use your holidays for what they're meant for :rolleyes: ...sleeping...resting...sleeping some more?!?!?

    STOP STUDYING:eek:, lol
    Do you use your holidays to post unwelcome pointless drivel? :mad:

    Try making a constructive comment occasionally. There's a reason for that red gem in the corner of your posts.