
# Series query

1. The book I'm reading has considered the series 1 - 1/2 + 1/3 - 1/4 + ... and has shown that while it converges, the sum can take any value if the order of terms is rearranged.
For example:
I don't really understand this, as addition of positive or negative terms is commutative, so how does it matter in what order you add the terms? Surely if you sum the terms to infinity all terms are accounted for, regardless of how far along in the list they come?
2. (Original post by Gaz031)
The book I'm reading has considered the series 1 - 1/2 + 1/3 - 1/4 + ... and has shown that while it converges, the sum can take any value if the order of terms is rearranged.
For example:
I don't really understand this, as addition of positive or negative terms is commutative, so how does it matter in what order you add the terms? Surely if you sum the terms to infinity all terms are accounted for, regardless of how far along in the list they come?
This is a theorem I think you will get to prove in the second Analysis Course. When you're dealing with adding infinite terms, the notion of commutativity doesn't really hold 100 per cent. I don't know why, and I would be glad to see a proof of this.

It's like asking how two different types of infinities can be different in size, if they are both infinity. The answer is that it can occur: the set of irrationals has greater cardinality than the set of rationals.
3. 1 + 1/3 + 1/5 + ... = infinity
1/2 + 1/4 + 1/6 + ... = infinity

After you've added a million terms, in whatever order, there is still infinitely much to add and infinitely much to subtract. So it's not surprising that the order in which you take the remaining terms is important.

--

Another example: 1, -1, 1, -1, 1, -1, ...

If you add up in the natural order, the partial sums oscillate between 0 and 1.

But you could take two ones, then a minus one, then two ones, ... . The partial sums then tend to infinity.
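A quick numerical sketch of this (my own illustration in Python, not from the thread): the same terms, taken in two different orders, give partial sums with completely different behaviour.

```python
# Illustration (assumed/hypothetical code, not from the original posts):
# the terms 1, -1, 1, -1, ... behave very differently under reordering.

def partial_sums(terms):
    """Yield the running partial sums of an iterable of terms."""
    total = 0
    for t in terms:
        total += t
        yield total

# Natural order: 1, -1, 1, -1, ...  -> partial sums oscillate between 1 and 0.
natural = [(-1) ** k for k in range(10)]
print(list(partial_sums(natural)))      # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

# Rearranged: two +1s, then one -1, repeated -> partial sums drift upwards.
rearranged = []
for _ in range(4):
    rearranged += [1, 1, -1]
print(list(partial_sums(rearranged)))   # [1, 2, 1, 2, 3, 2, 3, 4, 3, 4, 5, 4]
```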
4. Two things:

One is that you can rearrange an absolutely convergent series however you want and you'll end up with the same answer.

Two is that Riemann showed that a convergent, but not absolutely convergent, series can be rearranged to converge to anything. Note that the positive terms add to infinity and the negative terms to minus infinity. So you can dip into the positives to get roughly what you need limit-wise, then into the negatives, then back to the positives, and so on, to get whatever limit you need.

Generally, in the example you gave, if you take p positives then q negatives each time you get a limit of ln(2√(p/q)), which I'll prove for you tomorrow (when I have more time) if you wish.
5. Firstly, thanks for the replies.

This is a theorem I think you will get to prove in the second Analysis Course. When you're dealing with adding infinite terms, the notion of commutativity doesn't really hold 100 per cent. I don't know why, and I would be glad to see a proof of this.
Are there many other properties that don't hold for infinite summations?

It's like saying how can two different types of infinities be different in size, if they are both infinity. Answer is that it can occur. The set of irrationals is considered to have greater cardinallity than the set of rationals.
I understand why we can have different sized infinities in some contexts but surely that doesn't apply here, as we are supposedly adding all possible terms in the series.
My definition for convergence of a series is that the partial sums tend to a finite limit a as n → ∞, where 'a' being a single number implies that we only have one limit. Is changing the order of the terms changing the actual series?

1 + 1/3 + 1/5 + ... = infinity
1/2 + 1/4 + 1/6 + ... = infinity

After you've added a million terms, in whatever order, there is still infinitely much to add and infinitely much to subtract. So it's not surprising that the order in which you take the remaining terms is important.
I thought (though this isn't a proper definition) that the sum adds terms until n = ∞, and so you'd keep adding the negative terms even after the positive terms were approximately zero.

Another example: 1, -1, 1, -1, 1, -1, ...
That's a nice clear example, but I wouldn't call that convergent, as the partial sums don't tend to a limit.

One is that you can rearrange an absolutely convergent series however you want and you'll end up with the same answer.
I'm just about to move onto that.

Two is that Riemann showed that a convergent, but not absolutely convergent, series can be rearranged to converge to anything. Note that the positive terms add to infinity and the negative terms to minus infinity. So you can dip into the positives to get roughly what you need limit-wise, then into the negatives, then back to the positives, and so on, to get whatever limit you need.
But if you're taking out only what you need then surely you aren't summing every term?

Generally, in the example you gave, if you take p positives then q negatives each time you get a limit of ln(2√(p/q)), which I'll prove for you tomorrow (when I have more time) if you wish.
That would be interesting if you have time.
6. (Original post by Gaz031)
Are there many other properties that don't hold for infinite summations?
Commutativity and associativity of the real numbers guarantee that however a finite sum is calculated, the same answer will be attained.

However many (finitely many) times the rules

a+b = b+a
(a+b)+c = a+(b+c)

are applied to the terms of an infinite sum, there are some rearrangements of the terms, like taking all the positives to the front and the negatives to the end, that can't be achieved.
7. (Original post by Gaz031)
But if you're taking out only what you need then surely you aren't summing every term?
We're looking at the series with terms

1, -1/2, 1/3, -1/4, 1/5, ...

which if taken in that order sum to log2.

Let's say I wished to rearrange the series in such a way that they sum to

√2 = 1.414...

Then I would start like this.

Term taken    Cumulative sum
1             1
1/3           1.3333
1/5           1.5333   <now gone too far - take next negative>
-1/2          1.0333   <too low - take next positive>
1/7           1.1762
1/9           1.2873
1/11          1.3782
1/13          1.4551   <too big again - take next negative>
-1/4          1.2051   <too low - take next positive>

etc

But note in the list

1 + 1/3 + 1/5 - 1/2 + 1/7 + 1/9 + 1/11 + 1/13 - 1/4 + ...

I'm gonna have all the terms - and I hope you can see that if I keep going like that I will eventually converge on √2
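The greedy strategy above is easy to simulate. Here is a sketch of my own in Python (the thread describes the algorithm only in words): while the running sum is at or below the target, take the next unused positive term; otherwise take the next unused negative term. It reproduces the first nine terms of the table and can be run much further.

```python
from math import sqrt

def greedy_rearrangement(target, n_terms):
    """Greedily rearrange 1 - 1/2 + 1/3 - 1/4 + ... so the partial sums
    chase `target`: take the next positive while at or below target,
    otherwise the next negative (the strategy described above)."""
    terms, s = [], 0.0
    next_pos, next_neg = 1, 2   # next odd (positive) / even (negative) denominator
    for _ in range(n_terms):
        if s <= target:
            t = 1 / next_pos
            next_pos += 2
        else:
            t = -1 / next_neg
            next_neg += 2
        terms.append(t)
        s += t
    return terms, s

terms, _ = greedy_rearrangement(sqrt(2), 9)
print(terms)   # 1, 1/3, 1/5, -1/2, 1/7, 1/9, 1/11, 1/13, -1/4 - as in the table

_, s_big = greedy_rearrangement(sqrt(2), 100000)
print(s_big)   # ≈ 1.4142..., close to √2
```

Every term is eventually used, since neither the positives nor the negatives run out.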
8. (Original post by Gaz031)
That would be interesting if you have time.
Let a_n = 1 + 1/2 + 1/3 + ... + 1/n - log n

Then (I will show this later if you wish) it is the case that a_n tends to Euler's constant (denoted gamma or C - so let's use C).

We wish to sum the terms

1, -1/2, 1/3, -1/4, ...

where we are taking p positives then q negatives at a time. Let s_n denote the sum to n terms of this series.

Note

s_(p+q) = (1+1/3+...+1/(2p-1)) - (1/2+1/4+...+1/(2q))

More generally

s_[k(p+q)] = (1 + 1/3 + ... + 1/(2kp-1)) - (1/2 + 1/4 + ... + 1/(2kq))

= (1 + 1/2 + 1/3 + 1/4 + ... + 1/(2kp))
- (1/2 + 1/4 + ... + 1/(2kp))
- (1/2 + 1/4 + ... + 1/(2kq))

= a_(2kp) + log(2kp)
- 1/2 [a_(kp) + log(kp)]
- 1/2 [a_(kq) + log(kq)]

= (a_(2kp) - 1/2 a_(kp) - 1/2 a_(kq)) + log(2√(p/q))

since log(2kp) - 1/2 log(kp) - 1/2 log(kq) = log(2kp/(k√(pq))) = log(2√(p/q)).

now letting k tend to infinity

(C-C/2-C/2) + log(2√(p/q)) = log(2√(p/q))

So s_[k(p+q)] -> log(2√(p/q))

Then s_n -> log(2√(p/q)) as n->∞

since a general s_n differs from the nearest s_[k(p+q)] by at most p+q terms, each of which tends to zero.

Note that in the examples you first gave you had p=q=1 and p=4, q=1, which agree with this formula.
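The limit formula can be checked numerically. Below is a short sketch of my own in Python (not from the thread) that sums the series in blocks of p positives followed by q negatives and compares the result with ln(2√(p/q)).

```python
from math import log, sqrt

def rearranged_sum(p, q, blocks):
    """Sum the alternating harmonic series taking p positive terms then
    q negative terms per block, for the given number of blocks."""
    s = 0.0
    pos, neg = 1, 2   # next odd / even denominator
    for _ in range(blocks):
        for _ in range(p):
            s += 1 / pos
            pos += 2
        for _ in range(q):
            s -= 1 / neg
            neg += 2
    return s

for p, q in [(1, 1), (4, 1), (1, 4)]:
    approx = rearranged_sum(p, q, 200000)
    exact = log(2 * sqrt(p / q))
    print(p, q, approx, exact)   # the two columns agree to several decimals
```

With p = q = 1 this recovers the usual sum log 2, as expected.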
9. Let's say I wished to rearrange the series in such a way that they sum to

√2 = 1.414...

Then I would start like this.

Term taken    Cumulative sum
1             1
1/3           1.3333
1/5           1.5333   <now gone too far - take next negative>
-1/2          1.0333   <too low - take next positive>
1/7           1.1762
1/9           1.2873
1/11          1.3782
1/13          1.4551   <too big again - take next negative>
-1/4          1.2051   <too low - take next positive>

etc

But note in the list

1 + 1/3 + 1/5 - 1/2 + 1/7 + 1/9 + 1/11 + 1/13 - 1/4 + ...

I'm gonna have all the terms - and I hope you can see that if I keep going like that I will eventually converge on √2
I can see what you mean but surely the positive terms are going to zero much faster than the negative terms, so very far down the line the negative terms will start overcancelling the positives?
10. (Original post by Gaz031)
I can see what you mean but surely the positive terms are going to zero much faster than the negative terms, so very far down the line the negative terms will start overcancelling the positives?
By themselves the positives add to infinity and the negatives to minus infinity.

So you can keep dipping into the positives or negatives to get back over or under root 2 - you never run out of either. Even if at points it takes a hundred or a million positives to get back over √2 we know it will eventually happen.

And because we're taking the next positive, next negative strategy then all the terms in the series will appear. We'll be going through the positives faster than the negatives as we're aiming to converge to a positive number, but all terms will eventually be included
11. (Original post by RichE)
Let a_n = 1 + 1/2 + 1/3 + ... + 1/n - log n

Then (I will show this later if you wish) it is the case that a_n tends to Euler's constant (denoted gamma or C - so let's use C).

We wish to sum the terms

1, -1/2, 1/3, -1/4, ...

where we are taking p positives then q negatives at a time. Let s_n denote the sum to n terms of this series.

Note

s_(p+q) = (1+1/3+...+1/(2p-1)) - (1/2+1/4+...+1/(2q))

More generally

s_[k(p+q)] = (1 + 1/3 + ... + 1/(2kp-1)) - (1/2 + 1/4 + ... + 1/(2kq))

= (1 + 1/2 + 1/3 + 1/4 + ... + 1/(2kp))
- (1/2 + 1/4 + ... + 1/(2kp))
- (1/2 + 1/4 + ... + 1/(2kq))

= a_(2kp) + log(2kp)
- 1/2 [a_(kp) + log(kp)]
- 1/2 [a_(kq) + log(kq)]

= (a_(2kp) - 1/2 a_(kp) - 1/2 a_(kq)) + log(2√(p/q))

since log(2kp) - 1/2 log(kp) - 1/2 log(kq) = log(2kp/(k√(pq))) = log(2√(p/q)).

now letting k tend to infinity

(C-C/2-C/2) + log(2√(p/q)) = log(2√(p/q))

So s_[k(p+q)] -> log(2√(p/q))

Then s_n -> log(2√(p/q)) as n->∞

since a general s_n differs from the nearest s_[k(p+q)] by at most p+q terms, each of which tends to zero.

Note that in the examples you first gave you had p=q=1 and p=4, q=1, which agree with this formula.
Thanks for the post - this sheds some light. I know about Euler's constant.
I see that with your expression for the limit you'd be able to approximate any real number by taking certain values of p and q, so I now know how this works but am just getting my head around it.
I assume that log means to base e in this context?
Perhaps I should surrender what I know from basic algebra more readily.
I think I'll record your post for future reference.
12. (Original post by RichE)
By themselves the positives add to infinity and the negatives to minus infinity.

So you can keep dipping into the positives or negatives to get back over or under root 2 - you never run out of either. Even if at points it takes a hundred or a million positives to get back over √2 we know it will eventually happen.

And because we're taking the next positive, next negative strategy then all the terms in the series will appear. We'll be going through the positives faster than the negatives as we're aiming to converge to a positive number, but all terms will eventually be included
Ah I see. I almost forgot that the series of positive terms by itself isn't convergent, so I shouldn't really think of collections of terms as going to zero, because the terms can be grouped to make any value.
When we say our series converges in this case, do we mean that our chosen 'blocks' of numbers tend to zero? (and so the value stops changing)
13. (Original post by Gaz031)
Thanks for the post this sheds some light. I know about Euler's constant.
I see that with your expression for you'd be able to approximate to any real by taking certain values of p and q, so I now know how this works but am just getting me head around it.
I assume that log means to base e in this context?
Perhaps I should surrender what I know from basic algebra more readily.
I think i'll record your post for future reference.
Well, I haven't shown that the limit can be any real number - though I have shown it can be arbitrarily close to any number. For example, I haven't shown the limit could be root 2 (though it could, by my earlier comments).

Yes, I write log for ln.

I don't quite get the basic algebra surrender comment. Infinite sums are about convergence, not really about algebra.

I'm more than happy to explain further if you have other questions or some points are still confusing
14. (Original post by Gaz031)
When we say our series converges in this case do we mean that as our chosen 'blocks' if numbers then tend to zero? (and so the value stops changing)
When we say our series converges we mean it in the usual sense of series converging.

If you keep the cumulative sums (known as partial sums) this is a sequence of reals that tends to the limit.

And if you follow my algorithm of dipping into the positives or the negatives at each stage depending on whether the partial sum is below or above root 2 you will have a series which converges to root2.

It doesn't really have anything to do with the blocks tending to zero. It means that the errors between the partial sums and the limit tend to zero.

An important fact with series is that if the series converges then the nth term tends to zero, but the converse does not hold - e.g. the harmonic series.
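The harmonic series counterexample is easy to see numerically. A sketch of my own in Python (not from the thread): the nth term 1/n tends to zero, yet the partial sums grow without bound, roughly like log n.

```python
def harmonic(n):
    """Partial sum 1 + 1/2 + ... + 1/n of the harmonic series."""
    return sum(1 / k for k in range(1, n + 1))

# The terms 1/n shrink to zero, but the partial sums keep growing:
for n in [10, 1000, 100000]:
    print(n, harmonic(n))   # about 2.93, 7.49, 12.09
```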
15. (Original post by RichE)
Well I haven't shown that the limit can be any real number - though I have shown it can be arbitrarily close to any number. For example though I haven't shown the limit could be root 2 (which it could by my earlier comments).
If by 'arbitrarily close' you mean |s_n - √2| < ε for all n > N, then perhaps it does converge on √2.

Don't quite get the basic algebra surrender comment Infinite sums are about convergence not really about algebra.
Well, I probably meant intuitive thoughts really... i.e. that we don't really have commutativity. I'm probably wrong in doing so, but I seem to think of everything in which we're applying operations to 'terms' as having some sort of algebra in it.

An important fact with series is that if the series converges then the nth term tends to zero but the converse does not hold. e.g harmonic series
Yes, it was made explicit that the nth term not tending to zero means divergence, but not the converse.

I'm more than happy to explain further if you have other questions or some points are still confusing
I pretty much understand this concept now, but thanks for the offer. I think the key thing to understand was why the divergence of the positive (or negative) terms alone means a collection of terms with some value can always be added, so we don't 'exhaust' our list of positives. The proof helped to make it clearer too.

Thanks to those who posted for your patience.
16. (Original post by Gaz031)
If by 'arbitrarily close' you mean |s_n - √2| < ε for all n > N, then perhaps it does converge on √2.
Well, if s_n denotes the sum of the first n terms in the series I was algorithmically constructing, then yes it is the case that

for all e>0 there exists N such that for all n>N |s_n - √2| < e

but I'm sure you knew that as the defn of convergence.

My point was a different one - that I hadn't shown in the p+q grouping part that the limit could be anything, but that the limits I had attained were spread "densely" amongst the real numbers.

But it wasn't an important point.
17. Are you referring to how p and q in the limit formula can only be integers, and thus we can only get certain sums?
Could you make p and q vary as you progress through the series, so that you could obtain the other sums that way?
18. do you use your holidays what they are used for ...sleeping..resting....sleeping some more?!?!?

STOP STUDYING, lol
19. (Original post by Gaz031)
Are you referring to how p and q in the limit formula can only be integers, and thus we can only get certain sums?
Could you make p and q vary as you progress through the series, so that you could obtain the other sums that way?
Yes that's essentially what I did with the root 2 case. That would extend generally - I just used root 2 to help demonstrate a specific example.

If you wished the limit to be L (any real number) you could use the same idea of dipping into the positives or negatives depending on whether the partial sum was currently below or above L.

Riemann showed (it's not actually that difficult) that any L can be attained from rearranging any series that is convergent but not absolutely convergent.

For an absolutely convergent series the positives will add to some finite limit, and similarly the negatives. So it isn't possible to keep dipping into an infinite sum of positives or negatives as we did earlier. In the AC case, even taking all the positives at once, the effect would be finite.
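The "dip into positives or negatives" idea for an arbitrary target L can be sketched as follows (my own illustrative Python, generalising the √2 example above; the thread describes the strategy only in words):

```python
def rearrange_to(target, n_terms):
    """Greedy rearrangement of 1 - 1/2 + 1/3 - 1/4 + ... aimed at an
    arbitrary real target L: take the next positive term while the
    running sum is at or below L, otherwise the next negative term."""
    s = 0.0
    pos, neg = 1, 2   # next odd / even denominator
    for _ in range(n_terms):
        if s <= target:
            s += 1 / pos
            pos += 2
        else:
            s -= 1 / neg
            neg += 2
    return s

# Any limit appears attainable, positive or negative:
print(rearrange_to(3.0, 10**6))    # ≈ 3.0
print(rearrange_to(-1.0, 10**6))   # ≈ -1.0
```

Because both the positives and the negatives diverge on their own, the running sum crosses the target infinitely often, and the crossing errors shrink with the term size.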
20. (Original post by Phil23)
do you use your holidays what they are used for ...sleeping..resting....sleeping some more?!?!?

STOP STUDYING, lol
Do you use your holidays to post unwelcome pointless drivel?

Try making a constructive comment occasionally. There's a reason for that red gem in the corner of your posts.

Updated: July 26, 2005