The Student Room Group

Convergence by changing sign of terms

Question:
For two sequences $a_n, b_n$ both $\to 0$, can we always choose signs $s_n = \pm 1$ so that both $\sum s_n a_n$ and $\sum s_n b_n$ converge? What about three or more sequences?


Thoughts:
First consider only one sequence $a_n \to 0$. The partial sums of $\sum |a_n|$ are monotone increasing, so they either converge to a finite limit or diverge to $+\infty$.
If $\sum |a_n|$ converges, then we are done with $s_n = \operatorname{sign}(a_n)$. If it diverges, then we can choose the signs to make $\sum s_n a_n$ converge to any limit $L$, by making the partial sums oscillate around $L$ (a similar argument to the rearrangement theorem for conditionally convergent series, except that here we don't reorder the terms, only change the signs).

So for one sequence the result is clearly true, and in fact whenever $\sum |a_n|$ diverges there are uncountably many choices of signs (there is at least one choice for each limit $L$, and different limits give different sign choices).
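To make that oscillation argument concrete, here is a minimal numerical sketch (my own illustration, not part of the original post): greedily pick the sign that pushes the running total towards a chosen target $L$. Because $a_n \to 0$, once the running total first crosses $L$ it can never drift far from it again, so the signed series converges to $L$.

```python
# Minimal sketch (illustration only): greedy sign choice for one sequence with
# a_n -> 0 and sum |a_n| divergent, driving sum s_n |a_n| to a chosen target L.
# For a general a_n, fold sign(a_n) into the returned signs.

def greedy_signs(abs_terms, target):
    """Choose s_n = +1 if the running total is below the target, else -1."""
    signs, total = [], 0.0
    for t in abs_terms:
        s = 1 if total < target else -1
        signs.append(s)
        total += s * t
    return signs, total

# Example: |a_n| = 1/n (divergent harmonic series), target L = 0.5.
abs_terms = [1.0 / n for n in range(1, 200001)]
signs, total = greedy_signs(abs_terms, target=0.5)
print(total)  # very close to 0.5; the error is on the order of the last few terms
```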

For two sequences I have tried some ideas, but I can neither prove it nor construct a counterexample.
Reply 1
I have a new idea:
If we can first prove that for any complex sequence $c_n \to 0$ we can choose signs so that $\sum s_n c_n$ converges, then we can take real and imaginary parts: writing $c_n = a_n + i b_n$ gives two real sequences $a_n, b_n \to 0$, and the sums $\sum s_n a_n$ and $\sum s_n b_n$ must both converge. This would answer the question affirmatively, but the complex case still needs to be proved!
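If it helps, here is how that reduction looks in code (a sketch only: choose_signs_complex is a hypothetical routine standing in for the still-unproved complex case):

```python
# Sketch of the reduction only.  choose_signs_complex is hypothetical: it stands
# in for the unproved claim that for any complex c_n -> 0 there are signs s_n
# making sum s_n c_n converge.

def signs_for_two_sequences(a, b, choose_signs_complex):
    """Reduce the two-real-sequence problem to the one-complex-sequence problem."""
    c = [x + 1j * y for x, y in zip(a, b)]  # c_n = a_n + i*b_n, and c_n -> 0
    s = choose_signs_complex(c)             # hypothetical: makes sum s_n c_n converge
    # If sum s_n c_n converges, so do its real and imaginary parts,
    # i.e. sum s_n a_n and sum s_n b_n converge with the *same* signs.
    return s
```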
Original post by liewchuan
I have a new idea:
If we can first prove that for any complex sequence $c_n \to 0$ we can choose signs so that $\sum s_n c_n$ converges, then we can take real and imaginary parts: writing $c_n = a_n + i b_n$ gives two real sequences $a_n, b_n \to 0$, and the sums $\sum s_n a_n$ and $\sum s_n b_n$ must both converge. This would answer the question affirmatively, but the complex case still needs to be proved!


You might find some mileage in this http://forums.xkcd.com/viewtopic.php?f=3&t=10612, but I've not read it all through.
(edited 13 years ago)
Reply 3
Original post by ghostwalker
You might find some mileage in this http://forums.xkcd.com/viewtopic.php?f=3&t=10612, but I've not read it all through.
Good find, and it has a complete solution when you do get to the end.
Original post by DFranklin
Good find, and it has a complete solution when you do get to the end.


Cheers. I rely on your expertise as to it having a complete solution; I'll have to read it through carefully when I'm sober! Indexing takes a lot of thinking about; well, for me.
(edited 13 years ago)
Reply 6
Original post by ghostwalker
Cheers. I rely on your expertise as to it having a complete solution; I'll have to read it through carefully when I'm sober! Index sets take a lot of thinking about; well, for me.
Quite a lot of the more involved discussion there is a bit of a blind alley - the proof of the 'boundedness lemma' is a lot simpler than you might expect (and doesn't involve indices).
Reply 7
I'm not sure why the "optimal" value of the bound is $\sqrt{3}$ in Bollobas. In the other forum link, the argument suggests it is $\sqrt{2}$. Probably it doesn't matter, but it's making me doubt the validity of the forum proof.
Just to put it out there, Bollobas' room is in my staircase.
Reply 9
Original post by liewchuan
I'm not sure why the "optimal" value of the bound is $\sqrt{3}$ in Bollobas. In the other forum link, the argument suggests it is $\sqrt{2}$. Probably it doesn't matter, but it's making me doubt the validity of the forum proof.
Yes, that's bothering me too. I confess, looking at Bela's proof, I can't see what "sqrt(3)" has to do with it. You need the constant to be at least sqrt(2) because of the n=2 case, but I can't see why it doesn't work with sqrt(2). On the other hand, I can't believe Bela's got it wrong.

I think I'm going to see if I can find a counterexample to the sqrt(2) case.
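For what it's worth, the n=2 case forcing the constant to be at least sqrt(2) is presumably the obvious one (my example, not spelled out above): two orthogonal unit vectors.

```latex
% Example (mine, not from the thread): why the constant must be at least sqrt(2).
% Two orthogonal unit vectors give the same modulus for either choice of sign:
\[
z_1 = 1, \qquad z_2 = i
\quad\Longrightarrow\quad
|z_1 + z_2| = |z_1 - z_2| = \sqrt{1^2 + 1^2} = \sqrt{2},
\]
% so no constant smaller than sqrt(2) can bound |z_1 ± z_2| over all unit z_1, z_2.
```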
OK, Bela is right (no surprise).

As I understand it, the "gotcha" is that life is easy if you replace z1,z2,z3,... by z1+z2, z3... (or z1-z2, z3, ...) but if you instead replace with z1+z3, z2, ..., then one of the sums you need to bound is z1 ± z2, and you can't control whether it's + or -, because that's being specified by the induction hypothesis.

So the question is, "what's the worst that can happen?". Obviously for a worst case, we want z1 + z2 as big as possible (where WLOG we've taken the + sign), which would mean z1 and z2 being in the same direction (or nearly so). But that can't happen, because if z1 + z2 is large, then z1-z2 must be small, and we can use the "life is easy" case. It turns out (think equilateral triangles) that if we insist that the smaller of |z1+z2|, |z1-z2| is greater than 1, then the larger value can't be bigger than sqrt(3).
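The "think equilateral triangles" step can be pinned down with the parallelogram law; here's my reconstruction (assuming, as in the lemma, that |z1| and |z2| are at most 1):

```latex
% My reconstruction of the step, assuming |z_1| <= 1 and |z_2| <= 1.
\[
|z_1 + z_2|^2 + |z_1 - z_2|^2 = 2|z_1|^2 + 2|z_2|^2 \le 4
\qquad \text{(parallelogram law)},
\]
% so if the smaller of |z_1 + z_2|, |z_1 - z_2| is at least 1, the larger is at most sqrt(3).
% The extreme case is |z_1| = |z_2| = 1 with a 60-degree angle between them: then 0, z_1, z_2
% form an equilateral triangle, |z_1 - z_2| = 1 and |z_1 + z_2| = sqrt(3).
```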

Similar arguments apply in the case where you replace with z2+z3.

I have to say, this key point was not obvious to me from the solution in Bela's book.
Reply 11
Thanks. Probably he skipped the optimality proof because, as long as you have such a constant C for n=1,2, you can make the inductive step and keep the same C from that point on (it doesn't matter what C actually is). But he also clearly wanted to record that he knew the optimal value.
Original post by liewchuan
Thanks. Probably he skipped the optimality proof because, as long as you have such a constant C for n=1,2, you can make the inductive step and keep the same C from that point on (it doesn't matter what C actually is). But he also clearly wanted to record that he knew the optimal value.
I think the book proof just leaves a fair bit to be filled in. (Bela tends not to spoonfeed, as I recall).
