The Student Room Group


Reply 80
Sorry to keep everyone waiting here, I just burnt out trying to understand everything yesterday, and I couldn't bring myself to face it again today. But tonight or tomorrow I'll go through the thread again (and give you guys proper replies) - I really want to understand this.
Reply 81
Farhan.Hanif93
I posted this earlier in the thread, I hope it's simple enough to follow :o:.

As DFranklin said, it's not perfectly rigorous as I haven't proved that ln is continuous but that should do for your purposes.


And being very pedantic, it's also assuming that there is such a function as ln in the first place which is something else that needs to be proved :smile:
davros
And being very pedantic, it's also assuming that there is such a function as ln in the first place which is something else that needs to be proved :smile:

Haha, oh dear. :s-smilie:
Managed to dig myself a hole. :rolleyes:
Reply 83
Sorry I kept you guys waiting!
But I think it was the right thing to do - my head's a lot clearer today, and I didn't want to waste your time by trying to work this out when I couldn't think straight.

Farhan.Hanif93
I'm not entirely sure where the confusion is but I'm guessing it's where you want us to show that $\displaystyle\lim_{h\to 0} \frac{e^h - 1}{h} = 1$?
Well first off, let $\frac{1}{n} = e^h - 1 \implies h = \ln\left(1 + \frac{1}{n}\right)$, $n \not\in \mathbb{Z}$.
Now note that we can substitute these in to get:
$\displaystyle\lim_{n\to \infty} \dfrac{\frac{1}{n}}{\ln\left(1 + \frac{1}{n}\right)} = \displaystyle\lim_{n\to \infty} \dfrac{1}{\ln\left(1 + \frac{1}{n}\right)^n}$
I would now use the definition of $e$, which is $e = \displaystyle\lim_{n\to \infty} \left(1 + \frac{1}{n}\right)^n$, to rewrite this as:
$\frac{1}{\ln e} = 1$, as required, and since that part is equal to 1, $\frac{d}{dx}e^x = e^x \times 1 = e^x$.
Not sure if this is rigorous enough for some but it's the best I can come up with :p:.
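Both limits in this argument are easy to sanity-check numerically. A minimal Python sketch (my own illustration, not part of the proof above):

```python
import math

# Check lim_{h -> 0} (e^h - 1)/h = 1 by taking h smaller and smaller.
for h in [0.1, 0.01, 0.001, 1e-6]:
    print(h, (math.exp(h) - 1) / h)

# Check the definition of e: (1 + 1/n)^n -> e as n -> infinity.
for n in [10, 1000, 10**6]:
    print(n, (1 + 1 / n) ** n, "vs", math.e)
```

As h shrinks the ratio approaches 1, and as n grows the bracket approaches e, which are exactly the two ingredients of the proof.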


Farhan.Hanif93
That's fine, but what don't you get? You seem interested and I'll be happy to explain anything you don't get about what I've done. Just list them as questions and I'll try to clarify.
I'm assuming that since you've seen $e^x$, it's likely that you've seen $\ln x$, and thus I'm guessing you know that $\ln e = 1$.
Are you confused about what happened to the limit?


I'm trying to understand what you did in your post, but I lose you on this step;
$\displaystyle\lim_{n\to \infty} \dfrac{\frac{1}{n}}{\ln\left(1 + \frac{1}{n}\right)} = \displaystyle\lim_{n\to \infty} \dfrac{1}{\ln\left(1 + \frac{1}{n}\right)^n}$

I don't understand that one. You could multiply the top and bottom of the fraction by n, but I don't understand how the denominator can be to the power of n, instead of multiplied by n.

I actually think I get the rest of it - the limit means something like: as n approaches infinity, the expression approaches e. Then you can substitute that into the previous expression.
I'm not going to worry about why e is that value it approaches. xD


nuodai
It's completely exact and mathematically correct. I can see why you're confused -- the concept of moving something arbitrarily close to a point rather than just hitting the point is a hard concept to pick up, but there's really no estimation or trend-spotting involved.


Yes, I think that concept is something I'm having a real problem with. I don't understand how arbitrarily close can be perfectly accurate.
Reply 84
99wattr89
I'm trying to understand what you did in your post, but I lose you on this step;
$\displaystyle\lim_{n\to \infty} \dfrac{\frac{1}{n}}{\ln\left(1 + \frac{1}{n}\right)} = \displaystyle\lim_{n\to \infty} \dfrac{1}{\ln\left(1 + \frac{1}{n}\right)^n}$

I don't understand that one. You could multiply the top and bottom of the fraction by n, but I don't understand how the denominator can be to the power of n, instead of multiplied by n.

It uses the law of logarithms that says that $q\log p = \log(p^q)$.
99wattr89
I'm trying to understand what you did in your post, but I lose you on this step;
$\displaystyle\lim_{n\to \infty} \dfrac{\frac{1}{n}}{\ln\left(1 + \frac{1}{n}\right)} = \displaystyle\lim_{n\to \infty} \dfrac{1}{\ln\left(1 + \frac{1}{n}\right)^n}$

I don't understand that one. You could multiply the top and bottom of the fraction by n, but I don't understand how the denominator can be to the power of n, instead of multiplied by n.

Oh ok, that's good because that's only a minor thing for you to overcome. :smile:
Note that $\dfrac{\frac{a}{b}}{c} = \dfrac{a}{bc}$.
So we do indeed get $\displaystyle\lim_{n\to \infty} \dfrac{1}{n\ln\left(1+\frac{1}{n}\right)}$.
Then using the law of logs, $a\ln b = \ln b^a$, we can bring the $n$ up as a power.

99wattr89
I actually think I get the rest of it - the limit means something like: as n approaches infinity, the expression approaches e. Then you can substitute that into the previous expression.
I'm not going to worry about why e is that value it approaches. xD

You may be a bit confused about how the limit changed from $\displaystyle\lim_{h\to 0}$ to $\displaystyle\lim_{n\to \infty}$.
I'm not certain if this is a valid thinking process on my part, but when we made $h$ the subject of the substitution, we got:
$h = \ln\left(1+\frac{1}{n}\right)$; now in order for this expression to tend to zero, $n$ must be very large, i.e. tend to infinity, so that $\frac{1}{n}$ is so small that it is negligible. Therefore our value of $h$ will be roughly $\ln(1)$, which is clearly tending to zero.
Hope that makes sense. :smile:
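The change of variable can also be seen numerically: as n grows, h = ln(1 + 1/n) shrinks towards 0, so the two limits describe the same process. A quick Python check (my own illustration):

```python
import math

# As n -> infinity, h = ln(1 + 1/n) -> ln(1) = 0,
# so "n -> infinity" and "h -> 0" are the same limit.
for n in [10, 1000, 10**6]:
    h = math.log(1 + 1 / n)
    print(n, h)
```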
Reply 86
nuodai
It uses the law of logarithms that says that $q\log p = \log(p^q)$.


Farhan.Hanif93
Oh ok, that's good because that's only a minor thing for you to overcome. :smile:
Note that $\dfrac{\frac{a}{b}}{c} = \dfrac{a}{bc}$.
So we do indeed get $\displaystyle\lim_{n\to \infty} \dfrac{1}{n\ln\left(1+\frac{1}{n}\right)}$.
Then using the law of logs, $a\ln b = \ln b^a$, we can bring the $n$ up as a power.


Oh, of course.
It's embarrassing that I didn't notice that, sorry to waste your time there.

Farhan.Hanif93
You may be a bit confused about how the limit changed from $\displaystyle\lim_{h\to 0}$ to $\displaystyle\lim_{n\to \infty}$.
I'm not certain if this is a valid thinking process on my part, but when we made $h$ the subject of the substitution, we got:
$h = \ln\left(1+\frac{1}{n}\right)$; now in order for this expression to tend to zero, $n$ must be very large, i.e. tend to infinity, so that $\frac{1}{n}$ is so small that it is negligible. Therefore our value of $h$ will be roughly $\ln(1)$, which is clearly tending to zero.
Hope that makes sense. :smile:


I'm really glad to say that makes perfect sense to me. Shocking, I know.
So, I think I do follow your method.


Which means it's really just the concept of limits that I still have trouble with. In actual fact, it seems like limits have been the root of all the problems I've had.
99wattr89
Oh, of course.
It's embarrassing that I didn't notice that, sorry to waste your time there.



I'm really glad to say that makes perfect sense to me. Shocking, I know.
So, I think I do follow your method.


Which means it's really just the concept of limits that I still have trouble with. In actual fact, it seems like limits have been the root of all the problems I've had.

Good, don't worry about limits too much because we only really get to look at them in detail when we get to uni maths anyhow. It's good to see that someone else doing the A-Level likes to know why such a thing exists rather than just blindly agreeing with the textbook and regurgitating the method. :smile:
Reply 88
Farhan.Hanif93
Good, don't worry about limits too much because we only really get to look at them in detail when we get to uni maths anyhow. It's good to see that someone else doing the A-Level likes to know why such a thing exists rather than just blindly agreeing with the textbook and regurgitating the method. :smile:


Hah hah, thanks! :smile:
I just don't see the point of memorising things without understanding them - you'll pass the exam, but you won't be able to use what you learn and develop it further. And really understanding is so much more enlightening and fulfilling.

So thanks for so much help, to both you and Nuodai!
I think it's time to get back to work on C3 - I really need to finish that book soon.

I guess limits will be easier to understand when I've learnt the preceding material.
Reply 89
99wattr89
Yes, I think that concept is something I'm having a real problem with. I don't understand how arbitrarily close can be perfectly accurate.


Here's another example then. I claim to know exactly the value of $\pi$. Obviously, I can't tell you what the exact value is, because that would take an infinite amount of time. But I can tell you its value to any precision you want in a finite amount of time (which, of course, depends on the desired precision). Am I wrong to say I know the value exactly?
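That "any precision in finite time" claim can be made concrete. Here's a Python sketch using Machin's formula, pi = 16·arctan(1/5) − 4·arctan(1/239), with exact rational arithmetic (my own illustration; the function names are mine):

```python
from fractions import Fraction

def arctan_inv(x, terms):
    """Partial sum of arctan(1/x) = 1/x - 1/(3x^3) + 1/(5x^5) - ..."""
    total = Fraction(0)
    for k in range(terms):
        total += Fraction((-1) ** k, (2 * k + 1) * x ** (2 * k + 1))
    return total

def pi_approx(terms):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    return 16 * arctan_inv(5, terms) - 4 * arctan_inv(239, terms)

# Each extra term buys roughly 1.4 more correct decimal digits,
# so any desired precision is reached after finitely many steps.
print(float(pi_approx(30)))
```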
I guess this post is a bit late and may be a little inappropriate. I noticed a lot of people distinguishing exp(x) from e^x but (and I may be wrong here) no one really said why we should do so. I never did A-Level myself so I don't know what you guys get to learn at school, but here are some things I found myself thinking about when I first learned calculus. Again, this is completely unnecessary for school work but may be of interest nevertheless. If you find it confusing at any stage just stop reading :smile:

If the OP is serious about maths, it's probably a useful exercise to think at this stage of learning calculus about what it actually means to say a^x. When one is first taught exponentiation, it is almost definitely defined as repeated multiplication, so a^2 = a*a, and a^3 = a*a*a, and so on. This, however, only defines a^x for positive integer values of x. From this very limited definition we find these properties:
- (a^x)*(a^y)=a^(x+y)
- (a^x)^y=a^(x*y)

The first property allows us to define negative exponents as a^(-x)=1/(a^x), and a^0=1, while the second property lets us define a^(1/x) to be a number such that (a^(1/x))^x = a. You'll notice that in extending our set of valid exponents, we do so in a consistent manner, i.e. such that we don't break the familiar property we already had from the more elementary definition of repeated multiplication.
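A quick check (my own, in Python) that these extensions really do stay consistent with the index laws derived from repeated multiplication:

```python
a = 3

assert abs(a ** -2 - 1 / (a ** 2)) < 1e-15   # negative exponent: a^(-x) = 1/a^x
assert a ** 0 == 1                           # zero exponent
assert abs((a ** (1 / 5)) ** 5 - a) < 1e-9   # fractional exponent: (a^(1/5))^5 = a
assert a ** 2 * a ** 3 == a ** (2 + 3)       # the first property still holds
print("index laws consistent")
```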

Thus far we've defined exponentiation only for rational exponents, i.e. a^(x/y) for x, y integers. To generalise further to irrational exponents is considerably more complicated than what I did above. Essentially you have to invoke some very specific (and totally non-intuitive) properties of real numbers. I've seen a book (which I can no longer find) doing this the "obvious" way: expand the exponent into its infinite decimal expansion, at each truncation use the rational definition of exponentiation to obtain an approximation, and define the "true" value of a^x to be the limit of the sequence of approximations.
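The truncation idea can be sketched in a few lines of Python (my own illustration; the digit string is just sqrt(2) truncated by hand):

```python
from fractions import Fraction

# Approximate 2^sqrt(2) using only *rational* exponents: truncate
# sqrt(2) = 1.41421356... after more and more decimal places and
# evaluate 2^(p/q) at each truncation.
sqrt2_digits = "1.41421356"
for k in range(1, 9):
    r = Fraction(sqrt2_digits[: 2 + k])   # e.g. Fraction("1.41") = 141/100
    print(r, 2 ** float(r))
```

The successive values settle down towards a single number (about 2.665), which is what the book would define as the "true" value of 2^sqrt(2).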

However, this gets quite complicated, and if you ever get to do Analysis (calculus done properly) you'll probably find that your lecturer skates around this whole issue by defining a completely different function called exp(x) as an infinite series. He would then prove that this function has all the familiar properties for rational x, and invoke the concept of "completeness" to argue that this is a sensible generalisation of exponentiation. Furthermore, power series have nice properties that, in this case, allow you to differentiate things term-by-term. That exp'(x) = exp(x) becomes almost immediately obvious when you treat calculus this way.
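For concreteness, here's a short Python sketch (mine, not from any particular textbook) of that series definition next to the library exponential:

```python
import math

def exp_series(x, terms=30):
    """Power-series definition: exp(x) = sum over k of x^k / k!"""
    return sum(x ** k / math.factorial(k) for k in range(terms))

# Differentiating the series term-by-term gives the same series back,
# which is why exp'(x) = exp(x) is immediate from this definition.
for x in [0.0, 1.0, -2.0, 5.0]:
    print(x, exp_series(x), math.exp(x))
```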

BUT...

What you were probably not told at school is that calculus as you know it depends ENTIRELY on the subtle properties of the so-called "real" numbers. It turns out that all this seemingly paradoxical business of limit-taking and the distinction between "arbitrarily close" and "perfectly accurate" (and indeed that 0.999... = 1) stems from the way the real numbers are defined, essentially as consequences of plugging the "holes" in the rational number system. (Indeed the "reality" of the real numbers themselves could be debated philosophically, as computer scientists would tell you that we can represent almost none of these numbers in "real life", and the ability to do so would imply a great deal of strange consequences, such as online shopping no longer being secure...)

Again, I'm sorry if I've confused the hell out of you, but I did find these things interesting myself when I was at your stage.
Reply 91
Zhen Lin
Here's another example then. I claim to know exactly the value of $\pi$. Obviously, I can't tell you what the exact value is, because that would take an infinite amount of time. But I can tell you its value to any precision you want in a finite amount of time (which, of course, depends on the desired precision). Am I wrong to say I know the value exactly?


Well, that's an interesting way to put it.
But the values you give me will all be wrong, so you may know the true value, but that's not what the method will produce.

Positronized
I guess this post is a bit late and may be a little inappropriate. I noticed a lot of people distinguishing exp(x) from e^x but (and I may be wrong here) no one really said why we should do so. I never did A-Level myself so I don't know what you guys get to learn at school, but here are some things I found myself thinking about when I first learned calculus. Again, this is completely unnecessary for school work but may be of interest nevertheless. If you find it confusing at any stage just stop reading :smile:

If the OP is serious about maths, it's probably a useful exercise to think at this stage of learning calculus about what it actually means to say a^x. When one is first taught exponentiation, it is almost definitely defined as repeated multiplication, so a^2 = a*a, and a^3 = a*a*a, and so on. This, however, only defines a^x for positive integer values of x. From this very limited definition we find these properties:
- (a^x)*(a^y)=a^(x+y)
- (a^x)^y=a^(x*y)

The first property allows us to define negative exponents as a^(-x)=1/(a^x), and a^0=1, while the second property lets us define a^(1/x) to be a number such that (a^(1/x))^x = a. You'll notice that in extending our set of valid exponents, we do so in a consistent manner, i.e. such that we don't break the familiar property we already had from the more elementary definition of repeated multiplication.

Thus far we've defined exponentiation only for rational exponents, i.e. a^(x/y) for x, y integers. To generalise further to irrational exponents is considerably more complicated than what I did above. Essentially you have to invoke some very specific (and totally non-intuitive) properties of real numbers. I've seen a book (which I can no longer find) doing this the "obvious" way: expand the exponent into its infinite decimal expansion, at each truncation use the rational definition of exponentiation to obtain an approximation, and define the "true" value of a^x to be the limit of the sequence of approximations.

However, this gets quite complicated, and if you ever get to do Analysis (calculus done properly) you'll probably find that your lecturer skates around this whole issue by defining a completely different function called exp(x) as an infinite series. He would then prove that this function has all the familiar properties for rational x, and invoke the concept of "completeness" to argue that this is a sensible generalisation of exponentiation. Furthermore, power series have nice properties that, in this case, allow you to differentiate things term-by-term. That exp'(x) = exp(x) becomes almost immediately obvious when you treat calculus this way.

BUT...

What you were probably not told at school is that calculus as you know it depends ENTIRELY on the subtle properties of the so-called "real" numbers. It turns out that all this seemingly paradoxical business of limit-taking and the distinction between "arbitrarily close" and "perfectly accurate" (and indeed that 0.999... = 1) stems from the way the real numbers are defined, essentially as consequences of plugging the "holes" in the rational number system. (Indeed the "reality" of the real numbers themselves could be debated philosophically, as computer scientists would tell you that we can represent almost none of these numbers in "real life", and the ability to do so would imply a great deal of strange consequences, such as online shopping no longer being secure...)

Again, I'm sorry if I've confused the hell out of you, but I did find these things interesting myself when I was at your stage.


Amusingly enough, I understood right up to your conclusion. Maybe you can explain that part to me? I just don't follow the final paragraph at all. ^^;
99wattr89
Well, that's an interesting way to put it.
But the values you give me will all be wrong, so you may know the true value, but that's not what the method will produce.

Amusingly enough, I understood right up to your conclusion. Maybe you can explain that part to me? I just don't follow the final paragraph at all. ^^;


Interestingly, your first reply sort of conveys the last point I was trying to make: to what extent can we "know" the "true value" of a real number? More importantly, what method can we use to produce such a number?

One could define $\pi$ as the ratio between the circumference and the diameter of a circle, whose value in decimal expansion turns out to be approximately 3.14159..., which is more of an engineer's (or indeed applied mathmo's) viewpoint on these issues. But a pure mathmo might ask further: what is a circle? Clearly a perfect circle, defined as the set of points equidistant from a fixed point in 2D space, doesn't exist in the real world. So again, to what extent do we know what a circle is? The question goes on and on, but it's ultimately a philosophical one, and you shouldn't let it bother you when you're doing maths, but rather think about it in the back of your head, perhaps after you've done the maths.

On the issue of the real numbers, which was the main point in the last paragraph of my last post, I'm afraid it'll take quite a bit of space to explain. Most elementary analysis courses define the real numbers passively, without saying how to construct them or indeed what they look like: e.g. every upper-bounded subset of $\mathbb{R}$ has a least upper bound, or every bounded infinite sequence in $\mathbb{R}$ has a convergent subsequence. The actual construction of $\mathbb{R}$ tends to be even worse. These are all very abstract and don't really give you any intuition at all on what real numbers are, and that's because they aren't intuitive. This is the main point I was trying to make.

Why, you may ask, do we need to do things in such a complicated fashion? After all, the real numbers are obviously points on an infinite line: they're totally ordered (given two numbers a, b, exactly one of a < b, a = b or a > b is true) and dense (given two numbers a < b, we can find another number x such that a < x < b regardless of how close a and b are).

Let me put forth the following "obvious" fact: let f be a continuous function such that f(0) < 0 and f(10) > 0. Then we can find some x such that 0 < x < 10 and f(x) = 0. Draw yourself some graphs and you would most likely be able to convince yourself of this. However, if f is defined only on the rational numbers (rather than the real numbers) this "obvious" result is NOT true: consider f(x) = x^2 - 2; then f(0) = -2 < 0 and f(10) = 98 > 0, but there's no rational x with 0 < x < 10 such that f(x) = 0! Of course if we're working over $\mathbb{R}$ then we can take $x = \sqrt{2}$, but this isn't rational! What, then, is the difference between $\mathbb{Q}$ and $\mathbb{R}$, given the similarities that I've outlined above? This is the reason why we have to define the real numbers in such a peculiar manner: so that we can have these "obvious" properties in our continuous functions.
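The bisection method makes this concrete: run it on f(x) = x^2 - 2 with exact rational arithmetic and every approximation it produces is rational, yet the point being squeezed is sqrt(2). A Python sketch (my own illustration):

```python
from fractions import Fraction

def f(x):
    return x * x - 2  # f(0) = -2 < 0, f(10) = 98 > 0

# Bisection over [0, 10]: every midpoint is an exact rational number...
lo, hi = Fraction(0), Fraction(10)
for _ in range(50):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

# ...yet the interval closes in on sqrt(2), which is not rational.
print(float(lo), float(hi))
```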

The lesson to take away from this is, if you REALLY want to understand calculus properly, it's not at all an obvious subject, and things that you find intuitively difficult to understand probably are (limit-taking included). Therefore, don't worry about it too much otherwise you'll be unable to learn the useful methods. (If you want to learn Analysis, it'll be useful to be very familiar with the basic calculus results first.)

BTW the result above is known as the Intermediate Value Theorem. If you're interested you could look up these topics under Real Numbers and Real Analysis on Wikipedia or similar sites. However, chances are you'll end up reading about the epsilon-delta definition of limits and continuity which someone (nuodai?) posted earlier. This definitely takes some time to get your head around, and you shouldn't feel at all upset if you can't understand it after a few readings. It took (numerous, very smart) people over 200 years from the invention of calculus to figure out what they'd been talking about!
Positronized
The actual construction of $\mathbb{R}$ tends to be even worse. These are all very abstract and don't really give you any intuition at all on what real numbers are, and that's because they aren't intuitive. This is the main point I was trying to make.

Don't know if you've already seen it, but http://www.dpmms.cam.ac.uk/~wtg10/decimals.html is an attempt at a more intuitive construction.
Reply 94
Positronized
Interestingly, your first reply sort of conveys the last point I was trying to make: to what extent can we "know" the "true value" of a real number? More importantly, what method can we use to produce such a number?

One could define $\pi$ as the ratio between the circumference and the diameter of a circle, whose value in decimal expansion turns out to be approximately 3.14159..., which is more of an engineer's (or indeed applied mathmo's) viewpoint on these issues. But a pure mathmo might ask further: what is a circle? Clearly a perfect circle, defined as the set of points equidistant from a fixed point in 2D space, doesn't exist in the real world. So again, to what extent do we know what a circle is? The question goes on and on, but it's ultimately a philosophical one, and you shouldn't let it bother you when you're doing maths, but rather think about it in the back of your head, perhaps after you've done the maths.

On the issue of the real numbers, which was the main point in the last paragraph of my last post, I'm afraid it'll take quite a bit of space to explain. Most elementary analysis courses define the real numbers passively, without saying how to construct them or indeed what they look like: e.g. every upper-bounded subset of $\mathbb{R}$ has a least upper bound, or every bounded infinite sequence in $\mathbb{R}$ has a convergent subsequence. The actual construction of $\mathbb{R}$ tends to be even worse. These are all very abstract and don't really give you any intuition at all on what real numbers are, and that's because they aren't intuitive. This is the main point I was trying to make.

Why, you may ask, do we need to do things in such a complicated fashion? After all, the real numbers are obviously points on an infinite line: they're totally ordered (given two numbers a, b, exactly one of a < b, a = b or a > b is true) and dense (given two numbers a < b, we can find another number x such that a < x < b regardless of how close a and b are).

Let me put forth the following "obvious" fact: let f be a continuous function such that f(0) < 0 and f(10) > 0. Then we can find some x such that 0 < x < 10 and f(x) = 0. Draw yourself some graphs and you would most likely be able to convince yourself of this. However, if f is defined only on the rational numbers (rather than the real numbers) this "obvious" result is NOT true: consider f(x) = x^2 - 2; then f(0) = -2 < 0 and f(10) = 98 > 0, but there's no rational x with 0 < x < 10 such that f(x) = 0! Of course if we're working over $\mathbb{R}$ then we can take $x = \sqrt{2}$, but this isn't rational! What, then, is the difference between $\mathbb{Q}$ and $\mathbb{R}$, given the similarities that I've outlined above? This is the reason why we have to define the real numbers in such a peculiar manner: so that we can have these "obvious" properties in our continuous functions.

The lesson to take away from this is, if you REALLY want to understand calculus properly, it's not at all an obvious subject, and things that you find intuitively difficult to understand probably are (limit-taking included). Therefore, don't worry about it too much otherwise you'll be unable to learn the useful methods. (If you want to learn Analysis, it'll be useful to be very familiar with the basic calculus results first.)

BTW the result above is known as the Intermediate Value Theorem. If you're interested you could look up these topics under Real Numbers and Real Analysis on Wikipedia or similar sites. However, chances are you'll end up reading about the epsilon-delta definition of limits and continuity which someone (nuodai?) posted earlier. This definitely takes some time to get your head around, and you shouldn't feel at all upset if you can't understand it after a few readings. It took (numerous, very smart) people over 200 years from the invention of calculus to figure out what they'd been talking about!


I really can't follow your post at all this time, I'm sorry.
I don't know what passive definitions, construction, upper-bounded subsets, least upper bounds, bounded infinite sequences or convergent subsequences are.

Also, I don't understand your example with f(x) - it seems like y=x-1 satisfies it.

Sadly I also don't understand what it means to work 'over R'.

I'm still re-reading it, trying to work it out, but I'm sorry to say that I still don't understand what you're trying to explain. Which is really embarrassing. x_x
There's no reason you should expect to understand those terms - they are all things you'd learn in a 1st (possibly 2nd) year university Maths course.

To my mind, you don't gain the knowledge to give a "good" proof of what you asked originally until at least the 2nd course of Analysis you do at university. And even then, the courses typically pay "lip service" to many of the issues we've touched on in the thread (e.g. that the function they have designed, exp(x) = 1 + x + x^2/2! + x^3/3! + ..., actually behaves the same as the function e^x, and ultimately, what e^x actually means anyhow).

So without wanting to fob you off, I really wouldn't worry too much about this at this point in your mathematical education. This is why I haven't made many posts in the thread as well.
Reply 96
DFranklin
There's no reason you should expect to understand those terms - they are all things you'd learn in a 1st (possibly 2nd) year university Maths course.

To my mind, you don't gain the knowledge to give a "good" proof of what you asked originally until at least the 2nd course of Analysis you do at university. And even then, the courses typically pay "lip service" to many of the issues we've touched on in the thread (e.g. that the function they have designed, exp(x) = 1 + x + x^2/2! + x^3/3! + ..., actually behaves the same as the function e^x, and ultimately, what e^x actually means anyhow).

So without wanting to fob you off, I really wouldn't worry too much about this at this point in your mathematical education. This is why I haven't made many posts in the thread as well.


That is a very good point. But I still want to try to understand if I can. And I'll have to understand eventually either way.
But I'll try not to worry. xD
Reply 97
Farhan.Hanif93
I'm not entirely sure where the confusion is but I'm guessing it's where you want us to show that $\displaystyle\lim_{h\to 0} \frac{e^h - 1}{h} = 1$?
Well first off, let $\frac{1}{n} = e^h - 1 \implies h = \ln\left(1 + \frac{1}{n}\right)$, $n \not\in \mathbb{Z}$.
Now note that we can substitute these in to get:
$\displaystyle\lim_{n\to \infty} \dfrac{\frac{1}{n}}{\ln\left(1 + \frac{1}{n}\right)} = \displaystyle\lim_{n\to \infty} \dfrac{1}{\ln\left(1 + \frac{1}{n}\right)^n}$
I would now use the definition of $e$, which is $e = \displaystyle\lim_{n\to \infty} \left(1 + \frac{1}{n}\right)^n$, to rewrite this as:
$\frac{1}{\ln e} = 1$, as required, and since that part is equal to 1, $\frac{d}{dx}e^x = e^x \times 1 = e^x$.
Not sure if this is rigorous enough for some but it's the best I can come up with :p:.


just ignore me
seriously
terrible
munn
just ignore me
seriously
terrible

Why? What happened? :p:
DFranklin
There's no reason you should expect to understand those terms - they are all things you'd learn in a 1st (possibly 2nd) year university Maths course.

To my mind, you don't gain the knowledge to give a "good" proof of what you asked originally until at least the 2nd course of Analysis you do at university. And even then, the courses typically pay "lip service" to many of the issues we've touched on in the thread (e.g. that the function they have designed, exp(x) = 1 + x + x^2/2! + x^3/3! + ..., actually behaves the same as the function e^x, and ultimately, what e^x actually means anyhow).

So without wanting to fob you off, I really wouldn't worry too much about this at this point in your mathematical education. This is why I haven't made many posts in the thread as well.


This!

I was a little reluctant to post the previous reply, but I thought if you want to have a shot at trying to understand it, it can't hurt. There's no reason to be embarrassed about not understanding the principles of analysis after reading a single post on a forum (which isn't really the best way to explain this stuff anyway :P). Don't let it put you off learning calculus or indeed maths in general! You'll have plenty more opportunities to learn this stuff if you want to.

Pondering about functions and what it means to exponentiate or differentiate is all well and good, but at this stage you should focus on intuitive ideas rather than formal arguments, but you'll have to accept that unfortunately intuitive arguments will only get you so far and that's why people do pure maths. Frankly I've reached a point where I feel that I've sufficiently convinced myself that maths "works" and I'm not taking any more courses on pure maths :P

Good luck!
