The Student Room Group

Expected Value of Probability Distribution

Suppose that on the first trial the probability of success is P%, and with each failure the probability increases by 10%. The trials are repeated indefinitely until a success occurs. What is the expected number of trials until success? And what is the equivalent success probability?

Which discrete or continuous probability distribution should I use, and what would its expected value be?




My thought was to use the hypergeometric distribution; however, there the probability goes DOWN, so I would invert the problem into the expected number of trials until FAILURE, where the probability decreases by 10% every trial. Then I run into the problem of taking the trials to infinity.

Can anyone shed some light on this?
Reply 1
Bump!
Reply 2
If the probability P increases by 10% after each failure then after enough failures this value will be >= 1, so we'll have a certain success and thus a finite sum.

We have the formula $\mathbb{E}[X] = \displaystyle \sum_{i = 1}^{\infty} P(X \ge i)$.
$X \ge i$ is when we have i - 1 failures, with probability $(1 - P)(1 - 1.1P)\cdots(1 - (1.1)^{i-2}P)$. The sum will terminate once $(1.1)^{i-2}P \ge 1$, i.e. $i \ge 2 + \log(1/P)/\log 1.1$. But I'm not sure I see a nice formula. What's this for?
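
If it helps, here's a quick numerical sketch of that sum, under the 'multiply by 1.1' reading and assuming 0 < P <= 1 (the function name and the example value are just illustrative):

def expected_trials(p):
    # E[X] = sum over i >= 1 of P(X >= i), where trial k succeeds
    # with probability min(1, p * 1.1**(k - 1)).
    expectation = 0.0
    survival = 1.0  # P(X >= i): the first i - 1 trials all failed
    k = 1
    while survival > 0.0:
        expectation += survival
        success_k = min(1.0, p * 1.1 ** (k - 1))
        survival *= 1.0 - success_k
        k += 1
    return expectation

print(expected_trials(0.05))  # e.g. a 5% chance on the first trial

The loop stops by itself because the success probability eventually reaches 1, at which point the survival probability drops to zero.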
Reply 3
Original post by Glutamic Acid
If the probability P increases by 10% after each failure then after enough failures this value will be >= 1, so we'll have a certain success and thus a finite sum.

We have the formula $\mathbb{E}[X] = \displaystyle \sum_{i = 1}^{\infty} P(X \ge i)$.
$X \ge i$ is when we have i - 1 failures, with probability $(1 - P)(1 - 1.1P)\cdots(1 - (1.1)^{i-2}P)$. The sum will terminate once $(1.1)^{i-2}P \ge 1$, i.e. $i \ge 2 + \log(1/P)/\log 1.1$. But I'm not sure I see a nice formula. What's this for?

Is there a standard probability distribution that describes this, or one that can be adapted to suit it?

It doesn't seem like a complex distribution - the probability of success goes up with each failed trial - but after a long search, it seems the expected value might actually be quite hard to find.
Reply 4
Second Bump!
Reply 5
Original post by Alex:
Second Bump!


I can't see a standard distribution working here. It might help to know some details of what particular area you're studying at present - assuming this is course related. That might shed some light on a different approach.
Reply 6
Original post by ghostwalker
I can't see a standard distribution working here. It might help to know some details of what particular area you're studying at present - assuming this is course related. That might shed some light on a different approach.

There's not much else to say really; it's just the expected value of an experiment in which every failed trial increases the probability of success.

Something I have missed off, however, is that after 5 failed trials, the probability stops going up. Thus at this point the distribution becomes geometric.
Reply 7
Original post by Alex:
There's not much else to say really; it's just the expected value of an experiment in which every failed trial increases the probability of success.

Something I have missed off, however, is that after 5 failed trials, the probability stops going up. Thus at this point the distribution becomes geometric.


That is rather significant.

It's not going to be very nice though, and I'd think you're going to have to work out the first few terms individually, and then deal with the geometric from basics. It's going to be very messy, but doable.
Reply 8
Original post by ghostwalker
That is rather significant.

It's not going to be very nice though, and I'd think you're going to have to work out the first few terms individually, and then deal with the geometric from basics. It's going to be very messy, but doable.

I believe it isn't going to be too significant - if p is above 0.5 to begin with, the probability will reach 1 before the cap applies anyway, and below 0.5 the probability mass is dominated by the first three trials anyway.

Thus, I think it will simplify things to leave the 'geometric cap' out.
Reply 9
Original post by Alex:
I believe it isn't going to be too significant - if p is above 0.5 to begin with, the probability will reach 1 before the cap applies anyway, and below 0.5 the probability mass is dominated by the first three trials anyway.

Thus, I think it will simplify things to leave the 'geometric cap' out.


We seem to have two interpretations of what is meant by an increase of 10%.

I would say multiply by 1.1, but I get the impression you mean add 0.1 to the probability of success.
Reply 10
Original post by ghostwalker
We seem to have two interpretations of what is meant by an increase of 10%.

I would say multiply by 1.1, but I get the impression you mean add 0.1 to the probability of success.

Yes! The second one!
Reply 11
Okay, so here's my probability distribution:

f(x)=
(p+0.0) x=1
(1.0-p)(p+0.1) x=2
(1.0-p)(0.9-p)(p+0.2) x=3
(1.0-p)(0.9-p)(0.8-p)(p+0.3) x=4
(1.0-p)(0.9-p)(0.8-p)(0.7-p)(p+0.4) x=5
(1.0-p)(0.9-p)(0.8-p)(0.7-p)(0.6-p)(0.5-p)^(x-6)(p+0.5) x>5


I'm trying to get the expected number of trials until success.
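
In case it helps with checking any closed form, here's a small numerical sketch of that pmf and its expectation, assuming 0 < p < 0.5 so every step applies (the names pmf and expected_value are just mine):

def pmf(x, p):
    # P(X = x): success probability on trial k is p + 0.1*(k - 1),
    # capped at p + 0.5 from trial 6 onwards.
    fail_so_far = 1.0
    for k in range(1, x):
        fail_so_far *= 1.0 - (p + 0.1 * min(k - 1, 5))
    return fail_so_far * (p + 0.1 * min(x - 1, 5))

def expected_value(p, cutoff=2000):
    # Truncated sum of x * P(X = x); the tail decays like (0.5 - p)^x,
    # so the truncation error is negligible at this cutoff.
    return sum(x * pmf(x, p) for x in range(1, cutoff + 1))

print(expected_value(0.2))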
Reply 12
Original post by Alex:
...


So, the expected number of trials, including the success itself, is simply $\displaystyle\sum_{x=1}^{\infty} x\,P(X=x)$ - which I'm sure you know, so?
Reply 13
Original post by ghostwalker
So, the expected number of trials, including the success itself, is simply $\displaystyle\sum_{x=1}^{\infty} x\,P(X=x)$ - which I'm sure you know, so?

The problem comes at the tail end, where you have a 'sum of a sum', which isn't too easy to work out.

Is there any way of generating this expected value using Markov chains?
Reply 14
Original post by Alex:

(1.0-p)(0.9-p)(0.8-p)(0.7-p)(0.6-p)(0.5-p)^(x-6)(p+0.5) x>5


Not an expert on Markov chains, but they're not necessary here.

I thought you were going to ignore the geometric.

So considering the quoted line, for the purposes of working out expectation we have:

$\displaystyle\sum_{x=6}^{\infty} (1.0-p)(0.9-p)(0.8-p)(0.7-p)(0.6-p)(0.5-p)^{x-6}\,(p+0.5)\,x$

I assume that's correct - not checked it.

We can pull out a lot of the terms as they're constant.

$\displaystyle = (1.0-p)(0.9-p)(0.8-p)(0.7-p)(0.6-p)(0.5-p)^{-6}\sum_{x=6}^{\infty} (0.5-p)^{x}\,(p+0.5)\,x$

What's left in the summation is basically a geometric distribution with probability "p+0.5", bar the first few terms. So, use the standard formula for the expectation of the geometric and subtract the contribution of the missing early terms.
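
For what it's worth, writing $q = 0.5 - p$ (just a label) and substituting $k = x - 6$, the un-factored tail also sums directly:

$\displaystyle\sum_{x=6}^{\infty} (0.5-p)^{x-6}(p+0.5)\,x = \sum_{k=0}^{\infty} (k+6)\,q^{k}(1-q) = \frac{q}{1-q} + 6 = 5 + \frac{1}{p+0.5}$

Multiplying by the constant product $(1.0-p)(0.9-p)(0.8-p)(0.7-p)(0.6-p)$ gives the tail's contribution to the expectation. It also matches the intuition that, conditional on reaching trial 6, five trials have already been used and the remaining wait is geometric with mean $1/(p+0.5)$.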
Reply 15
I've decided to try another approach to see if this simplifies things; for some reason LaTeX breaks for me and adds emoticons everywhere:

p=Probability of initial success

X(n) = the long-run probability that a given trial comes after n consecutive failures since the last success (with n = 5 covering 'five or more'):
X(0) = the long-run probability that the previous trial was a success
X(1)=(1.0-p)X(0)
X(2)=(0.9-p)X(1)
X(3)=(0.8-p)X(2)
X(4)=(0.7-p)X(3)
X(5)=(0.6-p)X(4)+(0.5-p)X(5)

So, unrolling these (the same products as before):
X(0) = the long-run probability that the previous trial was a success.
X(1)=(1.0-p)X(0)
X(2)=(0.9-p)(1.0-p)X(0)
X(3)=(0.8-p)(0.9-p)(1.0-p)X(0)
X(4)=(0.7-p)(0.8-p)(0.9-p)(1.0-p)X(0)
X(5)=(0.6-p)(0.7-p)(0.8-p)(0.9-p)(1.0-p)X(0)+(0.5-p)X(5)
I think the last bit is right for a geometric distribution.

Rearrange X(5):
X(5)=(0.6-p)(0.7-p)(0.8-p)(0.9-p)(1.0-p)X(0)/(0.5+p)

All probabilities add to one:
X(0)+X(1)+X(2)+X(3)+X(4)+X(5)=1
1-X(1)-X(2)-X(3)-X(4)-X(5)=X(0)

Since X(0) is the equivalent per-trial success probability, 1/X(0) should be the expected number of trials for the distribution? I think this is correct.
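
As a sanity check, here's a short sketch comparing 1/X(0) with a Monte Carlo estimate of the mean, assuming 0 < p < 0.5 (the function names are just mine):

import random

def expected_trials_from_x0(p):
    # 1 / X(0), using the normalisation X(0) + X(1) + ... + X(5) = 1
    # and the unrolled recursions above.
    ratios, running = [], 1.0
    for fail in (1.0 - p, 0.9 - p, 0.8 - p, 0.7 - p, 0.6 - p):
        running *= fail
        ratios.append(running)  # X(n) / X(0) for n = 1..5
    ratios[4] /= (0.5 + p)      # X(5) picks up the 1/(0.5 + p) factor
    return 1.0 + sum(ratios)    # = 1 / X(0)

def simulated_mean(p, runs=200000):
    # Monte Carlo estimate of the mean number of trials until success.
    total = 0
    for _ in range(runs):
        trial = 1
        while random.random() >= p + 0.1 * min(trial - 1, 5):
            trial += 1
        total += trial
    return total / runs

print(expected_trials_from_x0(0.2), simulated_mean(0.2))

For p = 0.2 both come out at about 2.96, which agrees with summing x*P(X=x) directly.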
