# Likelihood functions

I have vaguely been learning about likelihood functions and was wondering if there is a method for converting them to probability density functions, specifically for the binomial distribution (for example to calculate confidence intervals). I cannot seem to find any suggestion that this is possible or impossible. Any help?

Last edited by Theloniouss; 1 year ago

#2

(Original post by **Knortfoxx**) I have vaguely been learning about likelihood functions and was wondering if there is a method for converting them to probability density functions, specifically for the binomial distribution (for example to calculate confidence intervals). I cannot seem to find any suggestion that this is possible or impossible. Any help?

Since likelihood functions are derived from probability distributions, you can simply reverse the process and recover the probability density from a likelihood; but this would be rather pointless and is not, I think, what you mean. In general, the answer to your question is no: there are plenty of likelihood functions that integrate to infinity, and therefore cannot be normalized to a probability density. (Take the uniform distribution on [0, θ], for example.)

The properties of likelihood functions lead you off in another direction entirely. Although likelihood functions are not themselves probability distributions, the logarithm of a likelihood ratio is a random variable with a usable sampling distribution. So if you take the likelihood evaluated at one value of the parameter, divide it by the likelihood at another value, and take the logarithm, you do get a random variable. You can then use this fact either to set up a statistical test, or to derive (likelihood-based) confidence intervals. This turns out to be a very good thing to do: by the Neyman-Pearson lemma, this is the most powerful test you can construct (i.e. it is the test that makes the best use of the data you have available).

In fact, minus twice the log likelihood ratio is asymptotically distributed as a chi-squared random variable (Wilks' theorem), making these tests very easy to carry out in practice. The likelihood ratio test is ubiquitous in higher statistics!
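As a concrete sketch of this recipe for the binomial case the OP asked about (all numbers here are made up, and the chi-squared(1) tail probability is computed via the identity sf(x) = erfc(√(x/2)) to stay in the Python standard library):

```python
import math

# Hypothetical data: 62 successes in 100 Bernoulli trials.
n, k = 100, 62
p0 = 0.5          # null hypothesis H0: p = 0.5
p_hat = k / n     # maximum likelihood estimate under the alternative

def binom_loglik(k, n, p):
    # Log of the binomial pmf, via log-gamma for numerical stability.
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

# Likelihood ratio test statistic: -2 * log( L(p0) / L(p_hat) ).
# By Wilks' theorem it is asymptotically chi-squared with 1 df.
stat = -2 * (binom_loglik(k, n, p0) - binom_loglik(k, n, p_hat))

# chi-squared(1) survival function via the complementary error function.
p_value = math.erfc(math.sqrt(stat / 2))

print(stat, p_value)   # stat ≈ 5.82, p ≈ 0.016: reject H0 at the 5% level
```

Inverting this test (collecting all p0 whose test doesn't reject) is exactly how likelihood-based confidence intervals for the binomial proportion are built.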


#3

(Original post by **Gregorius**) Since likelihood functions are derived from probability distributions, you can simply reverse the process and recover the probability density from a likelihood; but this would be rather pointless and is not, I think, what you mean. In general, the answer to your question is no: there are plenty of likelihood functions that integrate to infinity, and therefore cannot be normalized to a probability density. (Take the uniform distribution on [0, θ], for example.)

The properties of likelihood functions lead you off in another direction entirely. Although likelihood functions are not themselves probability distributions, the logarithm of a likelihood ratio is a random variable with a usable sampling distribution. So if you take the likelihood evaluated at one value of the parameter, divide it by the likelihood at another value, and take the logarithm, you do get a random variable. You can then use this fact either to set up a statistical test, or to derive (likelihood-based) confidence intervals. This turns out to be a very good thing to do: by the Neyman-Pearson lemma, this is the most powerful test you can construct (i.e. it is the test that makes the best use of the data you have available).

In fact, minus twice the log likelihood ratio is asymptotically distributed as a chi-squared random variable (Wilks' theorem), making these tests very easy to carry out in practice. The likelihood ratio test is ubiquitous in higher statistics!

While that makes a lot of sense, is it technically possible to simply scale a likelihood function (that doesn't integrate to infinity) and have a probability distribution?


#4

(Original post by **Theloniouss**) While that makes a lot of sense, is it technically possible to simply scale a likelihood function (that doesn't integrate to infinity) and have a probability distribution?

(1) If you follow the frequentist school (where probabilities are considered to be limits of relative frequencies over repeated experiments) then the answer is a flat "no". The reason for this is that in frequentist probability, when you write down a probability distribution that depends on a parameter θ, that parameter is taken to be a (possibly unknown) fixed quantity. It's not a random variable. So when you write something like P(a ≤ θ ≤ b), you're not making a probability statement, as you can only make probability statements about random variables.

(2) On the other hand, if you're a Bayesian, then everything in sight is considered to be a random variable, including that parameter θ. So here you can make probability statements about it. How would we do this? You have Bayes' theorem:

p(θ | x) p(x) = p(x | θ) p(θ)

If you divide both sides of the equation by p(x), then you have a probability distribution built out of the likelihood function p(x | θ). But note: to make it into a probability distribution in a consistent way, we have to multiply the likelihood by that term p(θ), which is called the prior probability of θ.
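To make this concrete for the binomial case from the original question (a sketch with made-up numbers, not from the thread): multiplying the binomial likelihood p^k (1−p)^(n−k) by a Beta(a, b) prior and normalizing yields a Beta(a+k, b+n−k) posterior, and with a flat Beta(1, 1) prior this is exactly the normalized likelihood.

```python
import math

n, k = 10, 7        # hypothetical data: 7 successes in 10 trials
a, b = 1.0, 1.0     # Beta(1, 1), i.e. a uniform prior on p

# Conjugate update: posterior is Beta(a + k, b + n - k).
a_post, b_post = a + k, b + (n - k)

def beta_pdf(p, a, b):
    # Density of the Beta(a, b) distribution at p.
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))

# Sanity check: unlike a bare likelihood, the posterior integrates to 1.
step = 1e-4
integral = sum(beta_pdf(i * step, a_post, b_post) for i in range(1, 10000)) * step
print(a_post, b_post, integral)   # integral ≈ 1
```

Credible intervals for p then come straight from the quantiles of the Beta(8, 4) posterior, which is one standard Bayesian answer to the confidence-interval question in the original post.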
