# Likelihood functions

#1
I have vaguely been learning about likelihood functions and was wondering if there is a method for converting them to probability density functions, specifically for the binomial distribution (for example to calculate confidence intervals). I cannot seem to find any suggestion that this is possible or impossible. Any help?
Last edited by Theloniouss; 1 year ago
1 year ago
#2
(Original post by Knortfoxx)
I have vaguely been learning about likelihood functions and was wondering if there is a method for converting them to probability density functions, specifically for the binomial distribution (for example to calculate confidence intervals). I cannot seem to find any suggestion that this is possible or impossible. Any help?
Since likelihood functions are derived from probability distributions, you can simply reverse the process and recover the probability density from a likelihood; but this would be rather pointless and is not, I think, what you mean. In general, the answer to your question is no; there are plenty of likelihood functions that integrate to infinity, and therefore cannot be normalized to be a probability density. (Take the likelihood from a single observation x of the uniform distribution on [0, θ], for example: it equals 1/θ for θ ≥ x, and the integral of 1/θ over θ ≥ x diverges.)
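As a quick numerical sketch of that divergence (my own illustration, with an arbitrary observed value x = 2): the likelihood of a single draw from Uniform[0, θ] is 1/θ for θ ≥ x, and its integral over θ keeps growing as the upper cutoff increases, so no rescaling can make it integrate to one.

```python
import math

x = 2.0  # a single observation, assumed drawn from Uniform[0, theta]

def likelihood(theta):
    """L(theta) = 1/theta for theta >= x, and 0 otherwise."""
    return 1.0 / theta if theta >= x else 0.0

def integrate(upper, steps=100_000):
    """Crude midpoint Riemann sum of the likelihood over [x, upper]."""
    width = (upper - x) / steps
    return sum(likelihood(x + (i + 0.5) * width) for i in range(steps)) * width

# The exact integral is log(upper / x), which diverges as the cutoff grows.
for upper in (10.0, 1_000.0, 1_000_000.0):
    print(f"integral up to {upper:>11,.0f}: {integrate(upper):.2f}")
```

The numeric sums track log(upper/x), so the total mass is unbounded; this is what blocks normalization.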

The properties of likelihood functions lead you off in another direction entirely. Although likelihood functions are not themselves probability distributions, ratios of likelihoods do have a well-defined sampling distribution. So if you take the likelihood evaluated at one value of the parameter, divide it by the likelihood at another value, and take the logarithm, you do get a random variable. You can then use this fact either to set up a statistical test or to derive (likelihood-based) confidence intervals. This turns out to be a very good thing to do: by the Neyman-Pearson lemma, this statistical test is the most powerful test you can construct (i.e. it is the test that makes the best use of the data you have available).

In fact −2 times the log likelihood ratio is asymptotically distributed as a χ² random variable (Wilks' theorem), making these tests very easy to carry out in practice. The likelihood ratio test is ubiquitous in higher statistics!
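To make this concrete for the binomial case asked about in the original post, here is a sketch (the sample size and counts are my own illustrative choices, not from the thread) of a likelihood-ratio confidence interval: the approximate 95% interval consists of every p that the likelihood ratio test retains, i.e. every p with 2[ℓ(p̂) − ℓ(p)] ≤ 3.84, the 95th percentile of χ² with one degree of freedom.

```python
import math

n, k = 50, 18    # illustrative data: 18 successes in 50 trials
p_hat = k / n    # maximum-likelihood estimate of the success probability

def log_lik(p):
    """Binomial log-likelihood in p, dropping the constant C(n, k) term."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

CUTOFF = 3.84  # 95th percentile of the chi-squared(1) distribution

# Scan a grid of candidate p values; keep those the LR test does not reject.
grid = [i / 10_000 for i in range(1, 10_000)]
kept = [p for p in grid if 2 * (log_lik(p_hat) - log_lik(p)) <= CUTOFF]
print(f"MLE: {p_hat:.3f}, approx 95% CI: ({kept[0]:.3f}, {kept[-1]:.3f})")
```

The grid scan is deliberately naive; in practice you would solve 2[ℓ(p̂) − ℓ(p)] = 3.84 numerically for the two endpoints, but the inversion-of-a-test idea is the same.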
#3
(Original post by Gregorius)
While that makes a lot of sense, is it technically possible to simply scale a likelihood function (that doesn't integrate to infinity) and have a probability distribution?
1 year ago
#4
(Original post by Theloniouss)
While that makes a lot of sense, is it technically possible to simply scale a likelihood function (that doesn't integrate to infinity) and have a probability distribution?
If a function integrates to one (or if it has a finite integral that can be normalized to one) then you can consider it to be a probability distribution. But the interesting question is whether this probability distribution relates in some probabilistic sense to what you started with. There are two answers to this, depending on which school of probability interpretation you follow:

(1) If you follow the frequentist school (where probabilities are considered to be limits of repeated experiments) then the answer is a flat "no". The reason for this is that in frequentist probability, when you write down a probability distribution that depends on a parameter θ, that parameter is taken to be a (possibly unknown) fixed quantity. It's not a random variable. So when you write down the likelihood L(θ) you're not making a probability statement about θ, as you can only make probability statements about random variables.

(2) On the other hand, if you're a Bayesian, then everything in sight is considered to be a random variable, including that parameter θ. So here you can make probability statements about it. How would we do this? You have Bayes' theorem:

p(θ | x) p(x) = p(x | θ) p(θ)

If you divide both sides of the equation by p(x), then you have a probability distribution p(θ | x) built out of the likelihood function p(x | θ). But note: to make it into a probability distribution in a consistent way, we have to multiply it by that term p(θ), which is called the prior probability of θ.
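A small numerical sketch tying both replies together (the data here are my own illustrative numbers): for a binomial likelihood with k successes in n trials, rescaling L(p) to integrate to one over p in [0, 1] gives exactly the Bayesian posterior under a uniform (flat) prior, namely a Beta(k + 1, n − k + 1) density.

```python
import math

n, k = 10, 7  # illustrative data: 7 successes in 10 trials

def likelihood(p):
    """Binomial likelihood of the data as a function of p."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

# Normalizing constant: integral of L(p) over [0, 1] by a midpoint sum.
steps = 100_000
width = 1.0 / steps
Z = sum(likelihood((i + 0.5) * width) for i in range(steps)) * width

# Analytically, the integral of C(n,k) p^k (1-p)^(n-k) dp equals 1/(n+1).
print(f"numeric Z = {Z:.6f}, exact 1/(n+1) = {1 / (n + 1):.6f}")

def beta_pdf(p, a, b):
    """Beta(a, b) density; the flat-prior posterior is Beta(k+1, n-k+1)."""
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return const * p ** (a - 1) * (1 - p) ** (b - 1)

# The rescaled likelihood matches the Beta posterior pointwise.
for p in (0.3, 0.5, 0.7, 0.9):
    print(f"p={p}: scaled L = {likelihood(p) / Z:.4f}, "
          f"Beta pdf = {beta_pdf(p, k + 1, n - k + 1):.4f}")
```

So the "technically possible" rescaling from post #3 is exactly the Bayesian answer from point (2) when the prior p(θ) is flat; with a non-flat prior the posterior and the rescaled likelihood differ.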