Statistics - Hazard Ratios and Confidence Intervals?

Nettled
#1 · Thread starter · 3 months ago
I've tried interpreting these but just need to double-check whether I'm along the right lines, and how to obtain p values from them.

For a treatment with a study size of 4,000, these are the results:

(hazard ratio 0.76 [95% confidence interval 0.60-0.97])

Am I right in assuming they are at 24% less risk? Would it be an acceptable study, given that the confidence interval is small? How would I calculate the p value, as it needs to be < 0.001 to be significant?

The second result within the study gave:

(hazard ratio 0.58 [95% confidence interval 0.36-0.93])

The hazard ratio has a CI of [0.36-0.93], which is not ideal, but as the sample size is very large it is acceptable. It reduces the risk of AMD by 42% (?)
DFranklin
#2 · 3 months ago
This isn't my area, but:

That hazard ratio indicates that in the study it was observed that patients in the treatment group were at 24% less risk (i.e. had 24% fewer adverse events).

I'm not sure why you say it's a small confidence interval, because to me it seems quite large. Looking at it I'm not sure you can say "patients in general are at 24% less risk" (because looking at the 95% confidence interval, it seems likely there's at least a 2% chance that the true hazard ratio is > 1 in which case they are not at lower risk at all). Certainly there's a >0.001 probability this effect could be solely from chance.

Similarly in the 2nd case it's somewhat overstating the results to say the risk reduction is 42% when there's a 2% chance the reduction is < 7%. Again, there's clearly a > 0.001 probability of this being by chance.

If you want to calculate exact p values you might find this link helpful: https://www.bmj.com/content/343/bmj.d2304
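For illustration, here is a minimal sketch of the calculation that link describes: take logs of the hazard ratio and its CI bounds (the CI is symmetric on the log scale), recover the standard error from the CI width, and convert the resulting z statistic to a two-sided p value. The function name is my own, and this assumes a standard Wald-type 95% CI; the HR/CI figures are the ones quoted above.

```python
import math

def p_from_hr_ci(hr, lower, upper):
    """Approximate two-sided p value from a hazard ratio and its 95% CI."""
    est = math.log(hr)  # estimate on the log scale, where the CI is symmetric
    se = (math.log(upper) - math.log(lower)) / (2 * 1.96)  # 95% CI = est +/- 1.96*SE
    z = abs(est / se)
    return math.erfc(z / math.sqrt(2))  # two-sided standard normal tail area

print(p_from_hr_ci(0.76, 0.60, 0.97))  # roughly 0.025
print(p_from_hr_ci(0.58, 0.36, 0.93))  # roughly 0.024
```

Both come out around p ≈ 0.02-0.03: comfortably below 0.05, but nowhere near 0.001, which matches the point above about the width of the intervals.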

I also found this article: https://www.bmj.com/content/bmj/343/...rting.full.pdf which talks more about what and how things are typically reported. It does seem quite standard to provide things like hazard ratios with confidence intervals even when not statistically significant, so you may not actually need the level of significance you think to be publishable.

This is all stuff you should really be discussing with your supervisor IMHO - what's acceptable for publication isn't really a question of mathematics.
Nettled
#3 · Thread starter · 3 months ago
(Original post by DFranklin)
…
Thank you so much for replying and for the article!

On second thoughts the CI is quite wide for both of them so it does make the study less valuable.

"when there's a 2% chance the reduction is < 7%. Again, there's clearly a > 0.001 probability of this being by chance."

^^ I was just wondering how you got this?

The exact p value is not important, as I know from the study it was p < 0.05, so it is statistically significant.
DFranklin
#4 · 3 months ago
(Original post by Nettled)
"when there's a 2% chance the reduction is < 7%. Again, there's clearly a > 0.001 probability of this being by chance."

^^ I was just wondering how you got this?
On the assumption the 95% CI means 2.5% chance of being above the upper bound (which is only a 7% reduction). I "fudged" to 2% in case that assumption wasn't quite right.
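That assumption is easy to check numerically: a two-sided 95% interval built from ±1.96 standard errors leaves about 2.5% of the probability in each tail of a normal distribution. A quick sketch (the function name is mine):

```python
import math

def upper_tail(z):
    """P(Z > z) for a standard normal variable Z."""
    return 0.5 * math.erfc(z / math.sqrt(2))

print(upper_tail(1.96))  # about 0.025, i.e. ~2.5% chance above the upper CI bound
```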

"The exact p value is not important, as I know from the study it was p < 0.05, so it is statistically significant."
You said you wanted p < 0.001 (and said you wanted to know how to calculate it).