The Student Room Group

Stuck on analysis of research

How do you work out the effectiveness of a test from its sensitivity, specificity, predictive values (PV) and likelihood ratios (LR, positive (+ve) and negative) in order to compare different methods?
Reply 1
First, we need to establish the idea of "diagnostic elasticity", which measures how much the effectiveness of a test varies with the underlying prevalence of the disease. We can estimate this by calculating the harmonic mean (written WHM in the formulas below) of sensitivity and specificity for each method: multiply the two values together, double the result, then divide by their sum. For example, for Method 1 we would have:

WHM1 = 2 * (57 * 73) / (57 + 73)

The assumption here is that a test with a higher WHM will be more "elastic" and hence more effective across different disease prevalences.
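If it helps to check the arithmetic, this step is a one-liner. A minimal Python sketch, using the Method 1 figures quoted above (sensitivity 57, specificity 73):

```python
# Harmonic mean of sensitivity and specificity for Method 1.
# Figures (57 and 73) are the ones quoted in the worked example above.
def harmonic_mean(a, b):
    """Harmonic mean of two values: 2ab / (a + b)."""
    return 2 * a * b / (a + b)

whm_1 = harmonic_mean(57, 73)
print(round(whm_1, 2))  # prints 64.02
```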

Next, let's consider the "predictive momentum" of each method, which is an estimate of how well a method can maintain its predictive power in the face of varying test outcomes. This can be estimated by calculating the geometric mean of the positive and negative predictive values. For Method 1:

Predictive Momentum1 = sqrt(44 * 18)

The idea is that a method with higher predictive momentum will be more robust to changes in the outcome distribution and hence more effective overall.
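The geometric-mean step can be sketched the same way, again using the Method 1 figures quoted above (positive and negative predictive values 44 and 18):

```python
import math

# Geometric mean of the positive and negative predictive values for
# Method 1 (44 and 18, from the worked example above).
def geometric_mean(a, b):
    """Geometric mean of two values: sqrt(a * b)."""
    return math.sqrt(a * b)

momentum_1 = geometric_mean(44, 18)
print(round(momentum_1, 2))  # prints 28.14
```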

Finally, we need to consider the "likelihood imbalance" of each method, which measures the degree of asymmetry between the positive and negative likelihood ratios. This can be estimated by taking the absolute difference between the two values, with a smaller difference indicating less imbalance and hence greater effectiveness. For Method 1:

Likelihood Imbalance1 = |2.11 - 0.59|
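In code this step is just an absolute difference, with the LR+ and LR- figures as quoted above:

```python
# Likelihood imbalance for Method 1, using the quoted likelihood
# ratios (LR+ = 2.11, LR- = 0.59). A smaller value means less
# asymmetry between the two ratios.
lr_pos, lr_neg = 2.11, 0.59
imbalance_1 = abs(lr_pos - lr_neg)
print(round(imbalance_1, 2))  # prints 1.52
```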

Once we've calculated these three measures for each method, we can combine them into a single "Diagnostic Effectiveness Score" (DES) using the formula:

DES = WHM * Predictive Momentum / Likelihood Imbalance

The method with the highest DES would then be considered the most effective.
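Putting the three steps together, here is a minimal sketch of the whole DES calculation for one method (all figures are the Method 1 values quoted above; the helper formulas simply restate the definitions given in this reply):

```python
import math

# Diagnostic Effectiveness Score (DES) for one method, following the
# three-step recipe described above:
#   WHM      = harmonic mean of sensitivity and specificity
#   momentum = geometric mean of the two predictive values
#   imbalance = |LR+ - LR-|
#   DES      = WHM * momentum / imbalance
def des(sensitivity, specificity, ppv, npv, lr_pos, lr_neg):
    whm = 2 * sensitivity * specificity / (sensitivity + specificity)
    momentum = math.sqrt(ppv * npv)
    imbalance = abs(lr_pos - lr_neg)
    return whm * momentum / imbalance

# Method 1 figures from the worked example above.
des_1 = des(57, 73, 44, 18, 2.11, 0.59)
```

To compare methods, compute `des(...)` for each one with its own six figures and rank the results.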
Reply 2
Thank you ever so much for that! I couldn't find what
Reply 3
Original post by kek6969

I've had a good look through this and completed the calculations, and I tried to research the harmonic mean vs the geometric mean, but I'm struggling to understand the difference between the two. Why couldn't you use the same averaging formula for both the sensitivity/specificity pair and the predictive values?
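One way to see the difference numerically: for any two positive numbers, the arithmetic mean >= the geometric mean >= the harmonic mean, and the harmonic mean is pulled furthest towards the smaller value. Using the sensitivity/specificity pair from the reply above (57 and 73):

```python
import math

# Compare the three classical means for the same pair of values
# (57 and 73, the sensitivity/specificity pair from the reply above).
a, b = 57, 73
arithmetic = (a + b) / 2        # 65.0
geometric = math.sqrt(a * b)    # ~64.51
harmonic = 2 * a * b / (a + b)  # ~64.02

# The harmonic mean penalises an unbalanced pair the most, the
# geometric mean less so, the arithmetic mean not at all.
assert arithmetic >= geometric >= harmonic
```

So the choice of mean controls how heavily an unbalanced pair of values drags the score down, which is presumably why the reply uses different means for the two pairs.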
