(Original post by Ali_1)
I know this sounds like a stupid question lol but I just want to check, as I'm not 100% sure what the exact difference is. If, for example, a ruler measures in cm and the lowest possible value is 0.1 cm (which is a mm), does that mean its precision is 0.05 or 0.1? And could the accuracy error be worked out by taking numerous readings and finding the range? So if the range was 0.004, would that be the accuracy error?
Thanks in advance
The precision of the instrument is related to the smallest division on the scale that you can read. For a stopwatch it will depend on what type it is (analogue/digital).
The old analogue ones often had a scale going down to one tenth of a second.
A ruler whose smallest division is 1 mm can usually be read to ±0.5 mm.
There is no fixed rule, though. It's about how confident you are (within reason) in reading the scale.
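If it helps to see it written out, here's a minimal Python sketch of that half-division convention (the numbers are just the examples above, and the convention itself is a rule of thumb, not a law):

# Reading uncertainty taken as half the smallest scale division (a convention).
def reading_uncertainty(smallest_division):
    return smallest_division / 2

print(reading_uncertainty(1.0))   # ruler with 1 mm divisions -> +/- 0.5 mm
print(reading_uncertainty(0.1))   # analogue stopwatch read to 0.1 s -> +/- 0.05 s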
Accuracy is about two things:
* whether the instrument is actually working correctly (giving a true value), and
* the random errors you make, or uncertainty, in actually performing the measurement.
Taking numerous readings (and finding the mean, for example) is a way of increasing the accuracy by reducing the random error.
You cannot increase accuracy this way if the instrument is faulty.
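A rough Python sketch of that point, with completely made-up numbers (a 0.45 s "true" fall time, a watch that reads 0.10 s high, and 0.03 s of reaction-time scatter):

# Averaging removes random scatter but not a systematic fault.
import random

TRUE_TIME = 0.45   # made-up true fall time, in seconds
OFFSET = 0.10      # made-up fault: this watch always reads 0.10 s high

readings = [TRUE_TIME + OFFSET + random.gauss(0, 0.03) for _ in range(100)]

mean = sum(readings) / len(readings)
print(f"mean of 100 readings: {mean:.3f} s")
# The mean settles near 0.55 s, not the true 0.45 s: more readings shrink
# the random error, but the systematic error stays put.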
For example, if you have a stopwatch that reads to 0.01 s, that would be its precision.
If you used this to time a mass falling off a table, you would take a number of measurements and find a mean.
These measurements could be, for example: 0.40 s, 0.43 s, 0.45 s, 0.46 s, 0.48 s.
Clearly, with this spread of values (0.08 s), the uncertainty (random error) in actually making the measurements (±0.04 s) is greater than the precision (±0.01 s) of the watch. This would be due to human reaction time.
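That half-range calculation in Python, using the illustrative readings above:

readings = [0.40, 0.43, 0.45, 0.46, 0.48]   # illustrative times, in seconds

spread = max(readings) - min(readings)      # 0.08 s
uncertainty = spread / 2                    # +/- 0.04 s (half the range)
precision = 0.01                            # smallest increment of the watch, in s

print(f"spread = {spread:.2f} s")
print(f"random error +/- {uncertainty:.2f} s vs precision +/- {precision:.2f} s")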