Is a stopwatch accurate or precise?
I know this sounds like a stupid question lol, but I just want to check, as I'm not 100% sure what the exact difference is. If, for example, a ruler measures in cm and the smallest division is 0.1 cm (i.e. 1 mm), does that mean its precision is ±0.05 cm or ±0.1 cm? And could the accuracy error be worked out by taking numerous readings and finding the range? So if the range was 0.004, would that be the accuracy error?
Thanks in advance
The old analogue ones often had a scale going down to one tenth of a second.
A ruler with the smallest division 1mm can usually be read to ± 0.5mm
There is no fixed rule. It's about how confident you are (within reason) at reading the scale.
Accuracy is about two things:
*whether the instrument is actually working correctly (giving a true value), and
*the random errors, or uncertainty, in actually performing the measurement.
Taking numerous readings (and finding the mean, for example) is a way of increasing the accuracy by reducing the random error.
You cannot increase accuracy this way if the instrument is faulty.
For example, if you have a stopwatch that reads to 0.01s that would be its precision.
If you used this to time a mass falling off a table you would take a number of measurements and find a mean.
These measurements might, for example, have a spread (range) of 0.08 s.
Clearly, with this spread of values, the uncertainty (random error) of actually making the measurements (±0.04 s) is greater than the precision (±0.01 s) of the watch. This would be due to human reaction time.
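To make that concrete, here's a quick Python sketch. The readings are made-up values, chosen only so that their spread matches the 0.08 s example above:

```python
# Hypothetical stopwatch readings (in seconds) - invented for illustration,
# chosen so the spread matches the 0.08 s example in the post.
readings = [0.42, 0.46, 0.50, 0.44, 0.48]

mean = sum(readings) / len(readings)
spread = max(readings) - min(readings)   # the range of the readings
uncertainty = spread / 2                 # half-range estimate of the random error

print(f"mean = {mean:.3f} s")
print(f"range = {spread:.2f} s, uncertainty = +/-{uncertainty:.2f} s")
```

With these numbers the half-range (±0.04 s) comes out well above the 0.01 s resolution of the watch, which is exactly the point being made.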
Ah okay so if the question asks for the accuracy error, could you take repeated readings, and that error would be the range? Or not?
(Try your own here http://www.humanbenchmark.com/tests/...time/index.php
mine was 250ms)
So the ability of the watch's scale to measure a hundredth of a second is not really the issue if you can't click the button faster than two tenths of a second.
Even worse, if you time something you have to press at the start and at the end, so you introduce that uncertainty twice: a maximum possible error of 4/10 of a second.
However, it's possible that these errors at the start and end could cancel out if you press too late by the same amount each time.
And that is the nature of these errors, they are "random" and are a part of the process of taking the measurement.
You reduce this by taking the mean of a number of measurements and recording the range of values. The range of values, as I mentioned in my other post, is one way of expressing this random error (or uncertainty) in a measurement.
Accuracy is about how near the mean value is to the true value. The more measurements you take (so long as the instrument is not faulty) the nearer it should get.
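You can see this happening with a small simulation. Everything here is hypothetical: a made-up "true" time and a reaction-time error of up to ±0.04 s on each reading. The mean of many readings lands closer to the true value than any single reading is guaranteed to:

```python
import random

random.seed(1)

TRUE_VALUE = 0.45       # hypothetical "true" time in seconds (assumption)
REACTION_ERROR = 0.04   # assumed maximum random reaction-time error per reading

def take_readings(n):
    """n readings, each the true value plus a random reaction-time error."""
    return [TRUE_VALUE + random.uniform(-REACTION_ERROR, REACTION_ERROR)
            for _ in range(n)]

# Mean of the readings for increasing sample sizes
means = {n: sum(take_readings(n)) / n for n in (5, 50, 500)}
for n, m in means.items():
    print(f"n = {n:3d}: mean = {m:.3f} s, off by {abs(m - TRUE_VALUE):.3f} s")
```

This only works because the errors are random and roughly symmetric; as the post says, it does nothing for a faulty (systematically wrong) instrument.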
I hope this helps.
There are other ways. For a detailed look at this go here
Any piece of equipment can be precise or imprecise, accurate or inaccurate.
Accuracy and precision exist together: something can be precise and accurate, precise but inaccurate, accurate but imprecise, or both imprecise and inaccurate.
Definitions (Taken from AQA):
1. Precise - Precise measurements are ones in which there is very little spread about the mean value. Precision depends only on the extent of random errors; it gives no indication of how close results are to the true value.
2. Accurate - A measurement result is considered accurate if it is judged to be close to the true value.
Random errors can occur from:
Environment - uncontrollable "random" environmental sources (say the Earth's magnetic field... it is always changing a little).
Equipment - lack of equipment sensitivity. An instrument may not be able to respond to, or indicate, a change in some quantity that is too small, or the observer may not be able to discern the change (a beaker vs. a measuring cylinder, a graduated sand timer vs. a stopclock) - this is resolution.
Human - timing an event, such as the time for a tennis player to serve a ball to the point it hits the other side of the court. Sometimes you'll over-time, sometimes under. Some people may be more precise than others.
Random Errors can be "smoothed out" by taking many samples, or by using more precise equipment/a better human (biological equipment I guess).
A nice quick experiment to show how this works is:
1. Zero a balance with a dry 250cm3 beaker on it.
2. Face the balance's screen away from you, towards a partner, so you cannot see the result.
3. Fill the beaker to the 50cm3 mark by eye.
4. Your partner records the result (the mass of water). They must be as unbiased as possible (no hinting).
5. Take a good 20 repeats, drying the beaker out and re-zeroing the balance each time.
Now plot a bar graph of the first few readings, and another of all 20 (you can use grouped "multiple bars" so they can be compared easily).
You should notice that with the sample size of 20, measurements are generally "closer" to the mean. I.e. you have increased the precision: your measurements are closer to the mean value.
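If you'd rather not get a balance wet, here's a sketch of the same experiment in Python. The masses are simulated (a made-up normal distribution around 50 g, since 1 cm3 of water is about 1 g), and the "bar graph" is just a text tally:

```python
import random
from collections import Counter

random.seed(0)

# Simulated fill-by-eye masses (in g) for a nominal 50cm3 of water.
# The spread of 2 g is an assumption standing in for human judgement.
masses = [round(random.gauss(50, 2)) for _ in range(20)]

# A quick text bar chart: one bar per recorded mass
for value, count in sorted(Counter(masses).items()):
    print(f"{value:3d} g | {'#' * count}")
```

The tallest bars cluster around the mean, with fewer readings out in the tails, which is the pattern the real experiment should show.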
So...back to Accuracy.
Accuracy is how close you are to the "true value".
What can affect this?
Systematic Error - These cause readings to differ from the true value by a consistent amount each time a measurement is made; it can be a single fixed offset, or proportional (double, x1.3, half, etc.). Sources of systematic error can include the environment, methods of observation or instruments used. Systematic errors cannot be dealt with by simple repeats. If a systematic error is suspected, the data collection should be repeated using a different technique or a different set of equipment, and the results compared.
An example would be a plastic measuring cylinder that has been misshapen by being placed in boiling water and has set slightly larger than it was originally.
Every volume poured with it will then be over the wanted volume.
Another example is someone who attempts to stop a stopwatch when someone crosses a finishing line. The person may consistently stop the watch too late. This is bias.
Sometimes you can get "drift". Say a room with 30 pupils in it doing an experiment. The room may get hotter and hotter as the experiment goes on, causing the results to "drift" away from the true value.
Now back to the question at hand....
Comparing the Accuracy and Precision of the stopwatch requires some calculation.
You can work out the precision by doing a standard deviation calculation. This is usually quoted as the mean value ± a "confidence limit".
I.e. 40 ± 3 cm3
A common confidence limit is 3 sigma (about 99.7% of results will be within 3cm3 of the mean).
For accuracy, you need to validate your experiment using different instrumentation, a different technique or simply using the most "accepted true value".
So say you were timing an apple falling off a building to work out the acceleration due to gravity... assuming the only random error was the timing, you get the results:
63, 65, 59, 62, 61, 60, 61. The mean is 61.6 and the 95% CI is about ±1.5.
So with this sample size, under the same conditions, you can be 95% confident that the true mean lies between about 60.1 and 63.1.
So a "variability" of around 3 seconds - about 5% of the mean.
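Checking those numbers with Python's statistics module (I've used the normal approximation, z = 1.96, for the 95% interval; a t-value would widen it slightly for a sample this small):

```python
import math
import statistics

# The seven timing readings from the example above
times = [63, 65, 59, 62, 61, 60, 61]

mean = statistics.mean(times)
sd = statistics.stdev(times)                # sample standard deviation
ci95 = 1.96 * sd / math.sqrt(len(times))    # 95% CI half-width, normal approx.

print(f"mean = {mean:.1f}, 95% CI = +/-{ci95:.1f}")
```

That gives a mean of about 61.6 and a half-width of about 1.5, matching the figures quoted.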
Let's say they tried to weigh the apple using an old balance that had not been serviced. They weighed the apple 1000 times and got a result for its mass of 990 ± 1 g.
The thing is, the balance is old and un-calibrated, and the apple (checked later using a calibrated balance) was actually 900 g.
The error % due to accuracy is about 10%
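The arithmetic behind that 10%:

```python
measured = 990   # g, mean reading from the un-calibrated balance
true = 900       # g, value from the calibrated balance

# Systematic (accuracy) error as a percentage of the true value
accuracy_error_pct = 100 * (measured - true) / true
print(f"accuracy error = {accuracy_error_pct:.0f}%")
```

Note that the tiny ±1 g spread (great precision, thanks to 1000 repeats) did nothing to fix the 90 g systematic offset.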
So the accuracy of the experiment is more of a concern than the random error in the timing.