I have 10 DS18B20 temperature sensors and want to measure two temperatures fairly accurately. They can provide a reading to 1/16th of a degree Celsius but have an accuracy of only ±0.5 °C. If I connect all 10 up at the same time and log their readings, can I assume that the average reading is nearest the correct reading? I.e. if I choose the two sensors nearest the average, am I more likely to get better accuracy?
I don't think you can assume the temperature error is random. It may be caused by manufacturing tolerances and calibration errors, so a particular group may all have errors in the same direction.
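To see why a shared bias defeats averaging, here is a quick simulation (all numbers are illustrative, not measured): ten simulated sensors quantise to the DS18B20's 1/16 °C step, once with purely random offsets within the ±0.5 °C spec and once with a common +0.3 °C manufacturing bias. Averaging helps in the first case but simply reproduces the bias in the second.

```python
import random

random.seed(1)

TRUE_TEMP = 25.0          # actual temperature, degrees C (made up)
N_SENSORS = 10
LSB = 1.0 / 16.0          # DS18B20 12-bit resolution step

def reading(offset_c):
    """Simulate one sensor: true temp + fixed offset, quantised to 1/16 C."""
    return round((TRUE_TEMP + offset_c) / LSB) * LSB

# Case 1: purely random offsets within the +/-0.5 C spec.
random_offsets = [random.uniform(-0.5, 0.5) for _ in range(N_SENSORS)]
avg_random = sum(reading(o) for o in random_offsets) / N_SENSORS

# Case 2: a shared bias of +0.3 C plus a little random scatter.
biased_offsets = [0.3 + random.uniform(-0.1, 0.1) for _ in range(N_SENSORS)]
avg_biased = sum(reading(o) for o in biased_offsets) / N_SENSORS

print(f"random errors: average reads {avg_random:.3f} C (true {TRUE_TEMP})")
print(f"shared bias:   average reads {avg_biased:.3f} C (true {TRUE_TEMP})")
```

In the biased case the average sits near 25.3 °C no matter how many sensors you add, which is the scenario to worry about if the batch was calibrated together.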
My water boils at 94.2 °C. Maybe you can calibrate at 0 °C and your local boiling point.
Boiling Point of Water at Different Altitudes

Altitude ft (m)      Boiling Point °F   Boiling Point °C
0 ft (0 m)           212 °F             100 °C
500 ft (152 m)       211 °F             99.5 °C
1000 ft (305 m)      210 °F             99 °C
1500 ft (457 m)      209 °F             98.5 °C
2000 ft (610 m)      208 °F             98 °C
2500 ft (762 m)      207 °F             97.5 °C
3000 ft (914 m)      206 °F             97 °C
3500 ft (1067 m)     205.5 °F           96.5 °C
4000 ft (1219 m)     204 °F             95.5 °C
4500 ft (1372 m)     203.5 °F           95 °C
5000 ft (1524 m)     202 °F             94.5 °C
5500 ft (1676 m)     201.5 °F           94 °C
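If you do calibrate against an ice bath and your local boiling point, the correction is just a straight line through those two fixed points. A minimal sketch, assuming a linear sensor error between the two points; the `two_point_calibration` helper and all the readings are invented for illustration:

```python
def two_point_calibration(raw_ice, raw_boil, boil_true):
    """Return a function mapping raw readings to corrected temperatures,
    assuming the sensor error is linear between the two fixed points.
    The ice point is taken as 0 C by definition."""
    scale = boil_true / (raw_boil - raw_ice)
    return lambda raw: (raw - raw_ice) * scale

# Example: a sensor reads 0.4 C in an ice bath and 94.6 C in boiling
# water, while water actually boils at 94.2 C at this altitude.
correct = two_point_calibration(raw_ice=0.4, raw_boil=94.6, boil_true=94.2)
print(correct(50.0))
```

Each sensor gets its own correction function; the two fixed points only need to be stable while that sensor is in the bath.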
As others have said (see below), the DS18B20 is a low-accuracy, high-precision device, so averaging may not provide the accuracy you need.
I have generally run them in 11- or 12-bit mode, so I have no idea whether running them in 9-bit mode gives better accuracy, as that author states.
It seems the RTD is the most accurate, but again, as the PDF shows, it is all about trade-offs, depending on your needs.
The biggest problem when talking about 0.1 °C or 0.2 °C accuracy is finding a reference to work against; freezing and boiling points sound obvious but are not very precise to use in DIY terms.
I generally use a calibrated analogue thermometer with a quoted 0.2 °C accuracy as my reference point, or, if you are lucky, access to some lab equipment.
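With a trusted reference like that, a simple single-point correction is to log all the sensors alongside the reference at one temperature and store each sensor's offset. A minimal sketch with invented numbers (the `per_sensor_offsets` helper is not from any library):

```python
def per_sensor_offsets(readings, reference):
    """Given simultaneous readings from several sensors and a trusted
    reference temperature, return the correction to add to each sensor."""
    return [reference - r for r in readings]

# Hypothetical simultaneous readings against a 0.2 C-accurate reference
# thermometer showing 21.5 C.
readings = [21.1, 21.7, 21.4]
offsets = per_sensor_offsets(readings, 21.5)

# Later, apply each sensor's stored offset to its raw reading.
corrected = [r + o for r, o in zip(readings, offsets)]
```

This only removes the offset at one temperature, so it works best when your measurements stay near the temperature you calibrated at; the two-fixed-point approach above covers a wider range.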