Algorithmic randomness in empirical data

Authors :
James W. McAllister
Source :
Studies in History and Philosophy of Science Part A. 34:633-646
Publication Year :
2003
Publisher :
Elsevier BV

Abstract

According to a traditional view, scientific laws and theories constitute algorithmic compressions of empirical data sets collected from observations and measurements. This article defends the thesis that, to the contrary, empirical data sets are algorithmically incompressible. The reason is that individual data points are determined partly by perturbations, or causal factors that cannot be reduced to any pattern. If empirical data sets are incompressible, then they exhibit maximal algorithmic complexity, maximal entropy and zero redundancy. They are therefore maximally efficient carriers of information about the world. Since, in algorithmic information theory, a string is algorithmically random just if it is incompressible, the thesis entails that empirical data sets consist of algorithmically random strings of digits. Rather than constituting compressions of empirical data, scientific laws and theories pick out patterns that data sets exhibit with a certain level of noise.
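
The incompressibility claim can be made concrete. In algorithmic information theory, a string s is c-incompressible when K(s) >= |s| - c, where K(s) is the length of the shortest program that outputs s. K is uncomputable, but a general-purpose compressor such as zlib gives a computable upper bound on it, so it can stand in for K in a rough demonstration. The Python sketch below is an illustration of this underlying idea only, not anything taken from the article; the two synthetic "data sets" are assumptions chosen for contrast.

import os
import zlib

def compression_ratio(data: bytes) -> float:
    # Compressed size over original size: close to 0 for highly
    # patterned input, near (or slightly above) 1 for incompressible input.
    return len(zlib.compress(data, 9)) / len(data)

# A law-like, patterned "data set": a simple periodic sequence.
patterned = bytes(i % 7 for i in range(100_000))

# A noise-like "data set": OS randomness standing in for measurements
# shaped partly by pattern-less perturbations, in the article's sense.
noisy = os.urandom(100_000)

print("patterned:", round(compression_ratio(patterned), 3))
print("noisy:    ", round(compression_ratio(noisy), 3))

Typical output is a ratio close to zero for the periodic sequence and slightly above 1 for the random bytes (zlib adds a small header to data it cannot compress), which is the behaviour the abstract attributes to empirical data generally.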

Details

ISSN :
0039-3681
Volume :
34
Database :
OpenAIRE
Journal :
Studies in History and Philosophy of Science Part A
Accession number :
edsair.doi...........5dad3a073bda6ac10acc76b0be91e143
Full Text :
https://doi.org/10.1016/s0039-3681(03)00047-5