The goal of this column is to stimulate some thinking about the nature of science and, therefore, of skeptical inquiry. Usually I focus on a particular aspect of the scientific method, sometimes discussing a single experiment, to try to learn something about how science works. Occasionally, however, it pays to step back and take a broad look at the entire forest rather than concentrating on individual trees. In what follows, therefore, I will provide a very short history of the major ideas in the philosophy of science, which the reader can use as a handy reference and as a key to reading past and future columns on the subject.
The first philosopher to attempt to ground what today we call science in a rigorous methodology was Aristotle (384-322 B.C.), who emphasized deduction, i.e., the process by which one reaches a conclusion from specified premises that are assumed to be true. Deduction is the basis of logic, but it turns out to be of much more limited use in science because, while it is an excellent way of working out the implications of a set of premises, it does not by itself lead to the discovery of new facts about the physical world. The classic syllogism that concludes "Socrates is mortal" from "all men are mortal" and "Socrates is a man" only makes explicit what was already contained in the premises.
We have to wait until the seventeenth century for Francis Bacon (1561-1626) to propose induction as the core of the scientific method. For Bacon, we make generalizations about the world by building on a steadily expanding base of observations, from which we extrapolate and make predictions: having seen the sun rise every morning so far, we infer that it will rise again tomorrow. The problem with induction is that, unlike deduction, it cannot yield certain knowledge, only something more akin to an educated guess based on past experience (see this column, May/June 2003 and March/April 2004).
During the twentieth century things moved quickly, with several major contributions to our understanding of how science works appearing within the span of a few decades. Karl Popper (1902-1994) argued that science makes progress not through the confirmation of theories but by way of their falsification. Because more than one theory can always account for the available facts, Popper reckoned that a theory can never be shown to be true; if the facts contradict a theory's predictions, however, the theory must be discarded, and it is through such eliminations that science makes progress.
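The logic behind this asymmetry is worth spelling out; what follows is a schematic sketch in standard propositional notation, not Popper's own symbolism. If a theory T predicts an observation O, then seeing O does not establish T, because a rival theory might predict O just as well; but failing to see O refutes T by modus tollens:

```latex
% Confirmation is inconclusive (affirming the consequent is invalid):
(T \to O),\; O \;\not\vdash\; T
% Falsification is decisive (modus tollens is valid):
\qquad\text{whereas}\qquad
(T \to O),\; \neg O \;\vdash\; \neg T
```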
Imre Lakatos (1922-1974), one of Popper's students, realized that even falsificationism wouldn't do, because scientists in fact don't throw away a theory at the first sign of difficulty. This is reasonable, since there may be other explanations for why a given prediction failed, including problems with the conditions of an experiment, with the analysis of the data, or with relatively minor aspects of the theory that could be improved and tested again. Lakatos therefore proposed that science works through a succession of "research programs," which can be progressive, leading to new discoveries, or "degenerate." A degenerate program is eventually abandoned once there is a widespread sense that it is no longer fruitful.
A more radical view of research programs was famously advocated by Thomas Kuhn (1922-1996), who saw science as alternating between two modes of operation: during "normal" times, scientists work within a generally accepted framework (a paradigm) to solve specific problems, or puzzles. From time to time, however, the dominant paradigm proves no longer sufficient, and an increasing number of puzzles go unresolved. This precipitates a crisis, which is resolved only when a new general framework is proposed that allows science to resume its normal activity: a paradigm shift has then occurred.
Even more radical than Kuhn was Paul Feyerabend (1924-1994), who thought that there really wasn't any such thing as the scientific method, and that all approaches to truth should be given equal access to funding and public resources; the marketplace of ideas would then establish the best ways forward. However appealing this view may be in some circles, it led Feyerabend to seriously contend that astrology, for example, should be studied regardless of what astronomers say about the illusory nature of constellations.
More recently, several philosophers of science have proposed a way of thinking about science rooted in the mathematics of the Reverend Thomas Bayes (1702-1761), and therefore termed "Bayesianism." According to the Bayesian view, scientists consider several possible hypotheses simultaneously and continuously confront them with the available data. After each round of data-theory match-up, they re-evaluate the probability of each theory being correct, given the facts. No theory ever reaches a probability of one (certainty), in agreement with Popper; but no theory is ever entirely discarded either (a probability of zero), following Lakatos. However, the probability of a theory can be orders of magnitude higher than that of any of its competitors, which means that the theory in question is accepted for all practical purposes as true. Until the next round, that is.
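To see how this updating works in practice, here is a minimal sketch in Python; the two rival theories and all the numerical likelihoods below are invented purely for illustration. Each round applies Bayes' theorem (posterior proportional to likelihood times prior) and renormalizes so the probabilities of the rival theories sum to one.

```python
# A minimal sketch of Bayesian updating over two rival theories.
# The theories and all numbers are invented for illustration; each
# number stands for P(data | theory), i.e., how well one round of
# data fits that theory.

priors = {"theory_A": 0.5, "theory_B": 0.5}  # no initial preference

# Three rounds of observations, summarized by their likelihoods.
observations = [
    {"theory_A": 0.8, "theory_B": 0.3},
    {"theory_A": 0.7, "theory_B": 0.4},
    {"theory_A": 0.9, "theory_B": 0.2},
]

posteriors = dict(priors)
for likelihoods in observations:
    # Bayes' theorem: posterior is proportional to likelihood * prior;
    # normalizing makes the rival probabilities sum to one.
    unnormalized = {t: likelihoods[t] * posteriors[t] for t in posteriors}
    total = sum(unnormalized.values())
    posteriors = {t: p / total for t, p in unnormalized.items()}
    print({t: round(p, 3) for t, p in posteriors.items()})

# Output: theory_A climbs toward (but never reaches) a probability
# of 1, while theory_B shrinks toward (but never reaches) 0.
```

Because each round only multiplies and rescales, a theory's probability approaches but never exactly reaches zero or one (so long as no likelihood is itself zero), which is the Popper-Lakatos point made numerically.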
Even the Bayesian scenario, intuitively appealing as it is to the practicing scientist, is far from providing a problem-free account of how science works, and the discussion among philosophers about how scientists actually do it will likely continue for quite some time.