
Has Popper Solved the Problem of Induction?

The scientific method and the problem of induction

The Problem of Induction is concerned with finding a rational means of justifying our psychological predisposition to expect the future to be like the past. The key question is not whether induction actually works, for in general it proves a reliable means of negotiating a survivable path into the future, but how such inductive projections from past behaviour can possibly be justified. This is an old question which has troubled many minds over the centuries, from Aristotle to Hume and Bertrand Russell, and one which lies at the heart of our conception of the nature of scientific ‘truth’ and the method by which such knowledge is discovered. This question of divining truth about the world through empirical methods ultimately leads us to ask whether the notion of such truth in an absolute sense is even meaningful. Does the scientific method provide a route towards universal explanation, true in all frames of reference, or is scientific knowledge ultimately culturally and historically relative?

In this essay we shall briefly summarise the original problem of induction as analysed by David Hume, explain a proposed solution to the problem as presented by Karl Popper, and then discuss some difficulties and misconceptions with Popper’s alleged solution.

Firstly then, what exactly is the problem of induction? David Hume regarded this as two related problems: a logical problem, which in turn gives rise to a psychological one. The logical problem refers to the difficulty of identifying some definite principle or method, some Principle of Induction, by which we can rationally infer the nature of unseen events (past or future) on the basis of established past observations. Experience is full of sequences of events which share similarities of form and character. For example, every A is followed by B, or all X’s are Y. It is a fundamental disposition of the human mind to identify pattern and similarity in hitherto unrelated events and to group them together into some class whose members share some common attribute, cause or explanation. Given such categorisations, we habitually attempt to classify all new events as members of one or more of our established epistemological groupings by finding the closest match according to observed attributes and behaviour. Given such a categorisation, that event E ‘belongs to’ category C, our whole treatment and expectation of the behaviour and properties of E will be determined by the corresponding behaviour and properties of the most general member of C. Thus having by observation made the classification E ∈ C, then, in the absence of direct observational evidence, the truth of any predicate p(E) is assumed to be determined by the truth of p(X) for some X ∈ C. In simple terms, we generalise from specific past events to infer that future events we judge as members of the same general category will behave identically. If all ravens we have ever seen are black then, given that we are told (and believe) that the closed box contains a raven, we would expect on opening the box to find a black bird inside.
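The inductive habit just described can be caricatured in a few lines of Python, using the raven example from the text (the data and function names are illustrative only):

```python
# A minimal sketch of inductive categorisation: having classified an
# event as a member of category C, we project onto it the properties
# shared by every observed member of C.

observed = [
    {"kind": "raven", "colour": "black"},
    {"kind": "raven", "colour": "black"},
    {"kind": "raven", "colour": "black"},
]

def induce_property(observations, kind, attribute):
    """If every observed member of `kind` shares one value for
    `attribute`, predict that value for the next member seen."""
    values = {o[attribute] for o in observations if o["kind"] == kind}
    return values.pop() if len(values) == 1 else None

# Told that the closed box contains a raven, we expect a black bird:
prediction = induce_property(observed, "raven", "colour")
print(prediction)  # black - an expectation, not a logical guarantee
```

The point of the sketch is that the final step is pure habit: nothing in the code, or in logic, guarantees that the next raven conforms.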

The pervasive regularity and repeatability of the natural world (as perceived by our minds at least) convinces us that the future will be like the past. This axiom of faith allows us to form expectations about unseen events both past (the sun rose on the morning of Jesus’s birthday) and future (the sun will rise at the equator tomorrow). Hume’s logical problem then is to isolate the inferential mechanism which places any inductive conclusion or prediction on a sound rational basis. What is it that allows us to say that the next X we see will have property p because all X’s seen so far are p? Even the qualification of such a prediction with appropriate insertion of the word ‘probably’ does not render our statement any the more logically valid. Hume concluded that there is indeed no such Principle of Induction to which we can appeal and that consequently all such human judgements are based on nothing more than ungrounded faith in a future that resembles the past.

The fundamental problem is that induction justifies its own conclusions by presupposing, via an inductive inference, that the future will resemble the past. We can only justify induction by asserting the uniformity of nature, that all future times will resemble the past. But nature is not uniform in all respects, as the hapless Thanksgiving turkey who has been fed on each of the 100 days before Thanksgiving will unhappily testify. We can only assert that nature is uniform by saying that nature has been uniform in the past and therefore will be uniform in the future – a clear appeal to induction to justify the underlying assumption of induction itself. Clearly we cannot appeal to the uniformity of nature again unless it is to a ‘higher level’ of uniformity. But then we need an analogous justification and a hopeless regress ensues.

Ultimately Hume presents us with a dilemma. If we try to justify induction by means of a deductively valid argument with premises that we can show to be true (without using induction), then our conclusion will be too weak and lack generality. If we try to use an inductive argument, we have to show that it is reliable and any attempt to do that leads to the same dilemma all over again.

Hume then questioned why, if there is no such rational basis to our use of inductive reasoning, we continually and habitually resort to such methods. Why is it that we have such confidence in our ability to operate in new situations simply by generalising from our past experience? Our conviction that some forms of inductive reasoning are strong is a psychological conviction, the result of habit, which does not depend on the future in any way. This psychological problem, Hume said, arises through repetition and conditioning – if behaviour X has succeeded following event E then our disposition to again perform X on the next E will be increased on each exposure to E. Thus we use positive reinforcement to shape our expectation of the future. Hume firmly challenged the view that inductive reasoning in human psychology arises from a chain of reasoning: "But if you insist that the inference is made by a chain of reasoning, I desire you to produce that reasoning." Not only are we not aware of such reasoning, Hume argued that no such chain of reasoning exists. To Hume then, argument apparently plays only a minor part in our understanding of the world and of each other, and most of our beliefs derive not from sound deductive inference but from irrational faith.

Hume sought in vain for some principle of reason akin to deductive logic with which to underpin the inductive leap of faith. But deductive logic alone is not sufficient to increase the store of human knowledge, for it is not ‘content increasing’. While deductive rules such as modus ponens can prove the soundness of a conclusion or argument, this is achieved by combining existing known facts, and no new knowledge is uncovered. It is a primary purpose of Western science to seek a fuller understanding of the universe through the formulation of new and better explanations. This involves not just the recombination of known facts in novel ways but crucially the discovery of fundamentally new knowledge. If induction provides no rational basis from which we can form explanations from experimental observations, then what is a defensible basis for the scientific method? An important question then is whether the scientific method relies on some hidden, unstated appeal to induction which is then rationalised and covered over by retrospective experiments designed to conveniently fit the facts. Alternatively, is there an empirical model of science which conforms to the historical facts, accounts for the enormous success of science in providing strong explanations of the universe, and yet relies wholly on non-inductive reasoning? If such a model cannot be found then, as Russell pointed out, there is no ‘intellectual difference between sanity and insanity’ and we might as well all regard ourselves as poached eggs.

This goal of providing a non-inductive model of the scientific method was tackled by Karl Popper, who famously claimed to have ‘solved the problem of induction’. Most philosophers, however, would regard the original philosophical problem as posed by Hume as still unsolved. What Popper has done is to restate the original problem in a way which allows him to propose a deductive solution. While providing, as we shall see, a useful insight into the scientific method, this is certainly not a solution to Hume’s original question.

Popper’s first move is to view Hume’s problem of induction as the question of how true or reliable human knowledge can be amassed successfully if, as both Hume and Popper agree, there is no valid principle of induction. Popper’s restatement asks how science can discover true explanations about the world by using empirical observations without resort to induction. More precisely, can the claim that some explanatory theory is true be rationally justified by assuming the truth of test statements such as experimental observations? Although restated in terms related to the scientific method, this is still Hume’s original problem and still has a negative answer. However, as Popper points out, if we ask instead whether a theory can be shown to be false, we get a much more positive answer.

Popper’s key idea here is that science cannot and does not purport to discover truth but can only advance candidate explanations (hypotheses) which are tested against the available evidence. While any principle of induction would derive true statements from the positive evidence, science can only prove the falsehood of a hypothesis when faced with negative evidence for its truth. The refutation of candidate hypotheses is entirely deductive (using the modus tollens inference rule). There is an essential asymmetry between verification and refutation here: the truth of a scientific theory can never be proven, while its falsehood is made definite by a single well-chosen test condition (experimental observation):

IF T THEN O
not-O
THEREFORE: not-T

This principle of conjecture and refutation is the core of Popper’s evolutionary model of scientific progress. Rather than observation leading to theory (as in an inductive approach), science proceeds by subjecting successive candidate explanations to appropriate testing against well-chosen observations. Hypotheses will either be refuted by the evidence or survive to be tested against competing theories and new and more exacting test conditions. Hypotheses for which no suitable refutation can be devised are not to be regarded as good theories in the scientific sense. This leads Popper to his problem of demarcation – the difference between science and pseudo-science is that good scientific theories lend themselves to experimental refutation. How candidate hypotheses are initially created or derived is not important in Popper’s model – just as the source of genetic variation (mutation) is irrelevant to natural selection. Popper has shifted the problem from justifying predictions from observations to the problem of testing universal laws that have predictive power. Thus a real issue in science is the selection of appropriate test conditions with which to differentiate between competing theories.
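The cycle of conjecture and refutation can be sketched in Python (the swan conjectures and observations are invented for illustration): theories are never verified, merely retained while they survive attempted refutation by modus tollens.

```python
# Each theory predicts an outcome O for a given test condition. If we
# observe not-O, modus tollens refutes the theory deductively; theories
# that survive remain in the pool to face harsher tests.

def survives(theory, observation):
    return theory(observation) == observation["outcome"]

# Two competing conjectures about the colour of swans.
theories = {
    "all swans are white":
        lambda obs: "white",
    "swans are white in Europe, black in Australia":
        lambda obs: "black" if obs["place"] == "Australia" else "white",
}

observations = [
    {"place": "Europe", "outcome": "white"},
    {"place": "Australia", "outcome": "black"},  # the refuting instance
]

pool = dict(theories)
for obs in observations:
    pool = {name: t for name, t in pool.items() if survives(t, obs)}

print(list(pool))  # only the second conjecture survives
```

Note that the surviving theory is not thereby true: a further observation could refute it in turn, which is precisely the asymmetry between verification and refutation.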

Scientific progress then is an iterative process of refining a ‘theory pool’ by deriving better and better theories. But what does ‘better’ mean in this sense – more likely to be true, more reliable, simpler, more elegant, more unlikely? This definition of the ‘best’ current hypothesis has echoes of Hume’s original problem of induction – are there rational reasons for preferring hypothesis A, say, over hypothesis B given that they have both survived the same tests? Both hypotheses will be equally able to explain the observations which have refuted all competing theories, so are there rational grounds for preferring one over the other? Note that their success so far says nothing about their respective likelihood of being refuted in the future or about their reliability (the explanatory value of hypotheses may go down as well as up). An important distinction here is the notion of preference for a theory as opposed to reliance on a theory. Strictly we can never ‘rely’ on a theory since it can never be shown to be true, while at the same time Popper would claim that there are rational grounds for preferring to act according to one theory over another.

Given any set of empirical evidence there is an infinite set of possible hypotheses available which provide covering explanations. Popper argues that we should prefer hypotheses which have the greatest ‘information content’ and explanatory power in that this makes them more suitable for future testing i.e. there is more scope for devising suitable test conditions by which to refute the hypothesis. Popper asserts that it is rational to place our reliance in the best tested hypothesis, i.e. the one which has survived the most rigorous testing and explains all of the observations covered by competing hypotheses. Strictly speaking, it is the theory which has best survived critical discussion up to the present time. While we know that such a hypothesis can never be finally verified as true, it will however be a member of the set of hypotheses to which any true theory belongs. For all practical and pragmatic purposes Popper suggests it is rational to employ such best tested hypotheses as if they were true until they are either refuted or superseded by a better theory.

But what exactly is it about such a policy that makes it rational, and why is this any better than the inductivist’s conviction that the past is a basis for predicting the future? This seeming lack of a definite principle of rational justification in Popper’s use of theories as if proven has been attacked by critics such as John Worrall. After all, is it not just as rational to believe in any convenient theory that has the same effects as an ‘established’ scientific theory at times t <= T0 while having dramatically different effects after T0? [The traditional example here being the crypto-inductivist’s belief that he might as well expect a gentle descent from the Eiffel Tower, since Popper cannot convince him that it is rational to believe that Galileo’s law will apply.]

In effect then Popper is asking us to behave as if we believe that the future will be like the past (our theory will continue to survive against all test conditions) while at the same time denying that this is indeed the case. Is this induction by the back door? It would appear at first sight that Popper has ‘solved’ the problem of induction by allowing us to make allegedly rational choices about predictions of future behaviour based upon past evidence while at the same time denying that we are resorting to any form of inductive reasoning. In Popper’s terms, it is perfectly acceptable, and indeed rational, for us to act as if induction works as long as we never admit to this.

There is a striking parallel here between Popper’s notion of an infinity of possible hypotheses which share the same extension in explanation space and Wittgenstein and Kripke’s notion of an infinity of rules which have overlapping extensions in arithmetic space (or whatever the domain of expression of such rules may be). Just as in science we can never be certain that we have formulated the one true theory, so in language or in mathematics we can never provide a rational justification that we are using a particular rule (such as ‘+’) rather than another co-extensional rule (such as ⊕). Regarding language, this problem of rule following drove Kripke towards a sceptical view of meaning and the impossibility of a private language for an individual in isolation. Linguistic understanding and meaning, says Kripke, arise from rules of correct usage devised and dynamically arbitrated by a community of language users. Is there a parallel in the nature of scientific truth, such that no hypothesis is accepted as the current best tested working theory until it has been subject to vetting within the scientific community? Taken to extremes this could lead to a form of cultural relativism in science where theories can only be correctly interpreted in the light of the prevailing sociological context. But is this the case? Surely we are able to understand, interpret and apply any of the great paradigm-shifting scientific theories (such as Newton’s laws or a heliocentric cosmology) perfectly well from a modern perspective. While the development and application of such theories were indeed heavily influenced by the political and social climate of the time, they surely stand apart as clear intellectual achievements in their own right. Many scientists, particularly mathematicians, are Platonists and view science as a gradual groping towards the true forms which exist in some perfect intellectual realm, our current theories being mere approximations to the truth.

This points to an alleged difficulty for Popper’s model of science: the claim that no theory can in fact be effectively refuted merely by conflicting experimental evidence. The Duhem-Quine thesis states that any theory can be effectively defended in the face of contrary observations by adjusting so-called auxiliary hypotheses which provide contributory support for the main hypothesis. The deficiencies of any theory can always be explained away by the convenient invention of some new auxiliary effect or side-phenomenon which accounts for the embarrassing experimental outcome and leaves the main theory in contention. For example, the Newtonian theory of gravity could be defended in the face of the anomalous precession in the orbit of Mercury by proposing some new unobserved planet or some uneven mass distribution within Mercury itself. Any theory whatsoever can be decorated with an increasingly embarrassing set of riders and qualifications to account for the experimental outcomes which would appear to deliver a convincing refutation. But while this is indeed true, surely it is not the case that our set of competing hypotheses remains equally plausible given the whole set of test conditions and observations? In reality the experimental data will accumulate to a level where the amount of supportive baggage (auxiliary hypotheses) carried by some of our candidate theories seriously damages their plausibility against simpler or more elegant hypotheses able to cover the test conditions in a more direct fashion. This is of course just an appeal to a version of Ockham’s razor that treasures intellectual elegance and economy over decorative elaboration.

Popper’s model of science is attractive in that it is another example of the single powerful algorithm that underlies many processes where complexity arises from simplicity. Popper himself was keen to point out the parallel of his ideas with evolution by natural selection. Both processes use (in principle) random variation to seed a population of candidates which are then selected upon the basis of some measure of fitness – either survival in the natural world or survival against experimental observation and critical argument. Both processes require some mechanism for the inheritance of variation by succeeding generations in order that gains made are not lost by future variation – the ‘ratchet effect’. In this way a slow gradual accumulation of useful adaptations to the selection landscape will arise and our theories will acquire greater and greater explanatory power. This highlights the need for new hypotheses to arise largely by the adaptation of existing theories rather than as fully formed mature theories arising spontaneously out of the ether. Such large-scale ‘mutation’ does indeed arise occasionally in science – leading to rapid and sudden changes in the scientific landscape (the paradigm-shifts of Thomas Kuhn).

But the fact that both evolution and science make demonstrable progress surely tells us something about the shape of the ‘fitness landscape’ that both processes explore. We seek to attain the highest peaks of fitness on this undulating landscape by inching our way up the hillsides, taking the line of highest upward gradient at any point. Normally we will explore only the immediate vicinity of our current position and so move incrementally and safely towards the nearest peak. To be effective this simple strategy requires two things. Firstly the fitness landscape must be well correlated, i.e. smooth and continuous, without sudden discontinuous ravines or trenches. Secondly the landscape should contain a few well-identified main peaks whose summits are visible across the undulations of the lower-level peaks. In such evolutionarily friendly landscapes we can expect an adaptive process to graduate towards the main peaks over time – the chief danger being that of becoming stranded on low local peaks where our short-sighted vision prevents us from seeing across the intervening valley to the adjacent high summit. By taking larger strides in our exploration of ‘design space’ we can escape such low local peaks and find the main peaks more effectively – but only at the expense of more casualties along the way among our fellow travellers (candidate hypotheses). Just as macro-mutation and the sexual shuffling of genes provide this extended stride length in evolution, so discontinuous ground-breaking theories direct science into wholly uncharted areas of the landscape.
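The search strategy described above can be sketched as a simple hill-climber in Python (the landscape, step sizes and seed are invented for illustration): a short-sighted climber stays stranded on the low local peak, while a climber with a longer stride can cross the valley to the main summit.

```python
import math
import random

def fitness(x):
    # An illustrative landscape: a low local peak near x = -2 and the
    # main peak near x = 3, separated by a valley.
    return 3 * math.exp(-(x + 2) ** 2) + 5 * math.exp(-(x - 3) ** 2)

def hill_climb(x, stride, steps=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = x + rng.uniform(-stride, stride)
        if fitness(candidate) > fitness(x):
            x = candidate  # keep only improvements: the 'ratchet effect'
    return x

short_sighted = hill_climb(-2.0, stride=0.1)  # stuck on the local peak
long_stride   = hill_climb(-2.0, stride=5.0)  # can reach the main peak
```

The trade-off in the text shows up directly: almost every long-stride candidate lands in a low-fitness valley and is discarded (the ‘casualties’), but the rare lucky jump is what escapes the local peak.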

In summary, Popper has not solved Hume’s original philosophical problem of induction but has mapped the problem to a new formulation that is more amenable to a deductive solution. Popper says there is no problem of induction, since induction is not a requirement for the growth of human knowledge through empirical means. He sets out a clear and attractive model for the scientific method which, given a set of competing hypotheses, advocates a method of justification based not on verification by positive evidence but on refutation by negative evidence. While agreeing with Hume that we can gain no definitely proven theories from observation, Popper argues that it is still rational for us to act pragmatically on the basis that the best tested theory is true, while we know that this is probably not the case. There appear, however, still to be areas in this model which have some tacit element of induction: how are candidate hypotheses generated in the first place if not from some unconscious induction, and how can we prefer and even rely on unproven hypotheses we know are likely to be refuted if not by gaining confidence from their past performance and assuming that this will continue into the future?

The final view perhaps is that while there is indeed no valid principle of induction, our corner of the universe has behaved in a sufficiently uniform manner during the intellectual lifetime of humankind for us to have evolved adaptive behaviours which rely on this fact. We view the universe through the narrow filter of our local spacetime, which shapes the laws of physics as we perceive them to be. But surely there are no laws of physics in any absolute sense. Science is ultimately a technology devised by human society in order to construct ever better compressions of reality into a form which allows us to be more effective at manipulating our environment.

As Kant said, our intellect imposes its laws upon nature.

References

1. David Hume, An Enquiry Concerning Human Understanding, 1748
2. Karl Popper, Objective Knowledge: An Evolutionary Approach, 1972
3. David Deutsch, The Fabric of Reality, 1997
4. John Worrall, ‘Why Watkins Failed to Solve the Problem of Induction’
5. Saul Kripke, Wittgenstein on Rules and Private Language, 1982
6. Ludwig Wittgenstein, Philosophical Investigations, 1953