This is a slightly expanded version of a talk I gave to the Leicester U3A Science and Technology Group in May 2011. It arose from a discussion I had with Malcolm Glasse about scientific method and the distinction between science and non-science. Peirce had some interesting things to say on the subject. After a brief survey of his life and work, I shall concentrate on what he had to say about science.
Peirce (note the pronunciation to rhyme with Purse) is best known as the originator of Pragmatism. He was unusual in making significant contributions to both Science and Philosophy.
Quite a few people trained as mathematicians or scientists have made their names in Philosophy, and a number of distinguished scientists have commented on Philosophical questions, but apart from Descartes and Leibniz, very few people have made significant contributions to both areas of thought (I nearly said, to both fields of knowledge, but then I wondered if Philosophy constitutes knowledge).
Peirce was one of four sons of Benjamin Peirce, professor of Mathematics at Harvard and probably the first American Mathematician to achieve international recognition - he was elected a fellow of the Royal Society in 1850.
The Peirces were members of the East Coast intelligentsia of which it used to be said 'The Cabots spoke only to the Lodges, and the Lodges spoke only to God'. The Emersons were family friends, though that did not prevent Peirce referring to Emerson as a 'Philosophical Soup Kitchen'. Peirce had a lifelong friendship with William James, and was also acquainted with William's brother Henry.
Much of Peirce's education was undertaken by his father, though he did attend schools to prepare him for Harvard.
'When, in my teens, I was first reading the masterpieces of Kant, Hobbes, and other great thinkers, my father, who was a mathematician, and who, if not an analyst of thought, at least never failed to draw the correct conclusion from given premisses, unless by a mere slip, would induce me to repeat to him the demonstrations of the philosophers, and in a very few words would usually rip them up and show them to be empty. In that way, the bad habits of thinking that would otherwise have been indelibly impressed upon me by those mighty powers, were, I hope, in some measure overcome.'
When he was 13 Peirce read Whately's Logic, one of the standard textbooks of Logic at the time.
He graduated from Harvard in 1859, and after a year working for the US Coast and Geodetic Survey, he returned to Harvard to study for a higher degree in Chemistry, graduating in 1863. His studies cannot have been full time because he resumed work with the Survey in 1861, continuing to work there until 1891.
Peirce was a rather prickly man, not disposed to suffer fools gladly, and his matrimonial arrangements scandalised genteel New England society. After his first wife left him, he lived for some years with another lady, who was alleged by his detractors to be a gypsy. That led to his dismissal from a lectureship in Logic at Johns Hopkins University.
Although he was later employed by Harvard to deliver several courses of lectures on Logic, he never obtained a tenured position, despite several attempts.
In Philosophical circles Peirce is best known as the originator of Pragmatism, which he summed up in the Pragmatist Maxim:
'Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have, then, our conception of these effects is the whole of our conception of the object'
Peirce meant that the significance of any idea lies in its relevance to practical questions, questions about what we can observe, and about the consequences of actions.
Quite what such abstract principles mean is often not clear until we see what people do with them. In this essay I shall describe how Peirce applied pragmatism to science.
First I need to mention William James. The popular picture of Pragmatism is based mainly on the writings of James. James, who wrote the first textbook of experimental psychology and probably originated the idea of the unconscious mind, wrote in lively prose that is easy to follow, but he was inclined to oversimplify, and when he considered religion seemed sometimes to regard Pragmatism as a licence to believe whatever we find comforting or congenial. Towards the end of his life Peirce dissociated himself from the word 'Pragmatism', preferring to describe his position as 'Pragmaticism'. He said that that word was so ugly that he doubted if anyone else would want to steal it. No one did.
Peirce was a working scientist who published a number of papers on a variety of scientific subjects, but he published relatively little philosophy in his lifetime. He left behind him several chests full of writings, some substantial and others only short fragments. After his death Hartshorne and Weiss looked through this material and published six volumes of his Collected Papers from 1932 - 1934. Two more volumes produced by Burks were published in 1958.
Much of the material is fragmentary and obscure. William James summed up Peirce's prose style as 'flashes of brilliant light relieved against Cimmerian darkness'.
(The Cimmerians were a mythical people supposed to live in perpetual darkness.)
Although Peirce had a good deal to say about the philosophy of science, the relevant remarks are scattered all through the Collected Papers, and although volume VII of the Collected Papers is called Science and Philosophy much of the most interesting material on the subject appears in earlier volumes. It was therefore only gradually that people realised the importance of Peirce's contribution to the philosophy of science. I myself did not realise this until I started to browse through the Collected Papers around 1960, and the position was not generally known in Britain until Ayer published The Origins of Pragmatism in 1968.
I now want to describe Peirce's account of science, and need first to sketch the background.
Although Peirce graduated from Harvard in Chemistry he appears never to have worked as a chemist. For much of his life he worked for the US Coast and Geodetic Survey. In the course of that work he suggested improvements in the use of the pendulum to measure the earth's gravitational field, and his experience made him very conscious of the significance of errors in measurements and the importance of controlling and correcting them, and he published several papers on such matters. He seems to have been the first person to suggest taking the wavelength of light as the standard of distance, favouring one of the lines in the sodium spectrum.
He was also a member of the American expedition to Sicily to observe the solar eclipse in 1870, and did some work on stellar photometry.
Psychology: Fechner's Law (sometimes called the Weber-Fechner Law) said that the subjective intensity of a sensation increases as the logarithm of the physical intensity of the stimulus. It would be better called the Weber-Fechner theory, because it has only limited application, and where it does apply it is only approximate. In many cases a power law seems to work better; in the case of electric shocks the sensation seems to be proportional to the stimulus to the power 3.5.
However the theory does seem to apply roughly to the perception of pitch at frequencies above about 500 cps, so the difference in tone between 512 cps and 1024 cps is perceived as the same as the difference in tone between 1024 cps and 2048 cps. A similar relation applies to the perceived intensity of sound, which is why the decibel scale is logarithmic. If two sounds differ in intensity by D decibels, the ratio of their intensities is 10^(D/10).
An increase of one decibel therefore corresponds to multiplying the intensity by 10^(1/10), the tenth root of 10, about 1.26.
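To make the arithmetic concrete, here is a minimal sketch in Python; the function names and the constants (k, the threshold, the exponent 3.5) are my own inventions for illustration, not anything Fechner or Peirce wrote:

```python
import math

def intensity_ratio(decibels):
    """Ratio of two intensities that differ by the given number of decibels."""
    return 10 ** (decibels / 10)

def weber_fechner(stimulus, k=1.0, threshold=1.0):
    """Weber-Fechner: sensation grows as the logarithm of the stimulus."""
    return k * math.log(stimulus / threshold)

def power_law(stimulus, k=1.0, exponent=3.5):
    """A power law, with the exponent 3.5 reported for electric shocks."""
    return k * stimulus ** exponent

print(intensity_ratio(1))   # 1.2589...: one decibel multiplies intensity by about 1.26
print(intensity_ratio(10))  # 10.0: ten decibels is a tenfold increase
```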
Peirce's experiments were designed not to test the law directly, but to test an assumption underlying Fechner's work. Fechner had assumed there would be some threshold that an increase in stimulus has to exceed before we could sense it at all, and his conclusions were based on a determination of what he considered to be those threshold levels. In other words he had attempted to find the smallest additional stimulus needed to produce a perceptible change in sensation.
Peirce suspected there were no such thresholds. His experiments tested our ability to estimate changes in weight by feeling pressure. A blindfolded subject was subjected to two slightly different pressures on the hand, and had to say which of the two was the greater. The experimenter [Peirce] drew cards from a pack to decide which of the two weights in each pair should be the greater. That may be the first use of randomisation in experimental Psychology.
If the subject could not tell which was greater, he was still required to guess. Peirce subjected the results to statistical analysis. If there is a threshold, the subject's success rate should fall sharply to 50% at that point. If there is no threshold, the success rate should decline steadily towards 50% as the difference in weight tends to zero. Peirce's results suggested the latter, from which he concluded that Fechner's assumption was unjustified.
Peirce's experimental work is still referred to as pioneering the use of probabilistic methods in research, though he is now considered to have shown, not that there are no thresholds, but that thresholds are lower than Fechner had supposed.
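A small simulation may make the logic of the test clearer. This sketch is mine, not Peirce's analysis: it assumes a smooth, threshold-free model in which the chance of a correct forced guess declines steadily towards 50% as the difference shrinks, and the 'sensitivity' constant is invented purely for illustration:

```python
import random

def run_trials(weight_difference, n=10_000, sensitivity=2.0):
    """Simulate n forced-choice trials and return the success rate.

    Threshold-free model: the probability of a correct answer declines
    smoothly towards 0.5 as the weight difference tends to zero, which
    is the pattern Peirce's data suggested.
    """
    p_correct = 0.5 + 0.5 * min(1.0, sensitivity * weight_difference)
    return sum(random.random() < p_correct for _ in range(n)) / n

for diff in (0.5, 0.2, 0.1, 0.05, 0.01, 0.0):
    print(diff, run_trials(diff))  # success rate falls steadily to about 0.5
```

Under a threshold model, by contrast, the success rate would sit at 50% for every difference below the threshold; comparing the two curves is what Peirce's statistics had to do.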
Philosophers have often had a great deal to say about perception and how far it reflects reality, but, so far as I know, Peirce is the only Philosopher of note to have conducted any research into perception.
I want now to discuss Peirce's theory of knowledge and logic.
First I need to say a little about the historical background.
For most of the recorded history of human thought, there has been a consensus that Knowledge should be based on a set of propositions of unassailable certainty.
Opinion had long been distributed between two extremes - Rationalism and Empiricism.
For rationalists knowledge was supported by certain basic propositions that we knew intuitively to have intrinsic certainty. Those included the logically necessary propositions and for some Rationalists also other propositions which they supposed to be beyond doubt.
Empiricists, while they usually accepted logical truths as basic propositions, thought that they alone were not a sufficient basis for the whole of knowledge, and added to the list of incorrigible beliefs the impressions of our senses, sometimes called 'sense data'.
Aristotelians took an intermediate position. Aristotle himself had proposed to use definitions as foundations of knowledge. As we create definitions ourselves, they should not be open to doubt. Aristotle created formal logic as an engine for drawing conclusions from sets of definitions.
Aristotle's logic considered four types of proposition, all of the subject-predicate form, equivalent to relations of inclusion and exclusion between sets. Examples:
All mice are mammals
No mice are reptiles
Some mice like chocolate
Some mice do not like chocolate
Some simple inferences can be made between pairs of such statements, for instance:
'Some mice like chocolate' implies 'Some creatures that like chocolate are mice'.
But most of the work in Aristotelian logic revolved around what was called a syllogism, in which two propositions, known as the premisses, imply a third, called the conclusion.
All mice like chocolate
All mice are mammals
Therefore Some mammals like chocolate.
That is a syllogism of the form to which the medievals gave the mnemonic name 'Darapti'.
In that syllogism 'mice' is called the middle term, because it appears in both premisses. The object of the syllogism is to deduce a conclusion that makes no reference to the middle term.
There are various rules which determine which syllogisms are valid. There are other rules which prove the validity of valid syllogisms by deriving them from one form that was considered basic. Altogether there were just 24 valid patterns of syllogistic inference.
The basic form, from which other syllogisms can be derived, has the mnemonic name 'Barbara' and follows the pattern:
All M are P
All S are M
Therefore All S are P
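Because the four types of proposition amount to relations of inclusion and exclusion between sets, the valid forms are easy to check mechanically. A sketch in Python (the sets and their members are invented for illustration):

```python
# 'All A are B' is a subset relation; 'Some A are B' a non-empty intersection.
def all_are(a, b):  return a <= b
def some_are(a, b): return bool(a & b)

mice    = {"minnie", "mickey"}
mammals = {"minnie", "mickey", "dumbo"}
likers  = {"minnie", "mickey", "dumbo", "aunt jane"}  # things that like chocolate

# Barbara: All M are P; All S are M; therefore All S are P.
assert all_are(mammals, likers) and all_are(mice, mammals)
assert all_are(mice, likers)

# Darapti: All M are P; All M are S; therefore Some S are P.
# Valid only on the Aristotelian assumption that the middle term is non-empty.
assert mice and all_are(mice, likers) and all_are(mice, mammals)
assert some_are(mammals, likers)
```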
The Aristotelian system is very weak, too weak to formalise the mathematical definition of a limit, or to represent inferences involving relational propositions like:
'Everyone who plays a game knows someone else who plays another game'
Aristotelian logic could not do justice to the complexity of such a statement, and so could not license either of the conclusions:
'If anyone plays any game, there are at least two games that people play' and
'If someone plays some game, there are at least two people'
Medieval logicians tinkered with Aristotelian Logic without significantly extending its scope and by the seventeenth century formal logic was becoming unfashionable. The Renaissance humanists disliked it, because to them it looked too much like hard thinking. Less discreditably Francis Bacon objected to it because he thought it did not provide any new knowledge but only re-arranged what we already knew. (A very tricky claim, that).
For Aristotle to produce that logical system in the fourth century BC was an outstanding achievement, but for people to be still just tinkering with it two thousand years later was rather sad.
Francis Bacon (1561-1626), deplored what he considered to be the backward looking nature of much contemporary thought. The Scholastics looked back to Aristotle, who wanted to construct definitions that could be used as premisses in syllogisms. The Renaissance humanists looked back to the classics of ancient literature and sought elegant phraseology without much caring whether its content was anything worth saying. In particular they objected to the construction of new words, which effectively prevented the development of any new field of knowledge.
Bacon divided earlier thinkers into two groups, both of which he judged unsatisfactory. The speculative philosophers, especially the Aristotelians, he called 'spiders', because they built great systems out of their own substance, without taking any notice of what actually happens in the world. The experimentalists, mainly alchemists, he likened to ants, collecting data without any foresight or serious attempt to interpret it. Instead of following either of those models we should be like bees, collecting data systematically and trying to fit it into a comprehensible pattern. Bacon proposed to replace the deductive reasoning of the medieval Aristotelians with what he called Inductive Reasoning.
Bacon had identified Logic with Aristotelian Logic, realised that that was very weak, and therefore proposed to dispense with Logic altogether. In the 19th century a number of thinkers decided that had been an over-reaction and investigated the possibility of strengthening Logic instead of abandoning it. Peirce was one of those - others were Boole, de Morgan, Peano, Frege.
Logic is not part of psychology
Logic and meaning apply to signs, not to thoughts
Meaning is a three term relation
In the past Logic had sometimes been described as 'The Laws of Thought'. Peirce on the other hand considered that Logic applied not to thoughts, but to what he called 'signs'.
Peirce constructed an extremely complicated theory of signs, so I shan't say any more about it than I have to. Peirce used 'sign' to refer not only to words, but also to any other means of conveying information. Thus a whole sentence is a sign. So is a gesture, a thermometer, a weathercock or sometimes a picture.
Before Peirce meaning had usually been thought of as a two term relation, on the one hand a word or phrase, on the other a meaning.
A popular view has been that all words are names. That was the view Hobbes put forward in his Leviathan. It generated a great deal of fruitless intellectual endeavour in searching for things that words might name. The problem of Universals was the search for objects that might be named by adjectives. That question occupied much of the attention of the English Empiricists.
There was also great anxiety about the meaning of words apparently referring to things that did not exist. If there are no unicorns, the word 'unicorn' appears to have no meaning. That question was still worrying Bertrand Russell as late as the early twentieth century. How can we meaningfully assert that there are no unicorns? How are we to assess statements such as Russell's favourite example 'The King of France is bald', when there is no King of France to whom it may refer? How are we to deal with words like 'if' and 'but'; whatever can they name?
Peirce argued that signs involve a three term relation, a sign doesn't simply have a meaning, it has a meaning to someone or something, and a sign in use typically generates another sign.
If I ask someone the time, they are likely to respond by telling me the time, that is by producing another sign. If someone sounds the fire alarm, people are likely to react by leaving the building; that action is a sign they have understood the warning. Meaning can only be understood in the context of a system of communication.
That was an important insight, but Peirce became sidetracked into constructing an extremely complex theory of signs, with multiple threefold distinctions linked to a strange metaphysics that divided reality into three categories, which he called 'Firstness', 'Secondness' and 'Thirdness'. I shall say no more about signs or categories and proceed to Logic.
Peirce pointed out that logic is the specification of rules for identifying patterns of inference that preserve truth, so that if we start from a set of true propositions and operate according to logical rules, all the propositions we derive will also be true.
Two steps were needed to extend Aristotelian Logic, and Peirce pioneered them both.
From any proposition we can obtain another by prefixing it with 'not'.
We can combine several statements together into a single more complicated statement using combinations of connectives such as 'and', 'or', 'not' and 'if ... then'. Statements built up in this way are called truth functions of their components, because the truth or falsity of the whole is determined by the truth or falsity of the parts.
Some truth functions were known to the Stoics, and some special cases were considered by the medieval logicians, but before the nineteenth century there was no systematic treatment of truth functions.
Peirce showed that all truth functions can be defined either in terms of:
Neither Nor (NOR)
or Not Both (NAND)
Peirce (winter of 1880–81) showed that NOR gates alone (or alternatively NAND gates alone) can be used to reproduce the functions of all the other logic gates, and that such gates could be represented by electrical circuits, but his work on it was unpublished until 1933. Peirce was a strong admirer of Charles Babbage, so it is conceivable that Babbage's project of performing calculations mechanically inspired Peirce to calculate electrically.
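Peirce's result is easy to demonstrate in a few lines of code. A minimal sketch defining negation, disjunction and conjunction from NOR alone, and checking them against the usual truth tables:

```python
def NOR(p, q):
    return not (p or q)

# The other truth functions defined from NOR alone:
def NOT(p):    return NOR(p, p)
def OR(p, q):  return NOT(NOR(p, q))
def AND(p, q): return NOR(NOT(p), NOT(q))

# Check the definitions against the usual truth tables.
for p in (False, True):
    for q in (False, True):
        assert NOT(p) == (not p)
        assert OR(p, q) == (p or q)
        assert AND(p, q) == (p and q)
```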
Quantifiers. The complexity of statements can be greatly increased by introducing symbols, ∀ for 'all' and ∃ (from E for 'exists') for 'some'.
'Everyone who plays a game knows someone else who plays another game'
can be rendered as:
(∀x)(∀g1)(IF (g1 is a game & x plays g1) THEN (∃y)(∃g2)((x knows y) & NOT(y = x) & (g2 is a game) & NOT(g1 = g2) & (y plays g2)))
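Over a finite domain the quantifiers behave like loops: ∀ says a condition survives every pass, ∃ that some pass satisfies it. Here is a sketch with an invented miniature world (the people, games and acquaintances are mine, purely for illustration):

```python
# Who plays what, and who knows whom, in a tiny finite domain.
plays = {"alice": {"chess"}, "bob": {"go"}, "carol": set()}
knows = {("alice", "bob"), ("bob", "alice")}

def statement_holds():
    """(for all x and g1) if x plays g1, then there exist y != x and
    g2 != g1 such that x knows y and y plays g2."""
    for x in plays:                      # (for all x)
        for g1 in plays[x]:              # (for all g1 that x plays)
            if not any(                  # (there exists y ...)
                (x, y) in knows and y != x and any(g2 != g1 for g2 in plays[y])
                for y in plays
            ):
                return False
    return True

print(statement_holds())  # True: alice plays chess and knows bob, who plays go
```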
Formal logic deals with valid arguments in which the premisses guarantee the truth of the conclusion. Much reasoning is not like that. We are often concerned with probability rather than certainty, and Peirce made an important contribution to the theory of probability (see the Appendix).
Peirce noted that attempts to produce a secure foundation for knowledge were unconvincing, and suggested that such a foundation is neither available nor necessary. He thought that modern Philosophy had taken a wrong turn with Descartes, and that all post Cartesian thought had been crippled by his misconceptions. Peirce argued that nothing is completely beyond question. Furthermore philosophers have been unrealistic in supposing that we can choose our own starting point to base knowledge on our preferred assumptions. We cannot choose our starting point, because we don't start with a sparse set of axioms but with a complete system of knowledge, the totality of human knowledge as it is when we start to think critically about it.
Our contribution to knowledge is not to construct a system ab initio, but to test, criticise, review, correct and extend what we inherit from previous generations. Our belief system is a going concern. It can, and must, be scrutinised and revised, but revision must be piecemeal, because revising some of our beliefs always depends on temporarily taking other beliefs for granted.
In particular Peirce denied that there is any such thing as what he called 'intuitive' knowledge. Today 'intuition' is used to mean something like 'hunch', but that was not how Peirce used the word. For him it meant certain non-inferential knowledge; an experience that both includes some belief, and also guarantees the truth of that belief.
In an essay entitled "Questions concerning certain faculties claimed for Man" Peirce denied that we have intuitive knowledge of any of the following:
That perceptions refer directly to their apparent objects,
That we have an intuitive self consciousness,
That we have an intuitive introspection,
That a sign can't have a meaning unless there is something it refers to.
He argued that thought involves the use of signs, and typically the use of language, and self consciousness is learnt by observing our own bodies and the reactions of others to us.
Many people think intuitive knowledge is provided by perception. Peirce thought that all perceptions include an element of inference, though the inference is often unconscious. Peirce's friend William James may well have invented the idea of unconscious mental processes.
I've just read an intriguing book by two neuroscientists who took up conjuring the better to understand human perception and the ease with which it may be misled.
Stephen Macknik and Susana Martinez-Conde, Sleights of Mind.
This book was particularly interesting to me because I was reading it at the same time as I was preparing this talk. Peirce's claim that the judgements of perception all involve an element of inference, though the inference is usually unconscious, seems to agree with the recent findings of neuroscience.
The book abounds in examples of people thinking they see what is not there (sometimes as a result of the persistence of after-images, so that for a fraction of a second we see what was there but is there no longer) or failing to see what is there because we are distracted by something else.
Memory is particularly fallible. At best we remember a few details of an event, and when we think we are recalling a memory we use those fragments as a basis for a full account of what we suppose happened.
Suppose that out of the corner of my eye I spot motion on the floor by the skirting board. I turn my head and look carefully, and there's a little mouse.
Have I actually seen a mouse? A mouse is a mammal, with mammary glands, spinal cord, heart, lungs, liver, two kidneys and all sorts of other bits and pieces. I haven't actually seen any of those. I infer their presence, having seen just the outside of a mouse. I haven't even seen all the outside of a mouse, just part of it, and I've only seen that, I haven't felt it to check texture and temperature.
As we refine the record of our perception to reduce the element of inference, we create a sequence of statements each one saying less, reducing the possibility of doubt at the expense of reducing the content.
Because many of the inferences involved in perception are unconscious, we cannot control them, and we cannot help feeling at the time a certainty that logic does not warrant.
Thus perceptions are a source of assumptions that help to keep our investigations into the world underway, but they are not foundations; they are material for analysis. For although we can't doubt a perception at the moment we experience it, we can review it afterwards.
Inductive inference involves starting from a statement summarising observations that takes the form 'some A are B', and deriving a conclusion of the form 'All A are B'
That is not a valid step, and there has been a great deal of discussion about the possibility that there may be special circumstances in which such a step may be justified.
There seems to be no formal criterion for identifying such cases.
Some of the propositions we think we know are universal generalisations, of the form 'All A are B', for example 'Every frog has a heart'. Often universal propositions are expressed without the word 'all', as in 'table salt dissolves in water', which means 'all samples of table salt dissolve in water'.
The problem is that our knowledge of such generalisations usually appears to be based on observation, but no amount of observation could suffice to guarantee that all frogs have a heart.
In the following example I use 'every frog has a heart' to mean 'every frog has precisely one heart', so it would be refuted by discovering either a frog without a heart, or a frog with two hearts, though murdered frogs, and frogs produced by multiple heart transplant surgery, would not count as exceptions.
We cannot observe all the frogs there are, and even if we could that would not suffice, since the generalisation also refers to all frogs there ever will be. All we can observe are individual instances, but however many frogs we examine, it is still conceivable that the next one should have either two hearts or none. The evidence never entails the truth of the conclusion.
'Some frogs have hearts, therefore all frogs have hearts' looks like an instance of the fallacious form 'Some S are P, therefore all S are P', and no one has managed to think of any other, valid, form that might fit. Strengthening the premiss to 'Some frogs have one heart, and none have been observed to have either more or fewer' is still insufficient to justify the desired conclusion.
It is only comparatively recently that it came to be generally realised that there might be a serious problem with inductive inference. Plato attached little credence to the testimony of the senses, and had no reason to develop a theory of induction. Aristotle, for whom everything can be neatly divided into natural classes, just didn't see the problem.
Francis Bacon, having rejected the Aristotelian model of science realised that we needed to be careful about inductive arguments, and listed rules for inductive inference, but he did not appear to realise that there is a fundamental problem that threatens to undermine the whole enterprise. Hume was the first person to see how serious a problem induction poses, and he thought it insoluble, suggesting we console ourselves with a lively social life. Hume's jollity made him a popular figure in Paris, where he was for several years British Minister. More earnest souls have been reluctant to follow Hume's advice so there have been numerous attempts to solve the problem of induction.
One way of dealing with the problem would be to say that having a heart is part of the definition of 'frog'. If we were to list all the properties something must have to be a frog, having a heart would be among them. That would have been Aristotle's answer, and any Rationalist would be likely to say something of the sort. However, that at best shifts the problem elsewhere. Suppose there were creatures otherwise just like frogs but with two hearts, and suppose that frog spawn produced by creatures with just one heart hatched into a mixture of one- and two-hearted animals. If we still defined a 'frog' as, among other things, a creature with one heart, we should still be able confidently to assert 'All frogs have one heart', but the generalisation would not be much use. It would not tell us that all the offspring of a frog have one heart, and if we saw from the rear a creature of frog-like appearance we should not be entitled to conclude that it had one heart until we had dissected it, for until we had checked that the creature actually had one heart we should not know that it was a frog in the new sense of the word, according to which having one heart is a defining character of 'frog'.
The usefulness of such generalisations as 'all frogs have one heart' is that if we notice that something has a fair number of froggy properties we can infer, without checking directly, that it has the others. That is possible because living things can be divided into natural kinds, collections of individuals that resemble each other much more closely than any of them resembles the members of any other kind. Number of hearts does not as a matter of fact vary among frogs, or indeed within any species of vertebrate. However, although that fact is part of the basis of our definition of 'frog', it cannot be made into a fact by making a definition; it is a fact about the world that might be otherwise. How then do we infer that it is true in general when our only evidence is that we have so far always found it to be true in a number of particular cases? We have now returned to the problem of induction.
Empiricists, who deny that universal generalisations are all true by definition, have difficulty explaining how we know them. They do not satisfy either of the Empiricist's conditions for knowledge; they are not implied by any possible set of observations, nor are they true by virtue of the relations between their component ideas. That is the problem of Induction; empiricists have to confront the possibility that none of our theoretical knowledge can be justified according to their criteria.
Peirce argued that the traditional discussions of inductive inference confused two very different steps, discovery and testing, and he distinguished two corresponding stages in establishing a universal generalisation. He used the word 'abduction' for the initial stage of formulating a hypothesis in response to observations, and reserved the word 'induction' for the subsequent testing to decide whether or not to accept the hypothesis.
Peirce held that experimental evidence cannot support a hypothesis or theory unless it is obtained as part of a policy to test the theory in question.
Peirce sometimes toyed with the notion that what appear to be universal generalisations should instead be interpreted as probability statements. Thus 'All frogs have one heart' could be replaced by 'nearly all frogs have one heart' and the problem of inductive generalisation is thus replaced by the problem of estimating a probability from a sample.
The probability interpretation seems odd when applied to something like Ohm's Law.
Instead of asserting: current = voltage/resistance, we should have:
Probability (current = voltage/resistance) = 0.9999
In any case, the probability interpretation at best deals with only one aspect of the problem of induction, namely the inference of a general conclusion from particular examples. It does not deal with the problem of passing from the observation of past events to making assumptions about future events.
There seems to be no justification in logic for assuming that because past observations have agreed with:
current = voltage/resistance
the same relation will hold in the future.
Many are the supposedly infallible investment strategies that have worked only in hindsight.
It is easy to underestimate how large a proportion of our knowledge is challenged by doubts about induction, until we recognise that universal generalisations are assumed, although often not stated, whenever we identify a physical object. Whenever we see a small object moving across the floor and say 'there's a mouse', we are asserting far more than we've seen. We have plenty of beliefs about mice; they are mammals, have hearts, lungs, brains... but that is far more than we actually see in any single encounter with an individual mouse, so whenever we identify something as a mouse we are assuming some generalisation on the lines of 'all scuttling creatures of superficially mouse-like appearance have the other properties we consider normal for mice'.
'It is a great mistake to suppose that the mind of the active scientist is filled with propositions which, if not proved beyond reasonable cavil, are at least extremely probable. On the contrary, he entertains hypotheses which are almost wildly incredible, and treats them with respect for the time being ... because any scientific proposition whatever is always liable to be refuted and dropped at short notice ... the best hypothesis ... is the one which can be most readily refuted if it is false. This far outweighs the trifling merit of being likely. For after all, what is a likely hypothesis? It is one which falls in with our preconceived ideas.' (from The Collected Papers of Charles Sanders Peirce, volume 1 section 120; dots indicate my omissions)
For Peirce research amounted to considerably more than just observing whatever is happening around us and then forming generalisations. Some such process may sometimes give us a helpful clue, but observations carry little weight as confirming evidence unless they are collected for a specific purpose. Seeing a number of A's all of which are B does not provide much support for 'All A are B' unless we say, before making the observations, that we are testing 'All A are B'. Peirce sums that up by saying that A and B must be predesignated qualities. He illustrated the point thus:
'I take the ages at death of the first five poets given in Wheeler's Biographical Dictionary. They are: Aagard, 48; Abeille, 70; Abulola, 84; Abunowas, 48; Accords, 45.
These five ages have the following characters in common.
1. The difference of the two digits composing a number, divided by three, leaves a remainder of one.
2. The first digit raised to the power indicated by the second digit and divided by three leaves a remainder of one.
3. The sum of the prime factors of each age, including one, is divisible by three.
It is easy to see that the number of accidental agreements of this sort would be quite endless.' (The Collected Papers of C. S. Peirce, volume 6 section 408)
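The regularities are genuine, as a few lines of code confirm; here I read 'the sum of the prime factors' as counting repeated factors, since that is the reading on which all five ages pass:

```python
def prime_factors(n):
    """Prime factors with multiplicity, e.g. 48 -> [2, 2, 2, 2, 3]."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

for age in (48, 70, 84, 48, 45):
    tens, units = divmod(age, 10)
    assert abs(tens - units) % 3 == 1               # character 1
    assert (tens ** units) % 3 == 1                 # character 2
    assert (1 + sum(prime_factors(age))) % 3 == 0   # character 3, counting one
print("all three accidental characters hold for all five ages")
```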
Thus any collection of data will show some accidental patterns. If we just happen to notice a pattern we need to make more observations specifically to test it before considering it established.
In its simple form, inductive inference is incoherent, allowing the confirmation of a generalisation by what would normally be considered irrelevant data.
Consider the generalisation:
'Every rhinoceros is a sewing machine'
That is logically equivalent to:
'Everything not a sewing machine is not a rhinoceros'
In this room there are many objects that are not sewing machines, and none of them is a rhinoceros, apparently confirming the strange hypothesis.
Yet if we insist on predesignating the properties 'rhinoceros' and 'sewing machine' and seek to test the hypothesis, we shall search for rhinoceroses, and once we find one, the hypothesis will be refuted.
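The equivalence the puzzle trades on is easily verified mechanically; a sketch:

```python
def implies(p, q):
    return (not p) or q

# 'Every R is an S' is R -> S; 'everything not an S is not an R' is
# (not S) -> (not R).  The two agree in every one of the four cases:
for r in (False, True):
    for s in (False, True):
        assert implies(r, s) == implies(not s, not r)
```

So the oddity lies not in the logic but in what we count as evidence: two logically equivalent statements suggest quite different tests.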
Discussions of the methodology of science and the philosophy of science are easily misunderstood. They often sound like advice to scientists, yet scientists do not appear to need any such advice; they usually manage perfectly well.
Those who need to hear explanations of scientific method are people with little experience of science, who try to follow what they consider to be scientific methods, and cite what they think are scientific precedents for some rather dubious reasoning.
For instance people have recently cited records showing that people imprisoned for short periods are more likely to re-offend than people imprisoned for longer periods. The lengths of those sentences will have been determined by many factors, but I doubt if a wish to test the effectiveness of prison was one of them.
It is not necessary to tell scientists how important it is to test theories, because once new theories have been published many scientists will try to reproduce the relevant observations just because they want to see for themselves or, if they are academics, demonstrate to their pupils. Even though their motives may not be to test the theory, the theory will still be tested.
Where it is important to stress predesignation is in assessing the casual generalisations people make outside the laboratory, and in the appeals to stale historical evidence sometimes made by non-scientists. For instance critics of the theory of Evolution by Natural Selection sometimes criticise Darwin's evidence. What evidence Darwin had is neither here nor there; what is important is the continuing stream of evidence that is available now.
Peirce held that we do not need a general principle of the uniformity of nature to assure us that there actually are laws of nature, and no such principle is available to us however much we want it. Peirce likened research to using an iterative method to solve an equation, where we generate a sequence of approximations that converge on the solution.
To find the positive square root of a positive number k is equivalent to finding the positive solution of x² = k, k > 0.
(the example is mine; Peirce described a more complicated algorithm)
The following process always converges.
Let x₀ = t ≠ 0.
Set xₙ₊₁ = (xₙ + k/xₙ)/2.
Continue until successive values differ by less than the permitted error.
t is our first guess of the solution and may in fact be any positive number (or any negative number if we want the negative square root) though t may not be zero!
Notice that even an equation as simple as x² = k, k > 0, can have two solutions, and which one we obtain depends on our starting approximation. For some iterative procedures, certain starting approximations do not lead to any solution.
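Here is the iteration as a short Python function; the example and code are mine, a simpler variant than the algorithm Peirce described:

```python
def square_root(k, t=1.0, tolerance=1e-12):
    """Solve x**2 = k (k > 0) by the iteration x_(n+1) = (x_n + k/x_n) / 2.

    t is the starting guess: any positive t converges to the positive root,
    any negative t to the negative root; t must not be zero.
    """
    if k <= 0 or t == 0:
        raise ValueError("need k > 0 and a non-zero starting guess")
    x = t
    while True:
        x_next = (x + k / x) / 2
        if abs(x_next - x) < tolerance:
            return x_next
        x = x_next

print(square_root(2))        # 1.4142135623...
print(square_root(2, t=-1))  # -1.4142135623...: the root depends on where we start
```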
Peirce thought that it is not possible to prove that inductive policies will lead to true conclusions, but argued that it is still possible to justify following such policies. The justification is that if there are any general laws of nature that we are capable of comprehending, the critical scientific method of inquiry will eventually lead us to them.
While there is no guarantee that our investigations will be successful, we lose nothing by trying, because there is no danger that we shall miss some discovery that might have been reached by a non-scientific method. For if some technique usually considered unscientific, like inspecting tea leaves or entrails, actually could reveal the secrets of the universe, the records of its success would provide inductive grounds for using it, and it would eventually become part of science. The non-scientific methods of investigation are just the ones that don't work. To demand an alternative to science is to demand an alternative to success.
One thing we have to learn by experience is what sort of information may be generalised. If we find that one sample of a substance S melts at 73°, we may infer that all samples of S will do so, but the observation that our television set broke down on a Tuesday does not justify the generalisation that all television breakdowns happen on that day. There is no formal rule of inductive inference that licenses the first inference but not the second; we just have to learn by experience, though our learning process would probably never start were we not impelled by instinct to generalise experiences of certain types.
Rom Harré said that Peirce's view of knowledge was dynamic, contrasting it to the views of all his predecessors, which Harré described as static. Peirce put the process of enquiry at the centre of his theory of knowledge. We criticise our theories and test them experimentally, changing them or replacing them when they prove inadequate. Inquiry so conducted is a self correcting process. The force of a particular piece of experimental evidence depends not just on its content, but on its part in the process of enquiry, in particular on how we obtain it.
An intriguing question arises. Could there be more than one science so that which one we create depends on where we start? I haven't found any discussion of that question in Peirce. I suspect he'd have said that the starting points of different humans are sufficiently similar for there to be only one possible human science, but I'm not sure what he'd have made of the suggestion that different intelligent species might conceivably build completely different pictures of the world.
The theory of probability was first developed in connection with games of chance, and it has been hard to outgrow that dubious ancestry.
Games of chance are usually governed by a concept of fairness that dictates that at a certain level simple outcomes are equally likely. For instance dice are constructed so that each face will appear uppermost as often as any other, though I once read that the oldest known die, found in an Egyptian pyramid, was biased. Cards are shuffled in the hope that no player will be able to make useful predictions as to how the cards will fall.
That history led to a belief in what is called 'The Principle of Indifference', according to which, if we have no evidence to the contrary, the various possible outcomes are equally likely. That principle has in turn been associated with the belief that all statements assigning probabilities to events are logically necessary, having the status of mathematical theorems. Maynard Keynes defended that view as late as 1921 when he published A Treatise on Probability.
Peirce was one of the pioneers of the Frequency Theory of probability, according to which the probability of an outcome must be somehow analysed in terms of the frequency with which it occurs.
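On the frequency view, a probability claim is at bottom a claim about how often something happens in the long run, which can itself be put to the test. A toy sketch (the simulated coin and the choice of sample sizes are mine):

```python
import random

def relative_frequency(trials, p=0.5):
    """Observed frequency of 'success' in a run of independent trials."""
    return sum(random.random() < p for _ in range(trials)) / trials

for n in (10, 100, 10_000, 1_000_000):
    print(n, relative_frequency(n))  # the frequency settles towards 0.5 as n grows
```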
Some people object that Peirce's example of the poets uses only a small sample of 5, and proponents of inductive logic usually demand large samples. However that is unrealistic. When we do think ourselves justified in generalising from samples, those samples are often small. In the chemistry lab two samples are usually considered sufficient to establish a melting point.
Suppose zoologists examining the fauna of a hitherto unexplored island came upon a small quadruped not belonging to any known species. They kill one and dissect it, finding it has one heart, two lungs, a liver, two kidneys, a digestive tract, and the other equipment one expects to find in a mammal. How many more animals would they need to dissect before concluding that all those animals were similarly equipped?
It appears that there are some circumstances in which generalisation from even a small sample can be fairly reliable, but only experience can tell us what those circumstances are, so any attempt to produce a justification of induction appears to be circular.