Can a machine think? Firstly, what's a machine? In l'esprit géométrique we're all machines, but in l'esprit de la tradition we're animals. Animals are sentient organisms; machines are inanimate artefacts. To speak in the tongue of the tradition, organisms qua organisms have intrinsic teleologies and substantial forms; artefacts qua artefacts do not. A wooden bed (e.g.) is an artefact. Its form is accidental; its teleology is extrinsic. Wood qua wood is not directed toward bedness. Bedness must be forced upon it from without. Wood 'wants' to be woody, to be treelike, not bedlike. Thus Aristotle: 'If you planted a bed and the rotting wood acquired the power of sending up a shoot, it would not be a bed that would come up' (Phys., II.193a).
[Image: Artefacts contra organisms]
Secondly, what's thinking? Like all psychologico-experiential concepts, thinking is both polygenic and polyspecific. 'To think,' 'To think of,' 'To think that,' 'To think through,' 'To think with,' 'To think for,' 'To φ thoughtfully,' 'To φ with aforethought,' 'To φ without thinking.' Etc., etc. Note, also, that cogitative verbs (like 'To think') can't be dissociated from cognitive, conative, and affective verbs: if a cogitative verb is logically predicable of a being, then an affective verb (like 'To grieve') is too; if it makes sense to predicate cogitation, it makes sense to predicate affectivity. (And if it doesn't, it doesn't.) Hence Wittgenstein: 'What a lot of things a man must do in order for us to say he thinks' (RPP, §563; unless otherwise noted, all quotes hereafter are from the L.W. corpus). Thinking, knowing, learning, believing, trusting, understanding, befriending, forgiving—these are potentiæ (powers, aptitudes, abilities) of animate beings in normative contexts. We can't literally apply them to machines, for machines are not alive; they couldn't possibly satisfy the criteria for literal application. And it's not that machines might one day think, in some futuristic tomorrowland; it's that 'machines can think' has no sense. ('Machines can't think' is not comparable to 'Dodo birds can't fly.') So why do AI experts tell us that machines can (and do) think?
'We compare "I think" with the wrong paradigm, e.g. with "I eat" ... We know in general what [digestion] means, we want to be given a detailed account of what goes on when this process which we understand in a rough way occurs: we want more detail, the detailed working of the mechanism. "What is thinking?" is similar in verbal form to "What is digestion?" The answer is a matter of X-rays, clinical tests, etc., a matter of experimental procedure. Is this so with "What is thinking?"' (LPP, pp. 48, 236).

AI theorists believe that machines can think because they also believe that thinking can be explained mechanically. They take 'What is a thought? What is it to think?' to be like 'What is a sewing machine? How does it work?' (Cf. PG, V.§63.) (And if they're right, the real question is: can humans think?) The basic problem is that AI's vocabulary is delimited by (a reductive version of) efficient causation. If it speaks at all, it speaks efficient-causally. Why can't a percept be both all-over red and all-over green? Come, let's ask efficient causation.
'Because the central tendency of activity in a cortical mapping of reflectance spectra cannot simultaneously lie on both sides of an anatomical axis of the mapping, the axis that divides the spectra judged red from the spectra judged green' (C.R. Gallistel).
'In the consideration of our problems one of the most dangerous ideas is that we think with, or in, our heads ... "Thinking takes place in the head" really only means "The head is connected with thinking." Of course, one says also "I think with my pen" and this localisation is at least as good.' (PG, V.§64).

Compare Schrödinger:
'We have got used to localizing the conscious personality inside a person's head—I should say an inch or two behind the midpoint of the eyes ... It is very difficult for us to take stock of the fact that the localization of the personality, of the conscious mind, inside the body is only symbolic.' ('Mind and Matter,' in What is Life?, pp. 122-123)

[So where does thinking happen? Try: in the library, at the study hall, on the commuter train, up the walkway—places in which we earthlings find ourselves in the circumstances of life.]
AI theorists follow their overlord Monsieur Descartes in understanding observable behaviour to be no more than correlatively suggestive of (or as Herr Wittgenstein would say 'symptomatic' of) antecedent unobservable activity. Inner brain states and processes cause outer behaviour, which behaviour is but kinematic motion. The inner-outer connection (which is already a literalised metaphor) is contingent, arbitrary, and severable. All is au fond Cartésien. (Cf. posts here and here.) What we're left with—public macro-mechanical movement here and secret micro-mechanical movement there—is not what we started with. We started with thinking. And 'in order to find the real artichoke, we divested it of its leaves' (PI, §164). Is thinking, then, just a unitized micro-mechanical process? But (arbitrarily limiting ourselves to ratiocinative thinking) there are no efficient-causal connections in a logical inference or an arithmetic calculation. (Aristotle: 'In the field of what is unmoved there cannot be this kind of cause, which is why in mathematics nothing is proved by its means' [Met., II.995a].) Premises do not cause conclusions to follow, as A causes B in contact mechanics. And 'when we say "This proposition follows from that one" ... "to follow" is being used non-temporally' (RFM, I.§103). But on AI theory 'if p then q' means 'if p at time₁ then q at time₂.' That which was logical becomes causal. A logical mistake (which wants correcting) becomes a mechanical malfunction (which wants repairing). And the question 'How did you come to that conclusion?' can only be answered efficient-causally: 'A caused B, after that B caused C, after that ...' [And now there's the amusing question: where did you come to that conclusion—i.e. what are the intracranial coordinates, ye latitude and longitude?] But here we're not looking for causes; we're looking for reasons (for becauses). We don't say that so-and-so calculated because an efficient-causal A→B→C happened in his brain. It's only in a context of normativity that we can literally predicate calculation of him; elsewise there'd be no justification. And a machine, an inanimate artefact with an accidental form and an extrinsic teleology, can't (logically can't) behave normatively. A computer can be reprogrammed but not reproved. (Even the concept of mechanical malfunction [that the machine is not working as it ought] is borrowed from normativity!)
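To see the contrast in bare notation (a schematic illustration only; the turnstile '⊢' and the times t₁, t₂ are merely expository symbols, not anything drawn from the AI literature quoted here): logical 'following' is a timeless relation among propositions, whereas efficient causation is a succession of events in time.

```latex
% Logical consequence: a non-temporal relation among propositions.
% Nothing happens here, and nothing is caused.
\[
  \{\, p,\ \ p \rightarrow q \,\} \;\vdash\; q
\]
% Efficient causation: a temporal succession of events.
% Something happens, and then something else happens.
\[
  p \text{ at } t_{1} \;\Longrightarrow\; q \text{ at } t_{2}
\]
```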
Does an abacus calculate? No. Does a nineteenth-century comptometer calculate? No. Does a present-day computer calculate? No (introduction of electrical circuitry notwithstanding). Will a next-generation computer calculate? No. Consider that—
—'One could readily build a computer from a very large toy railway-set with a huge number of switch points and storage depots for different types of carriages to be shunted into until called upon for further operations (i.e. a "computer memory"). This computer would be cumbersomely large and slow, but in essence its operations would not differ from the latest gadgetry on the computer-market. Would anyone say, as hundreds of trains rush through complex networks of on/off points according to a prearranged timetable (a programme), depositing trucks in sidings or depots and collecting others, "Now the railway-set is calculating," "Now it is inferring," "Now it is thinking"? Does it make any difference if the railway-set is minuscule and the "trains" move at the speed of electric current?' (P. Hacker, Meaning and Mind, pp. 78-79).
Before AI can so much as begin, it has to reduce its analysanda to the efficient-causal. But the analysanda are irreducible. In the would-be reduction they're not reduced but eliminated; they're replaced by the efficient-causal. (Just as qualia are not reducible to quanta: in the would-be reduction they are eliminated and replaced by quanta.) And so thinking (e.g.) is eliminated.
Note that the mistakes here are not scientific, but philosophical. AI theory isn't bad science (inasmuch as it is science); it's bad philosophy, grounded in behaviourism (i.e. in Cartesianism minus cogitantes substantiae). The problem is the mathematico-mechanical metaphysic on which AI theory is premised, a metaphysic that lost its tenability a long time ago and has now degenerated into adhockery and meaninglessness. 'What is the sense of talking about a mechanical explanation when you do not know what you mean by mechanics?' (Whitehead, Science and the Modern World, p. 21). If we're to arrest and reverse the degeneration, we'll have to put the question marks deeper down.
PS: When today's thinkers do put the question marks deeper down, they're publicly arraigned, as if they were impugning science itself. Look at Nagel's Mind and Cosmos, 'the most despised science book of 2012' (The Guardian), which criticised the defunct premises. How did the academy react? Like an angry zealot denouncing a heretic. Mind and Cosmos was: 'The shoddy reasoning of a once-great thinker' (Steven Pinker of Harvard U); 'Absurd ... If you want arrogance and dogmatism you have to look to the [...] Nagel’s of the world. They’re the ones claiming, on the basis of some asinine armchair cogitation, that they have refuted an enormously successful scientific paradigm' (Jason Rosenhouse of James Madison U); 'Disturbing' (Jerry Coyne of U of Chicago). Etc. Such was the obloquy. Alright, these professors acknowledged that they hadn't read the book. But who needs to read books? Publisher's blurbs are more than enough—for social media (Pinker), newspaper articles (Rosenhouse), and blog posts (Coyne). (To be fair, Coyne did say that he was going to read it later. And later: 'I never got around to reading Mind and Cosmos ... I'm glad I didn't.') Sigh.