COMMUNICATING WITH COMPUTER PROGRAMS: The Pragmatics of Human-Computer Interaction [1]

TREVOR PATEMAN

Abstract: Gives an account of human-computer interaction using concepts from pragmatics (Grice, Watzlawick) and Bateson's concept of play. Argues that the impersonality of computer programs has advantages in relation to learning and creativity. Connections are made to the philosophy of science (Feyerabend, Popper) and to Habermas.

Note to the website version 2004: I was not planning to add this essay to the website, thinking that it was just obsolete in relation to the topics it discusses. The reader will discover, for example, that my own experience of programming took place back in 1979 and in front of something I call a "teletype terminal"... But recently I came across a favourable reference to this essay, as an early argument for treating computers as prostheses for humans. Looking through my work, it also occurred to me that the arguments deployed here could also be re-deployed to make sense of the way in which millions of humans now use computer chatrooms, engage in cybersex, and such like. So here is the essay, but not brought up to date:

1. Introduction

Creators of artificial intelligence (AI) and writers about AI have succumbed to the temptation to personify the programs they write and to describe in mentalistic language the procedures those programs contain. This is evident in the acronyms by which programs are identified (ELIZA being the type case); in the use of the pronouns `he' or `she' rather than `it' (see, for instance, Boden 1977) to talk about them; and in the description of what computer programs do, where mentalistic language, and especially intentional verbs, is universally employed. (If `do' itself is read as an intentional verb, allow me to invent a homonymous verb `to do' which is neutral as between an intentional and a non-intentional interpretation. Likewise for any other intentional verbs which I may fail to avoid using in contexts where it is clear that to use them that way begs the questions I want to ask.)

The persistent personification of AI simply reflects what workers in AI take to be their goal: not merely the simulation of human intelligence, but the actual production of intelligence to which mentalistic predicates can be applied without category mistake. Some writers even look forward to the production of artificial intelligences to which we would have no good reasons for denying the legal, political and moral rights of human beings (see Sloman 1978, Epilogue). But it is important to remember that as yet these goals have not been achieved (AI workers don't dispute this claim). It may be a necessary truth that the goal is unachievable, though it may be that we won't be able to know this for a good while yet. For necessary truths about the world, as opposed to analytic truths about language, are the subject of empirical discovery. Aaron Sloman remarks that when philosophers try to set up non-empirical demonstrations that something (e.g. a non-human mind) is impossible, they usually fail (Sloman 1978, p. 64; cf. Kripke 1972). Even if the goal of creating a non-human mind proves to be attainable by being attained at some remote future date, this does not undermine my argument, which is simply that some currently available advantages of AI are forfeited if we insist on personifying the programs actually available. These advantages are quite practical and have to do with the possible uses of programs in education, counselling and medical diagnosis, for instance.
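It may help to keep in view how little machinery a program of the ELIZA type needs. What follows is a minimal modern sketch, in Python (which of course post-dates this essay); the pattern rules and canned responses are invented for illustration and are not Weizenbaum's. The point is only that such a program's `utterances' are substitutions licensed by a finite table of rules, and that nothing in the table invites mentalistic description.

    # An ELIZA-style exchange in miniature: a few pattern rules and a
    # fallback. The rules are invented for illustration; they are not
    # Weizenbaum's script.
    import re
    import random

    RULES = [
        (r"\bI am (.*)", ["Why do you say you are {0}?",
                          "How long have you been {0}?"]),
        (r"\bI feel (.*)", ["What makes you feel {0}?"]),
        (r"\bbecause (.*)", ["Is that the real reason?"]),
    ]
    FALLBACK = ["Please go on.", "Tell me more."]

    def reply(utterance):
        # Return a canned transformation of the input, or a fallback line.
        for pattern, templates in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(FALLBACK)

    print(reply("I am unhappy about my work"))
    # e.g. "Why do you say you are unhappy about my work?"

An exchange produced this way can be described, if we insist, as the program `asking' a question; but the description adds nothing to the table of rules and the substitution which generated the output.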
The history and philosophy of science has taught us to expect that all revolutionary science will be a mixture of evidence, argument and propaganda (see Feyerabend 1975). AI has proved to be no exception: the propaganda consists in thinking and writing as if the goals of AI had already been attained. But the task I am taking on is to suggest to workers in AI and, indirectly, to users of computer programs that there are good reasons for abandoning the particular propaganda line they are pursuing. The good reasons have to do with realising the liberating potential which existing AI programs have for humans.

2. The importance of being inhuman

Briefly, the liberating potential of existing AI programs consists in the possibility of playing with them, without it being the case that the play may go wrong in the way that play among humans characteristically goes wrong: when one partner decides that a given action x no longer constitutes play, but is a `real' action subject to first-order moral or political appraisal as the action of a responsible agent. Thus, a ritual insult turns sour when it is taken as a real insult (Labov 1972), and a playful bite misfires when it is taken as a real bite (Bateson 1972). Playful actions are those actions which `do not denote what those actions for which they stand would denote', if they were not playful actions (Bateson 1972, p. 152). In this sense of play, psychoanalytic therapy is play, for in psychoanalysis specific organisational measures are taken to `bracket' or `suspend' the pressures of reality, in order that the analyst does not risk having his or her discourse constituted as the kind of discourse it would constitute outside of the analytic session - for instance, obscene or aggressive discourse.

My argument is that in interacting with or, better, using a computer program, the human interactant can engage in play which does not risk going wrong. S/he can engage in play in that s/he can perform actions which `do not denote what those actions for which they stand would denote', if they were not actions performed using a program, and there is no risk of them going wrong because the computer program cannot perform intentional actions (actions done for a reason) of any kind.

But why should play as defined be considered a good thing? The argument which is relevant to the purposes of this paper is this. In play a field is created in which feelings can be expressed, thoughts explored, and creativity exercised, free of moral and political constraints which are either irrelevant or, more strongly, undesirable. In psychoanalysis, such constraints are an obstacle to the provision of diagnostically important information. In education, they are an obstacle to thinking and learning. In these domains and others, `play' is that form of activity in which moral and political constraints (and politeness requirements) are legitimately suspended to permit the fuller realisation of important purposes. In sections 4 and 5 I shall develop these points, but first I shall consider a potential objection to the general line of thought I wish to develop.

3. Intention and representation

About the first thing a philosophy student learns is that there can be no representation without an intention to represent. The argument goes something like this: if it is the wind which etches `2 + 2 = 4' in the sands of the desert, then these do not constitute symbols, but merely provide a sign (or index, in Peirce's sense) of the activity of the wind.
For they are non-intentional marks, and a state of affairs cannot be (meaningfully) represented other than intentionally. This is the line of thought H. P. Grice develops in his famous article on `Meaning' (Grice 1957). This approach gives rise to well-known problems about the relation between intentions and (conventional) representations, and these were recognised by Grice in his original article as being especially puzzling in the case of collective or impersonal representations: he writes `"x meant something" is (roughly) equivalent to "somebody meant something by x". Here again there will be cases where this will not quite work. I feel inclined to say (as regards traffic lights) that the change to red means that the traffic was to stop; but it would be very unnatural to say "somebody (e.g. the Corporation) meant by the red-light change that the traffic was to stop." Nevertheless, there seems to be some sort of reference to somebody's intentions' (Grice 1957, p. 46).

The same kind of difficulty arises if it is argued that the output of computer programs is meaningful or represents events or states of affairs and that therefore in understanding it as meaning or representing something we must necessarily make reference to somebody's intentions. For then the question is, whose intentions? At the present stage of program development, it doesn't seem plausible to speak of the intentions as belonging to the program, which leaves only the author of the program as a possible candidate. But here for different reasons it is implausible to attribute the relevant intentions to the author of the program. For unlike our traffic light system, computer programs are (already) fully generative structures, capable of generating an infinite number of well-formed strings from a finite set of rules or procedures. From which it follows that output from the program cannot in principle be perfectly foreseen by its author, for s/he is not omniscient. Hence, it cannot be intended by its author, at least under a speech act description. By this I mean that though ELIZA may produce as output (what I take to be) an insult, it would be wrong to accuse Joseph Weizenbaum, ELIZA's creator, of insulting me. He may not even have had the intention of producing a program which would `insult' anyone. And though Weizenbaum clearly had more global intentions in writing ELIZA, reference to these will not necessarily allow us to disambiguate the illocutionary force, meaning, or reference of program output.

These kinds of problems suggest to me that it is worthwhile to look for an alternative to the intentionalist account of meaning and representation. Though the possibility of referring to intentions is humanly useful in the course of such activities as achieving disambiguations, maybe meaning, or at least representation, can be uncoupled from intentions, at least in some domains. Indeed, it has to be uncoupled if it is going to be intelligible to say that an understanding of computer output can be achieved independently of any reference to the intentions of the program writer. I suggest that the intentionalist account can be chipped away at by asking if the sharp distinction between the merely informative and the fully communicative (Lyons 1972) is always as significant for humans as it seems when we are interested in assignments of responsibility and, hence, of rights, duties, commitments, etc. (I owe the idea of focussing on responsibility to Roy Harris [2].) In this connexion, consider another sand and wind example.
Suppose, then, that you are a deeply puzzled physicist, walking in the desert, intent on solving a problem of physics. Suddenly, you see etched in the sand marks which can be rendered as `E=MC2'. Do you reject these marks as a solution to your problem, just because they have been etched by the wind? Does it matter that they are not communicative, that there is no one else to take responsibility for them? Does it matter that they are merely informative? Not a bit of it. All you require is that they be informative, in the sense that they yield a solution to the problem. It seems perfectly natural to say that the marks represent an answer to the problem, in a way which gives you no cause to convert to anthropomorphism, and start being grateful to the wind, or worry about how responsible it is for its utterances. In other words, `representation' can be viewed not only from the side of production (where it seems someone must represent something to somebody by means of something - this is a paraphrase of C. S. Peirce) but equally from the side of consumption, where representation exists just in virtue of the use made of something by someone (an idea familiar in literature and art).

On this basis, I argue that the output of computer programs represents only in the sense that it yields solutions to the puzzles, questions etc. of those who use computer programs. And that it yields solutions is logically (synchronically) independent of the fact that anyone intended it to yield solutions, though that it yields solutions is no doubt causally (diachronically, historically) explicable in such terms. This argument should not be confused with a logically independent one, familiar from the work of Brentano and Husserl onwards, that `all intentional states are representations of objects and states of affairs' (Searle 1979, p. 184). For it does not follow from this that all representations are intentional states, and so my argument that some representations are not intentional states does not involve me in disputing the truth of the intentionalist thesis propounded by Searle [3]. My argument also seems consistent with other positions adopted by AI workers (cf. Sloman 1979, and the quotation from Fodor in note 2).

4. How to do things with computer programs

Even if it is granted that a computer program represents without the representation being intentional, it may seem that the user of computer programs must (logically) make use of intentional representations in using the program, and, specifically, interact with it by means of questions, answers, commands etc. However, I shall argue that though the user, of course, engages in intentional actions in which s/he produces representations, these are (for programs currently in use) mistakenly described as questions, answers, commands etc., and that they are correctly described as representations of questions, answers, commands etc. In other words, the input to the program by its user does not constitute a `first-order' use of language, but rather is a `second-order' use of language to perform actions fully explicable under other descriptions than the first-order ones. By this I mean that the action of a program user is fully explained by statements of the form, `A aimed to get a print-out of the data base and sought to achieve this aim by means of producing a representation "x"', where no speech-act concept appears outside the quotation marks around x.
In contrast, if I say, `I bet you sixpence' in order to bet you sixpence, the first-order speech-act concept of betting occurs outside the quotation marks as well. Only where I bet you sixpence in order (say) to confuse you - that is, use the first-order speech act instrumentally or strategically - is it possible to explain fully the action without using first-order speech-act concepts. In speech-act theoretic terms, the speech acts (apparently) performed when using a program do not have the illocutionary force which they would have if they were not utterances made in using a computer program (compare Bateson's definition of `play' in section 2 above).

If this is correct, it has far-reaching consequences for what the program user does and can do. In particular, it is not required in order for a human to use a computer program appropriately, felicitously etc. that s/he satisfy the conditions on the appropriate, felicitous etc. use of the first-order speech acts which s/he uses in the actions s/he performs upon the computer program. For example, a user can felicitously `order' or `command' a program to do x though s/he is not in authority over the program (or the computer). And this is because his/her action is not correctly described when it is described as an order or a command. It just happens that in order to achieve his/her goals in using the program, it is (strategically or instrumentally) necessary that the user resort to making representations which would count as orders, commands etc., were they to be made to another human, and not to a computer program. It happens this way; but it could happen other ways, if programs were differently constructed.

How do I know this is the correct analysis? One justification for it is that it can explain, in conjunction with the account in the next section, the actions and reactions of (some) users of computer programs more successfully than an account which took the interaction between humans and programs as qualitatively no different from interaction between humans. For example, it helps explain the willingness of program users to make another move when the first move they try fails to produce the result desired. If a `command' doesn't work, I may try a `question', and so on. But if as an army sergeant I command you to move at the double, and you don't, I don't respond to this by trying another move (like cajoling you). Either I repeat the order, giving you one more chance, or I punish you for failing to comply [4].

Put differently, the willingness to make another move is (whatever else it may be) a condition of the possibility of creativity. Interaction with a computer program removes the user from the domain of social relations where creativity is inhibited by the desire or obligation to preserve (reproduce) those relations as presently constituted, a desire or obligation which may be motivated by nothing better or worse than the `face wants' of the interactants. Interacting with a computer program frees the user from the need for certain kinds of first-order consistency which are inseparable from face-saving. In addition, it frees the user from the need for the politeness which is inseparable from satisfying the face wants of others (compare Brown and Levinson 1978). And this, of course, has to do with what a computer program is, a topic to which I now turn.

5. What is a program? [5]

The reaction of users of computer programs to failure to achieve their goals tells us a great deal about the implicit theory of programs with which they are working.
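Part of that implicit theory can be made concrete with a small sketch (again in present-day Python, with command names and data invented for illustration): the program recognises certain strings and nothing else, so politeness and authority are simply beside the point, and a failed `command' leaves me nothing to do but reformulate.

    # A toy command loop: input is matched against a small table of
    # recognised forms; anything else simply fails. Command names and
    # data are invented for illustration.
    DATA_BASE = {"colour": "red", "size": "large"}

    HANDLERS = {
        "print": lambda: "\n".join(f"{k}: {v}" for k, v in DATA_BASE.items()),
        "count": lambda: str(len(DATA_BASE)),
    }

    def respond(line):
        words = line.strip().lower().split()
        handler = HANDLERS.get(words[0]) if words else None
        if handler is None:
            return "?"   # no reproach and no relationship: just a failed move
        return handler()

    print(respond("Please would you print the data base"))   # "?"
    print(respond("print"))                                   # the move that works

Whether I `ask', `order' or `beg' for the print-out makes no difference to what happens; all that matters is whether the representation I produce is one the program recognises.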
Ask a human being a stupid question and you'll get a stupid answer. But ask an intelligent question, and you won't necessarily get an intelligent answer! In the case of computer programs humans assume that if their question was intelligent the answer they get will be so too. In other words, failure to achieve my goals in computing reflects on my competence, and not on the competence (or rudeness, stubbornness, uncooperativeness, ignorance or stupidity) of the program [6]. In consequence, the frustrations of using programs are more like the frustrations of conversing with a deaf person, or in bad French with a French person, than they are like (say) the frustrations of teaching children. Of course, the frustrations and feelings of incompetence generated by unsuccessful attempts to use a program are perfectly real, and sometimes demoralising. But they do not give rise to one important class of problems to which failures to achieve communication with the deaf or the French give rise: they do not create relationship problems. And they do not create relationship problems because whatever a program is, it is not a person.

By `relationship problem' I have in mind what writers like Watzlawick et al. (1967) call such, namely those which specifically arise out of the relationship aspect of all communications [7] - what in speech-act terms is the illocutionary as opposed to the locutionary aspect of communication, illocutionary force as opposed to propositional content. Thus, while failure to achieve a goal by means of an `utterance' produced in using a program may force me to renegotiate my relationship to myself (and, hence, my attitude to programming, computers etc.), it does not oblige me to renegotiate my relationship to the program, for there is no such relationship. In this fact especially, I see the liberating advantages of programs in education and counselling, and it may be worthwhile to illustrate the point.

When a child asserts p in the classroom in response to a teacher's question, and the teacher responds to the assertion with `false' or `wrong' (say, the child has misspelt a word), then it is not merely empirically the case that the child has got a relationship problem; it is a logical entailment that s/he has a relationship problem. It can only apparently be dissolved, for instance, by pretending that the teacher does not belong to a reference group of significant others (`I don't care what you think'). In contrast, if a child misspells a word to its spelling computer, it has made a false move in the spelling game, and may feel bad about it, but it has not logically got a relationship problem. And this, I suggest, will generally make it easier to go on to try another move, and, hence, increase the probability of the child learning (getting it right). However, to make this suggestion plausible I need to answer the objection (made by Carolyn Miller): what motivation does the child have for getting the answer right when the `relationship' aspect of the learning situation is removed? Clearly, the spelling computer indicates the `right' answer, but why should the child aim to achieve that answer? (Why shouldn't it just play?) In response, I think I have to say that the motivation to learn and the understanding of the nature of learning tasks is empirically and, perhaps, necessarily prior to the use of the spelling computer. This explains how the child can feel bad about a false move, and also supplies the motivation the objection was looking for.
But though this implies that personal inhibitions can still obstruct learning (as well as supplying motivations to learn), it is still true that obstructions arising out of direct interpersonal relationships are avoided.

6. A cautious analogy: computer program users and Popperian scientists [8]

Popper says that `the scientist, I will call him "S", neither knows nor believes. What does he do? I will give a very brief list:
Popper makes these points in connection with his critique of the preoccupation of `second-world' epistemology with such locutions as `I know' and `I believe', to which he opposes the idea of a `third-world' epistemology of the produced results of scientific activity, which are representations of events, states of affairs, etc. [9]. Though it may not be possible to sustain Popper's account of scientific activity against the criticisms I shall consider in a moment, it does seem to me that it is possible to sustain something very much like it as an account of human use of computer programs, and do so in a way which shows both the benefits of such a way of using programs, and also why Popper's scientific attitude would be an attractive one to adopt, were it possible. (No doubt there are drawbacks to the attitudes I want to promote, but let others discover those.) However, there are problems with Popper's account. The following are freely adapted from some remarks by Roy Edgley on a draft of this essay:
Now in response to these questions, I want to show how even if they cannot be satisfactorily answered with reference to Popper's account of the scientific attitude, they do not undermine my account of using a computer program. The immediate aims of the scientist and the user of computer programs alike are no doubt subordinated to a directing goal of coming by some beliefs, including practical beliefs. If they were not so subordinated, they would be pretty pointless (`mere play', perhaps). Even the Feyerabendian slogan `Anything goes' (Feyerabend 1975, p. 23) is methodological advice which makes sense only because Feyerabend argues that it helps us achieve what it is we want from scientific activity (Feyerabend thinks that what we want is `progress' (Feyerabend 1975, p. 23)). So point (a) is taken. What remains for both the scientist and the user of programs is a prescription of flexibility.

However, (b) is rather different. The user of computer programs can perfectly well understand propositions without understanding them as expressions of belief. This follows from the argument of section 3 above. All the user has to understand is what states of affairs are represented by the representations s/he uses. It is arguable that the same goes for the propositions the scientist is concerned with: if a proposition can be understood in terms of its truth conditions, and if the `is true' in `p is true if q' can be analysed without bringing in belief, then objection (b) would fall.

Objection (c) is most relevant to my present concerns. For I should like to argue that there would be advantages in separating criticisms of propositions from criticisms of people, if we could make the separation. To claim to know or to profess a belief is to commit oneself personally in the world of persons, and to expose oneself (deliberately) to the possibility of refutation or other criticism. To claim to know or believe is to engage in transitive, interpersonal relations, in which relationship problems are possible. In contrast, if the Popperian attitude were possible, the engagements of the scientists would be with ideas, not people (roughly speaking). They are (excepting `proposing an experimental test') intransitive and nonpersonal. And though a culture may well despise or otherwise obstruct engagement in such activities, relationship problems of that sort would be logically external to the activity itself, not internal to it. In the absence of cultural hostility to `trying to understand' etc., these activities would not carry the personal risks which professions of knowledge and belief characteristically carry, and that, social psychologically speaking, would increase the possibilities of being creative in Science.

Edgley's position is that the separation of the evaluation of arguments and of people which Popper envisages is impossible: `Anything people do will involve them, and evaluating their products necessarily evaluates them', he argues (see also Edgley 1969, and Searle 1969, chapter 8). Now I accept this argument for people, though I would just add the qualification that, social psychologically speaking, the logical connection between criticising arguments and criticising people can be realised in different ways which place more or less emphasis either on the people or the arguments. Either people or arguments can be thematized (cf. Habermas 1976). However, the argument does not apply in the relation between the user of a program and the program.
For the program is not a person, and so criticism of its output is not criticism of a person but of a program. And, more importantly, the program cannot criticise the user, though it can cause the user to engage in self-criticism. I consider this an important aspect of using computer programs.

Objection (d) raises a large number of issues. All I wish to say is that the program cannot be held responsible for its output (as Roy Harris pointed out to me), nor can the program hold the user responsible for his/her input: it is misleading to say that programs are very tolerant and very patient when being messed around and worked hard by humans, but it is misleading only as an anthropomorphic way of expressing what can be otherwise expressed.

7. Some implications

In this concluding section, I want to say a few things about the value of non-human programs in education, therapy and medicine.

Schools and teachers characteristically evaluate claims to knowledge and professions of belief in a way which focusses on persons claiming to know and expressing beliefs, rather than on what is claimed or expressed. It is important to distinguish the contribution made by the institution of school to the constitution of education as a world of authority, responsibility, judgement and evaluation from the contribution made to this constitution by interpersonal relationships as such. But it is my view that the two together, centering as they do so strongly on persons, set up unnecessary obstacles to thinking, creativity, invention, discovery and learning - obstacles which could be reduced in relationships and schools focussing on ideas and theories rather than persons [10].

In this connexion, it is to miss completely the possibilities opened up by AI to write programs as if they were pseudo-persons and talk about them as if they were real persons. No one has any problem in seeing that an adding machine is a thing to add with, and not a machine which adds, but by the use of `plausibility tricks' (see Boden 1977, p. 471 for criticism of these) it is being made unnecessarily difficult to see that (existing) computer programs are things to think with, not things which think (and, hence, minds). And this is their virtue, just as it is the virtue of arabic numerals that you can do mental arithmetic in them. Nor is it this non-human, non-personal character of programs which will alienate humans from them; rather it is partly because we are encouraged to see them as rival minds, and not as extensions of mind, that we rapidly become alienated from them. Of course, it is also because they are institutionalised as rival minds that we see them that way. (Had Marx confused machines with their use he would have been a Luddite. He wasn't. No more need we be AI Luddites.)

In relation to therapy and medicine, there is some evidence that people find it easier to tell their problems and symptoms to a program than to a counsellor or doctor (see Boden 1977, p. 458 and footnote 27 there). This seems to be because they don't see the program as a person full of all the human qualities which make shame, guilt and embarrassment permanent possible consequences of interaction. Among the unreasonable reactions to such a fact, if it is a fact, I would include the following:
Among the reasonable reactions, I would include:
NOTES

1. In the Summer Term 1979 I took an undergraduate course at the University of Sussex which introduced students to elementary interactive programming in POP-11. This essay grew out of thinking about what I was doing in front of the teletype terminal. I am grateful to Max Clowes for taking me on to his course, for making it a stimulating experience, and for comments on a first draft of this paper. I am also grateful to Roy Harris for comment and criticism, and to my colleagues Roy Edgley, Michael Eraut, Carolyn Miller and Aaron Sloman. I have tried to respond to some of their numerous points in this final version of my paper.

2. Cf. the following quotation: `the states of the organism postulated in theories of cognition would not count as states of the organism for purposes of, say, a theory of legal or moral responsibility. But so what? What matters is that they should count as states of the organism for some useful purpose. In particular, what matters is that they should count as states of the organism for purposes of constructing psychological theories that are true' (Fodor 1976, p. 53).

3. Compare Roy Bhaskar's definition of `mind': `An entity x may be said to possess a mind at time t if and only if it is the case that it possesses at t the capacity either to acquire or to exercise the acquired ability to creatively manipulate symbols' (Bhaskar 1979, p. 103). The point here is that the intentional verbs (`acquire', `exercise', `manipulate') are indispensable to the definition, not the reference to symbols. A computer program creatively manipulates symbols, but it does so non-intentionally. (It has no reasons for what it does.)

4. Aaron Sloman objects to this comparison as follows: `the wild generalisation from the rigid behaviour of some army sergeants to all human interactions is quite unjustified. I often replace a request or question or assertion which appears not to have had its intended effect with a quite different utterance, e.g. a suggestion that we go and talk some place where there are fewer distractions. I don't need to assume I am talking to a machine for this to be reasonable. Any reasonable teacher will often replace an unanswered question with a suggestion, reminder or assertion. This does not require any assumption that pupils are inhuman.' In reply to this, I want to argue that it is characteristic, and maybe defining (see Watzlawick et al. 1967; Habermas 1976), of interhuman communicative relationships that the range of possible next moves in an ongoing game is normatively constrained. The range may be larger than one move, and may not form a closed set. Nonetheless it is not the case that the next move can be determined on an exclusively rational-action (praxiological) basis. The constraints may derive either from politeness considerations, or else from conditions upon maintaining mutual agreement that the validity claims raised by all speech acts (comprehensibility, truth, sincerity, right, according to Habermas 1976) are actually being fulfilled.

5. Aaron Sloman correctly observes that I nowhere answer this question; and he argues that my discussion of programs `as if programs were only one kind of thing... is about as mistaken as trying to discuss the kinds of things "an organism" can do. Organisms vary enormously in structure and abilities and so can computer programs. There is no reason to doubt that there is at least as wide a range of types of possible programs as there are organisms. Trying to generalise about the kinds of ways it is appropriate to relate to them is therefore quite pointless'. My response to these two criticisms is to say that (1) I am interested in the question, what can existing computer programs plausibly be taken to be? at a level of generality such that (2) it is plausible to consider them as a class distinct from persons (as organisms are distinct from non-organisms).

6. But can't a program be stupid in the sense of limited? Yes, it can be; and if I try to do things with it which it can't do, this reflects on my lack of appreciation of the nature of the program, not on the program.

7. `Every communication has a content and a relationship aspect such that the latter classifies the former and is therefore a metacommunication' (Watzlawick et al. 1967, p. 54; italics omitted).

8. This whole section has been rewritten in response to detailed criticisms from Roy Edgley.

9. Cf. Aaron Sloman's interesting comments, which bear on the rest of this section and on my conclusions. `The third world is supposed to contain objective entities, like books, maps, libraries, valid or invalid arguments, theories and so on. These all are, or are the content of, symbolic structures of one kind or another. One way of looking at Artificial Intelligence is as an attempt to reduce the second world (of subjective inner states) to this third world, by showing how apparently subjective states and processes, like beliefs, feelings, intentions etc., exist by virtue of the existence of certain sorts of complex processes manipulating symbolic structures. These internal "computational" processes may be in practice impossible to observe because of their complexity, but that doesn't mean they are not as objective as calculations on a blackboard. One of the consequences of accepting this reduction of World Two to World Three is that we should perhaps begin to treat people more as if they were programs than we have hitherto done, for instance in teaching and therapy. To take a simple example: if I want a program to make subsequent use of some information I give it, I may need to formulate the information in such a way as to take account of the program's indexing and searching strategies. If it is not able to make full use of its information I may need to find ways of extending or improving its indexing and searching strategies. Similarly, by adopting an information-processing model of their pupils, teachers may be less inclined to label as "stupid" those who apparently don't retain, or apply, what they have been taught. Instead, the teacher may be able to devise new ways of extending the pupils' abilities by helping them to reprogram themselves. In this sort of way, by treating people as programs, we may more effectively help them to develop their full human potential.'

10. Whereas deschoolers and freeschoolers respond to the `hidden curriculum' of (what they take to be school-specific) interpersonal relationships, which they also see as an obstacle to learning, by proposing deinstitutionalisation, my response is to suggest what can only be called impersonalisation. (I owe this point to Michael Eraut.)

Bibliography

BATESON, G. 1972 Steps to an Ecology of Mind. Paladin, London.
BHASKAR, R. 1979 The Possibility of Naturalism. Harvester Press, Brighton.
BODEN, M. 1977 Artificial Intelligence and Natural Man. Harvester Press, Hassocks.
BROWN, P. and LEVINSON, S. 1978 Universals in language usage: politeness phenomena. In E. N. Goody (Ed.), Questions and Politeness. Cambridge University Press, Cambridge.
EDGLEY, R. 1969 Reason in Theory and Practice. Hutchinson, London.
FEYERABEND, P. 1975 Against Method. New Left Books, London.
FODOR, J. 1976 The Language of Thought. Harvester Press, Hassocks.
GRICE, H. P. 1957 Meaning. Philosophical Review. (Reprinted in and quoted from P. Strawson (Ed.), Philosophical Logic. Oxford University Press, 1967.)
HABERMAS, J. 1976 Was heisst Universalpragmatik? In K. O. Apel (Ed.), Sprachpragmatik und Philosophie. Suhrkamp, Frankfurt. (Trans. 'What is Universal Pragmatics?' In J. Habermas, Communication and the Evolution of Society. Heinemann, London, 1979.)
KRIPKE, S. 1972 Naming and necessity. In D. Davidson and G. Harman (Eds.), Semantics of Natural Language, 2nd edition. D. Reidel, Dordrecht.
LABOV, W. 1972 Rules for ritual insults. In T. Kochman (Ed.), Rappin' and Stylin' Out. University of Illinois Press, Urbana.
LYONS, J. 1972 Human language. In R. Hinde (Ed.), Non-Verbal Communication. Cambridge University Press, Cambridge.
POPPER, K. 1979 Objective Knowledge. Revised edition. Oxford University Press, Oxford.
SEARLE, J. 1969 Speech Acts. Cambridge University Press, Cambridge.
SEARLE, J. 1979 Intentionality and the use of language. In A. Margalit (Ed.), Meaning and Use. D. Reidel, Dordrecht.
SLOMAN, A. 1978 The Computer Revolution in Philosophy. Harvester Press, Hassocks.
SLOMAN, A. 1979 The primacy of non-communicative language. Paper to the ASLIB/BCS Conference, Oxford, March 1979.
WATZLAWICK, P., BEAVIN, J. and JACKSON, D. 1967 The Pragmatics of Human Communication. Norton, New York.

Very lightly edited for the 2004 website version. Originally the lead article in the first issue of the journal Language and Communication, 1981, vol. 1, number 1, pages 3-12. Another version subsequently (1982) appeared in the Journal of Pragmatics, but I do not have a copy of that version to hand and do not recall how it differs.