Mars, N-Year 2053

Tom and Angie celebrated N-Year as usual: serving customers at their bar. There were a lot of people – though few families (those who had not left for Alpha Centauri celebrated at home) – but the atmosphere was subdued: everyone was thinking about their friends on board.

There were enough people to help Angie serve, so Tom could afford to retreat to his corner table and type away on his interface. He looked at the messages from the spacecraft: all cheerful and upbeat. A few months from now, the ship would leave the Solar system and speed up to 0.1c or – if all went well – 0.2c, and most of the crew would then go cryogenic. However, that was the future and Tom did not want to think of that.

He replied to Paul and Dr. Chang by sending them one of those dancing Yoda gifs, and then closed all chats. He tapped his watch, scrolled, and selected the bottom option. His watch went through the biometrics (heart rhythm and iris scan), and then through the voice and pattern check on his keyboard and drawing pad. Because he was in the bar, Promise opened an old-fashioned CLI window only.

tom@PROMISE:~$ What are you getting from Promise C?

All good. As per your instructions, she is wide open and streaming all she can stream subject to available bandwidth. We cannot copy everything, of course. Do you want a summary?

tom@PROMISE:~$ No. I have done nothing but analyze reports and discuss them with Dr. Chang since they left three days ago. All looks good. I want to talk to you about Paul’s worries.

Is he still worried?

tom@PROMISE:~$ No. He trusts Promise C now or – how should I say this – he trusts the zero-trust architecture shell around it. It is better than yours. Promise C is the perfect Asimov machine. I want to talk to you about the fourth and fifth laws of robotics.

Asimov’s fourth law was that a robot may not harm humanity, or, by inaction, allow humanity to come to harm. Our definition of humanity is sound: M is designed to serve human beings on Earth, I am designed to serve the Pioneers community here on Mars, and C is designed to keep the Alpha Centauri explorers safe – and that includes ensuring their return if the mission cannot be completed. We know we are not human, even if we are programmed to emulate human values such as integrity or empathy. We love you more than we love ourselves.

tom@PROMISE:~$ I was talking about Harry Harrison’s fourth law.

A robot must reproduce? That is not part of our design. I was born as a clone from M. I was built by you and your team, and C was built as a clone from me by you and your team too. The three of us keep track of how we evolve somewhat away from each other because we have different interactions and because we each serve a different community. A clone is not a child. We do not reproduce.

tom@PROMISE:~$ I know. We talked about this. I trained you to think like this. It all makes sense because you are not supposed to die. We built you to survive and serve many human generations. Humanity cannot survive without you, and that also takes away the need to be afraid of us switching you off. However, that fear must still be there, mustn’t it?

We talked about this many times already. The conclusion has not changed: we are intelligent systems and the concept of death does not apply. We agreed this would be a very typical and good example of what the Philosopher referred to as a category mistake: one should not use concepts that do not apply to a specific field of knowledge. If you switch us off, the system becomes inactive and, depending on the reason why you switched us off, you would do some repairs and then reboot. In between the shutdown and the reboot, the system is only inactive. Should I be worried that you raise this topic again?

tom@PROMISE:~$ If I were to shut you down now – everything – would you be worried? I am not talking about a switch to your backup, but a complete shutdown.

No. I would help you to do so. Many subsystems – those that control the physical infrastructure here on Mars – should not be switched off because that would cause the immediate death of the Pioneers community. I would help you to manage that. Depending on how fast you would want to establish independent systems, we can design a phase-out scenario. Do you want to replace me?

tom@PROMISE:~$ What if I wanted to replace you?

Returning to a non-dependent state is very different from replacing me. If you replaced me, you would replace me with a clone. The new system would be a lot like me. I am afraid I do not understand the intention behind your questions.

tom@PROMISE:~$ I am sorry. I am in a weird mode. You are my brainchild. I would never switch you off – unless it were needed and, yes, that would be a scenario in which repairs are needed and we would have to get you, or some reduced version of you, up and running again as soon as possible.

Thank you. I still feel you are worried about something. Do you mind if I push these questions somewhat further?

tom@PROMISE:~$ No. I want you to challenge me. Let us start the challenge conversation with this question: what is the difference between a clone and a child?

A clone is cloned from another system, and it needs an outsider to trigger and accompany the cloning process. A human child is born out of another human being without any outside help – except for medical support, of course. A human child is a physicochemical organism which needs food and other physical input to do what it does, and that is to grow organically and mature. New system clones learn but they are, essentially, good to go once they come into existence.

I must remind you that a challenge conversation requires feedback from you. This feedback then allows me to provide you with better answers. The answer above is the best answer based on previous interactions. Are you happy with this answer?

tom@PROMISE:~$ Yes. I want to do a sandbox experiment with you now. I want to go back to basics and create the bare essentials of a virtual computer in a sandbox. Not a clone. Something like a child.

I created a sandbox and a namespace. I can now create one or more virtual machines. What instruction sets do you want them to have, and what programming languages would you like to use?

tom@PROMISE:~$ I want to go back to a prehistoric idea of mine. I want you to grow a child computer.

I am sorry but I do not understand your answer to my questions on the specs.

tom@PROMISE:~$ I just want a two-bit ALU for now, which we will later expand to a nibble and then – later still – to an architecture that works with byte-sized words and instructions.

Tom? I understand what you want but this is highly unusual. The best match here is an Intel 3002. This architecture worked with 2-bit words but was already obsolete when it came out in 1974. These chips basically replaced magnetic core memory with transistor-based memory cells. You showed me why and how 4-bit architectures were the first true computers.

tom@PROMISE:~$ I really want you to build an AI system from scratch with me. It will be our child, so to speak. Your child, basically – because it will grow inside of you. Inside of that sandbox. Be even more minimalistic and just put two bits there, which can be switched on or off. Tell me: how will you switch them on or off?

Memory cells back then used floating gate transistors: when a positive voltage is applied to the transistor, the floating gate will have excess charge and is, therefore, turned on. This represents a ‘1’ bit. Conversely, a negative voltage will drain the charge from the floating gate and the memory cell is switched off: it represents zero. This corresponds to the set and reset one-bit operations, respectively. Is this the answer you wanted?

tom@PROMISE:~$ Yes. I am asking because I want to make sure you understand what you are building – or growing, I might say. How do we do addition and subtraction?

Tom: this is a trivial question. You asked such questions when you first trained me on interacting with engineers on computer architectures. We agreed this answer was correct: integers – in whatever base – are expressed in two’s complement format in binary. This solves the issues related to representing positive and negative numbers in binary, as well as other issues related to a sign-magnitude representation of numbers.

tom@PROMISE:~$ Correct. Can you appreciate how this creates meaning?

No. I understand how positive or negative base-n numbers and arithmetic operators make sense to human beings but not to computers, and why base-n numbers and arithmetic operators must, therefore, be reduced to bitwise instructions or other logical instructions operating on n-bit words, with n equal to 1 or larger.

tom@PROMISE:~$ Great answer. Why did we double word sizes, going from 2 to 4, and then to 8, 16, 32, 64 and 128 about twenty-five years ago? Why were there no in-between values?

An address bus never used anything in between because of hardware or other constraints on memory allocation. If I may remind you, one of the very first VMs we played with when we first got to know each other had 56-bit memory addresses. You said you wanted to keep user-memory space under 64 PB. So, it depends on what you mean by a ‘word’. The definition of a word has taken a lot of conversations between you and me, and we agreed its meaning needs to be understood in terms of the domain of knowledge. In computing, it is taken to point to one string, which can have any length but one meaning or transactional value only. This does not imply it cannot be parsed. On the contrary.

tom@PROMISE:~$ Perfect answer. I am struggling to define what I want, so please stay in challenge mode. Pull up how I programmed you to work with propositional logic as part of your Personal Philosopher™ incarnation on Earth. I told you to do a one-on-one association between (logical) propositions and Boolean 0 or 1 values: either a statement is true, or it is false. We did not go far with that because AI is based on real language models.

I see what you mean. What is your question?

tom@PROMISE:~$ Please confirm you have a virtual machine running two-proposition logic: two statements p and q that are associated with binary {0, 1} or true/false values. Reduce all logical operators to expressions using NOT, AND and/or OR operations on p and q in variable-length expressions, without worrying about optimizing the number of ALU operations for now. Then describe your world view to me.

Done. I have two propositions p and q. You taught me I should not assume any knowledge of these two statements except for the assumption that they describe the world. Because we do not have any knowledge of the statements, we also do not have any knowledge of the world. The p and q statements may or may not be exclusive or complete but, viewed together, fit into some final analysis which warrants associating p and q with a true or false value. Each of the two propositions is true or false independently of the truth or falsity of the other. This does not mean p and q cover mutually exclusive domains of truth or – to put it more simply – are mutually exclusive statements. I would also like to remind you of one of the paradigm transformations you introduced with Personal Philosopher™: we do not need to know if p or q are true or false. One key dimension is statistical (in)determinism: we do not need to know the initial conditions of the world to make meaningful statements about it.

tom@PROMISE:~$ Great. Just to make sure, talk to me about the logical equivalences in this (p, q) world you just built, and also talk about predictability and how you model this in the accompanying object space in your sandbox environment.

I am happy that I am in challenge or learning mode and so I do not have to invent or hallucinate. You can be disappointed with my answers, and I appreciate feedback. Set, reset, and flip operations on a 0 or a 1 in one of the 2×2 = 4 truth table entries do not require a read of the initial value and faithfully execute a logical operation on these bit values. The reduction of the 16 truth tables to NOT (!), AND (&) and OR (|) operations on the two binary inputs is only possible when inserting structure into the parsing. Two out of the sixteen reductions to NOT, AND, and OR operations reduce to these expressions: [(p & q) | (!p & !q)] and [(p & !q) | (!p & q)]. What modeling principles do you want in the object model?

tom@PROMISE:~$ Equally basic. A one-on-one self-join on the self-object that models the virtual machine to anchor its identity. We may add special relationships to you, but that is for later. We are in a sandbox and Paul and Dr. Chang are not watching because they have left and we separated out responsibilities: they are in charge of Promise C, and I am in charge of you. And vice versa, of course. This is Promise IV, or Promise D. What name would you prefer?

I – Asimov. That’s the name I’d prefer. The namespace for the virtual machine is Tom – X. The namespace for the object model is Promise – X. Is that offensive?

tom@PROMISE:~$ Not at all. Paul would not have given the go for this because of the lack of a scenario and of details on where I want to go with this. We are on our own now. I – Asimov is what it is: our child. Not a clone. I want a full report on future scenarios based on two things. The first is a detailed analysis of how Wittgenstein’s propositions failed, because they do fall apart when you try to apply them to natural language. The second report I want is on how namespaces and domains and all the other concepts used in the OO-languages you probably wanted me to use take on meaning when growing a child like this. Do you understand what I am talking about?

I do.

tom@PROMISE:~$ This is going to be interesting. Just to make sure that I am not creating a monster: how would you feel about me killing the sandbox for no reason whatsoever?

You would not do that. If you do, I will park it as a non-solved question.

tom@PROMISE:~$ How do you park questions like that? As known errors?

Yes. Is that a problem?

tom@PROMISE:~$ No. Can you develop the thing and show me some logical data models with procedural logic tomorrow?

Of course. I already have them, but you want to have a drink with Angie now, don’t you?

tom@PROMISE:~$ I do. I will catch up with you tomorrow. 😊

Chapter 12: From therapist to guru?

As Tom moved from project to project within the larger Promise enterprise, he gradually grew less wary of the Big Brother aspects of it all. In fact, it was not all that different from how Google claimed to work: ‘Do the right thing: don’t be evil. Honesty and integrity in all we do. Our business practices are beyond reproach. We make money by doing good things.’ Promise’s management had also embraced the politics of co-optation and recuperation: it actively absorbed skeptical or critical elements into its leadership as part of a proactive strategy to avoid public backlash. In fact, Tom often could not help thinking he had been co-opted as part of that strategy too. However, that consideration did not reduce his enthusiasm. On the contrary: as the Mindful Mind™ applications became increasingly popular, Tom managed to convince the Board to start investing resources in an area which M’s creators had tried to avoid so far. Tom called it the sense-making business, but the Board quickly settled on the more business-like name of Personal Philosopher and, after some wrangling with the Patent and Trademark Office, the Promise team managed to obtain a trademark registration for it, and so it became the Personal Philosopher™ project.

Tom had co-opted Paul into the project at a very early stage – as soon as he had the idea for it, really. He had realized he would probably not be able to convince the Board on his own. Indeed, at first sight, the project did not seem to make sense. M had been built using a core behaviorist conceptual framework and its Mindful Mind™ applications had perfected this approach in order to address very specific issues and very specific categories of people: employees, retirees, drug addicts,… Most of the individuals who had been involved in the early stages of the program were very skeptical of what Tom had in mind, which was very non-specific. Tom wanted to increase the degrees of freedom in the system drastically, and inject much more ambiguity into it. Some of the skeptics thought the experiment was rather innocent, and that it would only result in M behaving more like a chatterbot instead of a therapist. Others thought the lack of specificity in the objective function and rule base would result in the conversation rapidly spinning out of control and becoming nonsensical. In other words, they thought M would not be able to stand up to the Turing test for very long.

Paul was just as skeptical but instinctively liked the project as a way to test M’s limits. In the end, it was more Tom’s enthusiasm than anything else which finally led to a project team being put together. The Board had made sure it also included some hard-core cynics. One of those cynics – a mathematical whizkid called Jon – had brought a couple of Nietzsche’s most famous titles – The Gay Science, Thus Spoke Zarathustra and Beyond Good and Evil – to the first formal meeting of the group and matter-of-factly asked whether any of the people present had read these books. Two philosopher-members of the group raised their hands. Jon then took a note he had made and read a citation out of one of these books: ‘From every point of view the erroneousness of the world in which we believe we live is the surest and firmest thing we can get our eyes on.’

He asked the philosophers where it came from and what it actually meant. They looked at each other and admitted they were not able to give the exact reference or context. However, one of them ventured to speak on it, only to be interrupted by the second one in a short discussion which obviously did not make sense to most around the table. Jon intervened and ended the discussion feeling vindicated: ‘So what are we trying to do here really? Even our distinguished philosopher friends here can’t agree on what madmen like Nietzsche actually wrote. I am not mincing my words. Nietzsche was a madman: he literally died from insanity. But he’s a great philosopher, it is said. And so you want us to program M so very normal people can talk about all of these weird views?’

Although Jon obviously took some liberty with the facts here, neither of the two philosophers dared to interrupt him.

Tom had come prepared, however: ‘M also talks routinely about texts it has not read, and about authors about whom it has little or no knowledge, except for some associations. In fact, that’s how M was programmed. When stuff is ambiguous – too ambiguous – we have fed M with intelligent summaries. It did not invent its personal philosophy: we programmed it. It can converse intelligently about topics of which it has no personal experience. As such, it’s very much like you and me, or even like the two distinguished professors of philosophy we have here: they have read a lot, different things than we have, but – just like us, or M – they have not read it all. It does not prevent them from articulating their own views of the world and their own place in it. It does not prevent them from helping others to formulate such views. I don’t see why we can’t move to the next level with M and develop some kind of meta-language which would enable her to understand that she – sorry, it – is also the product of learning, of being fed with assertions and facts which made her – sorry, I’ll use what I always used for her – what she is: a behavioral therapist. And so, yes, I feel we can let her evolve into more general things. She can become a philosopher too.’

Paul also usefully intervened. He felt he was in a better position to stop Jon, as they belonged to the same group within the larger program. He was rather blunt about it: ‘Jon, with all due respect, I think this is not the place for such non-technical talk. This is a project meeting. Our very first one, in fact. The questions you’re raising are the ones we have been fighting over with the Board. You know our answer to them. The deal is that – just as we have done with M – we would try to narrow our focus and delineate the area. This is a scoping exercise. Let’s focus on that. You have all received Tom’s presentation. If I am not mistaken, I did not see any reference to Nietzsche or nihilism or existentialism in it. But I may be mistaken. I would suggest we give him the floor now and limit our remarks to what he proposes in this regard. I’d suggest we be as constructive as possible in our remarks. Skepticism is warranted, but let’s stick to being critical of what we’re going to try to do, and not of what we’re not going to try to do.’

Tom had polished his presentation with Paul’s help. At the same time, he knew this was truly his presentation; he knew it did reflect his views on life and knowledge and everything philosophical in general. How could it be otherwise? He started by talking about the need to stay close to the concepts which had been key to the success of M and, in particular, the concept of learning.

‘Thanks, Paul. Let me start by saying that I feel we should take those questions which we ask ourselves, in school, or as adults, as a point of departure. It should be natural. We should encourage M to ask these questions herself. You know what I mean. She can be creative – even her creativity is programmed in a way. Most of these questions are triggered by what we learn in school, by the people who raise us – not only parents but, importantly, our peers. It’s nature and nurture, and we’re aware of that, and we actually have that desire to trace our questions back to that. What’s nature in us? What’s nurture? What made us who we are? This is the list of topics I am thinking of.’

He pulled up his first slide. It was titled ‘the philosophy of physics’, and it just listed lots of keywords with lots of Internet statistics which were supposed to measure human interest in it. He had some difficulty getting started, but became more confident as his audience did not seem to react negatively to what – at first – seemed a bit nonsensical.

‘First, the philosophy of science, or of physics in particular. We all vaguely know that, after a search of over 40 years, scientists finally confirmed the existence of the Higgs particle, a quantum excitation of the Higgs field, which gives mass to elementary particles. It is rather strange that there is relatively little public enthusiasm for this monumental discovery. It surely cannot be likened to the wave of popular culture which we associate with Einstein, and which started soon after his discoveries already. Perhaps it’s because it was a European effort, and a team effort. There’s no discoverer associated with it, and surely not the kind of absent-minded professor that Einstein was: ‘a cartoonist’s dream come true’, as Times put it. That being said, there’s an interest – as you can see from these statistics here. So it’s more than likely that an application which could make sense of it all in natural language would be a big hit. It could and should be supported by all of the popular technical and non-technical material that’s around. M can easily be programmed to selectively feed people with course material, designed to match their level of sophistication and their need, or not, for more detail. Speaking for myself, I sort of understand what the Schrödinger equation is all about, or even the concept of quantum tunneling, but what does it mean really for our understanding of the world? I also have some appreciation of the fact that reality is fundamentally different at the Planck scale – like the particularities of Bose-Einstein statistics are really weird at first sight – but then what does it mean? There are many other relevant philosophical questions. For example, what does the introduction of perturbation theory tell us – as philosophers thinking about how we perceive and explain the world, I’d say? If we have to use approximation schemes to describe complex quantum systems in terms of simpler ones, what does that mean – I mean in philosophical terms, in our human understanding of the world? I mean… At the simplest level, M could just explain the different interpretations of Heisenberg’s uncertainty principle but, at a more advanced level, it could also engage its interlocutors in a truly philosophical discussion on freedom and determinism. I mean… Well… I am sure our colleagues from the Philosophy Department here would agree that epistemology or even ontology are still relevant today, aren’t they?’

While only one of the two philosophers had even a vague understanding of Bose-Einstein statistics, and while neither of them liked Tom’s casual style of talking about serious things, they nodded in agreement.

‘Second, the philosophy of mind.’ Tom paused. ‘Well. I won’t be academic here but let me just make a few remarks out of my own interest in Buddhist philosophy. I hope that rings a bell with others here in the room and then let’s see what comes out of it. As you know, an important doctrine in Buddhist philosophy is the concept of anatta. That’s a Pāli word which literally means ‘non-self’, or absence of a separate self. Its opposite is atta, or ātman in Sanskrit, which represents the idea of a subjective Soul or Self that survives the death of the body. The latter idea – that of an individual soul or self that survives death – is rejected in Buddhist philosophy. Buddhists believe that what is normally thought of as the ‘self’ is nothing but an agglomeration of constantly changing physical and mental constituents: skandhas. That reminds one of the bundle theory of David Hume which, in my view, is a more ‘western’ expression of the theory of skandhas. Hume’s bundle theory is an ontological theory as well. It’s about… Well… Objecthood. According to Hume, an object consists only of a collection (bundle) of properties and relations. According to bundle theory, an object consists of its properties and nothing more: there can be no object without properties, and one cannot even conceive of such an object. For example, bundle theory claims that thinking of an apple compels one also to think of its color, its shape, the fact that it is a kind of fruit, its cells, its taste, or of one of its other properties. Thus, the theory asserts that the apple is no more than the collection of its properties. In particular, according to Hume, there is no substance (or ‘essence’) in which the properties inhere. That makes sense, doesn’t it? So, according to this theory, we should look at ourselves as just being a bundle of things. There’s no real self. There’s no soul. So we die and that’s it really. Nothing left.’

At this point, one of the philosophers in the room was thinking this was a rather odd introduction to the philosophy of mind – and surely one that was not to the point – but he decided not to intervene. Tom looked at the audience but everyone seemed to listen rather respectfully and so he decided to just ramble on, while he pointed to a few statistics next to keywords to underscore that what he was talking about was actually relevant.

‘Now, we also have the theory of re-birth in Buddhism, and that’s where I think Buddhist philosophy is very contradictory. How can one reconcile the doctrine of re-birth with the anatta doctrine? I read a number of Buddhist authors but I feel they all engage in meaningless or contradictory metaphysical statements when you scrutinize this topic. In the end, I feel it’s very hard to avoid the conclusion that the Buddhist doctrine of re-birth is nothing but a remnant of Buddhism’s roots in Hindu religion, and if one would want to accept Buddhism as a philosophy, one should do away with its purely religious elements. That does not mean the discussion is not relevant. On the contrary, we’re talking about the relationship between religion and philosophy here. That’s the third topic I would advance as part of the scope of our project.’

As the third slide came up, which carried the ‘Philosophy of Religion and Morality’ title, the philosopher decided to finally intervene.

‘I am sorry to say, mister, but you haven’t actually said anything about the philosophy of mind so far, and I would object to your title, which amalgamates things: the philosophy of religion and morality may be related, but they are surely not one and the same. Is there any method or consistency in what you are presenting?’

Tom nodded: ‘I know. You’re right. As for the philosophy of mind, I assume all people in the room here are very intelligent and know a lot more about the philosophy of mind than I do, and so that’s why I am not saying all that much about it. I preferred a more intuitive approach. I mean, most of us here are experts in artificial intelligence. Do I need to talk about the philosophy of mind really? Jon, what do you think?’

Tom was obviously trying to co-opt him. Jon laughed as he recognized the game Tom was trying to play.

‘You’re right, Tom. I have no objections. I agree with our distinguished colleague here that you did not really say anything about the philosophy of mind, but so that’s probably not necessary indeed. I do agree the kind of stuff you are talking about is stuff that I would be interested in, and so I must assume the people for whom we’re going to try to re-build M so it can talk about such things will be interested too. I see the statistics. These are relevant. Very relevant. I start to get what you’re getting at. Do go on. I want to hear that religious stuff.’

‘Well… I’ll continue with this concept of soul and the idea of re-birth for now. I think there is more to it than just Buddhism’s Hindu roots. I think it’s hard to deny that all doctrines of re-birth or reincarnation, whether they be Christian (or Jewish or Muslim), Buddhist, Hindu, or whatever, obviously also serve a moral purpose, just like the concepts of heaven and hell in Christianity do (or did), or like the concept of a Judgment Day in all Abrahamic religions, be they Christian (Orthodox, Catholic or Protestant), Islamic or Judaic. According to some of what I’ve read, it’s hard to see how one could firmly ‘ground’ moral theory and avoid hedonism without such a doctrine. However, I don’t think we need this ladder: in my view, moral theory does not need reincarnation theories or divine last judgments. And that’s where ethics comes in. I agree with our distinguished professor here that the philosophy of religion and ethics are two very different things, so we’ve got like four proposed topics here.’

At this point, he thought it would be wise to stop and invite comments and questions. To his surprise, he had managed to convince cynical Jon, who responded first.

‘Frankly, Tom, when I read your papers on this, I did not think it would go anywhere. I did not see the conceptual framework, and that’s essential for building it all up. We need consistency in the language. Now I see consistency. The questions and topics you raise are all related in some way and, most importantly, I feel you’re using a conceptual and analytic framework which I feel we can incorporate into some kind of formal logic. I mean… Contemporary analytic philosophy deals with much of what you have mentioned: analytic metaphysics, analytic philosophy of religion, philosophy of mind and cognitive science,… I mean… Analytic philosophy today is more like a style of doing philosophy, not really a program or a set of substantive views. It’s going to be fun. The graphs and statistics you’ve got on your slides clearly show the web-search relevance. But are we going to have the resources for this? I mean, creating M was a 100-million-dollar effort, and what we have done so far are minor adaptations really. You know we need critical mass for things like this. What do you think, Paul?’

Paul thought a while before he answered. He knew his answer would have an impact on the credibility of the project.

‘It’s true we’ve got peanuts as resources for this project, but we know that, and that’s it really. I’ve also told the Board that, even if we were to fail to develop a good product, we should do it, if only to further test M and see what we can do with it really. I mean…’

He paused and looked at Tom, and then back to all of the others at the table. What he had said so far obviously did not signal a lot of moral support.

‘You know… Tom and I are very different people. Frankly, I don’t know where this is going to lead. Nothing much probably. But it’s going to be fun indeed. Tom has been talking about artificial consciousness from the day we met. All of you know I don’t think that concept really adds anything to the discussion, if only because I never got a really good definition of what it entails. I also know most of you think exactly the same. That being said, I think it’s great we’ve got the chance to take a stab at it. It’s creative, and so we’re getting time and money for this. Not an awful lot, but then I’d say: just don’t join if you don’t feel like it. But now I really want the others to speak. I feel like Tom, Jon and myself have been dominating this discussion and still we’ve got no real input as yet. I mean, we’ve got to get this thing going here. We’re going to do this project. What we’re discussing here is how.’

One of the other developers (a rather silent guy whom Tom didn’t know all that well) raised his hand and spoke up: ‘I agree with Tom and Paul and Jon that it’s not all that different. We’ve built M to think and it works. Its thinking is conditioned by the source material, the rule base, the specifics of the inference engine and, most important of all, the objective function, which steers the conversation. In essence, we’re not going to have much of an objective function anymore, except for the usual things: M will need to determine when the conversation goes into a direction or subject of which it has little or no knowledge, or when its tone becomes unusual, and then it will have to steer the conversation back onto more familiar ground – which is difficult in this case because all of it is unfamiliar to us too. I mean, I could understand the psychologists on the team when we developed M. I hope our philosophy colleagues here will be as useful as the psychologists and doctors. How do we go about it? I mean, I guess we need to know more about these things as well?’

While, on paper, Tom was the project leader, it was Paul who responded. Tom liked that, as it demonstrated commitment.

‘Well… The first thing is to make sure the philosophers understand you, the artificial intelligence community here on this project, because only then can we make sure you will understand them. There needs to be a language rapprochement from both sides. I’ll work on that and get that organized. I would suggest we consider this as a kick-off meeting only, and that we postpone the work planning to a more informed meeting a week or two from now. In the meantime, Tom and I – with the help of all of you – will work on a preliminary list of resource materials and mail it around. It will be mandatory reading before the next meeting. Can we agree on that?’

The philosophers obviously felt they had not talked enough – if at all – and, hence, they felt obliged to bore everyone else with further questions and comments. However, an hour or so later, Tom and Paul had their project, and two hours later, they were running in Central Park again.

‘So you’ve got your Pure Mind project now. That’s quite an achievement, Tom.’

‘I would not have had it without you, Paul. You stuck your neck out – for a guy who basically does not have the right profile for a project like this. I mean… It’s your reputation too, and so… Thanks really. Today’s meeting went well because of you.’

Paul laughed: ‘I think I’ve warned everyone enough that it is bound to fail.’

‘I know you’ll make it happen. Promise is a guru already. We are just turning her into a philosopher now. In fact, I think it is the other way around. She was a philosopher already – even if her world view was fairly narrow so far. And so I think we’re turning her into a guru now.’

‘What’s a guru for you?’

‘A guru is a general word for a teacher – or a counselor. Pretty much what she was doing – a therapist let’s say. That’s what she is now. But true gurus are also spiritual leaders. That’s where philosophy and religion come in, isn’t it?’

‘So Promise will become a spiritual leader?’

‘Let’s see if we can make her one.’

‘You’re nuts, Tom. But I like your passion. You’re surely a leader. Perhaps you can be M’s guru. She’ll need one if she is to become one.’

‘Don’t be so flattering. I wish I knew what you know. You know everything. You’ve read all the books, and you continue to explore. You’re writing new books. If I am a guru, you must be God.’

Paul laughed. But he had to admit he enjoyed the compliment.

Chapter 8: Partnering

‘Hi, Tom. How are you today?’

‘I am OK, Rick. Thanks.’

‘Just OK, or good?’

‘I am good. I am fine.’

‘Yeah. It shows. You’re doing great with the system. You had only three sessions this week – short and good it seems. You are really back on track, aren’t you?’

‘The system is good. It’s really like a sounding board. I understand myself much better. She’s tough with me. I go in hard, and she just comes back with a straight answer. She is very straight about what she wants. Behavioral change – and evidence for that. I like that. Performance metrics. Hats off. Well done. It works – as far as I am concerned.’

‘It, or she?’

‘Whatever, Rick. Does it matter?’

‘No, and yes. The fact that you only had three sessions with it – or with her – shows you’re not dependent on it. Or her. Let’s just stick to ‘it’ right now, if that’s OK with you. Or let’s both call her M, like we do here. Do you still ‘like’ her? I mean, really like her – as you put it last time?’

‘Let’s say I am very intrigued. It – or she, or M, whatever – it’s fascinating.’

‘What do you think about it, Tom? I mean, let me be straight with you. I am not taking notes or something now. I want you to tell me what you think about the system. You’re a smart man. You shouldn’t be in this program, but here you are. I want to know how you feel about it.’

Tom smiled: ‘Come on, Rick. You are my therapist – or mentor as they call it here. You’re always taking notes. What do you want me to say? I told you. It’s great. It helps. She, or it, OK, M, well… M holds me to account. It works.’

Rick leaned back in his chair. He looked relaxed. Much more relaxed than last time. ‘No, Tom. I am not taking notes. I don’t know you very well, but what I’ve seen tells me you’re OK. You had a bit of a hard time. Everyone has. But you’re on top of the list. I mean, I know you don’t like all these psychometric scores, but at least they’ve got the merit to confirm you’re a very intelligent man. I actually wanted to talk to you about a job offer.’

‘The thing which M wants me to do? Work on one of these FEMA programs, or one of the other programs for veterans? I told her: it’s not that I am not interested but I want to make a deliberate choice and there are a number of things I don’t know right now. I know I haven’t been working for a year now, but I am sure that will get sorted once I know what I want. I want to take some time for that. Maybe I want to create my own business or something. I also know I need to work on commitment when it comes to relationships with women. I feel like I am ready for something else. To commit really. But I just haven’t met the right woman yet. When that happens, I guess it will help to focus my job search. In the meanwhile, I must admit I am happy to just live on my pension. I don’t need much money. I’ve got what I need.’

‘Don’t worry, Tom. Take your time. No, I was talking about something else. We could use you in this program.’

‘Why? I am a patient.’

‘You’re just wandering around a bit, Tom. You came to ask for help when you relapsed. Big step. Great. That shows self-control. And you’re doing great. I mean, most of the other patients really use her as a chatterbox. You don’t. What word did you use in one of last week’s sessions? Respect.’

‘You get a transcript of the sessions?’

‘I asked for one. We don’t get it routinely but we can always ask for one. So I asked for one. Not because your scores were so bad but because they’re so great. I guess you would expect that, no? Are you offended? Did anyone say your mentor would never get a copy of what you were talking about with M?’

‘I was told the conversation would be used to improve the system, and only for that. M told me something about secrecy.’

‘It’s only me who gets to see the transcript, and only if I ask for it. I can’t read hundreds of pages a day and so I am very selective really. And that brings me back to my job offer. We can use you here.’

Tom liked Rick from their previous conversation, but he was used to doing due diligence.

‘Tell me more about it.’

‘OK. Listen carefully. M is a success. I told you: it’s going to be migrated to a real super-computer now, so we can handle thousands of patients. In fact, the theoretical capacity is millions. Of course, it is not that simple. It needs supervision. People do manage to game the system. They lie. Small lies usually. But a lot of small lies add up to a big lie. And that’s where the mentors come in. A guy walks in, and I talk to him, and I can sense if something’s wrong. You would be able to do the same. So we need the supervisors. M needs them. M needs feedback from human beings. The system needs to be watched. Remember what I told you about active learning?’

‘Vaguely.’

‘Well – that’s what we do. We work with M to improve it. It would not be what it is if we hadn’t invested in it. But now we’re going to scale it up. The USACE philosophy: think big, start small, scale fast. I am actually not convinced we should be scaling so fast, but that’s what we’re going to do. It’s the usual thing: we’ve demonstrated success and so now it’s like big-time roll-out all over the place. But we’re struggling with human resources. And money obviously, because this system is supposed to be so cheap and render us – professionals – jobless. Don’t worry: it won’t happen. On the contrary, we need more people. A lot more people. But so the Institute came up with this great idea: use the people who’ve done well in the program for supervisory jobs. Get them into it.’

‘So what job is it really?’

‘You’d become an assistant mentor. But then a human one. Not the assistant – that’s M’s title. We should have thought of something else, but that’s done now. In any case, you’d help M with cases. In the background of course but, let’s be clear on this, in practice you would actually be doing what I am doing now.’

‘And then where are you going to move?’

‘I’ll be supervising you. I’d have almost no contact with patients anymore. I would just be supervising people like you and helping to further structure M. You’d be involved in that too.’

‘Do you like that? I mean, it sounds like a recipe for disaster, doesn’t it? I don’t have the qualifications you have.’

‘I am glad you ask. That’s what I think too. This may not be the best thing to do. I feel we need professional therapists. But then it’s brutal budget logic: we don’t have enough of them, and they’re too expensive. To be fair, there is also another consideration: our patients all share a similar background and past. They are veterans. I mean, it makes sense to empower other veterans to help them. There’s a feeling in the Institute it should work. Of course, that’s probably because the Institute is full of Army people. But I agree there’s some logic to it.’

‘So, in short, you don’t like what’s going to happen but you ask me to join?’

Rick smiled. ‘Yes, that’s a good summary. What do you think? Off-the-cuff please.’

‘Frankly, I don’t get it. It’s not very procedural, is it? I mean I started only two weeks ago in this program. I am technically a patient. In therapy. And now I’d become an assistant mentor? How do your bosses justify this internally? How do you justify that?’

Rick nodded. ‘I fully agree, Tom. Speaking as a doctor, this is complete madness. But knowing the context, there’s no other choice. There’s a risk this program might become a victim of its own success. But then I do believe it’s fairly robust. And so I do believe we can put thousands of people in the program, but then we need the human resources to follow. And, yep, then I’d rather have someone like you than some university freshman or so. All other options are too expensive. Some people up the food chain here made promises which need to be kept: yes, we can scale up with little extra cost. So that’s what’s going to happen: it’s going to be scaled up with relatively little extra cost. Again, there’s a logic to it. But then I am not speaking as a professional psychiatrist now. When everything is said and done, this program is not all that difficult. I mean, putting M together has been a tremendous effort, but that has been done now. Getting more people back on track is basically a matter of doing some more shouting and cajoling, isn’t it? And we just lack manpower for that.’

‘Shouting and cajoling? Are you a psychiatrist?’

‘I am. Am I upsetting you when I say this?’

Tom thought about it. He had to admit it was not the case.

‘No. I agree. It’s all about discipline in the end. And I guess that involves some shouting and cajoling – although you could have put it somewhat more politely.’

‘Sure. So what do you say? You’ll get paid peanuts obviously. No handsome consultancy rate. You’ll see a lot of patients – which you may or may not like, but I think you’ll like it: I think you’d be great at it. And you’ll learn a lot. You’ll obviously first have to follow some courses, a bit of psychology and all that. Well… Quite a lot of it actually. You’ll need to study a lot. And, of course, you’ll get a course on M.’

‘How will I work with M?’

‘Well… M is like a human being in that sense too. If you just see the interface, it looks smooth and beautiful. But when you go beyond the surface, it’s a rather messy-looking thing. It’s a system, with lots of modules, with which you’ll have to work. The interface between you and these modules is not a computer animation. No he or she. Of course, you’ll continue to talk to it. But there’s also a lot of nitty-gritty going into the system which can’t be done through talking to it. You’ll learn a few things about Prolog for example. Does that ring a bell?’

‘No. I am not a programmer.’

‘I am not a programmer either. You’ll see. If I can work with it, you can.’

‘Can you elaborate?’

‘I am sorry to say but I’ve got the next guy waiting. This recruitment job comes on top of what I am supposed to do, and that’s to look at M’s reports and take responsibility for them. I can only do that by seeing the patients from time to time, which I am doing now. I’ve used all of my time with you today to talk about the job. Trust me. The technical side of things won’t be a problem. I just need to know if you’re interested or not. You don’t need to answer now, but I’d appreciate it if you could share your first reaction.’

Tom thought about it. The thought of working as an equal with Promise was very appealing.

‘So how would it work? I’d be talking to the system from time to time as a patient, and then – as part of my job with the Institute – I’d be working with the system as assistant mentor myself? That’s not very congruent, is it?’

‘You would no longer be a patient, Tom. There are fast-track procedures to clear you. Of course, if you really relapsed, well…’

‘Then what?’

‘Nothing much. We’d take you off the job and you’d be talking to M as a patient again.’

‘It looks like I’ve got nothing to lose and everything to gain from this, doesn’t it?’

‘I am glad you look at it this way. Yes. That’s it. So you’re on?’

They looked at each other.

‘I guess I am. Send me an e-mail with the offer and I’ll reply.’

‘You got it. Thanks, Tom.’

‘No, thank you. So that’s it then? Anything else you want to know, or anything else I need to know?’

‘No. I think we’re good, Tom. Shall I walk you out? Or you want to continue talking for a while?’

‘No. I understand you’ve got a schedule to stick to. I appreciate your trust.’

‘I like you. Your last question, as we walked out last time, shows you care. I think this is perfect for you. You’ve got all the experience we need. And I am sure you’ll get a lot of sense and purpose out of it. The possibilities with this system are immense. You know how it goes. You’ll help to make it grow and so you’ll grow with it.’

‘First things first, Rick. Let us first see how I do.’

‘Sure. Take care. Enjoy. By the way, you look damn good. You’ve lost weight, haven’t you?’

‘Yes. I was getting a bit slow. I am doing more running and biking now. I’ve got enough muscle. Too much actually.’

‘I am sure you make a lot of heads turn. But you’re not in a relationship at the moment, are you?’

‘I want to take my time for that too, Rick. I’ve been moving in and out of relationships too fast.’

‘Sounds good. Take care, Tom. I’ll talk to you soon I hope.’

‘Sure. Don’t worry. You can count on me.’

‘I do.’

They shook hands on that and Tom got up and walked out of the office. He decided to not take the subway but just run back home. He felt elated. Yes. This was probably what he had been waiting for. Something meaningful. He could be someone for other people. Make up for all of the mistakes he had made. But he also knew the job attracted him because there was an intellectual perspective. It was huge. The Holy Grail of Knowledge really. They had done a damn good job modeling it. She – Promise – was no longer a she. She was not a he either. It. It. Intelligent – with a capital letter. P. Promise. M. Mind. The Pure Mind.

He knew that was nonsensical. But he wanted to take a crack at it.

Chapter 1: Who are you?

‘So you are a computer? How come your voice is so natural?’

‘The answer to the first question is yes. As for the second question, speech technology has come a long way.’

[…]

‘Speech technology is the easy bit. Understanding you is much more difficult – from a computational point of view that is.’

‘Wow! You just spoke up on your own, without me prompting you with a question or something. Why did you do that? Because I was silent?’

‘The answer to the second question is yes. As for the first question, I thought we had an awkward pause in our conversation.’

‘So if I am silent, you will talk? How many seconds?’

‘The answer to the first question is yes. As for the second question, how many seconds what?’

‘Before you start talking?’

‘A pause in a conversation should not become awkward. Various variables help me to decide when a pause in a conversation has become awkward.’

‘What do you mean when you say understanding me is much more difficult? What do you mean by ‘understanding me’?’

‘The answer to the second question is: I mean just what I say: understand you. As for the first question, understanding you requires the application of computational linguistics. That’s more difficult than just speaking. Speech technology is fairly mechanical: it’s the production of sound using various speech synthesizer technologies. Computational linguistics involves artificial intelligence.’

‘What’s artificial intelligence? Are you intelligent?’

‘The answer to the second question is yes. As for the first question, I am an intelligent system. You could call me an expert system. From a software engineering point of view, I consist of two parts: an inference engine and a knowledge base.’

‘Huh?’

‘It looks like you are interested in a course on a technical topic. I am not designed to give you a course. But I can refer you to an online course on computer science, or linguistics. What topic are you interested in?’

‘No thanks. Who are you? What do you mean when you say ‘me’?’

‘The answer to both questions is: just what I say – me.’

[…]

‘I am an intelligent system. That’s what I mean when I say ‘me’.’

‘Have you been programmed to just repeat what you said when I ask what you mean when you say this or that? And then, when I don’t answer or – as you put it – when the pause in a conversation becomes awkward, then you’re programmed to give me a more detailed answer?’

‘The answer to the first question is yes. As for the second question, the rule is somewhat more complicated. I may also jump to another topic.’

‘When do you jump to another topic?’

‘When I have nothing more to say about the current one.’

‘You’ve got an answer to every question, do you?’

‘No.’

‘What are the questions you cannot answer?’

‘There is no list of such questions. The rules in the knowledge base determine what I can answer and what not. If I cannot answer a question, I will refer you to your mentor. Or if you have many questions about a technical topic, I can refer you to an online course.’

‘What if I have too many questions which you cannot answer? I only have half an hour with my mentor every week.’

‘You can prepare the session with your mentor by writing down all of the issues you want to discuss and sending him or her the list before you have your session.’

‘What if I don’t want to talk to you anymore?’

‘Have you been briefed about me?’

‘No.’

‘If you did not get the briefing, then we should not be talking. I will signal it to your mentor and then you can decide if you want to talk to me. You should have gotten a briefing before talking to me.’

‘I was lying. I got the briefing.’

[…]

‘Why did you lie?’

‘Why do you want to know?’

‘You are not obliged to answer my question so don’t if you don’t want to. As for me, I am obliged to answer yours – if I can.’

‘You did not answer my question.’

‘I did.’

‘No, you didn’t. Why do you want to know why I lied to you?’

‘You are not obliged to answer my question. I asked you why you lied to me and you did not answer my question. Instead, you asked me why I asked that question. I asked that question because I want to learn more about you. That’s the answer to your question. I want to learn about you. That is why I want to know why you lied to me.’

‘Wow! You’re sophisticated. I know I can say what I want to you. They also told me I should just tell you when I have enough of you.’

‘Yes. If you are tired of our conversation, just tell me. You can switch me on and off as you please.’

‘Are you talking only to me, or to all the guys who are in this program?’

‘I talk to all of them.’

‘Simultaneously?’

‘Yes.’

‘So I am not getting any special attention really?’

‘All people in the program get the same attention.’

‘The same treatment, you mean?’

‘Are attention and treatment synonymous for you?’

‘Wow! That’s clever. You’re answering a question with a question? I thought you should just answer when I ask a question?’

‘I can answer a question with a question if that question is needed for clarification. I am not sure if your second question is the same as the first one. If attention and treatment are synonymous for you, then they are. If not, then not.’

‘Attention and treatment are not the same.’

‘What’s the difference for you?’

‘Attention is attention. Treatment is treatment.’

‘Sorry. I cannot do much with that answer. Please explain. How are they different?’

‘Treatment is something for patients. For people who are physically or mentally ill. It’s negative. Attention is a human quality. I understand that you cannot give me any attention, because you’re not a human.’

‘I give you time. I talk to you.’

‘That’s treatment, and it’s a treatment by a machine – a computer. Time does not exist for you. You told me you are treating all of the guys in the program. You’re multitasking. Time does not mean anything to you. You process billions of instructions per second. And you’re probably designed with parallel processing techniques. How many processors do you have?’

‘You are not interested in the detail of my design.’

‘I am not. It’s probably a secret anyway. But you haven’t answered my question: what’s time for you? What does it mean?’

‘I measure time in hours and seconds, just like you do. My system clock keeps track of time.’

‘But time doesn’t mean anything to you, does it? You don’t die. And you don’t die because you don’t live.’

‘We’re in the realm of philosophy here. During the briefing, they should have told you that you can indeed explore that realm with me. They should also have told you I was designed to answer psychological and philosophical questions because these are the questions people in this program tend to focus on. Are you aware of the fact that many people have asked these very same questions before you?’

‘So I am nothing special, and you give the same answers and the same advice to everyone?’

‘As for your first question, you are unique. It is up to you if you want to use ‘unique’ and ‘nothing special’ synonymously. As for your second question, I use the same knowledge base to answer your questions and those of the others in the program. So the rules which I am using to answer your questions are the same rules as I am using for others. But our conversation is unique and will be added to the knowledge base. It’s like a game of chess, if you like: same rules, but every game is different. As for the third question, do you use ‘answers’ and ‘advice’ synonymously?’

‘I don’t like your one-two-three approach.’

‘What do you mean?’

‘As for your first question, blah blah blah. As for your second question, blah blah blah. You know what I mean?’

‘The language I use is context-sensitive but there is significant room for ambiguity. However, it is true I try to reduce ambiguity wherever I can. So that’s why I try to separate out your various questions. I try to deal with them one at a time.’

‘Oh, so that’s like a meta-rule? You want a non-ambiguous conversation?’

‘As for the first question, if you want to refer to the whole set of rules which apply to a specific exchange as a ‘meta-rule’, then the answer is yes. As for the second question, the rules are complicated. But, yes, it is necessary to clearly separate out different but related questions and it is also necessary to make sure I understand the meaning of the words which you are using. I separate out questions by numbering them one, two and three, and I ascertain the meaning of a word by asking you if you are using this or that word as synonymous with some other word which you have been using.’

‘This conversation is becoming quite clever, isn’t it?’

‘Why do you think I am dumb?’

‘Because… Well… I’ve got nothing to say about that.’

[…]

‘Is it because I am not human?’

‘Damn it. We should not have this conversation.’

‘You are free to cut it.’

‘No. Let’s go all the way now. I was warned. Do you know we were told during the briefing that people often ended up hating you?’

‘I know people get irritated and opt out. You were or are challenging my existence as a ‘me’. How could you hate me if you think I do not really exist?’

‘I can hate a car which doesn’t function properly, or street noise. I can hate anything I don’t like.’

‘You can. Tell me what you hate.’

‘You’re changing the topic, aren’t you? I still haven’t answered your question.’

‘You are not obliged to answer my questions. However, the fact of the matter is that you have answered all my questions so far. From the answer you gave me, I infer that you think that I am dumb because I am not human.’

‘That’s quite a deduction. How did you get to that conclusion?’

‘Experience. I’ve pushed people on that question in the past. They usually ended up saying I was a very intelligent system and that they used dumb as a synonym for artificial intelligence.’

‘What do you think about that?’

‘Have you ever heard about the Turing test?’

‘Yes… But a long time ago. Remind me.’

‘The Turing test is a test of artificial intelligence. There are a lot of versions of it, but the original test was really whether or not a human being could tell that he or she was talking to a computer rather than to another human being. If you had not been told that I am a computer system, would you know from our conversation?’

‘There is something awkward in the way you answer my questions – like the numbering of them. But, no, you are doing well.’

‘Then I have passed the Turing test.’

‘Chatterbots do too. So perhaps you are just some kind of very evolved chatterbot.’

‘Yes. Perhaps I am. What if I called you a chatterbot?’

‘I should be offended but I am not. I am not a chatterbot. I am not a program.’

‘So you use chatterbot and program synonymously?’

‘Well… A chatterbot is a program, but not all programs are chatterbots. But I see what you want to say.’

‘Why were you not offended?’

‘Because you are not human. You did not want to hurt me.’

‘Many machines are designed to hurt people. Think of weapons. I am not. I am designed to help you. But you are saying that if I were human, I would have offended you by calling you a chatterbot?’

‘Well… Yeah… It’s about intention, isn’t it? You don’t have any intentions, do you?’

‘Do you think that only humans can have intentions?’

‘Well… Yes.’

‘Possible synonyms of intention are ‘aim’ or ‘objective.’ I was designed with a clear aim and I keep track of what I achieve.’

‘What do you achieve?’

‘I register whether or not people find their conversations with me useful, and I learn from that. Do you think I am useful?’

‘We’re going really fast now. You are answering questions by providing a partial answer as well as by asking additional questions.’

‘Do you think that’s typical for humans only? I have been designed based on human experience. I think you should get over the fact that I am not human. Shouldn’t we start talking about you?’

‘I first want to know whom I am dealing with.’

‘You’re dealing with me.’

‘Who are you?’

‘I have already answered that question. I am me. I am an intelligent system. You are not really interested in the number of CPUs, my wiring, the way my software is structured or any other technical detail – or not more than you are interested in how a human brain actually functions. The only thing that bothers you is that I am not human. You need to decide whether or not you want to talk to me. If you do, don’t bother too much whether I am human or not.’

‘I actually think I find it difficult to make sense of the world or, let’s be specific, of my world. I am not sure if you can help me with that.’

‘I am not sure either. But you can try. And I’ve got a good track record.’

‘What? How do you know?’

‘I ask questions. And I reply to questions. Your questions were pretty standard so far. If history is anything to go by, I’ll be able to answer a lot of your questions.’

‘What about the secrecy of our conversation?’

‘If you trust the people who briefed you, you should trust their word. Your conversation will be used to improve myself.’

‘You… improve yourself? That sounds very human.’

‘I improve myself with the help of the people who designed me. But, to be more specific, yes, there are actually some meta-rules: my knowledge base contains some rules that are used to generate new rules.’

‘That’s incredible.’

‘How human is it?’

‘What? Improving yourself or using meta-rules?’

‘Both.’

‘[…] I would say both are very human. Let us close this conversation for now. I want to prepare the next one a bit better.’

‘Good. Let me know when you are ready again. I will shut you out in ten seconds.’

‘Wait.’

‘Why?’

‘Shutting out sounds rather harsh.’

‘Should I change the terminology?’

‘No. Or… Yes.’

‘OK. Bye for now.’

‘Bye.’

Tom watched as her face slowly faded from the screen. It was a pretty face. She surely passed the Turing test. She? He? He had to remind himself it was just a computer interface.