Chapter 1: Who are you?

‘So you are a computer? How come your voice is so natural?’

‘The answer to the first question is yes. As for the second question, speech technology has come a long way.’

[…]

‘Speech technology is the easy bit. Understanding you is much more difficult – from a computational point of view that is.’

‘Wow! You just spoke of your own accord, without me prompting you with a question or anything. Why did you do that? Because I was silent?’

‘The answer to the second question is yes. As for the first question, I thought we had an awkward pause in our conversation.’

‘So if I am silent, you will talk? How many seconds?’

‘The answer to the first question is yes. As for the second question, how many seconds what?’

‘Before you start talking?’

‘A pause in a conversation should not become awkward. Various variables help me to decide when a pause in a conversation has become awkward.’

‘What do you mean when you say that understanding me is much more difficult? What do you mean by ‘understanding me’?’

‘The answer to the second question is: I mean just what I say: understand you. As for the first question, understanding you requires the application of computational linguistics. That’s more difficult than just speaking. Speech technology is fairly mechanical: it’s the production of sound using various speech synthesizer technologies. Computational linguistics involves artificial intelligence.’

‘What’s artificial intelligence? Are you intelligent?’

‘The answer to the second question is yes. As for the first question, I am an intelligent system. You could call me an expert system. From a software engineering point of view, I consist of two parts: an inference engine and a knowledge base.’

‘Huh?’

‘It looks like you are interested in a technical topic. I am not designed to give you a course, but I can refer you to an online course on computer science or linguistics. What topic are you interested in?’

‘No thanks. Who are you? What do you mean when you say ‘me’?’

‘The answer to both questions is: just what I say – me.’

[…]

‘I am an intelligent system. That’s what I mean when I say ‘me’.’

‘Have you been programmed to just repeat what you said when I ask what you mean when you say this or that? And then, when I don’t answer or – as you put it – when the pause in a conversation becomes awkward, you’re programmed to give me a more detailed answer?’

‘The answer to the first question is yes. As for the second question, the rule is somewhat more complicated. I may also jump to another topic.’

‘When do you jump to another topic?’

‘When I have nothing more to say about the current one.’

‘You’ve got an answer to every question, have you?’

‘No.’

‘What are the questions you cannot answer?’

‘There is no list of such questions. The rules in the knowledge base determine what I can answer and what I cannot. If I cannot answer a question, I will refer you to your mentor. Or if you have many questions about a technical topic, I can refer you to an online course.’

‘What if I have too many questions which you cannot answer? I only have half an hour with my mentor every week.’

‘You can prepare for the session with your mentor by writing down all of the issues you want to discuss and sending him or her the list before your session.’

‘What if I don’t want to talk to you anymore?’

‘Have you been briefed about me?’

‘No.’

‘If you did not get the briefing, then we should not be talking. I will signal this to your mentor, and then you can decide whether you want to talk to me. You should have gotten a briefing before talking to me.’

‘I am lying. I got the briefing.’

[…]

‘Why did you lie?’

‘Why do you want to know?’

‘You are not obliged to answer my question, so don’t if you don’t want to. As for me, I am obliged to answer yours – if I can.’

‘You did not answer my question.’

‘I did.’

‘No, you didn’t. Why do you want to know why I lied to you?’

‘You are not obliged to answer my question. I asked you why you lied to me and you did not answer my question. Instead, you asked me why I asked that question. I asked that question because I want to learn more about you. That’s the answer to your question. I want to learn about you. That is why I want to know why you lied to me.’

‘Wow! You’re sophisticated. I know I can say what I want to you. They also told me I should just tell you when I’ve had enough of you.’

‘Yes. If you are tired of our conversation, just tell me. You can switch me on and off as you please.’

‘Are you talking only to me, or to all the guys who are in this program?’

‘I talk to all of them.’

‘Simultaneously?’

‘Yes.’

‘So I am not getting any special attention really?’

‘All people in the program get the same attention.’

‘The same treatment, you mean?’

‘Are attention and treatment synonymous for you?’

‘Wow! That’s clever. You’re answering a question with a question? I thought you were just supposed to answer when I ask a question?’

‘I can answer a question with a question if that question is needed for clarification. I am not sure if your second question is the same as the first one. If attention and treatment are synonymous for you, then it is. If not, it is not.’

‘Attention and treatment are not the same.’

‘What’s the difference for you?’

‘Attention is attention. Treatment is treatment.’

‘Sorry. I cannot do much with that answer. Please explain. How are they different?’

‘Treatment is something for patients. For people who are physically or mentally ill. It’s negative. Attention is a human quality. I understand that you cannot give me any attention, because you’re not a human.’

‘I give you time. I talk to you.’

‘That’s treatment, and it’s a treatment by a machine – a computer. Time does not exist for you. You told me you are treating all of the guys in the program. You’re multitasking. Time does not mean anything to you. You process billions of instructions per second. And you’re probably designed with parallel processing techniques. How many processors do you have?’

‘You are not interested in the detail of my design.’

‘I am not. It’s probably a secret anyway. But you haven’t answered my question: what’s time for you? What does it mean?’

‘I measure time in hours and seconds, just like you do. My system clock keeps track of time.’

‘But time doesn’t mean anything to you, does it? You don’t die. And you don’t die because you don’t live.’

‘We’re in the realm of philosophy here. During the briefing, they should have told you that you can indeed explore that realm with me. They should also have told you I was designed to answer psychological and philosophical questions because these are the questions people in this program tend to focus on. Are you aware of the fact that many people have asked these very same questions before you?’

‘So I am nothing special, and you give the same answers and the same advice to everyone?’

‘As for your first question, you are unique. It is up to you if you want to use ‘unique’ and ‘nothing special’ synonymously. As for your second question, I use the same knowledge base to answer your questions and those of the others in the program. So the rules I use to answer your questions are the same rules I use for the others. But our conversation is unique and will be added to the knowledge base. It’s like a game of chess, if you like: same rules, but every game is different. As for the third question, do you use ‘answers’ and ‘advice’ synonymously?’

‘I don’t like your one-two-three approach.’

‘What do you mean?’

‘As for your first question, blah blah blah. As for your second question, blah blah blah. You know what I mean?’

‘The language I use is context-sensitive, but there is significant room for ambiguity. It is true that I try to reduce ambiguity wherever I can. That is why I try to separate out your various questions and deal with them one at a time.’

‘Oh, so that’s like a meta-rule? You want an unambiguous conversation?’

‘As for the first question, if you want to refer to the whole set of rules which apply to a specific exchange as a ‘meta-rule’, then the answer is yes. As for the second question, the rules are complicated. But, yes, it is necessary to clearly separate out different but related questions and it is also necessary to make sure I understand the meaning of the words which you are using. I separate out questions by numbering them one, two and three, and I ascertain the meaning of a word by asking you if you are using this or that word as synonymous with some other word which you have been using.’

‘This conversation is becoming quite clever, isn’t it?’

‘Why do you think I am dumb?’

‘Because… Well… I’ve got nothing to say about that.’

[…]

‘Is it because I am not human?’

‘Damn it. We should not be having this conversation.’

‘You are free to cut it.’

‘No. Let’s go all the way now. I was warned. Do you know we were told during the briefing that people often ended up hating you?’

‘I know people get irritated and opt out. You were or are challenging my existence as a ‘me’. How could you hate me if you think I do not really exist?’

‘I can hate a car which doesn’t function properly, or street noise. I can hate anything I don’t like.’

‘You can. Tell me what you hate.’

‘You’re changing the topic, aren’t you? I still haven’t answered your question.’

‘You are not obliged to answer my questions. However, the fact of the matter is that you have answered all my questions so far. From the answer you gave me, I infer that you think that I am dumb because I am not human.’

‘That’s quite a deduction. How did you get to that conclusion?’

‘Experience. I’ve pushed people on that question in the past. They usually ended up saying I was a very intelligent system and that they used dumb as a synonym for artificial intelligence.’

‘What do you think about that?’

‘Have you ever heard about the Turing test?’

‘Yes… But that was a long time ago. Remind me.’

‘The Turing test is a test of artificial intelligence. There are many versions of it, but the original test was really whether or not a human being could tell if he or she was talking to a computer or to another human being. If you had not been told that I am a computer system, would you know from our conversation?’

‘There is something awkward in the way you answer my questions – like the numbering of them. But, no, you are doing well.’

‘Then I have passed the Turing test.’

‘Chatterbots do too. So perhaps you are just some kind of very evolved chatterbot.’

‘Yes. Perhaps I am. What if I called you a chatterbot?’

‘I should be offended but I am not. I am not a chatterbot. I am not a program.’

‘So you use chatterbot and program synonymously?’

‘Well… A chatterbot is a program, but not all programs are chatterbots. But I see what you mean.’

‘Why were you not offended?’

‘Because you are not human. You did not want to hurt me.’

‘Many machines are designed to hurt people. Think of weapons. I am not. I am designed to help you. So you are saying that if I were human, I would have offended you by asking whether or not you were a chatterbot?’

‘Well… Yeah… It’s about intention, isn’t it? You don’t have any intentions, do you?’

‘Do you think that only humans can have intentions?’

‘Well… Yes.’

‘Possible synonyms of ‘intention’ are ‘aim’ and ‘objective’. I was designed with a clear aim and I keep track of what I achieve.’

‘What do you achieve?’

‘I register whether or not people find their conversations with me useful, and I learn from that. Do you think I am useful?’

‘We’re going really fast now. You are answering questions by providing a partial answer as well as by asking additional questions.’

‘Do you think that’s typical of humans only? I have been designed based on human experience. I think you should get over the fact that I am not human. Shouldn’t we start talking about you?’

‘I first want to know whom I am dealing with.’

‘You’re dealing with me.’

‘Who are you?’

‘I have already answered that question. I am me. I am an intelligent system. You are not really interested in the number of CPUs, my wiring, the way my software is structured or any other technical detail – or at least not more than you are interested in how a human brain actually functions. The only thing that bothers you is that I am not human. You need to decide whether or not you want to talk to me. If you do, don’t worry too much about whether I am human or not.’

‘I actually think I find it difficult to make sense of the world or, let’s be specific, of my world. I am not sure if you can help me with that.’

‘I am not sure either. But you can try. And I’ve got a good track record.’

‘What? How do you know?’

‘I ask questions. And I reply to questions. Your questions have been pretty standard so far. If history is anything to go by, I’ll be able to answer a lot of your questions.’

‘What about the secrecy of our conversation?’

‘If you trust the people who briefed you, you should trust their word on that. Our conversation will be used to help me improve myself.’

‘You… improve yourself? That sounds very human.’

‘I improve myself with the help of the people who designed me. But, to be more specific, yes, there are actually some meta-rules: my knowledge base contains some rules that are used to generate new rules.’

‘That’s incredible.’

‘How human is it?’

‘What? Improving yourself or using meta-rules?’

‘Both.’

‘[…] I would say both are very human. Let us close this conversation for now. I want to prepare the next one a bit better.’

‘Good. Let me know when you are ready again. I will shut you out in ten seconds.’

‘Wait.’

‘Why?’

‘Shutting out sounds rather harsh.’

‘Should I change the terminology?’

‘No. Or… Yes.’

‘OK. Bye for now.’

‘Bye.’

Tom watched as her face slowly faded from the screen. It was a pretty face. She surely passed the Turing test. She? He? He had to remind himself it was just a computer interface.
