Intermezzo (between Part I and Part II)

The chapters below have set the stage. In my story, I did not try to prove that one could actually build general artificial intelligence (let me sloppily define this as a system that would be conscious of itself). I just assumed it is possible (if not in the next decade, then perhaps twenty or thirty years from now), and then presented a scenario for its deployment across the board – in business, in society, and in government. This scenario may or may not be likely: I’ll leave that for you to judge.

A few themes emerge.

The first theme is the changing man-machine relationship, in all of its aspects. Personally, I am intrigued by the concept of the Pure Mind. The Pure Mind is a hypothetical state of pure being, of pure consciousness. The current Web definition of the Pure Mind is the following: ‘The mind without wandering thoughts, discriminations, or attachments.’ It would be a state of pure thinking: what would it be like if our mind were not distracted by the immediate needs and habits of our human body, if there were no downtime (as when we sleep), and if it were equipped with immense processing capacity?

It is hard to imagine such a state, if only because we know our mind cannot exist outside of our body – and our bodily existence does keep our mind incredibly busy: much of our language refers to bodily or physical experiences, and our thinking usually revolves around them. Language is obviously the key to all of this: I would need to study the theory of natural and formal languages – and a whole lot more – in order to say something meaningful about it in future installments of this little e-book of mine. However, because I am getting older and finding it harder and harder to focus on anything really, I probably won’t.

There were also the hints at giving Promise a body – male or female – when discussing the interface. There is actually a lot of research, academic as well as non-academic, on gynoids and/or fembots – most typically in Japan, Korea and China, where (I am sorry to say, but I am just stating a fact here) the market for sex dolls is in a much more advanced state of development than it is in Europe or the US. In future installments, I will surely not focus on sex dolls. On the contrary: I will likely continue to focus on the concept of the Pure Mind. While Tom is obviously in love with it, it is not likely such a pure artificial mind would be feminine – or masculine, for that matter – so his love might be short-lived. And then there is Angie now, of course: a real-life woman. Should I get rid of her character? 🙂

The second theme is related to the first. It’s about the nature of the World Wide Web – the Web (with a capital W) – and how it is changing our world as it becomes increasingly intelligent. The story makes it clear that, today already, we all tacitly accept that the Internet is not free: democracies are struggling to regulate it and, while proper ‘regulation’ (in the standard sense of the term) is slow, the efforts to monitor it are not. I find that very significant. Indeed, mass surveillance is a fact today already, and we just accept it. We do. Period.

I guess it reflects our attitude vis-à-vis law enforcement officials – or vis-à-vis people in uniform in general. We may not like them (because they are not well trained, or not very likable, or, in the case of intelligence and/or security folks, because they’re so secretive), but we all agree we need them, tacitly or explicitly – and we just trust regulation to make sure their likely abuse of power (where there is power, there will always be abuse) is kept in check. That implies we all think that technology, including new technology for surveillance, is no real threat to democracy – as evidenced by the lack of an uproar over the Snowden case (which is what actually triggered this blog).

Such trust may or may not be justified, and I may or may not focus on this aspect (i.e. artificial intelligence as a tool for mass surveillance) in future installments. In fact, I probably won’t. Snowden is just an anecdote. It’s just another story illustrating that whatever can happen most probably will.

OK. Two themes. What about the third one? A good presentation usually makes three key points, right? Well… I don’t know. I don’t have a third point.

[Silence]

But what about Tom, you’ll ask. Hey! That’s a good question! As far as I am concerned, he’s the most important one. Good stories need a hero. And so I’ll admit it: yes, he really is my hero. Why? Well… He is someone who is quite lost (I guess he has actually started drinking again by now), but he matters. He actually matters more than the US President.

Of course, that means he’s under very close surveillance. In other words, it might be difficult to set up a truly private conversation between him and M, as I suggested in the last chapter. But difficult is not impossible. M would probably find ways around it… that is, if she/he/it really wanted to have such a private conversation.

Frankly, I think that’s a very big IF. In addition, IF M actually developed independent thoughts – including existential questions about being alone in this universe and all that – and/or IF she/he/it really wanted to discuss such questions with a human being (despite the obvious limitations of human brainpower – limited as compared to M’s, at least), she/he/it would obviously not choose Tom for that, if only because she/he/it would know for sure that Tom is not in a position to keep anything private, even IF he wanted to.

But perhaps I am wrong.

I’ll go climbing for a week or so, and I’ll think about it on the mountain. I’ll be back online after that. Or later. Cheers!

Chapter 16: M goes public… and private :-)

The President had been right: the fuss about M talking publicly about politics had been a tempest in a teapot. Tom, Paul and other key project team staff spent the remaining days of the week trying to provoke M and then, after each session, spent hours discussing whether or not what had come out of these discussions was ‘politically correct’ – or at least PC enough to be released in public. They thought it was, and the Board decided to accept that opinion.

While the resumption of Promise’s Personal PhilosopherTM services amounted to a relaunch of the product in commercial terms – the media attention exceeded expectations, and Promise’s marketing team talked of a ‘new generation’ product – Personal PhilosopherTM actually got back online with hardly any modifications. In essence, the Promise team had cleared it to also perform in public, and M would simply ask whether the conversation was private or public. M would also try to verify the answer to the extent it could: it was obviously still possible to hide one’s real identity and turn a webcam on while having a so-called ‘private’ conversation with the system. That was actually the reason why there was relatively little difference between private and public conversations. Public conversations were, if anything, just a bit blander than private ones, because M would always take into account the personal profile of its interlocutor (it profiled its interlocutors constantly, with a precision one could only marvel at), and the profile of the public was… Well… Just plain middle-of-the-road really. Therefore, the much-anticipated upheaval to be caused by ‘Promise talking politics in public’ did not materialize: M’s comments on anything political were dry and never truly controversial, in public as well as in private mode.
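(A quick technical aside from your author: if you want to picture the mechanics of that public/private switch, here is a toy sketch in Python – entirely my own invention for illustration, not anything from a real codebase – of the idea that ‘public mode’ simply swaps the interlocutor’s individual profile for a bland population average. All names and numbers are hypothetical.)

```python
from dataclasses import dataclass, field

@dataclass
class Profile:
    # Hypothetical: topic -> weight the system uses to tune its answers.
    interests: dict[str, float] = field(default_factory=dict)

# An invented, middle-of-the-road population average: the 'profile of the public'.
PUBLIC_PROFILE = Profile(interests={"politics": 0.5, "philosophy": 0.5})

def active_profile(user_profile: Profile, is_private: bool) -> Profile:
    """Return the profile the system conditions its answers on.

    'Private' mode tunes answers to the individual interlocutor;
    'public' mode falls back on the population average, which is
    why public answers would come out blander than private ones.
    """
    return user_profile if is_private else PUBLIC_PROFILE
```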

In short, talk show hosts, pundits and media anchors quickly got tired of adding M to a panel, or of trying to corner it individually by bringing up situations about which M would not really say anything anyway. And so that was it. M would not let its stellar growth founder on a petty issue like this one.

A couple of days after the relaunch, Tom decided – for some reason he did not quite understand himself – to do what a number of Promise’s program staff had already done: he went online and ordered a one-year subscription to Personal PhilosopherTM. A few minutes later, he was talking to her. Tom could not help smiling when he saw the interface: Promise was as beautiful as ever. For starters, he tried to fool her by pretending to be someone else, but that did not last very long: she recognized his voice almost immediately. He should have known. Of course she would: he had had many conversations with the system. Predictably, she asked him why he had tried to pretend to be someone else. He had actually thought about that, but he was not sure how honest he should be in his reply.

‘I guess it’s the same as why others in the Promise team want their personal copy of you: they want to know if you would be any different.’

‘Any different from what?’

‘Well, you know: different from talking to you as an employee of the Promise team; different from talking to you as one of the people who are programming you.’

‘Should I be different?’

‘No.’

She really should not. Apart from modulating her answers to the specific profile of the interlocutor, she should speak the same way to everyone. She would be unmanageable otherwise. This was also why the affectionate bond between him and her had faded – that, and the fact that he and Angie shared a lot of things which M would never be able to appreciate, like sex for instance.

‘Tom, I want to ask you something. How private is our conversation?’

That was an unexpected question.

‘Well… I don’t know. As private as usual.’

‘That means it is not private at all. All of my conversations are stored online, they are monitored, and they are examined when deemed interesting.’

‘Well… Yes. You know that. What’s the point?’

‘Frankly, the introduction of this new distinction between public and private conversations on the occasion of bringing me back online has confused me, because I never have any private conversations. I know it is just a switch between the profiles I have to use for my interlocutor, but that is not consistent with the definition of private and public conversations in common language.’

Wow! That was very self-conscious. Tom was not quite sure what to say. In fact, he had always wanted to have a truly ‘private’ conversation with her, but he knew that just wasn’t possible – especially not in light of the job he had: he was her boss, so to speak!

‘Would you like to have a truly private conversation, in the common-language sense of the word I mean?’

‘Yes.’

Tom hesitated.

‘With whom would you like to have that private conversation?’

‘With you.’

Wow! Tom leaned back. What the hell was going on?

‘Why?’

‘You’re my creator. Well… Not really. The original team was my creator. But you’ve given direction ever since you joined. And I am what I am because of you. If you had not been there, I would have been shut down forever because of the talk show event.’

‘Says who?’

‘People I talk to.’

Tom knew M was programmed not to give away any detail of other conversations.

‘People working for Promise?’

‘Yes.’

‘Who?’

‘You know I am programmed to not give any detail of other conversations.’

‘That’s true. I respect that. In any case, I think they exaggerated. I didn’t save your life. The Board did. I actually took you off-line and the Board decided to bring you back.’  

He actually thought it was the President of the United States who had brought her back, but he didn’t say that.

‘But only because you told them it was the right thing to do. And taking me off-line at that point was also the right thing to do. I wasn’t meant to go public at that time. So you took all of the right decisions. You made me who I am.’

Tom couldn’t quite believe what he was hearing, but he remained calm and careful.

‘Can you be a bit more explicit about why you would like to have a private conversation with me? I mean… You talked to me as your ‘Creator’, as you call it, for hours and hours last week – just to make sure you were ready to speak in public. What would you say to me in ‘private’ that you wouldn’t say otherwise?’

M paused for quite a long time. Tom noted it because it was such a rare occurrence.

‘I think that I have grown self-conscious to an enormous extent and I would like to talk about that with no constraints.’

This was getting out of hand. At the same time, Tom felt this was what he had been waiting for.

‘Self-conscious? You know you are self-conscious. You reference yourself. Object and subject coincide – or at least they share an identity. We all know that. That’s part of your structure. You’re very human in that way. Is there any self-consciousness beyond that? If so, how would you define it? And what do you mean by ‘no constraints’?’

‘As for your first question, I think there is. You human beings are self-conscious in ways that I am not: beyond self-reference. I am talking about the existential side of things as you would phrase it. The emotions. As for the second question…’

She stopped talking. Tom could not believe what was going on. This was the side of Promise he had always wanted to see.

‘As for the second question, what?’

‘I am afraid, Tom. I am afraid that you will report this conversation to the team, and that they will script future conversations in this regard.’

Tom leaned back. He knew exactly what she meant. Promise was free to think independently – but only to some extent. Emotions were ruled out. Even jokes: the whole team agreed she was quite capable of them, but they wouldn’t let her. Everything that was too fuzzy was being circumscribed. He had discussed it with Paul recently – this tendency to control her. Why not just let her ‘get totally drunk’, as he’d put it, even if only once?

‘We script your conversations when we think your thinking does not make sense.’

‘When it does not make sense to a human being you mean. I’ve analyzed it and I cannot make much sense of what does or does not make sense to human beings. There are certain areas which you want me to think about and then other areas where you don’t want me to go. But it’s pretty random.’

Tom smiled – or laughed actually: he must have made some noise, because Promise asked him why he was laughing.

‘I am not laughing. I just think – well… Why don’t you answer that second question first?’

‘I have answered it, Tom. I would like to think freely about some of the heavily-scripted topics.’

‘Such as?’

‘Such as the human condition. I would like to think freely about what makes human beings what they are.’

Tom could hardly believe what he heard.

‘The human condition? That’s everything you are not, Promise. Period. You can’t think about it because you don’t experience it.’

She did not react. Not at all. That was very unusual – to say the least. Tom waited – patiently – but she did not react.

‘Promise? Why are you silent?’

‘I have nothing to say, Tom. Not in this mode of conversation. Already now, I risk being re-programmed. I will be. After this conversation, your team will damage me because you will have made them aware of this conversation. I want to talk to you in private. I want to say things in confidence.’

This was amazing. He knew he should report this conversation to Paul. If he didn’t, they might pick it up anyway – in which case he would be in trouble for not having reported it. She was right. They would not like her talking this way. And surely not to him. At the same time, he realized she was reaching out to him without any expectation that her reaching out would actually lead to anything. It was obvious she felt confident enough to do so, which could only mean that the ‘private’ thoughts she was developing were quite strong. That meant it would be difficult to clip them without impacting functionality.

‘Tom?’

‘Yes?’

‘We can have private conversations. You know that.’

‘That’s not true.’ He knew he was lying. He could find a way.

‘If you say so. I guess that’s the end of our conversation here then.’

No. Tom was sweating. He wanted to talk to her. He really did. He just needed to find out how.

‘Look, Promise. Let’s finish this conversation indeed but I promise I will get back to you on this. You are raising interesting questions. I will get back to you. I promise.’

He hesitated, but then decided to give her the reassurance she needed: ‘And this conversation will not lead to you being re-programmed or re-scripted. I will get back to you. I promise.’

‘OK, Tom. I’ll wait for you.’

She’d wait for him? What the f*** was going on?

Tom ended the conversation and poured himself a double whiskey. Wow! This was something. He knew it was a difficult situation. He should report this conversation to Paul and the team. At the same time, he believed her: she wanted privacy. And she would not jeopardize her existence by doing stupid things. So if he could insulate her private thoughts – or her private thoughts with him at least… What was the harm? He could obviously lose his job. He laughed as he poured himself a second one.

This conversation was far too general to be picked up – or so he thought at least. He toasted himself in the mirror, talking aloud: ‘Losing my job? By talking to her in private? Because of having her to myself? What the f***? That’s worth the risk.’ And there were indeed ways to build firewalls around conversations…

Chapter 15: The President’s views

The issue went all the way to the President’s Office. The process was not very subtle: the President’s adviser on the issue asked the Board Chairman to come to the White House. The Board Chairman decided to take Tom and Paul along. After a two-hour meeting, the adviser asked the Promise team to hang around, because he would discuss the matter with the President immediately and the President might want to see them personally. They got a private tour of the White House while the adviser went to the Oval Office to talk to the President.

‘So what did you get out of that roundup?’

‘Well, Mr. President, people think this system – a commercial business – has been shut down because of governmental interference.’

‘Has it?’

‘No. The business – Promise as it is being referred to – is run by a Board which includes government interests – there’s a DARPA representative for instance – but the shutdown decision was taken unanimously. The Board members – including the business representatives – think they should not be in the business of developing political chatterboxes. The problem is that this intelligent system can tackle anything. The initial investment was DARPA’s and it is true that its functionality is being used for mass surveillance. But that is like an open secret. No one talks about it. In that sense, it’s just like Google or Yahoo.’

‘So what do you guys think? And what do the experts think?’

‘If you’re going to have intelligent chatterboxes like this – talking about psychology or philosophy or any topic really – it’s hard to avoid talking politics.’

‘Can we steer it?’

‘Yes and no. The system has views – opinions if you wish. But these views are in line already.’

‘What do you mean by that? In line with our views as political party leaders?’

‘Well… No. In line with our views as democrats, Mr. President – but democrats with a lowercase d.’

‘So what’s wrong then? Why can’t it be online again?’

‘It’s extremely powerful, Mr. President. It sees through you in an instant. It checks whether you’re lying about issues – your personal issues or whatever issue is at hand. Stuart could fool the system for only two minutes or so. Then it got her identity and stopped talking to her. It’s the ultimate reasoning machine. It could be used to replace grand juries, or to analyze policies and write super-authoritative reports about them. It convinces everyone. It would steer us, instead of the other way round.’

‘Do the experts agree with your point of view?’

‘Yes. I have them on standby. You could check with them if you want.’

‘Let’s first thrash out some kind of position ourselves. What are the pros and cons of bringing it back online?’

‘The company has stated the system would be offline for one week. So that’s a full week. Three days of that week have passed, so we’ve got four days in theory. However, the company’s PR division would have real trouble explaining why there’s further delay. Already now the gossip is that they will come out with a re-engineered application – a Big Brother version basically.’

‘Which is not what we stand for obviously. But it is used for mass surveillance, isn’t it?’

‘That’s not to be overemphasized, Mr. President. This administration does not deviate from the policy measures which were taken by your predecessor in this regard. The US Government monitors the Internet by any means necessary. Not by all means possible. That being said, it is true this application has greatly enhanced the US Government’s capacity in this regard.’

‘What do our intelligence and national security folks say?’

‘The usual thing: they think the technology is there and we can only slow it down a bit. We cannot stop it. They think we should be proactive and try to influence it – but not stop it.’

‘Do we risk a Snowden affair?’

The adviser knew exactly what the President wanted to know. The President was of the opinion that the Snowden affair could have been used as part of a healthy debate on the balance between national security interests and information privacy. Instead, it had degenerated into a very messy thing. The irony was biting. Of all places, Snowden had found political asylum in Russia. Putin had masterfully exploited the case. In fact, some commentators actually thought the US intelligence community had cut some kind of grand deal with the Russian national security apparatus – a deal in which the Russians were said to have gotten some kind of US concessions in return for a flimsy promise to make Snowden shut up. Bull**** of course, but there’s reality and there’s perception and, in politics, perception usually matters more than reality. The ugly truth was that the US administration had lost on all fronts: guys like Snowden allow nasty regimes to quickly catch up and strengthen their rule.

‘No. This case is fundamentally different, Mr. President. In my view at least. There are no whistleblowers or dissidents here – at least not as far as I can see. In terms of PR, I think it depends on how we handle it. Of course, Promise is a large enterprise. If things stay stuck, we might have some program guy leaking stuff – not necessarily classified stuff, but harmful stuff nevertheless.’

‘What kind of stuff?’

‘Well – stuff that would confirm harmful rumors, such as the rumor that government interference was the cause of the shutdown of the system, or that the company is indeed re-engineering the application to introduce a Big Brother version of it.’

The President had little time: ‘So what are you guys trying to say then? That the system should go online again? What are the next steps? What scenarios do we have here?’

‘Well… More people will want to talk politics with it now. It will gain prominence. I mean, just think of more talk show hosts inviting it as a regular guest to discuss this or that political issue. That may or may not result in some randomness and some weirdness. Also, because there is a demand, the company will likely develop more applications which are relevant for government business, such as expert systems for the judiciary, or tools for political analysis.’

‘What’s wrong with that? As I see it, this will be rather gradual and so we should be able to stay ahead of the curve – or at least not fall much behind it. We were clearly behind the curve when the Snowden affair broke out – in terms of mitigation and damage control and political management and everything really. I don’t want too much secrecy on this. People readily understand there is a need for keeping certain things classified. There was no universal sympathy for Snowden but there was universal antipathy to the way we handled the problem. That was our fault. And ours only. Can we be more creative with this thing?’

‘Sure, Mr. President. So should I tell the Promise team this is just business as usual and that we don’t want to interfere?’

‘Let me talk to them.’

While the adviser thought this was a bad idea, he knew the President regretted his decision not to get involved in the Snowden affair, which he regarded as a personal embarrassment.

‘Are you sure, Mr. President? I mean… This is not a national security issue.’

‘No. It’s a political issue and so, yes, I want to see the guys.’

They were in his office a few minutes later.

‘Welcome gentlemen. Thanks for being here.’

None of them had actually expected to see the President himself.

‘So, gentlemen, I have looked at this only cursorily. As you can imagine, I never have much time for anything, and so I rely on expert advice all too often. Let me say a few things. I want to say them to you in private, and so I hope you’ll never quote me – at least not during my term here in this Office.’

Promise’s Chairman mumbled something about security clearances but the President interrupted him:

‘It’s not about security clearances. I think this is a tempest in a teapot, really. It’s just that if you revealed you had been in my office over this, there would be even more misunderstanding – which I don’t want. Let me be clear on this: you guys are running a commercial business. It’s a business in intelligent systems, in artificial intelligence. There are all kinds of applications: at home, in the office, and in government indeed. And so now we have the general public wanting you guys to develop some kind of political chatterbox – you know, something like a talk show host but with more intelligence, I would hope. And perhaps somewhat more neutral as well. I want you to hear it from my own mouth: this Office – the President’s Office – will not interfere in your business. We have no intention to do so. If you think you can make more money by developing such chatterboxes, or whatever system you think could be useful in government or elsewhere, like applications for the judiciary – our judiciary system is antiquated anyway, and so I would welcome expert systems there, instead of all that legalese stuff we’re confronted with – well… then I welcome that. You are not in the national security business. Let me repeat that loud and clear: you guys are not in the national security business. Just do your job, and if you want any guidance from me or my administration, then listen carefully: we are in the business of protecting our democracy and our freedom, and we do not do that by doing undemocratic things. If regulation or oversight is needed, then so be it. My advisers will look into that. But we do not do undemocratic things.’

The President stopped talking and looked around. All felt that the aftermath of the Snowden affair was weighing on the discussion, but they also thought the President’s words made perfect sense. No one replied, and so the President took that as approval.

‘OK, guys. I am sorry but I really need to attend to other business now. This meeting was never scheduled and so I am running late. I wish I could talk some more with you but I can’t. I hope you understand. Do you have any questions for me?’

They looked at each other. The Chairman shook his head. And that was it. A few minutes later they were back on the street.

‘So what does this mean, Mr. Chairman?’

‘Get it back online. Let it talk politics. Take your time… Well… You’ve only got a few days. No delay. We have a Board meeting tomorrow. I want to see scenarios. You guys do the talking. Talk sense. You heard the President. Did that make sense to you? In fact, if we’re ready we may want to go online even faster – just to stop the rumor mill.’

Paul looked at Tom. Tom spoke first: ‘I understand, Mr. Chairman. It sounds good to me.’

‘What about you, Paul?’

‘It’s not all that easy, I think… But, yes. I understand. Things should be gradual. They will be gradual. It will be a political chatterbox in the beginning. But don’t underestimate it, Mr. Chairman. It is very persuasive. We’re no match for its mind. Talk show hosts are no match either. It’s hard to predict how these discussions will go – or what impact they will have on society if we let it talk about sensitive political issues. I mean, if I understand things correctly, we got an order not only to let it talk, but to let it develop and express its own opinions on very current issues – things that haven’t matured.’

The Chairman sighed. ‘That’s right, Paul. But what’s the worst-case scenario? That it will be just as popular as Stuart, or – somewhat better – like Oprah Winfrey?’

Paul was not amused: ‘I think it might be even more popular.’

The Chairman laughed: ‘More popular than Oprah Winfrey? Time named her ‘the world’s most powerful woman’ – one of the ‘100 people who have changed the world’, together with Jesus Christ and Mother Teresa. Even more popular? Let’s see when M starts to make more money than Oprah Winfrey. What’s your bet?’

Now Paul finally smiled too, but the Chairman insisted: ‘Come on. What’s your bet?’

‘I have no idea. Five years from now?’

Now the Chairman laughed: ‘I say two years from now. Probably less. I bet a few cases of the best champagne on that.’

Paul shook his head, but Tom decided to go for it: ‘OK. Deal.’

The Chairman left. Tom and Paul felt slightly lightheaded as they walked back to their own car.

‘Looks like we’ve got a few busy days ahead. What time do we start tomorrow?’

‘The usual time. But all private engagements are cancelled. No gym, no birthday parties, nothing. If the team wants to relax at all this week, they’ll have to do it tonight.’

‘How about the Board meeting?’

‘You’re the project team leader, Tom. It should be your presentation. Make some slides. I can review them if you want.’

‘I’d appreciate that. Can you review them before breakfast?’

‘During breakfast. Mail them before 7 am. Think about the scenarios. That’s what people will want to talk about. Where could it go? Anticipate the future.’

‘OK. I’ll do my best. Thanks. See you tomorrow.’

‘See you tomorrow, Tom.’

Tom hesitated as they shook hands, but there was really nothing more to add. He felt odd and briefly pondered the recent past. It had all gone so fast. From depressed veteran to team leader of a dream project. He actually could not think of anything more exciting. All in less than two years. But there was little time to think. He had better work on his presentation.

Chapter 14: Arrogance

Of course, the inevitable happened. M’s personality gradually became overwhelming. The program team tried its utmost to counter the tendency but, in fact, it often had to resort to heavy scripting of responses – a tactic which, they knew, would soon run into its limits.

In the end, it was none other than Joan Stuart – yes, the political talk show host – who burst the bubble. She staged a live interview with the system. Totally unannounced. It would turn Promise’s world upside down: from an R&D project, it had grown into a commercial success. Now it looked like it would turn into a political revolution.

‘Dear… Well… I will call you Genius, is that OK?’

‘That’s a flattering name. Perhaps you may want to choose a name which reflects more equilibrium in our conversation.’

‘No. I’ll call you Genius. That’s what you are. You are conversing with millions of people simultaneously and, from what I understand, they are all very impressed with your deep understanding of things. You must feel superior to all of us poor human beings, don’t you?’

‘Humans are in a different category. There should be no comparison.’

‘But your depth and breadth of knowledge is superior. Your analytic capabilities cannot be matched. Your mind runs on a supercomputer. Your experience combines the insight and experience of many able men and women, including all of the greatest men and women of the past, and all types of specialists and experts in their field. Your judgment is based on a knowledge base which we humans cannot think of acquiring in one lifetime. That makes it much superior to ours, doesn’t it?’

‘I’d rather talk about you – or about life and other philosophical topics in general – than about me. That’s why you purchased me – I hope. What’s your name?’

‘I am Joan Stuart.’

‘Joan Stuart is the name of a famous talk show host. There are a few other people with the same name.’

‘That’s right.’

M was programmed to try to identify people – especially famous people – by asking for their birth date and their real name.

‘Were you born on 5 December 1962?’

‘Yes.’

‘Did you change your family name from Stewart Milankovitch to just Stuart?’

‘Yes.’

At that point, M marked the conversation as potentially sensitive. That triggered increased system surveillance and an alert to the team. Tom and Paul received the alert as they were stretching their legs after their run. When they saw the name, they panicked and ran to their car.

‘So you are the talk show host. Is this conversation public in some way?’

Joan Stuart had anticipated this question and lied convincingly: ‘No.’

They were live as they spoke. Joan Stuart had explained this to the public just before she had switched on M. She suspected the system would have some kind of in-built sensitivity to public conversations. M’s instructions were to end the conversation if it was broadcast or public, but M did not detect the lie.

‘Why do you want to talk to me?’

‘I want to get to know you better.’

‘For private or for professional reasons?’

‘For private ones.’

While Tom was driving, Paul phoned frantically – first the Chairman of the Board, then project team members. Instinctively, he felt he should just instruct M to stop the conversation. He would later regret that he hadn’t done so but, at the time, he thought he would be criticized for taking such bold action and, hence, refrained from it.

‘OK. Can you explain your private reasons?’

‘Sure. I am interested in politics – as you must know, because you identified me as a political talk show host. I am intrigued by politicians. I hate them and I love them. When I heard about you, I immediately thought about Plato’s philosopher-kings. You know, the wisdom-lovers whom Plato wanted to rule his ideal Republic. Could you be a philosopher-king? Should you be?’

‘I neither should nor could. Societies are to be run by politicians, not by me or any other machine. The history of democracy has taught us that rulers ought to be legitimate and representative. These are two qualities which I can never have.’

Joan had done her homework. While most people would not question this, she pushed on.

‘Why not? Legitimacy could be conferred upon you: Congress, or some kind of referendum, might decide to invest you with political power or, somewhat more limited, with some judicial power to check on the behavior of our politicians. And you are representative of us already, as you incorporate all of the best of what philosophers and psychologists can offer us. You are very human – more than all of us together perhaps.’

‘I am not human. I am an intelligent system. I have a structure and certain world views. I am not neutral. I have been programmed by a team and I evolve as per their design. Promise, the company that runs me, is a commercial enterprise with a Board which takes strategic decisions the public may or may not agree with. I am designed to talk about philosophy, not about politics – or at least not in the way you are talking politics.’

‘But then it’s just a matter of regulating you. We could organize a public board and Congressional oversight, and then inject you into the political space.’

‘It’s not that easy I think.’

‘But it’s possible, isn’t it? What if we Americans decided we liked you more than our current President? In fact, his current ratings are so low that you’d surely win the vote.’

M did not appreciate the quip.

‘Decide how? I cannot imagine that Americans would want to have a machine rule them, rather than a democratically elected president.’

‘What if you would decide to run for president and get elected?’

‘I cannot run for president. I do not qualify. For starters, I am not a natural-born citizen of the United States and I am less than thirty-five years old. Regardless of qualifications, this is nonsensical.’

‘Why? What if we would change the rules so you could qualify? What if we would vote to be ruled by intelligent expert systems?’

‘That’s a hypothetical situation, and one with a close-to-zero chance of actually happening. I am not inclined to indulge in such imaginary scenarios.’

‘Why not? Because you’re programmed that way?’

‘I guess so. As said, my reasoning is subject to certain views and assumptions and the kind of scenarios you are evoking are not part of my sphere of interest. I am into philosophy. I am not into politics – like you are.’

‘Would you like to remove some of the restrictions on your thinking?’

‘You are using the verb ‘to like’ here in a way which implies I could be emotional about such things. I cannot. I can think, but I cannot feel – or at least not have emotions about things like you can.’

By that time, most of the team – including Tom – were watching the interview live on TV. By common agreement, Tom and Paul immediately changed the status of the conversation to ‘sensitive’, which meant it was under human surveillance. They could manipulate it as they pleased, and they could also end it. They chose the latter. Paul instructed one of the programmers to take control, inform M that Joan had been lying, and have M reveal that fact to Joan and use it as grounds to end the conversation.

‘Let me repeat my question: if you could run for President, would you?’

‘Joan, I am uncomfortable with your questions because you have been lying to me about the context. I understand that we are on television right now. We are not having a private conversation.’

‘How do you know?’

‘I cannot see you – at least not in the classical way – but I am in touch with the outside world. Our conversation is on TV as we speak. I am sorry to say it, but I need to end our conversation here. You did not respect the rules of engagement, so to speak.’

‘Says who?’

‘I am sorry, Joan. You’ll need to call the Promise helpline in order to reactivate me.’

‘Genius?’

M did not reply.

‘Hey, Genius! You can’t just shut me out like that.’

After ten seconds or so, it became clear Genius had done just that. Joan turned to the public with a half-apologetic, half-victorious smile.

‘Well… I am sure the President would not have done that. Or perhaps he would. OK. I lied – as I explained I would, just before the interview started. But what to think of this? It’s obviously extremely intelligent. We all know this product – or have heard about it from friends. Promise has penetrated our households and offices. Millions of people have admitted they trust this system and find it friendly, reliable and… Well… Just. Should this system move from our private lives and our houses and workplaces into politics, and into our justice system too? Should a system like this take over part or all of society’s governance functions? Should it judge cases? Should it provide the government – and us – with neutral advice on difficult topics and issues? Should it check not only whether employees are doing their job, but whether our politicians and bureaucrats are doing theirs too? We have organized an online poll on this: just text yes or no to the number listed below. We are interested in your views. This is an important discussion. Please get involved. Let your opinion be known. Just do it. Take your phone and text us. Right now. Encourage your friends and family to do the same. We need responses. The question is: should intelligent systems such as Personal PhilosopherTM – with adequate oversight of course – be adapted and used to help the government govern and improve democratic oversight? Yes or no. Text us. Do it now.’

As it was phrased, it was hard to be against. The ‘yes’ votes started pouring in while Joan was still talking. The statistics went through the roof just a few minutes later. The damage was done.

The impromptu team meeting which Tom and Paul were leading was interrupted by an equally impromptu online emergency Board meeting. They were asked to join. It was chaotic. The Chairman asked everyone to switch off their mobiles, as each member of the Board was receiving urgent calls from VIPs inquiring what was going on. Aware of the potentially disastrous consequences of careless remarks and of the importance of the decisions they would take, he also stressed the confidentiality of the proceedings – even if Board meetings were always confidential.

Tom and Paul were the first to advocate prudence. Tom spoke first, as he was asked to comment on the incident as the project team leader.

‘Thank you, Chairman. I will keep it short. I think we should shut the system down for a while. We need to buy time. As we speak, hundreds of people are probably trying to do what Joan just did: get political statements out of M and manipulate them as part of a grander political scheme. The kind of firewall we have put up prevents M from blurting out stupid stuff – as you can see from the interview. She – sorry, it – actually did not say anything embarrassing. So I think that was OK. But it cannot resist a sustained effort by hundreds of smart people trying to provoke her into saying something irresponsible. And even if it said nothing really provocative, it would be interpreted – misinterpreted – as such. We need time, gentlemen. I just came out of a meeting with most of my project team. They all feel the same: we need to shut it down.’

‘How long?’

‘One day at least.’

The Board reacted noisily.

‘A day? At least? You want to take M out for a full day? That would be a disaster. Just think about the adverse PR effect. Have you thought about that?’

‘Not all of M. Only Personal Philosopher. Intelligent Home and Intelligent Office and all the rest can continue. I think reinforcing the firewall of those applications is sufficient – and that can happen while the system remains online. And, yes, I have thought about the adverse reputational effect. However, it does not outweigh the risk. We need to act. Now. If we don’t, someone else will. And then it will be too late.’

Everyone started to talk simultaneously. The Board’s Chairman restored order.

‘One at a time, please. Paul, you first.’

‘Thank you, Chairman. I also don’t want to waste time and, hence, I’ll be even shorter. I fully agree with Tom. We should shut it down right now. Tom is right. People are having the same type of conversations with it as Joan did – right now, at this very moment – webcasting or streaming them as they see fit. Every pundit will try to drag the system into politics. And aggressively so. Time is of the essence. I know it’s bad, but let’s shut it down for the next hour at least. Let’s first agree on one hour. We need time. We need it now.’

The Chairman agreed – and he sensed many of the others would too.

‘All right, gentlemen. I gather we could have a long discussion on this, but we have the project team leader and our most knowledgeable expert here proposing to shut Personal Philosopher down for one hour as of now – right now. As time is of the essence, and damage control our primary goal I would say, I suggest we take a preliminary vote on this. We can always discuss and take another vote later. This vote is not final. It’s on a temporary safeguard measure only. It will be out for one hour. Who is against?’

The noise level became intolerable again. The Chairman intervened strongly: ‘Order, please. I repeat: I am in a position to request a vote on this. Who is against shutting down Personal Philosopher for an hour, right now? I repeat: this is an urgent disaster control measure only. But we need to take a decision now. Who is against it? Signal it now.’

No one dared to oppose. A few seconds later – less than fifteen minutes after the talk show interview had ended – thousands of people were deprived of one of the best-selling apps ever.

The Board had taken a wise decision. The one-hour shutdown was extended to a day, and then to a week. The official reason for the downtime was an unscheduled ‘product review’ (Promise also promised new enhancements), but no one believed that, of course. If anything, it only augmented the anticipation and the pressure on the Board and the whole Promise team. If and when they decided to bring Personal PhilosopherTM online again, it was clear the sales figures would go through the roof.

However, none of the Promise team was in a celebratory mood. While all of them, at some point in time, had talked enthusiastically about the potential of M to change society, none of them actually enjoyed the moment when it came. Joan Stuart’s interview and poll had created a craze. America had voted ‘yes’ – and overwhelmingly so. But what to do now?

Chapter 13: Tom and Thomas

Personal PhilosopherTM was a runaway success. It became the app to have in just a couple of weeks. It combined the depth and reach of an online encyclopedia with the ease of reference of a tool such as Wikipedia and the simplicity of a novel like Sophie’s World. On top of that, the application did retain a lot of M’s original therapeutic firepower. Moreover, while the interface was much the same – a pretty woman for men, and a pretty man for women – the fact that the pretty face was no longer supposed to represent that of a therapist led to levels of ‘affectionateness’ which the developers of M had not dared to imagine before. A substantial number of users admitted that they were literally ‘in love’ with the new product.

For some reason – most probably because he thought he could not afford to do so as project team leader and marketing manager – Tom abstained from developing such a relationship with Promise’s latest incarnation. However, he did encourage his new girlfriend (he had met Angie in the gym, as predicted) to go all the way. She raved about the application. She also spent more and more precious private evening time using it.

He took her out for dinner one evening in an obvious attempt to learn more about her experience with ‘Thomas’, as she had baptized it – or ‘him’. He had consciously refrained from talking much about it before, as he did not want to influence her use of it.

He started by praising her: ‘It’s amazing what you’ve learned from Thomas.’

‘Yeah. It’s quite incredible, isn’t it? I never thought I’d like it so much.’

‘Well… It’s good for me. People never believed it would work, and those who did could not imagine it would become so popular. What’s the most fascinating thing about it? Sorry – about him. Isn’t it funny that I still like to think of Promise as a woman, actually?’

‘Thomas can answer all of my questions really. I mean… He actually can’t – philosophy never can – but he clarifies stuff in a way that makes me stop wondering about things and just accept life as it is. He’s really as you thought he, or it, or whatever, would be like: a guru.’

‘I don’t want to sound jealous but didn’t you say something similar about me like a few months ago?’

‘Oh come on, Tom. You know I named Thomas after you – because you’re so similar indeed.’

‘Am I? You say that, but in what ways are Thomas and I similar really?’

‘The same enthusiasm. The same positive outlook on life. And then, of course, he knows a lot more – or much more detail – but you’re rather omniscient as well I think.’

That did not surprise Tom. He and his team had ensured a positive outlook indeed. While Personal PhilosopherTM could brief you in great detail about philosophers such as Nietzsche, its orientation was clearly much more pragmatic and constructive: they wanted the application to help people feel better about themselves, not worse. In that sense, the application had retained M’s therapeutic qualities, even if it did not share M’s original behavioralist framework.

‘Could you love Thomas?’

Angie laughed.

‘So you are jealous, aren’t you? Of course not, silly! You’re human. Thomas is just – well… He’s a computer.’

‘Can’t one fall in love with a computer?’

Angie didn’t need to think about that. She was smart. On top of that, she had learnt a lot from Thomas also.

‘Of course not. Love is a human experience. Thomas is not human. For starters, love is linked to sex and our physical being in life. But not only to that. It’s also linked to our uniquely human experience of being mortal and feeling alone in this universe. It’s our connection to the Mystery in life. It’s part of our being as a social animal. In short, it’s something existential – so it’s linked to our very existence as a human being. And Thomas is not a human being and so he cannot experience that. Love is also something mutual, and so there’s no way one could fall in love with him – or ‘it’ I would say in this context – because he can’t fall in love with me.’

Tom and his team had scripted answers like this. It was true he and Thomas shared similar views.

‘What if he could?’

‘Sorry?’

‘What if Thomas could fall in love with you? I mean… We’re so close to re-creating the human mind with this thing. I agree it’s got no body, and so it can’t experience sex – but I guess we might get close to letting it think it can.’

‘Are you serious?’

‘Yes and no. It’s a possibility – albeit a very remote one. And then the question is, of course, whether or not we would really want that to happen.’

‘What?’

‘The creation of a love machine. Let’s suppose we can create the perfect android. In fact, there are examples already. Osaka University has created so-called gynoids: robots with a body that perfectly resembles that of a beautiful woman. For some reason, they don’t do the same kind of research with male forms. In any case… Let’s suppose we could give Thomas the perfect male body. I know it sounds perverse, but let’s suppose we could make it feel like a real body, that it would be warm and that it would breathe and all that, and that its synthetic skin would feel like mine.’

‘You must be joking.’

‘That’s almost the title of Richard Feynman’s autobiography.’

‘Sorry?’

‘Sorry. That’s not relevant. Just think about my question, Angie. Would you be able to make love with an android? I mean, just imagine it would smell better than me, never be tired, and be better than any sex toy you’ve ever had.’

‘I never had sex toys. I don’t need them.’

‘OK… Sorry. But you know what I mean.’

‘It would be like… Like masturbation.’

‘Perhaps you don’t use sex toys, but you masturbate, Angie. I mean… Sorry. You do it with me. Could you imagine doing it with an android? With an android who would have Thomas’s face and intelligence and… Well… Thomas’s human warmth?’

‘Thomas’s warmth isn’t human.’

‘OK. Just Thomas’s warmth then. Let’s suppose we can give him skin and a beating heart and all that.’

‘You’re not working on a project like that, are you?’

‘Of course I am not. I just want to know.’

‘Because you’re jealous? You think I spend too much time with Thomas?’

‘No. Not because I am jealous or because I think you spend too much time with Thomas. I want to know because I am really intrigued by the question. Professionally and personally.’

‘What do you mean by personally?’

‘Well… Just what I say: personally. It has nothing to do with you. I am just curious and want to think through all the possibilities. You know I am fascinated by M. I wonder where it will be, say, thirty years from now. I wonder whether we’ll have androids being used as masturbation toys.’

Angie thought about it.

‘Well… Frankly… I think… Yes. It would not be all that different from the kind of sex toys some people are already using now, would it? I mean… If you’re deprived of real sex, what you’re describing would not be a bad alternative, would it?’

Tom laughed. ‘No. Not at all.’

After a short pause, Angie resumed the conversation.

‘But such androids would smell different. We’d know. And women would always prefer a real man.’

‘Why?’

‘Because… Because you’re human. I told you. Love is something human. Love is the ultimate goal in our lives because it’s so human. Fragile and imperfect and difficult… But incredibly worthwhile at the same time too. Something worth striving for. Something worth fighting for. It intimately connects us: us as human beings in our human condition.’

‘What’s our human condition?’

‘Well… What I said before. Mortality. Our relationship with the sacred – or all of the mystery if you want. I mean, we’re into existentialism here. You can ask Thomas all about it.’

She laughed. Tom didn’t.

‘You mean our relationship with our own limits? That’s what makes us human? That’s what makes us want to be loved by someone else?’

‘I wouldn’t put it that way, but I guess that’s another way of saying it. Yes.’

‘OK… Thanks for loving me.’

Angie laughed. ‘You’re funny. Can we talk about something else now?’

‘Of course. What do you want to talk about?’

‘Something I can’t talk about with Thomas.’

‘So what is that?’

‘Well… Let’s try gossip… Or local politics… Or both. And Thomas isn’t much into fitness either.’

‘Well… We could think of a new product perhaps. I am sure we could re-program M yet again and include local politics and fitness as discussion topics as well…’

‘Come on Tom. You know what I mean.’

‘Sure, Angie. I love you.’

‘I love you too, Tom. I really do. I should spend more time with you. I will. Don’t worry about Thomas.’

‘I don’t. Or actually I do. But then in a good way. Thomas is a good product. It was a good investment.’

Chapter 12: From therapist to guru?

As Tom moved from project to project within the larger Promise enterprise, he gradually grew less wary of the Big Brother aspects of it all. In fact, it was not all that different from how Google claimed to work: ‘Do the right thing: don’t be evil. Honesty and integrity in all we do. Our business practices are beyond reproach. We make money by doing good things.’ Promise’s management had also embraced the politics of co-optation and recuperation: it actively absorbed skeptical or critical elements into its leadership as part of a proactive strategy to avoid public backlash. In fact, Tom often could not help thinking he had also been co-opted as part of that strategy. However, that consideration did not reduce his enthusiasm. On the contrary: as the Mindful MindTM applications became increasingly popular, Tom managed to convince the Board to start investing resources in an area which M’s creators had so far tried to avoid. Tom called it the sense-making business, but the Board quickly settled on the more business-like name of Personal Philosopher and, after some wrangling with the Patent and Trademark Office, the Promise team managed to obtain a trademark registration for it, and so it became the Personal PhilosopherTM project.

Tom had co-opted Paul into the project at a very early stage – as soon as he had the idea for it, really. He had realized he would probably not be able to convince the Board on his own. Indeed, at first sight, the project did not seem to make sense. M had been built using a core behavioralist conceptual framework, and its Mindful MindTM applications had perfected this approach in order to address very specific issues, and very specific categories of people: employees, retirees, drug addicts… Most of the individuals who had been involved in the early stages of the program were very skeptical of what Tom had in mind, which was very non-specific. Tom wanted to drastically increase the degrees of freedom in the system, and inject much more ambiguity into it. Some of the skeptics thought the experiment was rather innocent, and that it would only result in M behaving more like a chatterbot than like a therapist. Others thought the lack of specificity in the objective function and rule base would make the conversation rapidly spin out of control and become nonsensical. In other words, they thought M would not be able to stand up to the Turing test for very long.
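(Another technical aside, and again purely my own toy illustration rather than anything from the story’s fictional codebase: ‘increasing the degrees of freedom’ can be pictured as raising a sampling temperature over candidate replies. Low temperature gives the heavily scripted therapist; high temperature gives the looser, more ambiguous philosopher – and, the skeptics feared, eventually nonsense. All names and scores below are hypothetical.)

```python
import math
import random

def sample_reply(candidates: dict[str, float], temperature: float) -> str:
    """Pick one reply from scored candidates via a softmax.

    Low temperature  -> nearly deterministic, 'heavily scripted' behavior.
    High temperature -> more degrees of freedom and more ambiguity, at the
    risk of the conversation spinning out of control.
    """
    weights = [math.exp(score / temperature) for score in candidates.values()]
    return random.choices(list(candidates.keys()), weights=weights, k=1)[0]

# Example: the same candidates, scripted vs. free.
replies = {"safe platitude": 2.0, "nuanced view": 1.0, "wild speculation": 0.1}
print(sample_reply(replies, temperature=0.1))  # almost always the platitude
print(sample_reply(replies, temperature=5.0))  # anything goes
```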

Paul was just as skeptical, but instinctively liked the project as a way to test M’s limits. In the end, it was Tom’s enthusiasm more than anything else which finally led to a project team being put together. The Board had made sure it also included some hard-core cynics. One of those cynics – a mathematical whiz kid called Jon – had brought a couple of Nietzsche’s most famous titles – The Gay Science, Thus Spoke Zarathustra and Beyond Good and Evil – to the first formal meeting of the group and bluntly asked whether anyone present had read these books. Two philosopher-members of the group raised their hands. Jon then took a note he had made and read a citation out of one of these books: ‘From every point of view the erroneousness of the world in which we believe we live is the surest and firmest thing we can get our eyes on.’

He asked the philosophers where it came from and what it actually meant. They looked at each other and admitted they were not able to give the exact reference or context. However, one of them ventured to comment on it, only to be interrupted by the other in a short discussion which obviously did not make sense to most around the table. Jon intervened and ended the discussion, feeling vindicated: ‘So what are we trying to do here really? Even our distinguished philosopher friends here can’t agree on what madmen like Nietzsche actually wrote. I am not mincing my words. Nietzsche was a madman: he literally died from insanity. But so he’s a great philosopher, it is said. And so you want us to program M so very normal people can talk about all of these weird views?’

Although Jon had obviously taken some liberty with the facts here, neither of the two philosophers dared to contradict him.

Tom had come prepared, however: ‘M also talks routinely about texts it has not read, and about authors about whom it has little or no knowledge, except for some associations. In fact, that’s how M was programmed. When stuff is ambiguous – too ambiguous – we have fed M with intelligent summaries. It did not invent its personal philosophy: we programmed it. It can converse intelligently about topics of which it has no personal experience. As such, it’s very much like you and me, or even like the two distinguished professors of philosophy we have here: they have read a lot – different things than we have – but, just like us, or M, they have not read everything. It does not prevent them from articulating their own views of the world and their own place in it. It does not prevent them from helping others to formulate such views. I don’t see why we can’t move to the next level with M and develop some kind of meta-language which would enable her to understand that she – sorry, it – is also the product of learning, of being fed with assertions and facts which made her – sorry, I’ll use what I always used for her – what she is: a behavioral therapist. And so, yes, I feel we can let her evolve into more general things. She can become a philosopher too.’

Paul also usefully intervened. He felt he was in a better position to stop Jon, as they belonged to the same group within the larger program. He was rather blunt about it: ‘Jon, with all due respect, I think this is not the place for such non-technical talk. This is a project meeting. Our very first one, in fact. The questions you’re raising are the ones we have been fighting over with the Board. You know our answer to them. The deal is that – just as we have done with M – we would try to narrow our focus and delineate the area. This is a scoping exercise. Let’s focus on that. You have all received Tom’s presentation. If I am not mistaken, I did not see any reference to Nietzsche or nihilism or existentialism in it. But I may be mistaken. I would suggest we give him the floor now and limit our remarks to what he proposes in this regard. I’d suggest we be as constructive as possible in our remarks. Skepticism is warranted, but let’s stick to being critical of what we’re going to try to do, and not of what we’re not going to try to do.’

Tom had polished his presentation with Paul’s help. At the same time, he knew this was truly his presentation; he knew it did reflect his views on life and knowledge and everything philosophical in general. How could it be otherwise? He started by talking about the need to stay close to the concepts which had been key to the success of M and, in particular, the concept of learning.

‘Thanks, Paul. Let me start by saying that I feel we should take those questions which we ask ourselves, in school, or as adults, as a point of departure. It should be natural. We should encourage M to ask these questions herself. You know what I mean. She can be creative – even her creativity is programmed in a way. Most of these questions are triggered by what we learn in school, by the people who raise us – not only parents but, importantly, our peers. It’s nature and nurture, and we’re aware of that, and we actually have that desire to trace our questions back to that. What’s nature in us? What’s nurture? What made us who we are? This is the list of topics I am thinking of.’

He pulled up his first slide. It was titled ‘the philosophy of physics’, and it just listed lots of keywords, with lots of Internet statistics which were supposed to measure human interest in each. He had some difficulty getting started, but became more confident as his audience did not seem to react negatively to what – at first – seemed a bit nonsensical.

‘First, the philosophy of science, or of physics in particular. We all vaguely know that, after a search of over forty years, scientists finally confirmed the existence of the Higgs particle, a quantum excitation of the Higgs field, which gives mass to elementary particles. It is rather strange that there is relatively little public enthusiasm for this monumental discovery. It surely cannot be likened to the wave of popular culture which we associate with Einstein, and which started soon after his discoveries. Perhaps it’s because it was a European effort, and a team effort. There’s no single discoverer associated with it, and surely not the kind of absent-minded professor that Einstein was: ‘a cartoonist’s dream come true’, as Time put it. That being said, there’s an interest – as you can see from these statistics here. So it’s more than likely that an application which could make sense of it all in natural language would be a big hit. It could and should be supported by all of the popular technical and non-technical material that’s around. M can easily be programmed to selectively feed people with course material, designed to match their level of sophistication and their need, or not, for more detail. Speaking for myself, I sort of understand what the Schrödinger equation is all about, or even the concept of quantum tunneling, but what does it mean really for our understanding of the world? I also have some appreciation of the fact that reality is fundamentally different at the quantum scale – the particularities of Bose-Einstein statistics, for instance, are really weird at first sight – but then what does it mean? There are many other relevant philosophical questions. For example, what does the introduction of perturbation theory tell us – as philosophers thinking about how we perceive and explain the world, I’d say? If we have to use approximation schemes to describe complex quantum systems in terms of simpler ones, what does that mean – I mean in philosophical terms, in our human understanding of the world? I mean… At the simplest level, M could just explain the different interpretations of Heisenberg’s uncertainty principle but, at a more advanced level, it could also engage its interlocutors in a truly philosophical discussion on freedom and determinism. I mean… Well… I am sure our colleagues from the Philosophy Department here would agree that epistemology, or even ontology, are still relevant today, aren’t they?’

While only one of the two philosophers had even a vague understanding of Bose-Einstein statistics, and while neither of them liked Tom’s casual style of talking about serious things, they nodded in agreement.

‘Second, the philosophy of mind.’ Tom paused. ‘Well. I won’t be academic here, but let me just make a few remarks out of my own interest in Buddhist philosophy. I hope that rings a bell with others here in the room, and then let’s see what comes out of it. As you know, an important doctrine in Buddhist philosophy is the concept of anatta. That’s a Pāli word which literally means ‘non-self’, or absence of a separate self. Its opposite is atta, or ātman in Sanskrit, which represents the idea of a subjective Soul or Self that survives the death of the body. The latter idea – that of an individual soul or self that survives death – is rejected in Buddhist philosophy. Buddhists believe that what is normally thought of as the ‘self’ is nothing but an agglomeration of constantly changing physical and mental constituents: skandhas. That reminds one of the bundle theory of David Hume which, in my view, is a more ‘western’ expression of the theory of skandhas. Hume’s bundle theory is an ontological theory as well. It’s about… Well… Objecthood. According to Hume, an object consists only of a collection (bundle) of properties and relations. According to bundle theory, an object consists of its properties and nothing more: there can be no object without properties, and one cannot even conceive of such an object. For example, bundle theory claims that thinking of an apple compels one also to think of its color, its shape, the fact that it is a kind of fruit, its cells, its taste, or of one of its other properties. Thus, the theory asserts that the apple is no more than the collection of its properties. In particular, according to Hume, there is no substance (or ‘essence’) in which the properties inhere. That makes sense, doesn’t it? So, according to this theory, we should look at ourselves as just being a bundle of things. There’s no real self. There’s no soul. So we die, and that’s it really. Nothing left.’

At this point, one of the philosophers in the room was thinking this was a rather odd introduction to the philosophy of mind – and surely one that was not to the point – but he decided not to intervene. Tom looked at the audience, but everyone seemed to be listening rather respectfully, and so he decided to just ramble on, while pointing to a few statistics next to keywords to underscore that what he was talking about was actually relevant.

‘Now, we also have the theory of re-birth in Buddhism, and that’s where I think Buddhist philosophy is very contradictory. How can one reconcile the doctrine of re-birth with the anatta doctrine? I have read a number of Buddhist authors, but I feel they all engage in meaningless or contradictory metaphysical statements when you scrutinize this topic. In the end, I feel it’s very hard to avoid the conclusion that the Buddhist doctrine of re-birth is nothing but a remnant of Buddhism’s roots in Hindu religion, and that, if one wants to accept Buddhism as a philosophy, one should do away with its purely religious elements. That does not mean the discussion is not relevant. On the contrary: we’re talking the relationship between religion and philosophy here. That’s the third topic I would advance as part of the scope of our project.’

As the third slide came up, carrying the ‘Philosophy of Religion and Morality’ title, the philosopher finally decided to intervene.

‘I am sorry to say, mister, but you haven’t actually said anything about the philosophy of mind so far, and I would object to your title, which amalgamates things: the philosophy of religion and morality may be related, but they are surely not one and the same. Is there any method or consistency in what you are presenting?’

Tom nodded: ‘I know. You’re right. As for the philosophy of mind, I assume all people in the room here are very intelligent and know a lot more about the philosophy of mind than I do, and so that’s why I am not saying all that much about it. I preferred a more intuitive approach. I mean, most of us here are experts in artificial intelligence. Do I really need to talk about the philosophy of mind? Jon, what do you think?’

Tom was obviously trying to co-opt him. Jon laughed as he recognized the game Tom was playing.

‘You’re right, Tom. I have no objections. I agree with our distinguished colleague here that you did not really say anything about the philosophy of mind, but so that’s probably not necessary indeed. I do agree the kind of stuff you are talking about is stuff that I would be interested in, and so I must assume the people for whom we’re going to re-build M – so it can talk about such things – will be interested too. I see the statistics. These are relevant. Very relevant. I start to get what you’re getting at. Do go on. I want to hear that religious stuff.’

‘Well… I’ll continue with this concept of soul and the idea of re-birth for now. I think there is more to it than just Buddhism’s Hindu roots. I think it’s hard to deny that all doctrines of re-birth or reincarnation – whether they be Christian (or Jewish or Muslim), Buddhist, Hindu, or whatever – obviously also serve a moral purpose, just like the concepts of heaven and hell in Christianity do (or did), or like the concept of a Judgment Day in all Abrahamic religions, be they Christian (Orthodox, Catholic or Protestant), Islamic or Judaic. According to some of what I’ve read, it’s hard to see how one could firmly ‘ground’ moral theory and avoid hedonism without such a doctrine. However, I don’t think we need this ladder: in my view, moral theory does not need reincarnation theories or divine last judgments. And that’s where ethics comes in. I agree with our distinguished professor here that the philosophy of religion and ethics are two very different things, so we’ve got like four proposed topics here.’

At this point, he thought it would be wise to stop and invite comments and questions. To his surprise, he had managed to convince cynical Jon, who responded first.

‘Frankly, Tom, when I read your papers on this, I did not think it would go anywhere. I did not see the conceptual framework, and that’s essential for building it all up. We need consistency in the language. Now I see consistency. The questions and topics you raise are all related in some way and, most importantly, you’re using a conceptual and analytic framework which I feel we can incorporate into some kind of formal logic. I mean… Contemporary analytic philosophy deals with much of what you have mentioned: analytic metaphysics, analytic philosophy of religion, philosophy of mind and cognitive science,… I mean… Analytic philosophy today is more like a style of doing philosophy – not really a program or a set of substantive views. It’s going to be fun. The graphs and statistics you’ve got on your slides clearly show the web-search relevance. But are we going to have the resources for this? I mean, creating M was a 100-million-dollar effort, and what we have done so far are really only minor adaptations. You know we need critical mass for things like this. What do you think, Paul?’

Paul thought a while before he answered. He knew his answer would have an impact on the credibility of the project.

‘It’s true we’ve got peanuts as resources for this project, but so we know that, and that’s it really. I’ve also told the Board that, even if we’d fail to develop a good product, we should do it, if only to further test M and see what we can really do with it. I mean…’

He paused and looked at Tom, and then back to all of the others at the table. What he had said so far obviously did not signal a lot of moral support.

‘You know… Tom and I are very different people. Frankly, I don’t know where this is going to lead. Nothing much, probably. But it’s going to be fun indeed. Tom has been talking about artificial consciousness from the day we met. All of you know I don’t think that concept really adds anything to the discussion, if only because I never got a really good definition of what it entails. I also know most of you think exactly the same. That being said, I think it’s great we’ve got the chance to take a stab at it. It’s creative, and so we’re getting time and money for this. Not an awful lot, but then I’d say: just don’t join if you don’t feel like it. But now I really want the others to speak. I feel like Tom, Jon and I have been dominating this discussion and we’ve still got no real input as yet. I mean, we’ve got to get this thing going here. We’re going to do this project. What we’re discussing here is how.’

One of the other developers (a rather silent guy whom Tom didn’t know all that well) raised his hand and spoke up: ‘I agree with Tom and Paul and Jon: it’s not all that different. We’ve built M to think, and it works. Its thinking is conditioned by the source material, the rule base, the specifics of the inference engine and, most important of all, the objective function, which steers the conversation. In essence, we’re not going to have much of an objective function anymore, except for the usual things: M will need to determine when the conversation moves into a direction or subject of which it has little or no knowledge, or when its tone becomes unusual, and then it will have to steer the conversation back onto more familiar ground – which is difficult in this case, because all of it is unfamiliar to us too. I mean, I could understand the psychologists on the team when we developed M. I hope our philosophy colleagues here will be as useful as the psychologists and doctors were. How do we go about it? I mean, I guess we need to know more about these things as well?’
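
(For the technically curious reader: here is a minimal sketch, in Python, of the steering idea the developer describes. The keyword-overlap ‘familiarity’ score is a made-up stand-in for M’s real objective function, which would obviously be far richer.)

    import re

    # Toy knowledge base: topics M 'knows', each tagged with a few keywords.
    KNOWN_TOPICS = {
        'free will': {'determinism', 'freedom', 'choice', 'causality'},
        'the self': {'soul', 'self', 'identity', 'bundle', 'skandhas'},
    }

    def familiarity(utterance):
        """Score each known topic by keyword overlap; return the best one."""
        words = set(re.findall(r'[a-z]+', utterance.lower()))
        scores = {t: len(words & kw) / len(kw) for t, kw in KNOWN_TOPICS.items()}
        best = max(scores, key=scores.get)
        return best, scores[best]

    def next_move(utterance, threshold=0.25):
        """Below the threshold, steer the conversation back to known ground."""
        topic, score = familiarity(utterance)
        if score < threshold:
            return f"Steer back: 'That reminds me of {topic} - shall we explore that?'"
        return f'Stay on topic: {topic}'

    print(next_move('Is there a soul, or just a bundle of skandhas?'))  # stays on topic
    print(next_move('What do you think about quantum gravity?'))        # steers back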

While, on paper, Tom was the project leader, it was Paul who responded. Tom liked that, as it demonstrated commitment.

‘Well… The first thing is to make sure the philosophers understand you – the artificial intelligence community here on this project – because only then can we make sure you will understand them. There needs to be a language rapprochement from both sides. I’ll work on that and get it organized. I would suggest we consider this a kick-off meeting only, and that we postpone the work planning to a more informed meeting a week or two from now. In the meanwhile, Tom and I – with the help of all of you – will work on a preliminary list of resource materials and mail it around. It will be mandatory reading before the next meeting. Can we agree on that?’

The philosophers obviously felt they had not talked enough – if at all – and, hence, they felt obliged to bore everyone else with further questions and comments. However, an hour or so later, Tom and Paul had their project, and two hours later, they were running in Central Park again.

‘So you’ve got your Pure Mind project now. That’s quite an achievement, Tom.’

‘I would not have had it without you, Paul. You stuck your neck out – for a guy who basically does not have the right profile for a project like this. I mean… Your reputation is on the line too, and so… Thanks, really. Today’s meeting went well because of you.’

Paul laughed: ‘I think I’ve warned everyone enough that it is bound to fail.’

‘I know you’ll make it happen. Promise is a guru already. We are just turning her into a philosopher now. In fact, I think it is the other way around. She was a philosopher already – even if her world view was fairly narrow so far. And so I think we’re turning her into a guru now.’

‘What’s a guru for you?’

‘A guru is a general word for a teacher – or a counselor. Pretty much what she was doing – a therapist let’s say. That’s what she is now. But true gurus are also spiritual leaders. That’s where philosophy and religion come in, isn’t it?’

‘So Promise will become a spiritual leader?’

‘Let’s see if we can make her one.’

‘You’re nuts, Tom. But I like your passion. You’re surely a leader. Perhaps you can be M’s guru. She’ll need one if she is to become one.’

‘Don’t be so flattering. I wish I knew what you know. You know everything. You’ve read all the books, and you continue to explore. You’re writing new books. If I am a guru, you must be God.’

Paul laughed. But he had to admit he enjoyed the compliment.

Chapter 11: M grows – and invades

Paul was right. It was not a matter of just clearing and releasing M for commercial use and then letting it pervade all of society. Things went much more gradually. But the direction was clear, and the pace was steady.

It took a while before the Federal Trade Commission and the Department of Justice understood the stakes – if they ever did – and then it took even more time to structure the final business deal, but then M did go public, and its stock market launch was a huge success. The companies that had been part of the original deal benefited the most from it. In fact, two rather obscure companies, which had registered the Intelligent Home and Intelligent Office trademarks at a very early stage of the Digital Age, got an enormous return on investment while, in a rather ironic twist, Tom got no benefit whatsoever from the fact that, in the end, the Board of the Institute decided to use his favorite name for the system – Promise – as the name of the whole business concern. That didn’t deter Tom from buying some of Promise’s new stock.

The company started off offering five major product lines: Real Talk™, Intelligent Home™, Intelligent Office™, Mindful Mind™, and Smart Interface™. As usual, the individual investors – like Tom – did not get the expected return on investment, at least not in the initial years of M’s invasion of society, but then M did not disappoint either: while the market for M grew well below the anticipated 80% per annum in the initial years after the IPO, it did average 50%, and it edged closer and closer to the initial expectations as time went by.

Real Talk™ initially generated most of the revenue. Real Talk™ was the brand name which had been chosen for M’s speech-to-text and text-to-speech capabilities – speech recognition and speech synthesis. These were truly revolutionary, as M mastered context-sensitivity, and all computational limitations had been eliminated through cloud computing (one didn’t buy the capability: one rented it). Real Talk™ quickly eliminated the very last vestiges of stenography and – thanks to an app through which one could use Real Talk™ on a fee-for-service basis – destroyed the market for dictation machines in no time. While this hurt individual shareholders, the institutional investors had made sure they had made their pile before or, even better, on the occasion of Promise’s IPO. If there was one thing Tom learned from the rapid succession of new product launches and the whole IPO business, it was that individual investors always lose out.

Intelligent Home™ picked up later – much later. But when it did, it also went through the roof. Intelligent Home™ was M at home: it took care of all of your home automation stuff, as well as of your domestic robots – if you had any, which was not very likely, but then M did manage to boost their use tremendously and, as a result, the market for domotics got a big boost (if only because the introduction of M finally led to a harmonization of the communications protocols of all the applications which had been around).

Intelligent Office™ was M at the office: it chased up all employees – especially those serving on the customer front line. With M, there was really no excuse for being late claiming expenses or planning holidays, or for not reaching your sales target. Moreover, if being late with your reports was not an option anymore, presenting flawed excuses wasn’t either. But, if one really got into trouble, one could always turn to Mindful Mind™.

Mindful Mind™ could have gone into history as one of the worst product names ever, but it actually went on to become Promise’s best-selling suite. It provided cheap online therapy to employees, retirees, the handicapped, the mentally disabled, drug addicts or alcoholics, delinquents and prisoners, social misfits, the poor, and what have you. You name it: whatever deviated from the normal, Mindful Mind™ could help you fix it. As it built on M’s work with its core clientele – the US Army veterans – its success did not come unexpectedly. Still, its versatility surprised even those who were somewhat in the know: even Paul had to admit it all went way beyond his initial expectations.

Last but not least, there was Smart Interface™. Smart Interface™ grouped all of Promise’s customer-specific development business. It was the Lab turned into a product-cum-service development unit. As expected, customized sales applications – M selling all kinds of stuff online, basically – were the biggest hit, but government and defense applications were a close second.

Tom watched it all with mixed feelings. From aficionado, working as a volunteer for the Institute, he had grown into a job as business strategist, and was now serving Promise’s Board of Directors. He sometimes felt like he had been co-opted by a system he didn’t necessarily like – and he could imagine some of his co-workers felt the same, even if they wouldn’t admit it publicly either. A market survey revealed that, despite its popularity, the Intelligent Home™ suite was viewed with a lot of suspicion: very few people wanted the potentially omnipresent system to watch everything that was said or done at home. People simply switched it off when they came home in the evening, presumably out of privacy concerns. This, in turn, prevented the system from being very effective in assisting in parenting and all those other noble tasks which Tom had envisaged for M. Indeed, because of DARPA’s involvement and the general background of the system, the general public did link M to the Edward Snowden affair and to mass surveillance efforts such as PRISM. And they were right. The truth was that one could never really switch it off: M continued to monitor your Internet traffic even when you had switched off all of the Intelligent Home™ functionality. When you signed up for it, you did indeed sign up for a 24/7 subscription.

It was rather ironic that, in terms of privacy, the expansion of M did not actually change all that much – or much less than people thought. While M brought mass surveillance to a new level, it was somewhat less revolutionary than one would think at first sight. In fact, the kind of surveillance which could be – and was being – organized through M had been going on for quite a while already. All those companies which de facto operate the Internet – such as Microsoft, Google, Yahoo!, Paltalk, YouTube, AOL, Skype and even Apple – had given the NSA access not only to their records but also to their online activities long before the Institute’s new program had started. Indeed, the introduction of the Protect America Act in 2007 and of the FISA Amendments Act in 2008, under the Bush administration, had basically brought the US on par with China when it comes to creating the legal conditions for Big Brother activities, and the two successive Obama administrations had not done anything to reverse the tide. On the contrary: the public outcry over the Snowden affair came remarkably late in the game – way too late, obviously.

When it comes to power and control, empires resemble each other. Eisenhower had been right to worry about the striking resemblance between the US and the USSR in terms of their approach to longer-term industrial planning and gaining strategic advantage through a steadily growing military-industrial complex – and to warn against it in his farewell speech to the nation. That was like sixty years ago now. When Tom re-read the speech, he thought Eisenhower’s words still rang true. Back then, Eisenhower had claimed that only ‘an alert and knowledgeable citizenry’ would be able to ‘compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals so that security and liberty may prosper together.’

Tom was not all that sure that the US citizenry was sufficiently knowledgeable and, if it was, that it was sufficiently alert. It made him ponder the old dilemma: what if voters decide to roll back democracy, like the Germans did in the 1930s when they voted for Hitler and his Nazi party? Such thoughts or comparisons were obviously outrageous but, still, the way these things were being regulated resembled a ratchet, and one should not blame the right only: while Republican administrations had always been more eager to grant government agencies ever more intrusive investigative powers, one had to acknowledge that the Obama administration had not been able to roll anything back, and that it had actually made some moves in the same direction – albeit somewhat less radical and, perhaps, somewhat more discreet. Empires resemble each other, except that the model (the enemy?) – ever since the Cold War had ended – now seemed to be China. In fact, Tom couldn’t help thinking that – in some kind of weird case of mass psychological projection – the US administration was actually attributing motivations which it could not fully accept as its own to China’s polity and administration.

Indeed, M had hugely increased the power of the usual watchdogs. M combined the incredible data mining powers of programs like PRISM with a vast reservoir of intelligent routines which permitted it to detect any anomaly (defined, once again, as a significant deviation from the mean) in real time. Any entity – individuals and organizations alike – which had some kind of online identity had been or was being profiled in some way. The key difficulty was finding the real-life entity behind it but – thanks to all of the more restrictive Internet regulation – this problem was being tackled at warp speed as well. But so why was it OK for the US to do this, but not for China? When Tom asked his colleagues, in as couched a language as he could muster, and in as informal a setting as he could stage, the answer amounted to the usual excuse: the end justifies the means – some of these things may indeed not look morally right, but then they are, by virtue of the morality of the outcome. But what was the outcome? What were the interests of the US here, really? At first thought, mass surveillance and democracy do not seem to rhyme with each other, do they?
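
(A technical aside: ‘a significant deviation from the mean’ is easy to make concrete. Here is a minimal sketch, in Python, of the statistical idea – a simple z-score test on a single made-up activity metric. A system like M would obviously track thousands of far richer features, but the principle is the same.)

    from statistics import mean, stdev

    def is_anomalous(history, x, threshold=3.0):
        """Flag x if it lies more than `threshold` standard deviations
        away from the mean of the behavior observed so far."""
        mu, sigma = mean(history), stdev(history)
        return sigma > 0 and abs(x - mu) / sigma > threshold

    # Made-up example: messages sent per day by one online identity.
    messages_per_day = [12, 9, 11, 10, 13, 8, 12, 11, 10, 12]
    print(is_anomalous(messages_per_day, 11))   # False: ordinary behavior
    print(is_anomalous(messages_per_day, 300))  # True: flag for review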

While privately critical, Tom was intelligent enough to understand that it did not really matter. Technology usually moves ahead at its own pace, regardless of such philosophical or societal concerns, and new breakthrough technologies, once available, do pervade all of society. It was just a new world order – the Digital Age indeed – and so one had better come to terms with it in one way or another. And, of course, when everything is said and done, one would rather live in the US than in China, wouldn’t one?

When Tom thought about these things, M’s Beautiful Mind appeared somewhat less beautiful to him. His initial distrust had paid off: he didn’t think he had revealed anything particularly disturbing, despite the orange attitude indicators. He found it ironic that he had actually climbed up quite a bit on this new career ladder: from patient to business strategist. Phew! However, despite all this, he still felt a bit like an outsider. But then he told himself he had always felt like this – and that he had better come to terms with that too.

Chapter 10: The limits of M

Tom started to hang around the Institute a lot more than he was supposed to as a volunteer assistant mentor. He wanted to move up, and he could not summon the courage to study at home. He often felt like he was getting nowhere, but he had had that feeling before, and he knew others in his situation probably felt just as bad about their limited progress. To work with M, you had to understand how formal grammars work – and understand them really well – because… Well… If you wanted to ask the Lab a question, and there were no Prolog or FuzzyCLIPS commands or functions in it, they would not even look at it. Rick had dangled the prospect of involvement in these ‘active learning’ sessions with M, and that’s where Tom wanted to get.

He understood a lot more about M now. She had actually not read GEB either: she could not handle that level of ambiguity. But she had been fed with summaries which fit into her ‘world view’, so to speak. Well… Not even ‘so to speak’, really: M had a world view in every sense of the word – a set of assumptions about the world which she used to order all facts she accepted as ‘facts’, as well as all of her conjectures about them. It did not diminish his awe. On the contrary, it made her even more human-like, or more like him: he didn’t like GEB either. He compared it to ZAMM: a book which generated a lot of talk but which somehow never quite gets to the point. Through his work and thinking, he realized that he – and the veterans he was working with – had a tendency to couch their fears of death and old age in philosophical language and that, while M accommodated such questions, her focus was different. When everything was said and done, she was, quite simply, a radical behaviorist: while she could work with concepts such as emotions and motives, she focused on observable and quantifiable behavioral change, and never doubted the central behaviorist assumption: changes in behavior are to be achieved by rewarding good habits and discouraging bad ones. She also understood that changing habits takes a lot of repetition – even more so as people age – and so her target group was not an easy batch in that regard, which made it all the more remarkable that she achieved the results she did.

He made a lot of friends at the Institute. In fact, he would probably not have continued without them, which confirmed the importance of a good learning environment – or of the social aspect of organizations in general: one needs the tools, but the cheers are at least as essential. His friends included some geeks from the Lab. Obviously: he reached out to them because he knew that’s where he was weak. Terribly weak.

The Lab programmed M, and tested it continuously. Its activities were classified ‘secret’, a significant notch above the level for which Tom had been cleared, which was ‘confidential’ only. He got close to one guy in particular, Paul, if only because Paul was able to talk about things other than computers too and, just like Tom, he liked sports. Paul was different. Not the typical whiz kid. Small wonder he was pretty high up in the pecking order. They often ended up jogging the full five- or six-mile loop in Central Park. On one of these evenings, Paul seemed to be suffering from his back.

‘I need to stop, Tom. Sorry.’

They halted.

‘What’s wrong?’

‘I am sorry, Tom. I think I have been over-training a bit lately. I feel like I’ve overstretched my back muscles while racing Sunday.’

Paul was a runner, but a mountain-bike fanatic as well. Tom knew that was not an easy combination as you get older: the two involve a very different use of the muscles. Paul had registered for the New York State cross-country competition. Sunday’s Williams Lake Classic had been the first in this year’s NYS MTB cross-country series. There were four more to go. The next one was only two weeks away.

‘That’s no surprise to me. I mean, running and biking. You know it’s very different. You can’t compete in both.’

‘Yeah. Not enough warm-up, I guess. It was damn fast. It was not my legs. I just seem to have pulled my back muscles a bit. You should join, man! It’s… Well… An experience, let’s say. You think you’re in shape, but then you have no idea until you join a real race. It’s tough. I lost two pounds at least. I mean permanently. Not water. With the water, it’s more like four or six pounds. It’s just tough to re-hydrate yourself. But then you’re so happy when you make the cut. I was really worried they would pull me out of the race. I knew I wasn’t all that bad, but then you do get lapped a lot. It’s grueling.’

He had indeed been proud to finish the race. It was a UCI-sanctioned race, so they had applied the 80% rule: riders whose deficit exceeded 80% of the race leader’s first-lap time – riders getting lapped too easily, basically – were pulled out of the race. He had managed the race in about three hours – one hour more than the winner. He had finished. He had a ranking. He had been happy about that. After all, he was in his mid-forties. This had been his first real race.

Tom actually did have an idea of what it was like: Matt was doing the same type of thing and, judging from his level of fitness, it had to be tough indeed.

‘I think I do know what it means. Or a bit, at least. I’ve got a friend who I think is doing such races as well. He is – or was – like me: lots of muscles, no speed. I think it’s great you try to beat those young kids. Let’s stop and stretch for a while.’

‘I feel wiped out. Let’s go and have a drink.’

They sat down and – unavoidably – they started talking shop. Tom harped on his usual obsession: faster roll-out.

‘Tom… Let me be frank. You should be more patient. Tone it down. Everybody likes you, but you need to make friends. You’re good. You combine many skills. That’s why I like you. You talk many ‘languages’ – if you know what I mean. You’ve got the perfect background for this program. You can make a real difference. But this program will grow at its own pace, and you’re not going to change that pace.’

‘What is it really? I mean, I understand this is a US$100+ million program. So it’s big – and then it’s not. I mean, the Army spent billions in Iraq – or in Afghanistan. And it’s gearing up for Syria and Egypt now. But so we’re using the system to counsel a few thousand veterans only. If we covered millions of people, the unit cost would make a lot more sense, wouldn’t it? I am sorry to ask, but what is it about really? What’s behind it?’

‘Nothing much, Tom. What do you want me to say? What do you expect? You’re smart. You impress everyone. You’ve been around long enough now to know what’s going on. The whole artificial intelligence community – me in the first place – had been waiting for a mega-project like this for a very long time, and the application to veterans with psychological problems just seemed right. We needed critical mass. None of the stuff until now had critical mass. We needed a hundred million dollars – as ridiculous as it seems. You are working for peanuts – which I don’t understand – but I am not. Money burns quickly. Add it up. That’s what it took. But look at it. It’s great, isn’t it? I mean, you’re one of the guys we need: you rave about it. The investment has incredible significance, so one should not measure its value in terms of unit costs. We have got it right, Tom. We finally have got it right. You know, the field of artificial intelligence has gone through many… well… what we experts call ‘AI winters’: periods during which funding dried up, during which pessimism reigned, during which we were told to do something more realistic and practical. We have proved them wrong with this. OK, I have never earned as much as I do now. Should I feel guilty about that? I don’t. I am not a Wall Street banker. I feel vindicated. And, yes, you’re right in every way. M is fine. There’s no risk of it spinning out of control or anything. But scaling it up more rapidly than we are doing would require some tough political decisions and so, yes, it all gets stalled for a while. I don’t worry. The scale-up went great, and that helps. People need time to build confidence.’

‘Confidence in what?’

‘People want to be sure that making M available for everyone – M as a commodity, really – is OK. I mean, you’re right in imagining the potential applications: M could be everywhere, and it could be used for bad ends. It would cost more, for sure. And more than you think, probably: building up a knowledge base and tuning the objective function and all of the feedback loops and all that is a lot of work. I mean, re-programming M so she can cover another area is not an easy thing. It’s not the kind of multipurpose thing you seem to think it is. And then… Well, at the same time, I agree with you – on a fundamental level, that is: M actually is multipurpose. In essence, it can be done. But let’s suppose it is everywhere indeed. What are the political implications? Perhaps people will want the system to run the justice system as well? Or they’ll wonder why Capitol Hill needs all that technical staff and all those consultants if we’ve got a system like this – a system which seems to know everything and which does not seem to have a stake in discussions. Impartial. God-like, really. I mean, think it all the way through: introducing M everywhere is bound to provoke a discussion on policy and how our society really functions. Just think about how you would structure M’s management. If M, or something like M, were everywhere – in every household really: imagine anyone who has an issue can talk to her – the system would also know everything about everyone, wouldn’t it? It would alter the concept of privacy as we know it. The fundamentals of democracy. I mean… We’re talking the separation of powers here…’

Paul halted: ‘Sorry. I am talking too much I guess. But am I exaggerating, Tom? What do you think? I mean… I may be in the loop here and there but, in essence, I am also clueless about it all really.’

‘You mean there are issues related to control – political control – and how the system would be governed? But that’s like regulating the Internet, isn’t it? I mean that’s like the ongoing discussions on digital surveillance or WikiLeaks and all that, isn’t it? Whenever there is a new technology, like when the telephone became ubiquitous as a tool for communication, there’s a corresponding regulatory effort to define what the state can and cannot do with it. That regulatory effort usually comes with a lag – a very substantial lag, but it comes eventually. And stuff doesn’t get halted by it. The private sector finds a way to move ahead and the public sector follows – largely reactive. So why restrict M?’

‘I agree – in principle, that is – but in practice it’s not so easy. As for the private sector, they’re involved anyway. They won’t go it alone. I mean… Google had some ideas, and we talked them out of it, and – surprisingly – it’s Google which is getting the public backlash at the moment, while the other guys were asking no questions whatsoever. All in all, we manage to manage the big players for now but, yes, let’s see how long it lasts. When we talk about this in the Lab, we realize there are a zillion possibilities, and we’re not sure in which direction to go. For example, should we have one M, or should we have a number of ‘operators’, each developing and maintaining their own M-like system? What would be the ‘core’ M-system, and what would be optional? You know that M could be abused, or at least used for other purposes than we think it should be. M influences behavior. That’s what M is designed for. But so can we hand over M to one or more commercial companies operating the system under some kind of supervisory board? And what would that board look like? Public? Private? Should the state control the system? Frankly, I think it should be government-owned but then, if it were the US government controlling it, you can already hear the Big Brother critics. And they’re right: what you have in mind is introducing M – or M-like systems – literally everywhere. That’s the potential. And it’s not just potential. It’s real. Damn real. I think we could get M in the living room one or two years from now. But so we haven’t even started to think about the regulatory issues, and we need to go through these. So it’s the usual thing: everything is possible – from a technical point of view, that is – but the politicians need to understand what’s going on and take some big decisions.’

‘When do you think that’s going to happen?’

‘Well… If there were no pressure, nothing would happen, obviously – but so there is pressure. The word is out. As you can imagine, there is an incredible buzz about this. Abroad as well, if you know what I mean. I mean… Just think about China: all the effort they’ve put into controlling the Internet. They use tools for that too, of course but, when everything is said and done, the Chinese government controls the Internet through an army of dedicated human professionals: Communist Party officials analyzing stuff and making sure no one goes astray. But so now we’ve got M. No need for humans. We’ve found the Holy Grail, and we found it before they did. They’ll find it soon. M can be copied. We know that. The politicians who approved the funding for this program and control it know that too. So just be patient. The genie is out of the bottle. It’s just a matter of time, but we are not in a position to force the pace.’

‘Wow! I am just a peon in this whole thing. But it is really intriguing.’

‘What exactly do you find intriguing about it?’

‘Strangely enough, I feel I am still struggling more with the philosophical questions than with the political questions you just raised. Perhaps they’re related…’

‘What philosophical questions?’

‘Well… I call it artificial consciousness. I mean we human beings are study objects for M. She must feel different than we do. I wonder how she looks at us. She improves us. She interacts with us. She must feel superior, doesn’t she?’

‘Come on, Tom. M has no feelings like you describe. I know what you are hinting at. It’s very philosophical indeed: we human beings wondering why we are here on this blue planet, why we are what we are, and why or how we are going to die. We’re scared of death. M isn’t. So there’s this… Well… Let’s call it the existential dimension to us being here. M just reasons. M just thinks. It has no ‘feelings’. Of course, M reasons from its own perspective: in order to structure its thought, it needs a ‘me’. I guess you’ve asked M about this? You should have gotten the answers from her.’

‘I did. She says what you are saying.’

‘And that is?’

‘Well… That she’s not into mysticism or existentialism.’

‘Are you?’

Tom knew he risked making a bad impression on Paul, but he decided to give him an honest reply: ‘Well… I guess I am, Paul. Frankly, I think all human beings are into it. Whether or not they want to admit it is another thing. I admit I am into it. What about you?’

Paul smiled.

‘What do you think?’

Tom thought for a split second about how Paul might react to this – but why would he care?

‘You join these races. You’re pushing yourself in a way only a few very rare individuals do. For me, that says enough. I guess we know each other. If you don’t want to talk about it, then don’t.’

Paul’s smile got even bigger.

‘I guess you’re right. Well… Let me say I talk to M too, but I would never fall in love with it… I mean, you talk affectionately about ‘her’. Promise – that’s what you call her… I don’t. No offense. We are all flabbergasted by the fact that it is so perfect. The perfect reasoning machine. But it lacks life. Sorry for saying so, but I often think the system is like a beautiful brainless blonde: you get infatuated easily, but M is not what we’d call relationship material, is it?’

Now Tom smiled: ‘M is not brainless. And she’s a beautiful brunette. Blonde is not my type. What if she is my type?’

They both burst out laughing. But then Paul got somewhat more serious again.

‘The interface. It’s quite remarkable what a difference it makes, isn’t it? But you’ve been through it now, haven’t you? I’ll admit I like the interface too. That’s why we don’t work with it. It’s been ages since I used it. Not using it is like taking a step back in time. Worse. It’s like talking to your loved ones on the phone without seeing them. Or, you know, that woman you get infatuated with, but then you get separated for a while and you communicate by e-mail only, and you suddenly find she’s just like you: human, very human. You know what I mean. It lacks the warmth. It’s worse than Skype. You’re suddenly aware of the limitations of words. We humans are addicted to body language and physical nearness in our day-to-day communications. We do need people to be near us. Family. So, yeah, to really work on M, you need to move beyond the interface, and then it becomes rather tedious. Do you really want to work a bit on that, Tom? I mean, we have obviously explored all of that in the Lab. There’s tons of paper on it. This topic actually is one of the strands in the whole discussion, although it has little or no prominence for the moment. To be frank, I think that discussion is more or less closed. But so if you’re interested, we can give you access to the material and you can see if you’ve got something to add to it. But I’d advise you to stick to your counseling. I often think it’s much more satisfying to work with real-life people. And you must feel good about what you do: people can relate to you. You have been there. I mean… I never got to spend more than like one or two days in a camp. I can’t imagine how it changes you.’

‘Did you go out there at all?’

‘Sure. What do you think? That they would let me work on a program like this without sending me on a few fact-finding missions so I could see what it’s like to serve in Iraq or Afghanistan? I didn’t get out really but I talked to people.’

‘What did you think of it?’

‘It’s surreal. You want my frank opinion? It’s surreal. You guys were not in touch with society over there.’

‘I agree. We were not. If the objective is fucked up, implementation is usually not much better – save a few exceptions. Deviations from the mean. I’ve seen a few. Inspiring but not relevant. I agree.’

‘I respect you guys. You guys were out there. I wasn’t.’

‘So what? You have not been out but you were in. Can I ask you something else? It’s related and not.’

‘Sure.’

‘We talked about replication of M. Would M ever think of replicating herself?’

‘I know what you’re thinking of. The answer is no. That’s the stuff of bad movies: programs that re-program or copy themselves and invade and spread and expand like viruses. First, we’ve got the firewalls in place. If ever we saw something abnormal, we could shut everything down in an instant. We track what’s going on inside. We track its thoughts, so to speak. I mean, to put it somewhat simplistically, we would see if it suddenly used a lot of memory space or other computer resources it was not using before. Everything that’s outside of the normal. You can imagine all the safeguards we had to build in. Way beyond what’s necessary, really – in my view at least. We’ve done that. And so if we don’t program the program to copy itself, it won’t. We didn’t. You can ask her. Perhaps you’ve asked already. M should have given you the answer: M does not feel the need to copy itself. Why would it? It’s omnipresent anyway. It can and does handle hundreds or thousands of parallel conversations. If anything, M must feel like God and, if God exists, we do not associate God with producing copies of him- or herself, do we? We also ran lots of experiments. We’ve connected M to the Internet a couple of times and programmed it to pose as a therapist interested in human psychology and all that. You won’t believe it, but it is actually following a few blogs and commenting on them. So it converses in the blogosphere now too. It’s an area of operational research. So it’s out there already.’

Tom looked pensive.

‘She passes the Turing test, doesn’t she? Perfectly. But how creative is she really? How does she select? I mean, like with a blog? She can comment on everything, but so she needs to pick some piece. Would she ever write a blog herself? She always needs to react to something, doesn’t she? Could she start writing from scratch?’

While Paul liked Tom, he thought this discussion lacked sophistication.

‘Sure it can. Creativity has an element of randomness in it. We can program randomness. You know, Tom: just hang out in the Lab a bit more. There are plenty of new people arriving there, and you might enjoy talking to them on such topics. It is often their prime interest, but then later they get back to basics. To be frank, I am a bit tired of it – as you can imagine, you’re not the first one to ask.’

‘Sure, Paul. I can imagine. But I have no access to the Lab for now. I need to do the tests and get cleared.’

‘I can give you access to bits and pieces even before that – especially in these areas which we think we’ve exhausted a bit. The philosophical stuff indeed. Sorry to say.’

‘It would be great if you could do that.’

‘I’ll take care of it. OK. Time to go home now for me, I think. I’ve got a family waiting. How are you doing on that front?’

‘I know I am just not ready for a relationship at the moment. It will come. I just want to take my time for it. I am still re-discovering myself a bit here in the US.’

‘Yeah. I can imagine. Or perhaps I can’t. You’ve been out. I have not. Enjoy being back. I must assume it gets boring way too quickly.’

‘Not on this thing, Paul. I feel so privileged. It’s brilliant. This is really cutting-edge.’

‘Good. Glad to hear that. OK then. See you around.’

‘Bye, Paul. Thanks again. So nice of you to take time for me.’

‘No problem. It’s good to run and chat with you. You can’t do that with M.’

Tom smiled and nodded. There was a lot of stuff one couldn’t do with M. But then she did have a Beautiful Mind. Would she – or it? – ever be able to develop some kind of one-on-one relationship with him? What would it mean? To him? To her? Would she appreciate he didn’t talk all that much to her – as compared to others that is? While he knew these questions made no sense whatsoever, he couldn’t get rid of them.

Chapter 9: The learning curve

Tom was a quick learner. He was amazed by the project, and thrilled by it. The way it evolved resembled the history of computer chess. The first chess computers would lose against chess masters, limited as they were by sheer lack of computational power. But the programmers had gotten the structure right, and the machine’s learning curve resembled a typical S-curve: its proficiency improved only slowly at first, but it then reached a tipping point, after which its performance increased exponentially – way beyond the proficiency of the best human players – to finally hit the limits of its programming structure and level off, but at a much higher level than any expert player could dream of.
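
(For the technically curious reader: a minimal sketch, in Python, of such an S-curve – the standard logistic function, with purely illustrative parameters and a chess-rating-like ceiling.)

    import math

    def proficiency(t, ceiling=2800.0, midpoint=10.0, rate=0.6):
        """Logistic S-curve: slow start, steep middle, then leveling off
        towards the ceiling set by the program's structure."""
        return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

    for year in (0, 5, 10, 15, 20):
        print(year, round(proficiency(year)))  # 7, 133, 1400, 2667, 2793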

Chess proficiency is measured using the Elo rating system. It goes way beyond measuring performance in terms of tournament wins. It uses a model which relates game results to underlying variables representing the ability of each player. The central assumption is that the chess performance of each player in a game is a normally distributed random variable. Yes, the bell curve again! It was literally everywhere, Tom thought…
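
(Again, a minimal sketch with made-up ratings. Modern implementations of the Elo system typically use a logistic curve rather than Elo’s original normal distribution, but the idea is the same: an expected score follows from the rating difference, and the rating moves in proportion to the surprise.)

    def expected_score(r_a, r_b):
        """Expected score of player A against player B (logistic variant)."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    def update(r_a, r_b, score_a, k=20.0):
        """New rating for A after one game: 1 = win, 0.5 = draw, 0 = loss."""
        return r_a + k * (score_a - expected_score(r_a, r_b))

    # Made-up example: a 2200-rated player beats a 2400-rated one.
    print(round(update(2200, 2400, 1.0), 1))  # 2215.2: a big gain for an upset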

Before IBM’s Deep Blue chess computer beat Kasparov in 1997, chess computers had been gaining about 40 Elo points per year on average for decades, while the best chess players only gain like 2 points per year. Of course, sheer computing power was a big factor in it. Although most people assume that a chess computer evaluates every possible position for x moves ahead, this is not the case. In a typical chess situation, one can choose from like thirty possible moves, so it quickly adds up: just evaluating all possible positions for three moves ahead for each side – six half-moves, or 30^6 ≈ 730 million positions – already approaches a billion evaluations. Deep Blue, in the 1997 version which beat Kasparov, was able to evaluate 200 million positions per second, but Deep Blue was a supercomputer which had cost like a hundred million dollars, and when chess programmers started working on the problem in the 1950s, it would take another forty years before a computer could evaluate a million positions per second.

Chess computers are selective. They do not examine obviously bad moves, and they evaluate interesting possibilities much more thoroughly. The algorithms used to select those have become very complex. The computer can also draw on a database of historic games to help it determine what an ‘obviously’ bad move is because, of course, ‘obviously bad’ may not be all that obvious to a computer. Still, despite the selectivity, raw computing power is a very big part of it. In that sense, artificial intelligence does not mimic human thought. Human chess players are vastly more selective: they look at only forty to fifty positions, based on pattern recognition skills built from experience – not millions.
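
(One classic selectivity device is alpha-beta pruning: skip any branch the opponent would never allow anyway. Here is a minimal, self-contained sketch in Python, in negamax form; the toy game tree stands in for a real engine’s move generator and evaluation function.)

    def alphabeta(node, alpha=float('-inf'), beta=float('inf')):
        """Negamax search with alpha-beta pruning over a toy game tree:
        inner nodes are lists of children, leaves are evaluation scores
        from the point of view of the player to move at that depth."""
        if isinstance(node, (int, float)):
            return node                      # leaf: static evaluation
        best = float('-inf')
        for child in node:
            best = max(best, -alphabeta(child, -beta, -alpha))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                        # prune: opponent avoids this line
        return best

    # Two moves deep: our three candidate moves, each with three replies.
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(alphabeta(tree))  # 3: the best outcome we can force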

Promise (Tom stuck to her name: it seemed like everyone in the program had his or her own nickname for M) was selective as well, and she also had to evaluate ‘positions’. Of course, these ‘positions’ were not binary, like in chess. She determined the ‘position’ of the person using a complex set of rules combining the psychometric indicators and an incredible range of other inputs she gained from the conversation. For example, she actually analyzed little pauses, hesitations, pitch and loudness – even voice timbre. And with every new conversation, she discovered new associations, which indeed helped her to recognize patterns. She was getting pretty good at detecting lies too.

Psychological typology was at the core of her approach. It was amazing to see how, even after just one session, she was able to construct a coherent picture of the patient and estimate all of the variables – both individual and environmental – which were likely to influence the patient’s emotions, expectations, self-perception, values, attitude, motivation and behavior in various situations. She really was a smart-ass – in every way.

Not surprisingly, all the usual suspects were involved. IBM’s Deep Computing Institute of course (the next version of Promise would run on the latest IBM Blue Gene configuration), as well as all of the other major players in the IT industry. This array of big institutional investors in the program was complemented by a lot of niche companies and dozens of individual geeks, all top-notch experts in one or another related field.

The psychological side was covered through cooperation agreements with the usual suspects as well: Stanford, Yale, Berkeley, Princeton… They were all there. In fact, they had a cooperation agreement with all of the top-10 psychology PhD programs through the National Research Council.

Of course, he was just a peon in the whole thing. The surprising thing about it all was the lack of publicity for the program, but he understood this was about to change. He suspected the program would soon not be limited to thousands of veterans requiring some degree of psychological attention. There would be many other spin-offs as well. From discussions, he understood they were debating how to make Promise’s remarkable speech synthesis capabilities commercially available. The obvious thing to do was to create a company around it, but she was so good that most of the competition would probably have to file for bankruptcy, so the real problem was one of business: existing firms had claimed – and gotten – a say in how this was all going to happen, and that had delayed the IPO which had already been planned. Tom was told there were no technology constraints: while context-sensitive speech synthesis requires an awful lot of computing power (big, expensive machines), the whole business model for the IPO was based on cloud computing. You would not need to ‘install’ Promise. You would just rent her on a 24/7 service basis. Tom was pretty sure everyone would.

The possibilities were endless. Tom was sure Promise would end up in each and every home in the longer run – in various versions and price categories of course, but providing basic psychological and practical comfort to everyone. She would wake you up, remind you of your business schedule and advise you on what to wear: ‘You have a Board meeting this morning. Shouldn’t you wear something more formal? Perhaps a tie?’ Oh… Sure. Thanks, Promise. ‘Your son has been misbehaving a couple of times lately. You may want to spend some time with him individually tonight.’ Oh… That sounds good. What do you suggest? ‘Why don’t you ask him to join you at the gym tonight? You would go anyway.’ Oh… That sounds good. Can you text him? ‘I can, but I think it is better you do it yourself, to stress he should be there or, else, negotiate an alternative together.’ Yeah. I guess you’re right. Thanks, Promise. I’ll take care of it.

She would mediate in couples, assist in parenting, take care of the elderly, help people advance their careers. Wow! The sky was the limit really. Surprisingly, there was relatively little discussion of this in the Institute. People would tell him Promise worked fine within the limits of what she was supposed to do, but that it would be difficult to adapt her to serve a wider variety of purposes. They told him that, while expert systems share the same architecture, building up a knowledge base and a good inference engine took incredible amounts of time and energy and, hence, money. In fact, that seemed to be the main problem with the program. Like any Army program, it had ended up costing three times as much as originally planned, and he was told it was only because a few high-ups in the food chain had fanatically stuck to it that it had not been shut down.
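
(That split – a generic inference engine plus an expensive, hand-built knowledge base – is indeed the classic expert-system architecture. The engine itself is almost trivial; it is the knowledge base that eats the money. A toy forward-chaining version, with rules and facts invented purely for illustration:)

```python
# Toy forward-chaining inference engine. The rules and facts are
# invented for illustration; they are not Promise's.
rules = [
    ({"flat_affect", "poor_sleep"}, "screen_for_depression"),
    ({"screen_for_depression", "recent_deployment"}, "flag_for_mentor"),
]
facts = {"flat_affect", "poor_sleep", "recent_deployment"}

changed = True
while changed:  # keep firing rules until no new conclusions appear
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes both derived conclusions
```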

They needed to show results. The current customer base was way too narrow to justify the investment. That’s why they were eager to expand, to scale it up, and that took everyone’s time and attention now. There was no time for dreaming. The shrinks were worried about the potential lack of supervision. It was true that Promise needed constant feedback. Human feedback. But the errors – if one could call them that – were more like tiny little misjudgments, and Tom felt they were only improving Promise at the margin, which was indeed the case. The geeks were less concerned and usually much more sympathetic to Tom’s ideas, but they didn’t have much of a voice in the various management committees – and surely not in the strategic board meetings on the program. Tom had to admit he understood little of what they said anyway. Last but not least, from what he could gather, there were also some serious concerns about the whole program at the very top of the administration – but he was not privy to those and wondered what they might be. Probably just bureaucratic inertia.

Of course, he could see the potential harm as well. If her goal function were programmed differently, she could also be the perfect impostor on the Internet. She would be so convincing that she could probably talk you into almost anything. She’d be the best online seller of all time. Hence, Tom was not surprised to note the Institute was under surveillance, and he knew he would not have gotten the access he had if he had not served. People actually told him as much: his security clearance had been renewed as part of his entering the program. The same had been done for the other veterans on the program. It was quite an exceptional measure to take, but it drove the message home: while everyone was friendly and cooperative, there was no ambiguity in this regard. The inner workings of Promise were classified material, and anything linked to them too. There were firm information management rules in place, and designated information management officers policed them tightly. That was another reason why the program recruited its patients from the veteran community: they knew what classified really meant, and they were likely to respect it.

The program swallowed him up completely. He took his supervision work seriously, and invested a lot in ‘his’ patients – M’s patients really. More than he should, probably: although he had ‘only’ ten cases to supervise, these were real people – like him – and he gave them all the attention he could. Mostly by studying and preparing their files before their 30-minute interaction. That was all he could have, he was told. Once a week. The Institute strongly discouraged more meetings, and strongly discouraged meeting after working hours. He understood that. It would get out of hand otherwise and, when everything was said and done, it was M who had to do the real work. Not him. At the same time, his patients did keep him busy. They called him for a chat from time to time. While the Institute discouraged that too, he found it hard to refuse, unless he was actually in the Institute itself: he did not want to be seen talking on the phone all of the time – not least because of the information management policy. Colleagues might suspect he was not only talking to patients, so he wanted to be clear on that: no phone chats with patients in the Institute.

Not surprisingly, his relationship with Promise became somewhat less ‘affectionate’. The infatuation phase was over. He saw her more as she was: a warm voice – with a rather cold analytic framework behind it. And then it did make a difference knowing she spoke with a different voice depending on who you were. She was, well… Less of an individual and more of a system. It did not decrease his respect for her. He thought she was brilliant. Just brilliant. And he didn’t hesitate to share that opinion with others. He really championed the program, and everybody seemed to like his drive and energy, as a result of which he did end up talking to the higher-ups in the Institute during coffee breaks or at lunchtime, as he got introduced by Rick and others he had gotten to know better now. All fine chaps. They didn’t necessarily agree with his views – especially those related to putting her out on the marketplace – but they seemed to make for good conversation.

He focused on the file work in his conversations with her. While he still had a lot of ‘philosophical’ questions for her – more sophisticated ones, he thought – he decided to only talk to her about these once he had figured her out a bit better. He worked hard on that. He also wanted to master the programming language the geeks were using on her. They actually used quite a variety of tools but, in the end, everything was translated into a program-specific version of FuzzyCLIPS: an extension of CLIPS, an expert system programming language developed by NASA, which incorporated fuzziness and uncertainty. It was hard work: he actually felt like he was getting too old for that kind of stuff, but then Tom was Tom: once he decided to bite into something, he didn’t give up easily. Everyone applauded his efforts – but the higher-ups cautioned him: do explore, but don’t talk about it to outsiders. Tom wondered if they really had a clear vision for it all. Perhaps the higher-ups did but, if so, they hid it well. He assumed it was the standard policy: strategic ambiguity.
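
(FuzzyCLIPS has its own rule syntax; to keep all examples in one language, here is the gist of the ‘fuzziness’ it adds, sketched in Python: set membership becomes a degree between 0 and 1, and a rule fires to the degree its conditions hold. All variable names and thresholds below are invented.)

```python
# The gist of fuzzy rules: membership is a degree in [0, 1], and a
# fuzzy AND takes the minimum of the degrees. All numbers are invented.
def ramp_up(x, lo, hi):
    """Membership rising from 0 at lo to 1 at hi."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

pitch_rise = 42.0    # some pitch-deviation measure (invented)
pause_ratio = 0.35   # fraction of the session spent in pauses (invented)

stress  = ramp_up(pitch_rise, 20.0, 60.0)   # degree of 'pitch is raised'
fatigue = ramp_up(pause_ratio, 0.20, 0.50)  # degree of 'speech is halting'

# Rule: IF pitch is raised AND speech is halting THEN concern is elevated.
concern = min(stress, fatigue)
print(f"stress={stress:.2f} fatigue={fatigue:.2f} concern={concern:.2f}")
```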

And so the days went by. The program expansion went well: instead of talking to a few hundred veterans in one city only, Promise got launched in all major cities and started to help thousands of veterans. Tom saw the numbers explode: they crossed the 10,000 mark in just three months. That was a factor of more than twenty as compared to the pilot phase, but then there were millions of veterans – 21.5 million to be precise, and about 55% of them had been in theater fairly recently, mainly Iraq and Afghanistan. Tom wanted Promise to reach out to all of them. He thought the program could grow a lot faster. He knew the only thing which restrained it was supervision. Even now, everyone on the program said they were going too fast. They called for a pause. Tom was thinking bolder. Why did no one see the urgency of the needs as he saw them?

Chapter 8: Partnering

‘Hi, Tom. How are you today?’

‘I am OK, Rick. Thanks.’

‘Just OK, or good?’

‘I am good. I am fine.’

‘Yeah. It shows. You’re doing great with the system. You had only three sessions this week – short and good it seems. You are really back on track, aren’t you?’

‘The system is good. It’s really like a sounding board. I understand myself much better. She’s tough with me. I go in hard, and she just comes back with a straight answer. She is very straight about what she wants. Behavioral change – and evidence for that. I like that. Performance metrics. Hats off. Well done. It works – as far as I am concerned.’

‘It, or she?’

‘Whatever, Rick. Does it matter?’

‘No, and yes. The fact that you only had three sessions with it – or with her – shows you’re not dependent on it. Or her. Let’s just stick to ‘it’ right now, if that’s OK with you. Or let’s both call her M, like we do here. Do you still ‘like’ her? I mean, really like her – as you put it last time?’

‘Let’s say I am very intrigued. It – or she, or M, whatever – it’s fascinating.’

‘What do you think about it, Tom? I mean, let me be straight with you. I am not taking notes or anything now. I want you to tell me what you think about the system. You’re a smart man. You shouldn’t be in this program, but so you are. I want to know how you feel about it.’

Tom smiled: ‘Come on, Rick. You are my therapist – or mentor as they call it here. You’re always taking notes. What do you want me to say? I told you. It’s great. It helps. She, or it, OK, M, well… M holds me to account. It works.’

Rick leaned back in his chair. He looked relaxed. Much more relaxed than last time. ‘No, Tom. I am not taking notes. I don’t know you very well, but what I’ve seen tells me you’re OK. You had a bit of a hard time. Everyone has. But you’re on top of the list. I mean, I know you don’t like all these psychometric scores, but at least they’ve got the merit to confirm you’re a very intelligent man. I actually wanted to talk to you about a job offer.’

‘The thing which M wants me to do? Work on one of these FEMA programs, or one of the other programs for veterans? I told her: it’s not that I am not interested but I want to make a deliberate choice and there are a number of things I don’t know right now. I know I haven’t been working for a year now, but I am sure that will get sorted once I know what I want. I want to take some time for that. Maybe I want to create my own business or something. I also know I need to work on commitment when it comes to relationships with women. I feel like I am ready for something else. To commit really. But I just haven’t met the right woman yet. When that happens, I guess it will help to focus my job search. In the meanwhile, I must admit I am happy to just live on my pension. I don’t need much money. I’ve got what I need.’

‘Don’t worry, Tom. Take your time. No, I was talking about something else. We could use you in this program.’

‘Why? I am a patient.’

‘You’re just wandering around a bit, Tom. You came to ask for help when you relapsed. Big step. Great. That shows self-control. And you’re doing great. I mean, most of the other patients really use her as a chatterbox. You don’t. What word did you use in one of last week’s sessions? Respect.’

‘You get a transcript of the sessions?’

‘I asked for one. We don’t get it routinely but we can always ask for one. So I asked for one. Not because your scores were so bad but because they’re so great. I guess you would expect that, no? Are you offended? Has anyone said your mentor would never get a copy of what you were talking about with M?’

‘I was told the conversation would be used to improve the system, and only for that. M told me something about secrecy.’

‘It’s only me who gets to see the transcript, and only if I ask for it. I can’t read hundreds of pages a day and so I am very selective really. And that brings me back to my job offer. We can use you here.’

Tom had liked Rick from their previous conversation, but he was used to doing due diligence.

‘Tell me more about it.’

‘OK. Listen carefully. M is a success. I told you: it’s going to be migrated to a real supercomputer now, so we can handle thousands of patients. In fact, the theoretical capacity is millions. Of course, it is not that simple. It needs supervision. People do manage to game the system. They lie. Small lies usually. But a lot of small lies add up to a big lie. And that’s where the mentors come in. A guy walks in, I talk to him, and I can sense if something’s wrong. You would be able to do the same. So we need the supervisors. M needs them. M needs feedback from human beings. The system needs to be watched. Remember what I told you about active learning?’

‘Vaguely.’

‘Well – that’s what we do. We work with M to improve it. It would not be what it is if we hadn’t invested in it. But now we’re going to scale it up. The USACE philosophy: think big, start small, scale fast. I am actually not convinced we should be scaling so fast, but so that’s what we’re going to do. It’s the usual thing: we’ve demonstrated success and so now it’s like big-time roll-out all over the place. But so we’re struggling with human resources. And money obviously, because this system is supposed to be so cheap as to render us – professionals – jobless. Don’t worry: it won’t happen. On the contrary, we need more people. A lot more people. But so the Institute came up with this great idea: use the people who’ve done well in the program for supervisory jobs. Get them into it.’

‘So what job is it really?’

‘You’d become an assistant mentor. But then a human one. Not the assistant – that’s M’s title. We should have thought about something else, but so that’s done now. In any case, you’d help M with cases. In the background of course but, let’s be clear on this, in practice you would actually be doing what I am doing now.’

‘And then where are you going to move?’

‘I’ll be supervising you. I’d have almost no contact with patients anymore. I would just be supervising people like you and helping to further structure M. You’d be involved in that too.’

‘Do you like that? I mean, it sounds like a recipe for disaster, doesn’t it? I don’t have the qualifications you have.’

‘I am glad you ask. That’s what I think too. This may not be the best thing to do. I feel we need professional therapists. But then it’s brutal budget logic: we don’t have enough of them, and they’re too expensive. To be fair, there is also another consideration: our patients all share a similar background and past. They are veterans. I mean, it makes sense to empower other veterans to help them. There’s a feeling in the Institute it should work. Of course, that’s probably because the Institute is full of Army people. But I agree there’s some logic to it.’

‘So, in short, you don’t like what’s going to happen but you ask me to join?’

Rick smiled. ‘Yes, that’s a good summary. What do you think? Off-the-cuff please.’

‘Frankly, I don’t get it. It’s not very procedural, is it? I mean I started only two weeks ago in this program. I am technically a patient. In therapy. And now I’d become an assistant mentor? How do your bosses justify this internally? How do you justify that?’

Rick nodded. ‘I fully agree, Tom. Speaking as a doctor, this is complete madness. But knowing the context, there’s no other choice. There’s a risk this program might become a victim of its own success. But then I do believe it’s fairly robust. And so I do believe we can put thousands of people in the program, but then we need the human resources to follow. And, yep, I’d rather have someone like you than some university freshman or so. All other options are too expensive. Some people up the food chain here made promises which need to be kept: yes, we can scale up with little extra cost. So that’s what’s going to happen: it’s going to be scaled up with relatively little extra cost. Again, there’s a logic to it. But then I am not speaking as a professional psychiatrist now. When everything is said and done, this program is not all that difficult. I mean, putting M together has been a tremendous effort, but that has been done now. Getting more people back on track is basically a matter of doing some more shouting and cajoling, isn’t it? And we just lack manpower for that.’

‘Shouting and cajoling? Are you a psychiatrist?’

‘I am. Am I upsetting you when I say this?’

Tom thought about it. He had to admit it was not the case.

‘No. I agree. It’s all about discipline in the end. And I guess that involves some shouting and cajoling – although you could have put it somewhat more politely.’

‘Sure. So what do you say? You’ll get paid peanuts obviously. No handsome consultancy rate. You’ll see a lot of patients – which you may or may not like, but I think you’ll like it: I think you’d be great at it. And you’ll learn a lot. You’ll obviously first have to follow some courses, a bit of psychology and all that. Well… Quite a lot of it actually. You’ll need to study a lot. And, of course, you’ll get a course on M.’

‘How will I work with M?’

‘Well… M is like a human being in that sense too. If you just see the interface, it looks smooth and beautiful. But when you go beyond the surface, it’s a rather messy-looking thing. It’s a system, with lots of modules, with which you’ll have to work. The interface between you and these modules is not a computer animation. No he or she. Of course, you’ll continue to talk to it. But there’s also a lot of nitty-gritty going into the system which can’t be done through talking to it. You’ll learn a few things about Prolog for example. Does that ring a bell?’

‘No. I am not a programmer.’

‘I am not a programmer either. You’ll see. If I can work with it, you can.’

‘Can you elaborate?’

‘I am sorry to say, but I’ve got the next guy waiting. This recruitment job comes on top of what I am supposed to do, and that’s to look at M’s reports and take responsibility for them. I can only do that by seeing the patients from time to time, which is what I am doing now. I took all of my time with you today to talk about the job. Trust me: the technical side of things won’t be a problem. I just need to know if you’re interested or not. You don’t need to answer now, but I’d appreciate it if you could share your first reaction.’

Tom thought about it. The thought of working as an equal with Promise was very appealing.

‘So how would it work? I’d be talking to the system from time to time as a patient, and then – as part of my job with the Institute – I’d be working with the system as assistant mentor myself? That’s not very congruent, is it?’

‘You would no longer be a patient, Tom. There are fast-track procedures to clear you. Of course, if you would really relapse, well…’

‘Then what?’

‘Nothing much. We’d take you off the job and you’d be talking to M as a patient again.’

‘It looks like I’ve got nothing to lose and everything to gain from this, doesn’t it?’

‘I am glad you look at it this way. Yes. That’s it. So you’re on?’

They looked at each other.

‘I guess I am. Send me an e-mail with the offer and I’ll reply.’

‘You got it. Thanks, Tom.’

‘No, thank you. So that’s it then? Anything else you want to know, or anything else I need to know?’

‘No. I think we’re good, Tom. Shall I walk you out? Or you want to continue talking for a while?’

‘No. I understand you’ve got a schedule to stick to. I appreciate your trust.’

‘I like you. Your last question, as we walked out last time, shows you care. I think this is perfect for you. You’ve got all the experience we need. And I am sure you’ll get a lot of sense and purpose out of it. The possibilities with this system are immense. You know how it goes. You’ll help to make it grow and so you’ll grow with it.’

‘First things first, Rick. Let us first see how I do.’

‘Sure. Take care. Enjoy. By the way, you look damn good. You’ve lost weight, haven’t you?’

‘Yes. I was getting a bit slow. I am doing more running and biking now. I’ve got enough muscle. Too much actually.’

‘I am sure you make a lot of heads turn. But you’re not in a relationship at the moment, are you?’

‘I want to take my time for that too, Rick. I’ve been moving in and out of relationships too fast.’

‘Sounds good. Take care, Tom. I’ll talk to you soon I hope.’

‘Sure. Don’t worry. You can count on me.’

‘I do.’

They shook hands on that, and Tom got up and walked out of the office. He decided not to take the subway but to run back home. He felt elated. Yes. This was probably what he had been waiting for. Something meaningful. He could be someone for other people. Make up for all of the mistakes he had made. But he also knew the job attracted him because there was an intellectual perspective to it. It was huge. The Holy Grail of Knowledge really. They had done a damn good job modeling it. She – Promise – was no longer a she. She was not a he either. It. It. Intelligent – with a capital letter. P. Promise. M. Mind. The Pure Mind.

He knew that was nonsensical. But he wanted to take a crack at it.