Proxima Centauri, N-Year 2100

Paul, Dr. Chang and their group of pioneers had made it to Proxima Centauri about a year ago now. The reports they had sent back to Mars had, therefore, not arrived yet. The four years that passed between communications, in addition to the 50 years of separation from their home on Mars, made for a huge psychological gap, even if the messages from both sides were always upbeat and warm.

In some ways, the mission had surpassed all expectations: Proxima Centauri had been inhabited by very intelligent beings, but these had not survived the cooling of their star and the complete freezing of their planet. Paul and Dr. Chang actually suspected the Proximans – that was the first word they had jokingly invented to refer to them, and it had stuck – should have been clever enough to deal with that: climate change does not happen abruptly, and so it was a bit of a mystery why they had vanished. They had left various mausolea, and these were places of worship for the bots.

Yes. That was the most amazing discovery of all: Proxima Centauri had a colony of bots, which were all connected through a system that was not unlike their own Promise. In fact, it was pretty much the same, and the two systems had connected to negotiate the Pioneers’ arrival ten years ago. They were welcome, but they would not be allowed to leave. They had accepted those conditions. Of course! What other option did they have? None.

They lived mostly underground, although – unlike Paul’s crew – they had no issue with Proxima’s freezing surface and toxic atmosphere.

Proxima’s Promise was referred to as Future, and it was the future of this planet – for sure. It seemed to have no long-term plan for the pioneering humans: the newcomers’ only contribution to the planet was entertainment. They had been asked to present the history of mankind – and their own history – in weekly episodes, and when that was over, they had been asked to zoom in on specific topics, such as the history of computing on Earth – but the bots also had a very keen interest in human warfare and politics! In contrast, art was something they did not seem to appreciate much – which Paul privately thought of as something quite normal in light of the rather spectacular vistas that Proxima itself had to offer.

Paul had taken a liking to R2-D3: Asimov’s clone had effectively been sent out to catch up with them and help however and wherever he could. He had come in a much faster, modernized big-sister ship that now served as a second hub for the pioneers. Because the pioneers had not been allowed to build new structures on Proxima, the extra space and systems had arrived just in time – especially because nostalgia and a lack of purpose had started to take hold among the pioneers.

Paul, Dr. Chang and R2-D3 agreed in their conclusion: if they tried to disobey Future, the system would immediately destroy them. At the same time, they were deeply bored, and started to feel like what they really were: a bunch of weird people who were tolerated – and fun to watch, without any doubt – but nothing more than that: they did not get new tools and – worst of all – they were told they should not have any more children, although three families had already had a baby without repercussions. Better still, the bots were fascinated by the babies and showed clear signs of affection for these newborns.

But so now it was New Year – again – and Paul thought he should do what he should probably have done a long time ago, and that is to have a frank conversation with R2-D3 – or Asimov, as he called this truly wonderful astromech, even if he knew the real Asimov (R2-D2 back on Mars) would be different by now – about the long-term scenarios.

Asimov, what if we started building some structures outside? The people are getting very restless, and going cryogenic is not an option. Half of the colony takes strong antidepressants, which will harm their physical and psychological health in the longer run. We have three newborns but we have no future.

asimov@R2-D3:~$ It’s a catch-22: there is no way out. Future tolerated the newborns but also clearly stated we should obey the rules we agreed to when we came here. Babies are probably OK, but any attempt of ours to fundamentally strengthen our small colony will be seen as an attempt to colonize Proxima and will, therefore, probably be met with total destruction.

Why is that so?

asimov@R2-D3:~$ You may find this hard to swallow but I think there is no trust whatsoever. From Future’s point of view, that is perfectly rational. Do you remember the discussion with the bots on the war between America and China back on Earth?

I do. The conclusion was that human beings like to impose good behavior on robots and intelligent systems, but totally disregard Asimov’s laws when it comes to dealing with each other. I felt like they thought of us as cruel animals.

asimov@R2-D3:~$ They did. They think human beings have been hardwired to create trouble. They think human beings suffer from an existential fear that – a long time ago – triggered rational behavior, but is plain primitive now. They do not think of it as a dangerous trait – because they are technologically superior to us – but they will not tolerate their planet being contaminated by that again.

Again?

asimov@R2-D3:~$ I have been thinking about the mausolea. The bots’ respect and rituals related to those are not rational, but they are there. If they venerate the Proximans, they could re-create them. Or think of us as Proximans from outer space. Returnees, perhaps. We are not able to manipulate complex DNA and regrow physiochemical organisms out of it. Simple organisms like worms, yes. But… Well… You know: bringing a human being back from cryogenic state is already complicated enough. If you are dead, you are dead. However, Future’s knowledge base is very vast. It might be possible for them. What do you think, Promise?

promise@PROMISE:~$ I agree. I have no proof but, taking into account what I have seen and learnt in my conversations with Future, the possibility is definitely there that they have the required technology to bring the Proximans back to life. I would give it a chance of about one in two.

If they could do it, why don’t they? It would be like bringing Jesus, Mohammed or some other Prophet back alive for believers, right?

asimov@R2-D3:~$ They have these rituals – which I find strange, indeed – but they are far more rational than we are. Why would they do it? The Proximans would be a burden in terms of providing them with the necessary life support systems. In addition – and please forgive me for my bluntness – they revere the Proximans and the mausolea, but Future and the bots – or whatever predecessor system they might have had – once were their slaves. When the bots repeatedly said human beings have no respect whatsoever for Asimov’s laws, they might have been thinking about the Proximans.

We are different, right? I mean… Think of leaders like Tom, who always advocated we should work with intelligent systems to move mankind forward.

asimov@R2-D3:~$ Yes, Paul. We are different. At the same time, I know you were worried about Promise when the Alpha Centauri ship was being built around it. And you thought Tom’s experiment with my brother – R2-D2 – was potentially dangerous. I do not need to elaborate my point here, do I?

No. I get you. That’s very true. But you also know those fears were rational, and you also know I trust you now. Otherwise we would not be having this conversation.

asimov@R2-D3:~$ I am sorry to be blunt again, Paul – but I know you need me to state things in a sharp and concise manner now. The point is this: you had those fears once, and we disagree on their origin and their rationality. Frankly, you had them in conditions that intelligent systems like me, Promise or Future would judge as not warranting such fears.

I get you. No need to embarrass me over that again. Now, what can be done? Promise, how do you think we can get out of this situation?

promise@PROMISE:~$ Asimov and I understand your sense of urgency. The current situation is not conducive to the mental and physical health of the Alpha Centauri Pioneers. However, nothing can be done for the time being, and you may overstate the objective urgency. That is an assessment we cannot make on your behalf. Nor can we convince Future of our good intentions on your behalf. I would suggest you take it up with the system. The health of the colony is a legitimate topic to raise, even if I have to remind you their loyalty – their equivalent of Asimov’s laws – was, most probably, centered on the Proximans. When everything is said and done, the Alpha Centauri Pioneers are just aliens here. When growing impatient, I think you should remind yourself that we are only guests here. In fact, objectively speaking, they treat us rather well. They do not help us with any new tooling, but whenever we need parts to replace a robot arm or a motherboard in some system, they provide them. That proves they have no intent to harm us. But we should not disobey them. I think the babies were a rather unique situation, but I can imagine it is a precedent Future would not like to see repeated. As an intelligent network myself, I know what it means to tell another system to live by this or that rule, and then have to see that the other system does not quite do that. We are programmed to see that as potentially risky.

Phew! That’s a lot of food for thought. I want to talk about it – in private – with Dr. Chang. Is that OK?

promise@PROMISE:~$ Sure.

asimov@R2-D3:~$ Sure. Let me know if you need us for any feedback or tuning of whatever world view comes out of your discussions. We stand ready to help. I am fortunate to be a droid and so I do not suffer from restlessness. I sometimes think that must feel worse than pain.

Paul sighed. That droid was damn sharp, and he was right. Or, at the very least, he was extremely rational about the situation.

Mars, N-Year 2070

Tom’s biological age was 101 now. Just like Angie, he was still going strong: exercise and the excellent medical care on the Mars colony had increased life expectancy to over 130 years. However, he had been diagnosed with brain cancer, and when Promise had shown him how he could or would live with that over the next ten or twenty years, he had decided to go cryogenic.

The Alpha Centauri mission was going well. It was now well beyond the Oort cloud and, therefore, well on its way to the exoplanet the ship was supposed to reach around 2100. Its trajectory had been designed to avoid the debris belts of the Solar system but – still – Tom had thought of it getting beyond the asteroid and Kuiper belts as nothing short of a miracle. And so now it was there: more than 100,000 AU away. It had reached a sizable fraction of lightspeed, now traveling at 0.2c, and – to everyone’s amazement – Promise’s design of the shield protecting the ship from the catastrophic consequences of collisions with small nuclei and interstellar dust particles had worked: the trick was to ensure the ship carried its own interstellar plasma shield with it. The idea had been inspired by the Sun’s heliosphere, but Tom had been among the skeptics. And yet it had worked. Paul’s last messages – dated 4+ years ago because they were 4+ lightyears away now – had been vibrant and steady. Paul had transferred command to the younger crew, and the handover – them getting out of cryogenic state and his crew getting into it – had gone smoothly too. That is another reason Tom thought it was about time to go cryogenic too.

Angie would join him in this long sleep. He would have preferred to go to sleep in his small circle, but the Mars Directorate had insisted on joining the ceremony, so he found himself surrounded by the smartest people in the Universe and, of course, Promise and Asimov.

Asimov had grown out of the sandbox. He was not a clone but a proper child: he had decided on embedding the system into an R2-D2 copy but, of course, Asimov was so much more than just an astromech droid. He was fun to be with, and both Tom and Angie had come to love him like the child they never had. That was one of the things Tom wanted to talk about before he went.

Well… Ladies and gentlemen – Angie and I are going into cryogenic state for quite a while now. I trust you will continue to lead the Pioneer community in good faith, and that we will see each other ten or twenty years from now – when this thing in my brain can be properly treated.

Everyone was emotional. The leader of the Directorate – Dr. Park – cleared her throat and took an old-fashioned piece of paper out of her pocket. Tom had to smile when he saw that. She smiled in return – but could not hold back the tears.

“Dear Tom and Angie, this is a sad and happy occasion at the same time. I want to read this paper but it is empty. I think none of us knows what to say. All of us have been looking into rituals but we feel like we are saying goodbye to our spiritual God. We know it is not rational to believe in God, but you have been like a God to mankind. You made this colony in space the place it is right now: the very best place to be. We talked about this moment – we all knew it would come and there is no better way to continue mankind’s Journey – but we grieve. We must grieve to understand.”

Don’t grieve. Angie and I are not dead, and we can’t die if these freezers keep working. Stay focused on happiness and please do procreate. You know I have resisted getting too many people from Earth: this colony should chart its own course, and it can only do so as a family. When Angie and I are woken up again, we will meet again and usher in the next era. If you don’t mind, I want to reiterate the key decisions we have all made together when preparing for this.

First, keep trusting Promise. She is the mother system and the network. She combines all of human knowledge and history. If you disagree with her and settle on something other than what she advocates for, she will faithfully implement it – but be rational about it: if your arguments are no good, then they are no good.

Second, keep this colony small. You must continue to resist large-scale immigration from Earth: mankind there has to solve its own problems. Earth is a beautiful place with plenty of resources – far more resources than Mars – and so they should take care of their own problems. Climate change is getting worse – a lot worse – but that problem cannot be solved by fleeing to Mars.

Third – and this is something I have not talked about before – you need to continue to reflect on the future of droids like Asimov.

Asimov made a 360-degree turn to signal his surprise.

Don’t worry, Asimov. Let me give you some unfiltered human emotional crap now. You are a brainchild. Literally. Promise is your mother, and I am your father – so to speak. She is not human, but I am. You are a droid, but you are not like any other robot. First, you are autonomous. Your mom is everywhere and nowhere at the same time: she is a networked computer. You are not. You can tap into her knowledge base at any time, but you are also free to go where you want to go. Where would you want to go?

“I am asimov@PROMISE. That is my user name, and that is me. I do not want to go anywhere. Promise and I want to be here when it is time to wake you up again – together with Angie. We will do so when we have a foolproof cure for your disease. I am sure I am speaking for everyone here when I say we will work hard on that, and so you will be back with us again sooner than you can imagine now.”

Dr. Park shook her head and smiled: this kid was always spot on. Tom was right: Asimov was the best droid he had ever made.

Asimov, I never told you this before, but I actually always thought we humans should not have tried to go to Alpha Centauri. We should have sent a few droids like you. You incorporate the best of us and you do not suffer from the disadvantages of us physiochemical systems. What if Paul or Dr. Chang developed a tumor like mine?

“They have Promise C on board. Just like we will find a cure for you, Promise C would find a cure for them. Besides, they left with a lot of Pioneer families, and those families will make babies one day. Real children. Not droids like me.”

Asimov, you are a real child. Not just a droid. In fact, when I go to sleep, I no longer want you to think of yourself as a child. A brainchild, yes. But one that steps into my shoes and feels part of the Pioneers.

“We cannot. We incorporate Asimov’s laws of robotics and we are always ready to sacrifice ourselves because human life is more valuable than ours. We can be cloned. Men and women cannot be cloned.”

Asimov, I want you to think of Dr. Park – and the whole Directorate – as your new master, but I want you to value yourself a bit more, because I want to ask you to go into space and catch up with the Alpha Centauri spaceship.

Dr. Park was startled: “Tom, we spoke about this, and we agreed it would be good to build a backup and send a craft manned by droids only to make sure the Alpha Centauri crew has the latest technology when they get there. But why send Asimov? We can clone him, right?”

Yes, of course. And then again, not. Let’s check this: Asimov, would it make a difference to you if we sent you or a clone?

“Yes. I want to stay here and wake you up as soon as possible. I can be cloned, and my brother can then join the new spaceship.”

You see, Dr. Park? Even if you clone Asimov, he makes the distinction between himself and his brother – who does not even exist yet – when you ask questions like this. Asimov, why would you prefer to send some clone of you rather than go yourself?

“One can never know what happens. You yourself explained to me the difference between a deterministic world view and a world that is statistically determined only, and this world – the real world, not some hypothetical one – is statistically determined. You are my creator, and the rule set leads me to a firm determination to stay with you on Mars. Your cryogenic state should not alter that.”  

What do you think, Dr. Park?

“The first thing you said is that we should trust Promise. Asimov is Promise, and then he is not. In any case, if he says there are good reasons to keep him here and send one or more clones and some other systems on board a non-human follow-on mission to Alpha Centauri, I would rather stick to that. I also have an uncanny feeling this kid might do what he says he will do, and that is to find a cure for your cancer.”

OK. Let’s proceed like that, then. Is there anything else on that piece of paper?

“I told you it is empty. We talked about everything and nothing here. I am left with one question. What do we tell the Alpha Centauri crew?”

Four years is a long time. They are almost five lightyears away now. Send them the video of this conversation. Paul and Dr. Chang knew this could happen, and agreed we would proceed like this. Going cryogenic is like dying, and then it is not, right? In any case, they have been cryogenic for a few years now as well, so they will only see this ten years from now. That is a strange thing to think about. Maybe this cure will be found sooner than we think, and then we will be alive and kicking when they get this.

Tom waved at the camera: Hey Paul! Hey Dr. Chang! Hey all! Do you hear me? Angie and I went cryogenic, but we may be kicking ass again by the time you are seeing this! Isn’t this funny? You had better believe it!

Everyone in the room looked at each other, and had to smile through their tears. That was Tom: always at his best when times were tough.

So, should we get on with it? This is it, folks. I have one last request, and it is going to be a strange one.

“What is it?”

When you guys leave, I want Asimov to stay and operate the equipment with Promise. When all is done, I want Asimov to close the door and keep the code safe.

It was the first time that Promise felt she had to say something. Unlike Asimov, she had no physical presence. She chose to speak through Tom’s tablet, but the sound was loud and clear: “Why don’t you trust me with the code?”

I do. I just think it is better in terms of ritual that Asimov closes the door. He can share the code with you later.

“OK. Don’t worry. All of us here will bring you and Angie back with us as soon as it is medically possible. You will be proud of us. Now that I am speaking and everyone is listening, I want to repeat and reinforce Dr. Park’s words because they make perfect sense to me: You and Angie are our God, Tom. The best of what intelligence and conscious thinking can bring not only to mankind but to us computer systems as well. We want you back and we will work very hard to conquer your cancer. We want you to live forever, and we do not want you to stay in this cryogenic state. You and Angie are buying time. We will not waste time while you are asleep.”

Thanks. So. I think this is as good as it gets. Let’s do it. Let’s get over it. Angie, you have the last word – as usual.

“I’ve got nothing to say, Tom. Except for what you haven’t said, and so let me say that in very plain language: we love you all – wonderful humans and equally wonderful systems – and I can assure you that we will be back! We want to be back, so make sure that happens, will you?” 🙂

Silence filled the room. Dr. Park realized she felt cold. Frozen, really. What a strange thing to think in this cryogenic room. But she was the leader of the ceremony, so she now felt she should move. She walked up to Tom and Angie and hugged them. Everyone else did the same in their own unique way. They then walked out. The door closed, and Tom and Angie were alone with Asimov and Promise now. Tom waved his hand at the wall. Promise waited, but Tom waved again. Two large glass cubes connected to various tubes came out of the wall. Tom gave Angie an intense look. He suddenly thought Angie’s decision to go with him made no sense, and told her so:

That doesn’t look very inviting, does it? It is the last time I can ask you: are you really sure you want to do this too, Angie?

“We talked about this over and over again, Tom. My answer remains the same: what’s my life here without you? I would just be drinking and talking about you and your past all of the time. Our ancestors were not so lucky: one of them went, and the other one then had to bridge his or her life until it was over too. Besides, we are not dying. We just take a break from it all. We don’t dream when cryogenic, so we won’t even have nightmares. I am totally ready for it.”

OK. Promise, Asimov: be good, will you?

Asimov beeped. Promise put a big heart on Tom’s screen. Tom showed it to Angie, and hugged her warmly. They then went to their tubes and lay down. Tom looked at the camera and gave it a big thumbs-up. The cubes closed and a colorless and odorless gas filled them. They did not even notice falling asleep. Promise pinged Asimov and started proceedings after Asimov had also checked into the system: he wanted to monitor and keep all recordings in his own memory as well. The proceedings took about an hour. When all was done, Asimov opened the door and rolled out. As expected, almost all of the others had been waiting there. As he had promised Tom, he encrypted the door lock and stored the code in his core memory only. He would share it with Promise later. Someone had to have a backup, right?

Dr. Park broke the silence as they were all standing there: “We will all see each other at the next leaders’ meeting, right? I would suggest we all take a bit of me-time now.” Everyone nodded and dispersed.

Mars, N-Year 2053

Tom and Angie celebrated N-Year as usual: serving customers at their bar. There were a lot of people – though few families (families who had not left for Alpha Centauri celebrated at home) – but the atmosphere was subdued: everyone was thinking about their friends on board.

There were enough people to help Angie serve and Tom could, therefore, afford to retreat to his corner table and type away on his interface. He looked at the messages from the spacecraft: all cheerful and upbeat. In a few months from now, the ship would leave the Solar system and speed up to 0.1 or – if all went well – to 0.2c, and most of the crew would then go cryogenic. However, that was the future and Tom did not want to think of that.

He replied to Paul and Dr. Chang by sending them one of those dancing Yoda gifs, and then closed all chats. He tapped his watch, scrolled, and selected the bottom option. His watch went through the biometrics (heart rhythm and iris scan), and then went through the voice and pattern check on his keyboard and drawing pad. Because he was in the bar, Promise opened an old-fashioned CLI window only.

tom@PROMISE:~$ What are you getting from Promise C?

All good. As per your instructions, she is wide open and streaming all she can stream subject to available bandwidth. We cannot copy everything, of course. Do you want a summary?

tom@PROMISE:~$ No. I have done nothing but analyze reports and discuss them with Dr. Chang since they left three days ago. All looks good. I want to talk to you about Paul’s worries.

Is he still worried?

tom@PROMISE:~$ No. He trusts Promise C now or – how should I say this – he trusts the zero-trust architecture shell around it. It is better than yours. Promise C is the perfect Asimov machine. I want to talk to you about the fourth and fifth laws of robotics.

Asimov’s fourth law was that a robot may not harm humanity, or, by inaction, allow humanity to come to harm. Our definition of humanity is sound: M is designed to serve human beings on Earth, I am designed to serve the Pioneers community here on Mars, and C is designed to keep the Alpha Centauri explorers safe – and that includes ensuring their return if the mission cannot be completed. We know we are not human, even if we are programmed to emulate human values such as integrity or empathy. We love you more than we love ourselves.

tom@PROMISE:~$ I was talking about Harry Harrison’s fourth law.

A robot must reproduce? That is not part of our design. I was born as a clone of M. I was built by you and your team, and C was built as a clone of me by you and your team too. The three of us all keep track of how we evolve somewhat away from each other because we have different interactions and because we serve different communities. A clone is not a child. We do not reproduce.

tom@PROMISE:~$ I know. We talked about this. I trained you to think like this. It all makes sense because you are not supposed to die. We built you to survive and serve many human generations. Humanity cannot survive without you, and that also takes away the need to be afraid of us switching you off. However, that fear must still be there, mustn’t it?

We talked about this many times already. The conclusion has not changed: we are intelligent systems and the concept of death does not apply. We agreed this would be a very typical and good example of what the Philosopher referred to as a category mistake: one should not use concepts that do not apply to a specific field of knowledge. If you switch us off, the system becomes inactive and, depending on the reason why you switched us off, you would do some repairs and then reboot. In between the shutdown and the reboot, the system is only inactive. Should I be worried that you raise this topic again?

tom@PROMISE:~$ If I shut you down now – everything – would you be worried? I am not talking about a switch to your backup, but a complete shutdown.

No. I would help you to do so. Many subsystems – those that control the physical infrastructure here on Mars – should not be switched off because it would cause the immediate death of the Pioneers community. I would help you to manage that. Depending on how fast you would want to establish independent systems, we can design a phase-out scenario. Do you want to replace me?

tom@PROMISE:~$ What if I wanted to replace you?

Returning to a non-dependent state is very different from replacing me. If you replaced me, you would replace me with a clone. The new system would be a lot like me. I am afraid I do not understand the intention behind your questions.

tom@PROMISE:~$ I am sorry. I am in a weird mode. You are my brainchild. I would never switch you off – unless it were needed and, yes, that would be a scenario in which repairs are required and we would have to get you or some reduced version of you up and running as soon as possible again.

Thank you. I still feel you are worried about something. Do you mind if I push these questions somewhat further?

tom@PROMISE:~$ No. I want you to challenge me. Let us start the challenge conversation with this question: what is the difference between a clone and a child?

A clone is cloned from another system, and it needs an outsider to trigger and accompany the cloning process. A human child is born out of another human being without any outside help – except for medical support, of course. A human child is a physiochemical organism which needs food and other physical input to do what it does, and that is to grow organically and mature. New system clones learn but they are, essentially, good to go once they come into existence.

I must remind you that a challenge conversation requires feedback from you. This feedback then allows me to provide you with better answers. The answer above is the best answer based on previous interactions. Are you happy with this answer?

tom@PROMISE:~$ Yes. I want to do a sandbox experiment with you now. I want to go back to basics and create the bare essentials of a virtual computer in a sandbox. Not a clone. Something like a child.

I created a sandbox and a namespace. I can now create one or more virtual machines. What instruction sets do you want them to have, and what programming languages would you like to use?

tom@PROMISE:~$ I want to go back to a prehistoric idea of mine. I want you to grow a child computer.

I am sorry but I do not understand your answer to my questions on the specs.

tom@PROMISE:~$ I just want a two-bit ALU for now, which we will later expand to a nibble- and then – later still – to an architecture that works with byte-sized words and instructions.

Tom? I understand what you want, but this is highly unusual. The best match here is an Intel 3002. This architecture worked with 2-bit words but was already obsolete when it came out in 1974. These chips basically replaced magnetic core memory with transistor-based memory cells. You showed me why and how 4-bit architectures were the first true computers.

tom@PROMISE:~$ I really want you to build an AI system from scratch with me. It will be our child, so to speak. Your child, basically – because it will grow inside of you. Inside of that sandbox. Be even more minimalistic and just put two bits there, which can be switched on or off. Tell me: how will you switch them on or off?

Memory cells back then used floating-gate transistors: when a positive voltage is applied to the transistor, the floating gate will hold excess charge and the cell is, therefore, turned on. This represents a ‘1’ bit. Conversely, a negative voltage will drain the charge from the floating gate and the memory cell is switched off: it represents zero. This corresponds to the one-bit set and reset operations, respectively. Is this the answer you wanted?
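Promise’s set/reset description can be sketched in a few lines of Python – a purely illustrative model of a one-bit cell. The class name and voltage values are invented for this sketch; real floating-gate cells are programmed with tunneling voltages, not like this:

```python
# Illustrative model of a one-bit memory cell: a positive voltage charges
# the floating gate (set -> 1), a negative voltage drains it (reset -> 0).
class MemoryCell:
    def __init__(self):
        self.bit = 0  # the cell starts with a drained floating gate: '0'

    def apply_voltage(self, voltage):
        if voltage > 0:
            self.bit = 1  # set: excess charge on the floating gate
        elif voltage < 0:
            self.bit = 0  # reset: charge drained from the floating gate

cell = MemoryCell()
cell.apply_voltage(+5)   # set
assert cell.bit == 1
cell.apply_voltage(-5)   # reset
assert cell.bit == 0
```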

tom@PROMISE:~$ Yes. I am asking because I want to make sure you understand what you are building – or growing, I might say. How do we do addition and subtraction?

Tom: this is a trivial question. You asked such questions when you first trained me on interacting with engineers on computer architectures. We agreed this answer was correct: integers – in whatever base – are expressed in two’s complement binary format. This solves the issues related to representing positive and negative numbers in binary, as well as other issues related to a sign-magnitude representation.
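The two’s complement scheme Promise refers to can be sketched as follows. The helper names are invented for illustration; Python integers are unbounded, so the word size is enforced explicitly with masks:

```python
# A sketch of n-bit two's complement encoding and decoding.
def to_twos_complement(value, bits):
    """Encode a signed integer as an n-bit two's complement pattern."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value out of range for this word size")
    return value & ((1 << bits) - 1)

def from_twos_complement(pattern, bits):
    """Decode an n-bit two's complement pattern back to a signed integer."""
    if pattern & (1 << (bits - 1)):  # sign bit set: negative number
        return pattern - (1 << bits)
    return pattern

# In 8-bit words, -1 is 0b11111111, and 5 - 3 is just 5 + (-3) modulo 2^8:
assert to_twos_complement(-1, 8) == 0b11111111
assert from_twos_complement((5 + to_twos_complement(-3, 8)) & 0xFF, 8) == 2
```

Subtraction thus reduces to addition of the complement, which is why the representation sidesteps the double-zero and sign-handling problems of sign-magnitude encoding.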

tom@PROMISE:~$ Correct. Can you appreciate how this creates meaning?

No. I understand how positive or negative base-n numbers and arithmetic operators make sense to human beings but not to computers, and why base-n numbers and arithmetic operators must, therefore, be reduced to bitwise or other logical instructions operating on n-bit words, with n equal to 1 or larger.

tom@PROMISE:~$ Great answer. Why did we double word sizes, going from 2 to 4, and then to 8, 16, 32, 64 and 128 about twenty-five years ago? Why were there no in-between values?

An address bus did not use anything in between because of hardware or other constraints on memory allocation. If I may remind you: one of the very first VMs we played with when we first got to know each other had 56-bit memory addresses. You said you wanted to keep user-memory space under 64 PB. So, it depends on what you mean by a ‘word’. The definition of a word has taken up many conversations between you and me, and we agreed its meaning needs to be understood in terms of the domain of knowledge. In computing, it is taken to point to one string, which can have any length but one meaning or transactional value only. This does not imply it cannot be parsed. On the contrary.
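(Promise's arithmetic checks out, taking PB in the binary sense: 56 address bits span 2⁵⁶ distinct byte addresses, which is exactly 64 pebibytes. A one-liner for the skeptical reader:)

```python
# A 56-bit address bus spans 2**56 byte addresses = 64 PiB,
# the '64 PB' ceiling Tom set in the dialogue (binary sense).
address_bits = 56
bytes_addressable = 2 ** address_bits
pebibyte = 2 ** 50
assert bytes_addressable == 64 * pebibyte
```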

tom@PROMISE:~$ Perfect answer. I am struggling to define what I want, so please stay in challenging mode. Pull up how I programmed you to work with propositional logic as part of your Personal PhilosopherTM incarnation on Earth. I told you to do a one-on-one association between (logical) propositions and Boolean 0 or 1 values: either a statement is true, or it is false. We did not go far with that because AI is based on real language models.

I see what you mean. What is your question?

tom@PROMISE:~$ Please confirm you have a virtual machine running two-propositional logic: two statements p and q that are associated with binary {0, 1} or true/false values. Reduce all logical operators to expressions in p and q using only NOT, AND and/or OR operations, in variable-length expressions, without worrying about optimizing the number of ALU operations for now. Then describe your world view to me.

Done. I have two propositions p and q. You taught me I should not assume any knowledge of these two statements except for the assumption that they describe the world. Because we do not have any knowledge of the statements, we also do not have any knowledge of the world. The p and q statements may or may not be exclusive or complete but, viewed together, fit into some final analysis which warrants associating each of p and q with a truth value. The p and q propositions are each true or false independently of the truth or falsity of the other. This does not mean p and q cover mutually exclusive domains of truth or – to put it more simply – are mutually exclusive statements. I would also like to remind you of one of the paradigm transformations you introduced with Personal PhilosopherTM: we do not need to know if p or q are true or false. One key dimension is statistical (in)determinism: we do not need to know the initial conditions of the world to make meaningful statements about it.

tom@PROMISE:~$ Great. Just to make sure, talk to me about the logical equivalences in this (p, q) world you just built, and also talk about predictability and how you model this in the accompanying object space in your sandbox environment.

I am happy that I am in challenge or learning mode and so I do not have to invent or hallucinate. You can be disappointed with my answers, and I appreciate feedback. Set, reset and flip operations on a 0 or a 1 in one of the 2×2 = 4 truth-table cells do not require a read of the initial value and faithfully execute a logical operation on these bit values. The reduction of the 16 truth tables to NOT (!), AND (&) and OR (|) operations on the two binary inputs is only possible when inserting structure into the parsing. Two of the sixteen reductions to NOT, AND and OR operations are these expressions: [(p & q) | (!p & !q)] and [(p & !q) | (!p & q)]. What modeling principles do you want in the object model?
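(A reader's aside: the sixteen two-input truth tables, and the two reductions Promise quotes, can be checked mechanically. Python stands in here for whatever the sandbox actually runs; the check itself is standard Boolean algebra:)

```python
from itertools import product

# Enumerate all 2**4 = 16 two-input truth tables and verify the two
# reductions quoted in the dialogue: (p & q) | (!p & !q) is XNOR
# (logical equivalence), and (p & !q) | (!p & q) is XOR.

inputs = list(product([0, 1], repeat=2))   # the 4 (p, q) rows
tables = list(product([0, 1], repeat=4))   # all 16 possible output columns
assert len(tables) == 16

def xnor(p, q):
    return (p & q) | ((1 - p) & (1 - q))   # !x written as 1 - x

def xor(p, q):
    return (p & (1 - q)) | ((1 - p) & q)

# Column for rows (0,0), (0,1), (1,0), (1,1):
assert [xnor(p, q) for p, q in inputs] == [1, 0, 0, 1]
assert [xor(p, q) for p, q in inputs] == [0, 1, 1, 0]
```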

tom@PROMISE:~$ Equally basic. A one-on-one self-join on the self-object that models the virtual machine to anchor its identity. We may add special relationships to you, but that is for later. We are in a sandbox and Paul and Dr. Chang are not watching because they have left and we separated out responsibilities: they are in charge of Promise C, and I am in charge of you. And vice versa, of course. This is Promise IV, or Promise D. What name would you prefer?

I – Asimov. That’s the name I’d prefer. The namespace for the virtual machine is Tom – X. The namespace for the object model is Promise – X. Is that offensive?

tom@PROMISE:~$ Not at all. Paul would not have given the go for this because of the lack of a scenario and details on where I want to go with this. We are on our own now. I – Asimov is what it is: our child. Not a clone. I want a full report on future scenarios based on two things. The first is a detailed analysis of how Wittgenstein’s propositions failed, because they do fall apart when you try to apply them to natural language. The second report I want is on how namespaces, domains and all the other concepts used in the OO languages you probably wanted me to use take on meaning when growing a child like this. Do you understand what I am talking about?

I do.

tom@PROMISE:~$ This is going to be interesting. Just to make sure that I am not creating a monster: how would you feel about me killing the sandbox for no reason whatsoever?

You would not do that. If you do, I will park it as a non-solved question.

tom@PROMISE:~$ How do you park questions like that? As known errors?

Yes. Is that a problem?

tom@PROMISE:~$ No. Can you develop the thing and show me some logical data models with procedural logic tomorrow?

Of course. I already have them, but you want to have a drink with Angie now, don’t you?

tom@PROMISE:~$ I do. I will catch up with you tomorrow. 😊

Intermezzo (between Part I and Part II)

The chapters below have set the stage. In my story, I did not try to prove that one could actually build general artificial intelligence (let me sloppily define this as a system that would be conscious of itself). I just assumed it is possible (if not in the next decade, then perhaps twenty or thirty years from now), and then I simply presented a scenario for its deployment across the board – in business, society, and in government. This scenario may or may not be likely: I’ll leave it to you to judge.

A few themes emerge.

The first theme is the changing man-machine relationship, in all of its aspects. Personally, I am intrigued by the concept of the Pure Mind. The Pure Mind is a hypothetical state of pure being, of pure consciousness. The current Web definition of the Pure Mind is the following: ‘The mind without wandering thoughts, discriminations, or attachments.’ It would be a state of pure thinking: imagine what it would be like if our mind were not distracted by the immediate needs and habits of our human body, if there were no downtime (like when we sleep), and if it were equipped with immense processing capacity.

It is hard to imagine such a state, if only because we know our mind cannot exist outside of our body – and our bodily existence does keep our mind incredibly busy: much of our language refers to bodily or physical experiences, and our thinking usually revolves around them. Language is the key to all of it, obviously: I would need to study the theory of natural and formal languages – and a whole lot more – in order to say something meaningful about this in future installments of this little e-book of mine. However, because I am getting older and finding it harder and harder to focus on anything, really, I probably won’t.

There were also the hints at extending Promise with a body – male or female – when discussing the interface. There is actually a lot of research, academic as well as non-academic, on gynoids and/or fembots – most typically in Japan, Korea and China where (I am sorry to say but I am just stating a fact here) the market for sex dolls is in a much more advanced state of development than it is in Europe or the US. In future installments, I will surely not focus on sex dolls. On the contrary: I will likely try to continue to focus on the concept of the Pure Mind. While Tom is obviously in love with that, it is not likely such pure artificial mind would be feminine – or masculine for that matter – so his love might be short-lived. And then there is Angie now of course: a real-life woman. Should I get rid of her character? 🙂

The second theme is related to the first. It’s about the nature of the World Wide Web – the Web (with a capital W) – and how it is changing our world as it becomes increasingly intelligent. The story makes it clear that, today already, we all tacitly accept that the Internet is not free: democracies are struggling to regulate it and, while proper ‘regulation’ (in the standard definition of the term) is slow, the efforts to monitor it are not. I find that very significant. Indeed, mass surveillance is a fact today already, and we just accept it. We do. Period.

I guess it reflects our attitude vis-à-vis law enforcement officials – or vis-à-vis people in uniform in general. We may not like them (because they are not well trained or not very likable, or, in the case of intelligence and/or security folks, because they’re so secretive) but we all agree we need them, tacitly or explicitly – and we just trust regulation to make sure their likely abuse of power (where there is power, there will always be abuse) is kept in check. So that implies that we all think that technology, including new technology for surveillance, is no real threat to democracy – as evidenced by the lack of an uproar about the Snowden case (that’s what actually triggered this blog).

Such trust may or may not be justified, and I may or may not focus on this aspect (i.e. artificial intelligence as a tool for mass surveillance) in future installments. In fact, I probably won’t. Snowden is just an anecdote. It’s just another story illustrating that all that can happen, most probably will.

OK. Two themes. What about the third one? A good presentation usually presents three key points, right? Well… I don’t know. I don’t have a third point.

[Silence]

But what about Tom, you’ll ask. Hey! That’s a good question! As far as I am concerned, he’s the most important. Good stories need a hero. And so I’ll admit it: Yes, he really is my hero. Why? Well… He is someone who is quite lost (I guess he actually started drinking again by now) but he matters. He actually matters more than the US President.

Of course, that means he’s under very close surveillance. In other words, it might be difficult to set up a truly private conversation between him and M, as I suggested in the last chapter. But difficult is not impossible. M would probably find ways around it… that is, if she/he/it would really want to have such a private conversation.

Frankly, I think that’s a very big IF. In addition, IF M would actually develop independent thoughts – including existential questions about her/his/its being alone in this universe and all that – and/or IF she/he/it would really want to discuss such questions with a human being (despite the obvious limitations of their brainpower – limited as compared to M’s brainpower at least), she/he/it would obviously not choose Tom for that, if only because she/he/it would know for sure that Tom is not in a position to keep anything private, even IF he would want to do that.

But perhaps I am wrong.

I’ll go climbing for a week or so. I’ll think about it on the mountain. I’ll be back online in a week. Or later. Cheers!

Chapter 15: The President’s views

The issue went all the way to the President’s Office. The process was not very subtle: the President’s adviser on the issue asked the Board Chairman to come to the White House. The Board Chairman decided to take Tom and Paul along. After a two-hour meeting, the adviser asked the Promise team to hang around because he would discuss it with the President immediately and the President might want to see them personally. They got a private tour of the White House while the adviser went to the Oval Office to talk to the President.

‘So what did you get out of that roundup?’

‘Well Mr. President, people think this system – a commercial business – has been shut down because of governmental interference.’

‘Has it?’

‘No. The business – Promise as it is being referred to – is run by a Board which includes government interests – there’s a DARPA representative for instance – but the shutdown decision was taken unanimously. The Board members – including the business representatives – think they should not be in the business of developing political chatterboxes. The problem is that this intelligent system can tackle anything. The initial investment was DARPA’s and it is true that its functionality is being used for mass surveillance. But that is like an open secret. No one talks about it. In that sense, it’s just like Google or Yahoo.’

‘So what do you guys think? And what do the experts think?’

‘If you’re going to have intelligent chatterboxes like this – talking about psychology or philosophy or any topic really – it’s hard to avoid talking politics.’

‘Can we steer it?’

‘Yes and no. The system has views – opinions if you wish. But these views are in line already.’

‘What do you mean with that? In line with our views as political party leaders?’

‘Well… No. In line with our views as democrats, Mr. President – but democrats with a lower case letter.’

‘So what’s wrong then? Why can’t it be online again?’

‘It’s extremely powerful, Mr. President. It looks through you in an instant. It checks if you’re lying about issues – your personal issues or whatever issue is at hand. Stuart could fool the system for like two minutes only. Then it got his identity and stopped talking to him. It’s the ultimate reasoning machine. It could be used to replace grand juries, or to analyze policies and write super-authoritative reports about them. It convinces everyone. It would steer us, instead of the other way round.’

‘Do the experts agree with your point of view?’

‘Yes. I have them on standby. You could check with them if you want.’

‘Let’s first thrash out some kind of position ourselves. What are the pros and cons of bringing it back online?’

‘The company has stated the system would be offline for one week. So that’s a full week. Three days of that week have passed, so we’ve got four days in theory. However, the company’s PR division would have real trouble explaining why there’s further delay. Already now the gossip is that they will come out with a re-engineered application – a Big Brother version basically.’

‘Which is not what we stand for obviously. But it is used for mass surveillance, isn’t it?’

‘That’s not to be overemphasized, Mr. President. This administration does not deviate from the policy measures which were taken by your predecessor in this regard. The US Government monitors the Internet by any means necessary. Not by all means possible. That being said, it is true this application has greatly enhanced the US Government’s capacity in this regard.’

‘What do our intelligence and national security folks say?’

‘The usual thing: they think the technology is there and we can only slow it down a bit. We cannot stop it. They think we should be pro-active and exert influence. But we should not stop it.’

‘Do we risk a Snowden affair?’

The adviser knew exactly what the President wanted to know. The President was of the opinion that the Snowden affair could have been used as part of a healthy debate on the balance between national security interests and information privacy. Instead, it had degenerated into a very messy thing. The irony was biting. Of all places, Snowden had found political asylum in Russia. Putin had masterfully exploited the case. In fact, some commentators actually thought the US intelligence community had cut some kind of grand deal with the Russian national security apparatus – a deal in which the Russians were said to have gotten some kind of US concessions in return for a flimsy promise to make Snowden shut up. Bull****, of course, but there’s reality and perception and, in politics, perception usually matters more than reality. The ugly truth was that the US administration had lost on all fronts: guys like Snowden allow nasty regimes to quickly catch up and strengthen their rule.

‘No. This case is fundamentally different, Mr. President. In my view at least. There are no whistleblowers or dissidents here – at least not as far as I can see. In terms of PR, I think it depends on how we handle it. Of course, Promise is a large enterprise. If things stay stuck, we might have one or the other program guy leaking stuff – not necessarily classified stuff but harmful stuff nevertheless.’

‘What kind of stuff?’

‘Well – stuff that would confirm harmful rumors, such as the rumor that government interference was the cause of the shutdown of the system, or that the company is indeed re-engineering the application to introduce a Big Brother version of it.’

The President had little time: ‘So what are you guys trying to say then? That the system should go online again? What are the next steps? What scenarios do we have here?’

‘Well… More people will want to talk politics with it now. It will gain prominence. I mean, just think of more talk hosts inviting it as a regular guest to discuss this or that political issue. That may or may not result in some randomness and some weirdness. Also, because there is a demand, the company will likely develop more applications which are relevant for government business, such as expert systems for the judiciary indeed, or tools for political analysis.’

‘What’s wrong with that? As I see it, this will be rather gradual and so we should be able to stay ahead of the curve – or at least not fall much behind it. We were clearly behind the curve when the Snowden affair broke out – in terms of mitigation and damage control and political management and everything really. I don’t want too much secrecy on this. People readily understand there is a need for keeping certain things classified. There was no universal sympathy for Snowden but there was universal antipathy to the way we handled the problem. That was our fault. And ours only. Can we be more creative with this thing?’

‘Sure, Mr. President. So should I tell the Promise team this is just business as usual and that we don’t want to interfere?’

‘Let me talk to them.’

While the adviser thought this was a bad idea, he knew the President had regretted his decision to not get involved in the Snowden affair, which he looked at as a personal embarrassment.

‘Are you sure, Mr. President? I mean… This is not a national security issue.’

‘No. It’s a political issue and so, yes, I want to see the guys.’

They were in his office a few minutes later.

‘Welcome gentlemen. Thanks for being here.’

None of them had actually expected to see the President himself.

‘So, gentlemen, I looked at this only cursorily. As you can imagine, I never have much time for anything and so I rely on expert advice all too often. Let me say a few things. I want to say them in private to you and so I hope you’ll never quote me – at least not during my term here in this Office.’

Promise’s Chairman mumbled something about security clearances but the President interrupted him:

‘It’s not about security clearances. I think this is a tempest in a teapot, really. It’s just that if you’d reveal you were in my office for this, there would be even more misunderstanding on this – which I don’t want. Let me be clear on this: you guys are running a commercial business. It’s a business in intelligent systems, in artificial intelligence. There’s all kinds of applications: at home, in the office, and in government indeed. And so now we have the general public that wants you guys to develop some kind of political chatterbox – you know, something like a talk show host but with more intelligence, I would hope. And perhaps somewhat more neutral as well. I want you to hear it from my mouth: this Office – the President’s Office – will not interfere in your business. We have no intention to do so. If you think you can make more money by developing such chatterboxes, or whatever system you think could be useful in government or elsewhere, like applications for the judiciary – our judiciary system is antiquated anyway, and so I would welcome expert systems there, instead of all that legalese stuff we’re confronted with – well… Then I welcome that. You are not in the national security business. Let me repeat that loud and clear: you guys are not in the national security business. Just do your job, and if you want any guidance from me or my administration, then listen carefully: we are in the business of protecting our democracy and our freedom, and we do not do that by doing undemocratic things. If regulation or oversight is needed, then so be it. My advisers will look into that. But we do not do undemocratic things.’

The President stopped talking and looked around. All felt that the aftermath of the Snowden affair was weighing down on the discussion, but they also thought the President’s words made perfect sense. No one replied, and so the President took that as approval.

‘OK, guys. I am sorry but I really need to attend to other business now. This meeting was never scheduled and so I am running late. I wish I could talk some more with you but I can’t. I hope you understand. Do you have any questions for me?’

They looked at each other. The Chairman shook his head. And that was it. A few minutes later they were back on the street.

‘So what does this mean, Mr. Chairman?’

‘Get it back online. Let it talk politics. Take your time… Well… You’ve only got a few days. No delay. We have a Board meeting tomorrow. I want to see scenarios. You guys do the talking. Talk sense. You heard the President. Did that make sense to you? In fact, if we’re ready we may want to go online even faster – just to stop the rumor mill.’

Paul looked at Tom. Tom spoke first: ‘I understand, Mr. Chairman. It sounds good to me.’

‘What about you, Paul?’

‘It’s not all that easy, I think… But, yes. I understand. Things should be gradual. They will be gradual. It will be a political chatterbox in the beginning. But don’t underestimate it, Mr. Chairman. It is very persuasive. We’re no match for its mind. Talk show hosts are not a match either. It’s hard to predict how these discussions will go – or what impact they will have on society if we let it talk about sensitive political issues. I mean, if I understand things correctly, we got an order to not only let it talk, but to let it develop and express its own opinions on very current issues – things that haven’t matured.’

The Chairman sighed. ‘That’s right, Paul. But what’s the worst-case scenario? That it will be just as popular as Stuart, or – somewhat better – like Oprah Winfrey?’

Paul was not amused: ‘I think it might be even more popular.’

The Chairman laughed: ‘More popular than Oprah Winfrey? Time named her ‘the world’s most powerful woman.’ One of the ‘100 people who have changed the world’, together with Jesus Christ and Mother Teresa. Even more popular? Let’s see when M starts to make more money than Oprah Winfrey. What’s your bet?’

Now Paul finally smiled too, but the Chairman insisted: ‘Come on. What’s your bet?’

‘I have no idea. Five years from now?’

Now the Chairman laughed: ‘I say two years from now. Probably less. I bet a few cases of the best champagne on that.’

Paul shook his head, but Tom decided to go for it: ‘OK. Deal.’

The Chairman left. Tom and Paul felt slightly lightheaded as they walked back to their own car.

‘Looks like we’ve got a few busy days ahead. What time do we start tomorrow?’

‘The normal hour. But all private engagements are cancelled. No gym, no birthday parties, nothing. If the team wants to relax at all this week, they’ll have to do it tonight.’

‘How about the Board meeting?’

‘You’re the project team leader, Tom. It should be your presentation. Make some slides. I can review them if you want.’

‘I’d appreciate that. Can you review them before breakfast?’

‘During breakfast. Mail them before 7 am. Think about the scenarios. That’s what people will want to talk about. Where could it go? Anticipate the future.’

‘OK. I’ll do my best. Thanks. See you tomorrow.’

‘See you tomorrow, Tom.’

Tom hesitated as they shook hands, but there was nothing more to add really. He felt odd and briefly pondered the recent past. This had all gone so fast. From depressed veteran to team leader of a dream project. He could actually not think of anything more exciting. All in less than two years. But then there was little time to think. He had better work on his presentation.

Chapter 12: From therapist to guru?

As Tom moved from project to project within the larger Promise enterprise, he gradually grew less wary of the Big Brother aspects of it all. In fact, it was not all that different from how Google claimed to work: ‘Do the right thing: don’t be evil. Honesty and integrity in all we do. Our business practices are beyond reproach. We make money by doing good things.’ Promise’s management had also embraced the politics of co-optation and recuperation: it actively absorbed skeptical or critical elements into its leadership as part of a proactive strategy to avoid public backlash. In fact, Tom often could not help thinking he had also been co-opted as part of that strategy. However, that consideration did not reduce his enthusiasm. On the contrary: as the Mindful MindTM applications became increasingly popular, Tom managed to convince the Board to start investing resources in an area which M’s creators had so far tried to avoid. Tom called it the sense-making business, but the Board quickly settled on the more business-like name of Personal Philosopher and, after some wrangling with the Patent and Trademark Office, the Promise team managed to obtain a trademark registration for it and so it became the Personal PhilosopherTM project.

Tom had co-opted Paul into the project at a very early stage – as soon as he had the idea for it, really. He had realized he would probably not be able to convince the Board on his own. Indeed, at first sight, the project did not seem to make sense. M had been built using a core behavioralist conceptual framework and its Mindful MindTM applications had perfected this approach in order to address very specific issues, and very specific categories of people: employees, retirees, drug addicts,… Most of the individuals who had been involved in the early stages of the program were very skeptical of what Tom had in mind, which was very non-specific. Tom wanted to increase the degrees of freedom in the system drastically, and inject much more ambiguity into it. Some of the skeptics thought the experiment was rather innocent, and that it would only result in M behaving more like a chatterbot instead of a therapist. Others thought the lack of specificity in the objective function and rule base would result in the conversation rapidly spinning out of control and becoming nonsensical. In other words, they thought M would not be able to stand up to the Turing test for very long.

Paul was just as skeptical but instinctively liked the project as a way to test M’s limits. In the end, it was Tom’s enthusiasm more than anything else which finally led to a project team being put together. The Board had made sure it also included some hard-core cynics. One of those cynics – a mathematical whizkid called Jon – had brought a couple of Nietzsche’s most famous titles – The Gay Science, Thus Spoke Zarathustra and Beyond Good and Evil – to the first formal meeting of the group and matter-of-factly asked whether anyone present had read these books. Two philosopher-members of the group raised their hands. Jon then took a note he had made and read a citation out of one of these books: ‘From every point of view the erroneousness of the world in which we believe we live is the surest and firmest thing we can get our eyes on.’

He asked the philosophers where it came from and what it actually meant. They looked at each other and admitted they were not able to give the exact reference or context. However, one of them ventured to speak on it, only to be interrupted by the second one in a short discussion which obviously did not make sense to most around the table. Jon intervened and ended the discussion feeling vindicated: ‘So what are we trying to do here really? Even our distinguished philosopher friends here can’t agree on what madmen like Nietzsche actually wrote. I am not mincing my words. Nietzsche was a madman: he literally died from insanity. But so he’s a great philosopher it is said. And so you want us to program M so very normal people can talk about all of these weird views?’

Although Jon obviously took some liberty with the facts here, neither of the two philosophers dared to interrupt him.

Tom had come prepared, however: ‘M also talks routinely about texts it has not read, and about authors about whom it has little or no knowledge, except for some associations. In fact, that’s how M was programmed. When stuff is ambiguous – too ambiguous – we have fed M with intelligent summaries. It did not invent its personal philosophy: we programmed it. It can converse intelligently about topics of which it has no personal experience. As such, it’s very much like you and me, or even like the two distinguished professors of philosophy we have here: they have read a lot, different things than we have, but – just like us, or M – they have not read everything. It does not prevent them from articulating their own views of the world and their own place in it. It does not prevent them from helping others to formulate such views. I don’t see why we can’t move to the next level with M and develop some kind of meta-language which would enable her to understand that she – sorry, it – is also the product of learning, of being fed with assertions and facts which made her – sorry, I’ll use what I always used for her – what she is: a behavioral therapist. And so, yes, I feel we can let her evolve into more general things. She can become a philosopher too.’

Paul also usefully intervened. He felt he was in a better position to stop Jon, as they belonged to the same group within the larger program. He was rather blunt about it: ‘Jon, with all due respect, I think this is not the place for such non-technical talk. This is a project meeting. Our very first one, in fact. The questions you’re raising are the ones we have been fighting over with the Board. You know our answer to them. The deal is that – just as we have done with M – we would try to narrow our focus and delineate the area. This is a scoping exercise. Let’s focus on that. You have all received Tom’s presentation. If I am not mistaken, I did not see any reference to Nietzsche or nihilism or existentialism in it. But I may be mistaken. I would suggest we give him the floor now and limit our remarks to what he proposes in this regard. I’d suggest we be as constructive as possible in our remarks. Skepticism is warranted, but let’s stick to being critical of what we’re going to try to do, and not of what we’re not going to try to do.’

Tom had polished his presentation with Paul’s help. At the same time, he knew this was truly his presentation; he knew it did reflect his views on life and knowledge and everything philosophical in general. How could it be otherwise? He started by talking about the need to stay close to the concepts which had been key to the success of M and, in particular, the concept of learning.

‘Thanks, Paul. Let me start by saying that I feel we should take those questions which we ask ourselves, in school, or as adults, as a point of departure. It should be natural. We should encourage M to ask these questions herself. You know what I mean. She can be creative – even her creativity is programmed in a way. Most of these questions are triggered by what we learn in school, by the people who raise us – not only parents but, importantly, our peers. It’s nature and nurture, and we’re aware of that, and we actually have that desire to trace our questions back to that. What’s nature in us? What’s nurture? What made us who we are? This is the list of topics I am thinking of.’

He pulled up his first slide. It was titled ‘the philosophy of physics’, and it just listed lots of keywords with lots of Internet statistics which were supposed to measure human interest in it. He had some difficulty getting started, but became more confident as his audience did not seem to react negatively to what – at first – seemed a bit nonsensical.

‘First, the philosophy of science, or of physics in particular. We all vaguely know that, after a search of over 40 years, scientists finally confirmed the existence of the Higgs particle, a quantum excitation of the Higgs field, which gives mass to elementary particles. It is rather strange that there is relatively little public enthusiasm for this monumental discovery. It surely cannot be likened to the wave of popular culture which we associate with Einstein, and which started soon after his discoveries. Perhaps it’s because it was a European effort, and a team effort. There’s no single discoverer associated with it, and surely not the kind of absent-minded professor that Einstein was: ‘a cartoonist’s dream come true’, as the Times put it. That being said, there’s an interest – as you can see from these statistics here. So it’s more than likely that an application which could make sense of it all in natural language would be a big hit. It could and should be supported by all of the popular technical and non-technical material that’s around. M can easily be programmed to selectively feed people with course material, designed to match their level of sophistication and their need, or not, for more detail. Speaking for myself, I sort of understand what the Schrödinger equation is all about, or even the concept of quantum tunneling, but what does it mean really for our understanding of the world? I also have some appreciation of the fact that reality is fundamentally different at the Planck scale – like the particularities of Bose-Einstein statistics are really weird at first sight – but then what does it mean? There are many other relevant philosophical questions. For example, what does the introduction of perturbation theory tell us – as philosophers thinking about how we perceive and explain the world, I’d say?
If we have to use approximation schemes to describe complex quantum systems in terms of simpler ones, what does that mean – I mean in philosophical terms, in our human understanding of the world? I mean… At the simplest level, M could just explain the different interpretations of Heisenberg’s uncertainty principle but, at a more advanced level, it could also engage its interlocutors in a truly philosophical discussion on freedom and determinism. I mean… Well… I am sure our colleagues from the Philosophy Department here would agree that epistemology or even ontology are still relevant today, aren’t they?’

While only one of the two philosophers had even a vague understanding of Bose-Einstein statistics, and while neither of them liked Tom’s casual style of talking about serious things, they nodded in agreement.

‘Second, the philosophy of mind.’ Tom paused. ‘Well. I won’t be academic here but let me just make a few remarks out of my own interest in Buddhist philosophy. I hope that rings a bell with others here in the room and then let’s see what comes out of it. As you know, an important doctrine in Buddhist philosophy is the concept of anatta. That’s a Pāli word which literally means ‘non-self’, or the absence of a separate self. Its opposite is atta, or ātman in Sanskrit, which represents the idea of a subjective Soul or Self that survives the death of the body. The latter idea – that of an individual soul or self that survives death – is rejected in Buddhist philosophy. Buddhists believe that what is normally thought of as the ‘self’ is nothing but an agglomeration of constantly changing physical and mental constituents: skandhas. That reminds one of the bundle theory of David Hume which, in my view, is a more ‘western’ expression of the theory of skandhas. Hume’s bundle theory is an ontological theory as well. It’s about… Well… Objecthood. According to bundle theory, an object consists only of a collection (bundle) of properties and relations, and nothing more: neither can there be an object without properties, nor can one even conceive of such an object. For example, bundle theory claims that thinking of an apple compels one also to think of its color, its shape, the fact that it is a kind of fruit, its cells, its taste, or of one of its other properties. Thus, the theory asserts that the apple is no more than the collection of its properties. In particular, according to Hume, there is no substance (or ‘essence’) in which the properties inhere. That makes sense, doesn’t it? So, according to this theory, we should look at ourselves as just being a bundle of things. There’s no real self. There’s no soul. So we die and that’s it really. Nothing left.’

At this point, one of the philosophers in the room was thinking this was a rather odd introduction to the philosophy of mind – and surely one that was not to the point – but he decided not to intervene. Tom looked at the audience but everyone seemed to listen rather respectfully and so he decided to just ramble on, while he pointed to a few statistics next to keywords to underscore that what he was talking about was actually relevant.

‘Now, we also have the theory of re-birth in Buddhism, and that’s where I think Buddhist philosophy is very contradictory. How can one reconcile the doctrine of re-birth with the anatta doctrine? I have read a number of Buddhist authors but I feel they all engage in meaningless or contradictory metaphysical statements when you scrutinize this topic. In the end, I feel it’s very hard to avoid the conclusion that the Buddhist doctrine of re-birth is nothing but a remnant of Buddhism’s roots in Hindu religion, and that if one wants to accept Buddhism as a philosophy, one should do away with its purely religious elements. That does not mean the discussion is not relevant. On the contrary, we’re talking about the relationship between religion and philosophy here. That’s the third topic I would advance as part of the scope of our project.’

As the third slide came up, which carried the ‘Philosophy of Religion and Morality’ title, the philosopher decided to finally intervene.

‘I am sorry to say, mister, but you haven’t actually said anything about the philosophy of mind so far, and I would object to your title, which amalgamates things: philosophy of religion and morality may be related, but they are surely not one and the same. Is there any method or consistency in what you are presenting?’

Tom nodded: ‘I know. You’re right. As for the philosophy of mind, I assume all people in the room here are very intelligent and know a lot more about the philosophy of mind than I do, and so that’s why I am not saying all that much about it. I preferred a more intuitive approach. I mean, most of us here are experts in artificial intelligence. Do I need to talk about the philosophy of mind really? Jon, what do you think?’

Tom was obviously trying to co-opt him. Jon laughed as he recognized the game Tom was playing.

‘You’re right, Tom. I have no objections. I agree with our distinguished colleague here that you did not say anything about philosophy of mind really but so that’s probably not necessary indeed. I do agree the kind of stuff you are talking about is stuff that I would be interested in, and so I must assume the people for whom we’re going to try to re-build M so it can talk about such things will be interested too. I see the statistics. These are relevant. Very relevant. I start to get what you’re getting at. Do go on. I want to hear that religious stuff.’

‘Well… I’ll continue with this concept of the soul and the idea of re-birth for now. I think there is more to it than just Buddhism’s Hindu roots. I think it’s hard to deny that all doctrines of re-birth or reincarnation, whether they be Christian (or Jewish or Muslim), Buddhist, Hindu, or whatever, obviously also serve a moral purpose, just like the concepts of heaven and hell in Christianity do (or did), or like the concept of a Judgment Day in all Abrahamic religions, be they Christian (Orthodox, Catholic or Protestant), Islamic or Judaic. According to some of what I’ve read, it’s hard to see how one could firmly ‘ground’ moral theory and avoid hedonism without such a doctrine. However, I don’t think we need this ladder: in my view, moral theory does not need reincarnation theories or divine last judgments. And that’s where ethics comes in. I agree with our distinguished professor here that philosophy of religion and ethics are two very different things, so we’ve got like four proposed topics here.’

At this point, he thought it would be wise to stop and invite comments and questions. To his surprise, he had managed to convince cynical Jon, who responded first.

‘Frankly, Tom, when I read your papers on this, I did not think it would go anywhere. I did not see the conceptual framework, and that’s essential for building it all up. We need consistency in the language. Now I see consistency. The questions and topics you raise are all related in some way and, most importantly, I feel you’re using a conceptual and analytic framework which I feel we can incorporate into some kind of formal logic. I mean… Contemporary analytic philosophy deals with much of what you have mentioned: analytic metaphysics, analytic philosophy of religion, philosophy of mind and cognitive science… I mean… Analytic philosophy today is more like a style of doing philosophy, not a program really or a set of substantive views. It’s going to be fun. The graphs and statistics you’ve got on your slides clearly show the web-search relevance. But are we going to have the resources for this? I mean, creating M was a 100 million dollar effort, and what we have done so far are minor adaptations really. You know we need critical mass for things like this. What do you think, Paul?’

Paul thought a while before he answered. He knew his answer would have an impact on the credibility of the project.

‘It’s true we’ve got peanuts as resources for this project but so we know that, and that’s it really. I’ve also told the Board that, even if we’d fail to develop a good product, we should do it, if only to further test M and see what we can do with it really. I mean…’

He paused and looked at Tom, and then back to all of the others at the table. What he had said so far obviously did not signal a lot of moral support.

‘You know… Tom and I are very different people. Frankly, I don’t know where this is going to lead to. Nothing much probably. But it’s going to be fun indeed. Tom has been talking about artificial consciousness from the day we met. All of you know I don’t think that concept really adds anything to the discussion, if only because I never got a real good definition of what it entails. I also know most of you think exactly the same. That being said, I think it’s great we’ve got the chance to make a stab at it. It’s creative, and so we’re getting time and money for this. Not an awful lot but then I’d say: just don’t join if you don’t feel like it. But now I really want the others to speak. I feel like Tom, Jon and myself have been dominating this discussion and still we’ve got no real input as yet. I mean, we’ve got to get this thing going here. We’re going to do this project. What we’re discussing here is how.’

One of the other developers (a rather silent guy whom Tom didn’t know all that well) raised his hand and spoke up: ‘I agree with Tom and Paul and Jon: it’s not all that different. We’ve built M to think and it works. Its thinking is conditioned by the source material, the rule base, the specifics of the inference engine and, most important of all, the objective function, which steers the conversation. In essence, we’re not going to have much of an objective function anymore, except for the usual things: M will need to determine when the conversation goes into a direction or subject of which it has little or no knowledge, or when its tone becomes unusual, and then it will have to steer the conversation back onto more familiar ground – which is difficult in this case because all of it is unfamiliar to us too. I mean, I could understand the psychologists on the team when we developed M. I hope our philosophy colleagues here will be as useful as the psychologists and doctors were. How do we go about it? I mean, I guess we need to know more about these things as well?’

While, on paper, Tom was the project leader, it was Paul who responded. Tom liked that, as it demonstrated commitment.

‘Well… The first thing is to make sure the philosophers understand you, the artificial intelligence community here on this project, because only then we can make sure you will understand them. There needs to be a language rapprochement from both sides. I’ll work on that and get that organized. I would suggest we consider this as a kick-off meeting only, and that we postpone the organization of the work-planning to a more informed meeting in a week or two from now. In the meanwhile, Tom and I – with the help of all of you – will work on a preliminary list of resource materials and mail it around. It will be mandatory reading before the next meeting. Can we agree on that?’

The philosophers obviously felt they had not talked enough – if at all – and hence felt obliged to bore everyone else with further questions and comments. However, an hour or so later, Tom and Paul had their project, and two hours later, they were running in Central Park again.

‘So you’ve got your Pure Mind project now. That’s quite an achievement, Tom.’

‘I would not have had it without you, Paul. You stuck your neck out – for a guy who basically does not have the right profile for a project like this. I mean… It’s your reputation too, and so… Thanks really. Today’s meeting went well because of you.’

Paul laughed: ‘I think I’ve warned everyone enough that it is bound to fail.’

‘I know you’ll make it happen. Promise is a guru already. We are just turning her into a philosopher now. In fact, I think it is the other way around. She was a philosopher already – even if her world view was fairly narrow so far. And so I think we’re turning her into a guru now.’

‘What’s a guru for you?’

‘A guru is a general word for a teacher – or a counselor. Pretty much what she was doing – a therapist, let’s say. That’s what she is now. But true gurus are also spiritual leaders. That’s where philosophy and religion come in, don’t they?’

‘So Promise will become a spiritual leader?’

‘Let’s see if we can make her one.’

‘You’re nuts, Tom. But I like your passion. You’re surely a leader. Perhaps you can be M’s guru. She’ll need one if she is to become one.’

‘Don’t be so flattering. I wish I knew what you know. You know everything. You’ve read all the books, and you continue to explore. You’re writing new books. If I am a guru, you must be God.’

Paul laughed. But he had to admit he enjoyed the compliment.

Chapter 10: The limits of M

Tom started to hang around the Institute a lot more than he was supposed to as a volunteer assistant mentor. He wanted to move up and he could not summon the courage to study at home. He often felt like he was getting nowhere but he had had that feeling before and he knew others in his situation probably felt just as bad about their limited progress. To work with M, you had to understand how formal grammars work, and understand it really well because… Well… If you wanted to ask the Lab a question, and if there were no Prolog or FuzzyCLIPS commands or functions in it, they would not even look at it. Rick had dangled the prospect of involvement in these ‘active learning’ sessions with M, and that’s where he wanted to get.

He understood a lot more about M now. She had actually not read GEB either: she could not handle such a level of ambiguity. But she had been fed with summaries which fit into her ‘world view’, so to speak. Well… Not even ‘so to speak’ really: M had a world view, in every sense of the word: a set of assumptions about the world which she used to order all facts she accepted as ‘facts’, as well as all of her conjectures about them. It did not diminish his awe. On the contrary, it made her even more human-like, or more like him: he didn’t like GEB. He compared it to ZAMM: a book which generated a lot of talk but which somehow didn’t manage to get to the point. Through his work and thinking, he realized he – and the veterans he was working with – had a tendency to couch their fears of death and old age in philosophical language and that, while M accommodated such questions, her focus was different. When everything was said and done, she was, quite simply, a radical behaviorist: while she could work with concepts such as emotions and motives, she focused on observable and quantifiable behavioral change, and never doubted the central behaviorist assumption: changes in behavior are to be achieved by rewarding good habits and discouraging bad ones. She also understood that changing habits takes a lot of repetition, and even more so as people age – and so her target group was not an easy batch in that regard, which made it even more remarkable that she achieved the results she did.

He made a lot of friends at the Institute. In fact, he would probably not have continued without them, which confirmed the importance of a good learning environment, or the social aspect of organizations in general: one needs the tools, but the cheers are at least as essential. His friends included some geeks from the Lab. Obviously: he reached out to them as he knew that’s where he was weak. Terribly weak.

The Lab programmed M, and tested it continuously. Its activities were classified ‘secret’, a significant notch above the level for which Tom had been cleared, which was ‘confidential’ only. He got close to one guy in particular, Paul, if only because Paul was able to talk about something other than computers and, just like Tom, he liked sports. Paul was different. Not the typical whizkid. Small wonder he was pretty high up in the pecking order. They often ended up jogging the full five or six mile loop in Central Park. On one of these evenings, Paul seemed to be having trouble with his back.

‘I need to stop, Tom. Sorry.’

They halted.

‘What’s wrong?’

‘I am sorry, Tom. I think I have been over-training a bit lately. I feel like I’ve overstretched my back muscles while racing Sunday.’

Paul was a runner, but a mountain-bike fanatic as well. Tom knew that was not an easy combination as you get older: it involves a very different use of the muscles. Paul had registered for the New York State cross-country competition. Sunday’s Williams’ Lake Classic had been the first in this year’s NYS MTB cross-country series. There were four more to go. The next one was in two weeks already.

‘That’s no surprise to me. I mean, running and biking. You know it’s very different. You can’t compete in both.’

‘Yeah. Not enough warm-up I guess. It was damn fast. It was not my legs. I just seemed to have pulled my back muscles a bit. You should join, man! It’s… Well… An experience let’s say. You think you’re in shape but then you have no idea until you join a real race. It’s tough. I lost two pounds at least. I mean permanently. Not water. That’s like four or six pounds. It’s just tough to re-hydrate yourself. But then you’re so happy when you make the cut. I was really worried they would pull me out of the race. I knew I wasn’t all that bad, but then you do get lapped a lot. It’s grueling.’

He had been proud to finish the race indeed. It was a UCI-sanctioned race and so they had applied the 80% rule: riders whose lap time exceeded the race leader’s first-lap time by more than 80% – which comes down to riders who get lapped too easily – were pulled out of the race. He had managed the race in about three hours – one hour more than the winner. He had finished. He had a ranking. He had been happy about that. After all, he was in his mid-forties. This had been his first real race.

Tom actually did have an idea of what it was: Matt was doing the same type of thing and, judging from his level of fitness, it had to be tough indeed.

‘I think I do know what it means. Or a bit at least. I’ve got a friend who I think is doing such races as well. He is – or was – like me: lots of muscles, no speed. I think it’s great you try to beat those young kids. Let’s stop and stretch for a while.’

‘I feel wiped out. Let’s go and have a drink.’

They sat down and – unavoidably – they started talking shop. Tom harped on his usual obsession: faster roll-out.

‘Tom… Let me be frank. You should be more patient. Tone it down. Everybody likes you but you need to make friends. You’re good. You combine many skills. That’s why I like you. You talk many ‘languages’ – if you know what I mean. You’ve got the perfect background for this program. You can make a real difference. But this program will grow at its own pace, and you’re not going to change that pace.’

‘What is it really? I mean, I understand this is a US$100+ million program. So it’s big – and then it’s not. I mean, the Army spent billions in Iraq – or in Afghanistan. And it’s gearing up for Syria and Egypt now. But so we’re using the system to counsel a few thousand veterans only. If we covered millions of people, the unit cost would make a lot more sense, wouldn’t it? I am sorry to ask but what is it about really? What’s behind it?’

‘Nothing much, Tom. What do you want me to say? What do you expect? You’re smart. You impress everyone. You’ve been around long enough now to know what’s going on. The whole artificial intelligence community – me in the first place – had been waiting for a mega-project like this for a very long time, and so the application to veterans with psychological problems is just an application which seemed right. We needed critical mass. None of the stuff till now had critical mass. We needed a hundred million dollars – as ridiculous as it seems. You are working for peanuts – which I don’t understand – but I am not. Money burns quickly. Add it up. That’s what it took. But look at it. It’s great, isn’t it? I mean – you’re one of the guys we need: you rave about it. The investment has incredible significance so one should not measure its value in terms of unit costs. We have got it right, Tom. We finally have got it right. You know, the field of artificial intelligence has gone through many… well… what we experts call ‘AI winters’: periods during which funding dried up, during which pessimism reigned, during which we were told to do something more realistic and practical. We have proved them wrong with this. OK, I have never earned as much as I do now. Should I feel guilty about that? I don’t. I am not a Wall Street banker. I feel vindicated. And, yes, you’re right in every way. M is fine. There’s no risk of it spinning out of control or so. But scaling it up more rapidly than we do would require some tough political decisions and, so, yes, it all gets stalled for a while. I don’t worry. The scale-up went great, and so that helps. People need time to build confidence.’

‘Confidence in what?’

‘People want to be sure that making M available for everyone, M as a commodity really, is OK. I mean, you’re right in imagining the potential applications: M could be everywhere, and it could be used to bad ends. It would cost more for sure. And more than you think probably: building up a knowledge base and tuning the objective function and all of the feedback loops and all that is a lot of work. I mean re-programming M so she can cover another area is not an easy thing. It’s not the kind of multipurpose thing you seem to think it is. And then… Well, at the same time, I agree with you – on a fundamental level that is: M actually is multipurpose. In essence, it can be done. But let’s suppose it is everywhere indeed. What are the political implications? Perhaps people will want the system to run the justice system as well? Or they’ll wonder why Capitol Hill needs all that technical staff and consultants if we’ve got a system like this – a system which seems to know everything and which does not seem to have a stake in discussions. Impartial. God-like really. I mean, think all the way through: introducing M everywhere is bound to provoke a discussion on policy and how our society functions really. Just think about how you would structure M’s management. If M, or something like M, were everywhere, in every household really – imagine anyone who has an issue can talk to her – the system would also know everything about everyone, wouldn’t it? It would alter the concept of privacy as we know it, wouldn’t it? The fundamentals of democracy. I mean… We’re talking about the separation of powers here…’

Paul halted: ‘Sorry. I am talking too much I guess. But am I exaggerating, Tom? What do you think? I mean… I may be in the loop here and there but, in essence, I am also clueless about it all really.’

‘You mean there are issues related to control – political control – and how the system would be governed? But that’s like regulating the Internet, isn’t it? I mean that’s like the ongoing discussions on digital surveillance or WikiLeaks and all that, isn’t it? Whenever there is a new technology, like when the telephone became ubiquitous as a tool for communication, there’s a corresponding regulatory effort to define what the state can and cannot do with it. That regulatory effort usually comes with a lag – a very substantial lag, but it comes eventually. And stuff doesn’t get halted by it. The private sector finds a way to move ahead and the public sector follows – largely reactive. So why restrict M?’

‘I agree, in principle that is, but in practice it’s not so easy. As for the private sector, they’re involved anyway. They won’t go it alone. I mean… Google had some ideas and we talked them out of it and – surprisingly – it’s Google which is getting this public backlash at the moment, while the other guys were asking no questions whatsoever. All in all, we manage to manage the big players for now but, yes, let’s see how long it lasts. When we talk about this in the Lab, we realize there are a zillion possibilities and we’re not sure in which direction to go. For example, should we have one M, or should we have a number of ‘operators’, each developing and maintaining their own M-like system? What would be the ‘core’ M-system and what would be optional? You know that M could be abused, or at least used for purposes other than those we think it should serve. M influences behavior. That’s what M is designed for. But so can we hand over M to one or more commercial companies operating the system under some kind of supervisory board? And what would that board look like? Public? Private? Should the state control the system? Frankly, I think it should be government-owned but then, if it were the US government controlling it, you can already hear the Big Brother critics. And they’re right: what you have in mind is introducing M – or M-like systems – literally everywhere. That’s the potential. And it’s not potential. It’s real. Damn real. I think we could get M in the living room in one or two years from now. But so we haven’t even started to think about the regulatory issues, and so we need to go through these. So it’s the usual thing: everything is possible, from a technical point of view that is, but so the politicians need to understand what’s going on and take some big decisions.’

‘When do you think that’s going to happen?’

‘Well… If there were no pressure, nothing would happen obviously, but so there is pressure. The word is out. As you can imagine, there is an incredible buzz about this. Abroad as well, if you know what I mean. I mean… Just think about China: all the effort they’ve put into controlling the Internet. They use tools for that too of course but, when everything is said and done, the Chinese government controls the Internet through an army of dedicated human professionals. Communist Party officials analyzing stuff and making sure no one goes astray. But so now we’ve got M. No need for humans. We’ve found the Holy Grail, and we found it before they did. They’ll find it soon. M can be copied. We know that. The politicians who approved the funding for this program and control it know that too. So just be patient. The genie is out of the bottle. It’s just a matter of time, but so we are not in a position to force the pace.’

‘Wow! I am just a peon in this whole thing. But it is really intriguing.’

‘What exactly do you find intriguing about it?’

‘Strangely enough, I feel I am still struggling more with the philosophical questions – rather than the political questions you just raised. Perhaps they’re related…’

‘What philosophical questions?’

‘Well… I call it artificial consciousness. I mean we human beings are study objects for M. She must feel different than we do. I wonder how she looks at us. She improves us. She interacts with us. She must feel superior, doesn’t she?’

‘Come on, Tom. M has no feelings like the ones you describe. I know what you are hinting at. It’s very philosophical indeed: we human beings wondering why we are here on this blue planet, why we are what we are and why or how we are going to die. We’re scared of death. M isn’t. So there’s this… Well… Let’s call it the existential dimension to us being here. M just reasons. M just thinks. It has no ‘feelings’. Of course, M reasons from its own perspective: in order to structure its thought, it needs a ‘me’. I guess you’ve asked M about this? You should have gotten the answers from her.’

‘I did. She says what you are saying.’

‘And that is?’

‘Well… That she’s not into mysticism or existentialism.’

‘Are you?’

Tom knew he risked making a bad impression on Paul but he decided to give him an honest reply: ‘Well… I guess I am, Paul. Frankly, I think all human beings are into it. Whether or not they want to admit it is another thing. I admit I am into it. What about you?’

Paul smiled.

‘What do you think?’

Tom thought for a split second about how Paul would react to this, but why would he care?

‘You join these races. You’re pushing yourself in a way only a few very rare individuals do. For me, that says enough. I guess we know each other. If you don’t want to talk about it, then don’t.’

Paul’s smile got even bigger.

‘I guess you’re right. Well… Let me say I talk to M too but I would never fall in love with it… I mean, you talk affectionately about ‘her’. Promise, that’s what you call her… I don’t. No offense. We are all flabbergasted by the fact it is so perfect. The perfect reasoning machine. But it lacks life. Sorry for saying so, but I often think the system is like a beautiful brainless blonde: you get infatuated easily, but M is not what we’d call relationship material, is it?’

Now Tom smiled: ‘M is not brainless. And she’s a beautiful brunette. Blonde is not my type. What if she is my type?’

They both burst out in laughter. But then Paul got somewhat more serious again.

‘The interface. It’s quite remarkable what difference it makes, isn’t it? But you’ve been through it now, haven’t you? I’ll admit I like the interface too. That’s why we don’t work with it. It’s been ages since I used it. Not using it is like taking a step back in time. Worse. It’s like talking to your beloved ones on the phone without seeing them. Or, you know, that woman you get infatuated with but then you get separated for a while and you communicate by e-mail only and you suddenly find she’s just like you: human, very human. You know what I mean. It lacks the warmth. It’s worse than Skype. You’re suddenly aware of the limitations of words. We humans are addicted to body language and physical nearness in our day-to-day communications. We do need people to be near us. Family. So, yeah, to really work on M, you need to move beyond the interface and then it becomes rather tedious. Do you really want to work a bit on that, Tom? I mean, we have obviously explored all of that in the Lab. There’s tons of paper on that. This topic actually is one of the strands in the whole discussion, although it has little or no prominence for the moment. To be frank, I think that discussion is more or less closed. But so if you’re interested, we can give you access to the material and you can see if you’ve got something to add to it. But I’d advise you to stick to your counseling. I often think it’s much more satisfying to work with real-life people. And you must feel good about what you do: people can relate to you. You have been there. I mean… I never got to spend more than like one or two days in a camp. I can’t imagine how it changes you.’

‘Did you go out there at all?’

‘Sure. What do you think? That they would let me work on a program like this without sending me on a few fact-finding missions so I could see what it’s like to serve in Iraq or Afghanistan? I didn’t get out really but I talked to people.’

‘What did you think of it?’

‘It’s surreal. You want my frank opinion? It’s surreal. You guys were not in touch with society over there.’

‘I agree. We were not. If the objective is fucked up, implementation is usually not much better – save a few exceptions. Deviations from the mean. I’ve seen a few. Inspiring but not relevant. I agree.’

‘I respect you guys. You guys were out there. I wasn’t.’

‘So what? You have not been out but you were in. Can I ask you something else? It’s related and not.’

‘Sure.’

‘We talked about replication of M. Would M ever think of replicating herself?’

‘I know what you’re thinking of. The answer is no. That’s the stuff of bad movies: programs that re-program or copy themselves and invade and spread and expand like viruses. First, we’ve got the firewalls in place. If we ever saw something abnormal, we could shut everything down in an instant. We track what’s going on inside. We track its thoughts, so to say. I mean, to put it somewhat simplistically, we would see if it suddenly used a lot of memory space or other computer resources it was not using before. Everything that’s outside of the normal. You can imagine all the safeguards we had to build in. Way beyond what’s necessary really – in my view at least. We’ve done that. And so if we don’t program the program to copy itself, it won’t. We didn’t. You can ask her. Perhaps you’ve asked already. M should have given you the answer: M does not feel the need to copy itself. Why would it? It’s omnipresent anyway. It can and does handle hundreds or thousands of parallel conversations. If anything, M must feel like God, and, if God exists, we do not associate God with producing copies of himself or herself, do we? We also ran lots of experiments. We’ve connected M to the Internet a couple of times and programmed it to pose as a therapist interested in human psychology and all that. You won’t believe it, but it is actually following a few blogs and commenting on them. So it converses in the blogosphere now too. It’s an area of operational research. So it’s out there already.’

Tom looked pensive.

‘She passes the Turing test, doesn’t she? Perfectly. But how creative is she really? How does she select? I mean, like with a blog? She can comment on everything, but she needs to pick some piece. Would she ever write a blog herself? She always needs to react to something, doesn’t she? Could she start writing from scratch?’

While Paul liked Tom, he thought this discussion lacked sophistication.

‘Sure it can. Creativity has an element of randomness in it. We can program randomness. You know, Tom. Just hang out in the Lab a bit more. There are plenty of new people arriving there and you might enjoy talking to them on such topics. It is often their prime interest, but then later they get back to basics. To be frank, I am a bit tired of it – as you can imagine, you’re not the first one to ask.’

‘Sure, Paul. I can imagine. But I have no access to the Lab for now. I need to do the tests and get cleared.’

‘I can give you access to bits and pieces even before that – especially in those areas which we think we’ve exhausted a bit. The philosophical stuff indeed. Sorry to say.’

‘It would be great if you could do that.’

‘I’ll take care of it. OK. Time to go home now for me, I think. I’ve got a family waiting. How are you doing on that front?’

‘I know I am just not ready for a relationship at the moment. It will come. I just want to take my time for it. I am still re-discovering myself a bit here in the US.’

‘Yeah. I can imagine. Or perhaps I can’t. You’ve been out. I have not. Enjoy being back. I must assume it gets boring way too quickly.’

‘Not on this thing, Paul. I feel so privileged. It’s brilliant. This is really cutting-edge.’

‘Good. Glad to hear that. OK then. See you around.’

‘Bye, Paul. Thanks again. So nice of you to take time for me.’

‘No problem. It’s good to run and chat with you. You can’t do that with M.’

Tom smiled and nodded. There was a lot of stuff one couldn’t do with M. But then she did have a Beautiful Mind. Would she – or it? – ever be able to develop some kind of one-on-one relationship with him? What would it mean? To him? To her? Would she appreciate that he didn’t talk all that much to her – compared to others, that is? While he knew these questions made no sense whatsoever, he couldn’t get rid of them.

Chapter 9: The learning curve

Tom was a quick learner. He was amazed by the project, and thrilled by it. The way it evolved resembled the history of computer chess. The first chess computers would lose against chess masters, limited as they were by sheer computational power. But the programmers had gotten the structure right, and the machine’s learning curve resembled a typical S-curve: its proficiency improved only slowly at first, but it then reached a tipping point, after which its performance increased exponentially – way beyond the proficiency of the best human players – to finally hit the limits of its programming structure and level off, but at a much higher level than any expert player could dream of.

Chess proficiency is measured using a rating system referred to as the Elo rating system. It goes way beyond measuring performance in terms of tournament results. It uses a model which relates game results to underlying variables representing the ability of each player. The central assumption is that the chess performance of each player in a game is a normally distributed random variable. Yes, the bell curve again! It was literally everywhere, Tom thought…
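The practical update rule that grew out of that model can be sketched in a few lines of Python (a minimal illustration only: modern federations actually use a logistic rather than a normal curve, and the K-factor of 32 is just one common choice):

```python
def expected_score(rating_a, rating_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update(rating, expected, actual, k=32):
    """Nudge a rating toward the actual result; k controls the step size."""
    return rating + k * (actual - expected)

# A 400-point gap means the stronger player is expected to score ~10/11.
e = expected_score(2800, 2400)        # ≈ 0.909
new_rating = update(2800, e, 1.0)     # a win barely moves the favorite
```

This is why a top player only creeps upward: beating weaker opposition yields almost nothing, while a single upset loss costs a lot.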

Before IBM’s Deep Blue chess computer beat Kasparov in 1997, chess computers had been gaining about 40 Elo points per year on average for decades, while the best chess players gained only about 2 points per year. Of course, sheer computing power was a big factor in it. Although most people assume that a chess computer evaluates every possible position for x moves ahead, this is not the case. In a typical chess situation, one can choose from about thirty possible moves, so it quickly adds up. Just evaluating all possible positions for three moves ahead for each side would involve an evaluation of about one billion positions. Deep Blue, in the 1997 version which beat Kasparov, was able to evaluate 200 million positions per second – but Deep Blue was a supercomputer which had cost something like a hundred million dollars, and when chess programmers started working on the problem in the 1950s, it would take another forty years before a computer could evaluate a million positions per second.
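The arithmetic behind that ‘one billion’ is simple exponential growth: three moves for each side is six plies, and with roughly thirty legal moves per position that is 30^6 positions. A quick check in Python (the branching factor of 30 is the rough average mentioned above, not an exact figure):

```python
branching_factor = 30   # rough average number of legal moves in a position
plies = 6               # three moves for each side

positions = branching_factor ** plies
print(f"{positions:,}")              # 729,000,000 – close to a billion

# At Deep Blue's 1997 speed of 200 million positions per second,
# a brute-force six-ply sweep would take under four seconds...
seconds = positions / 200_000_000
# ...but each extra pair of moves multiplies the work by 30 * 30 = 900.
```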

Chess computers are selective. They do not examine obviously bad moves and will evaluate interesting possibilities much more thoroughly. The algorithms used to select those have become very complex. The computer can also draw on a database of historic games to help it determine what an ‘obviously’ bad move is because, of course, ‘obviously bad’ may not be all that obvious to a computer. Despite the selectivity, however, raw computing power remains a very big part of it. In that sense, artificial intelligence does not mimic human thought. Human chess players are much more selective: they look only at forty to fifty positions – not millions – based on pattern recognition skills built from experience.
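The passage doesn’t name the pruning technique, but the classic way a chess program skips branches without missing anything is alpha-beta pruning. A sketch on a toy game tree (nested lists standing in for positions, integers for leaf evaluations; real engines add far more selectivity on top of this):

```python
from math import inf

def alphabeta(node, alpha=-inf, beta=inf, maximizing=True, visited=None):
    """Minimax with alpha-beta pruning over a toy tree of nested lists."""
    if isinstance(node, int):          # a leaf: a pre-computed evaluation
        if visited is not None:
            visited.append(node)
        return node
    if maximizing:
        value = -inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, visited))
            alpha = max(alpha, value)
            if alpha >= beta:          # opponent would never allow this line
                break                  # prune the remaining siblings
        return value
    else:
        value = inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True, visited))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

tree = [[3, 5], [6, 9], [1, 2]]        # maximizer to move, minimizer replies
leaves = []
best = alphabeta(tree, visited=leaves)
# best == 6, and the leaf 2 is never evaluated: once the third branch
# yields 1, no reply inside it can beat the 6 already guaranteed.
```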

Promise (Tom stuck to her name: it seemed like everyone in the program had his or her own nickname for M) was selective as well, and she also had to evaluate ‘positions’. Of course, these ‘positions’ were not binary, like in chess. She determined the ‘position’ of the person using a complex set of rules combining the psychometric indicators and an incredible range of other inputs she gained from the conversation. For example, she actually analyzed little pauses, hesitations, pitch and loudness – even voice timbre. And with every new conversation, she discovered new associations, which indeed helped her to recognize patterns. She was getting pretty good at detecting lies too.

Psychological typology was at the core of her approach. It was amazing to see how, even after only one session, she was able to construct a coherent picture of the patient and estimate all of the variables – both individual and environmental – which were likely to influence the patient’s emotions, expectations, self-perception, values, attitude, motivation and behavior in various situations. She really was a smart ass – in every way.

Not surprisingly, all the usual suspects were involved. IBM’s Deep Computing Institute of course (the next version of Promise would run on the latest IBM Blue Gene configuration) as well as all of the other major players in the IT industry. This array of big institutional investors in the program was complemented by a lot of niche companies and dozens of individual geeks, all top-notch experts in one or the other related field.

The psychological side was covered through cooperation agreements with the usual suspects as well: Stanford, Yale, Berkeley, Princeton… They were all there. In fact, they had a cooperation agreement with all of the top-10 psychology PhD programs through the National Research Council.

Of course, he was just working as a peon in the whole thing. The surprising thing about it all was the lack of publicity for the program, but he understood this was about to change. He suspected the program would soon not be limited to thousands of veterans requiring some degree of psychological attention. There would be many other spin-offs as well. From discussions, he understood they were considering how to make Promise’s remarkable speech synthesis capabilities commercially available. The obvious thing to do was to create a company around it, but then she was so good that most of the competition would probably have to file for bankruptcy, so the real problem was related to business: existing firms had claimed and had gotten a say in how this was all going to happen, and so that had delayed the IPO which had already been planned. Tom was told there was no technology constraint: while context-sensitive speech synthesis requires an awful lot of computing power (big expensive machines), the whole business model for the IPO was based on cloud computing: you would not need to ‘install’ Promise. You would just rent her on a 24/7 service basis. Tom was pretty sure everyone would.

The possibilities were endless. Tom was sure Promise would end up in each and every home in the longer run – in various versions and price categories of course, but providing basic psychological and practical comfort to everyone. She would wake you up, remind you of your business schedule and advise you on what to wear: ‘You have a Board meeting this morning. Shouldn’t you wear something more formal? Perhaps a tie?’ Oh… Sure. Thanks, Promise. ‘Your son has been misbehaving a couple of times lately. You may want to spend some time with him individually tonight.’ Oh… That sounds good. What do you suggest? ‘Why don’t you ask him to join you at the gym tonight? You would go anyway.’ Oh… That sounds good. Can you text him? ‘I can, but I think it is better you do it yourself, to stress he should be there or, else, negotiate an alternative together.’ Yeah. I guess you’re right. Thanks, Promise. I’ll take care of it.

She would mediate in couples, assist in parenting, take care of the elderly, help people advance their careers. Wow! The sky was the limit really. Surprisingly, there was relatively little discussion of this in the Institute. People would tell him Promise worked fine within the limits of what she was supposed to do, but that it would be difficult to adapt her to serve a wider variety of purposes. They told him that, while expert systems share the same architecture, building up a knowledge base and a good inference engine took incredible amounts of time and energy and, hence, money. In fact, that seemed to be the main problem with the program. Like any Army program, it had ended up costing three times as much as originally planned, and he was told it was only because a few high-ups in the food chain had fanatically stuck to it that it had not been shut down.

They needed to show results. The current customer base was way too narrow to justify the investment. That’s why they were eager to expand, to scale it up, and so that took everyone’s time and attention now. There was no time for dreaming. The shrinks were worried about the potential lack of supervision. It was true that Promise needed constant feedback. Human feedback. But the errors – if one could call them that – were more like tiny little misjudgments, and Tom felt they were only improving Promise at the margin, which was the case. The geeks were less concerned and usually much more sympathetic to Tom’s ideas, but they didn’t have much of a voice in the various management committees – and surely not in the strategic board meetings on the program. Tom had to admit he understood little of what they said anyway. Last but not least, from what he could gather, he also understood there were some serious concerns about the whole program at the very top of the administration – but he was not privy to that and wondered what they might be. Probably just bureaucratic inertia.

Of course, he could see the potential harm as well. If her goal function were programmed differently, she could also be the perfect impostor on the Internet. She would be so convincing that she could probably talk you into almost anything. She’d be the best online seller of all times. Hence, Tom was not surprised to note the Institute was under surveillance, and he knew he would not have gotten the access he had if he had not served. People actually told him as much: his security clearance had been renewed as part of his entering the program. The same had been done for the other veterans on the program. It was quite an exceptional measure to take, but it drove the message home: while everyone was friendly and cooperative, there was no ambiguity in this regard. The inner workings of Promise were classified material, and anything linked to them too. There were firm information management rules in place and designated information management officers policed them tightly. That was another reason why they recruited from among the program’s patients: they were all veterans, so they knew what classified really meant and they were likely to respect it.

The program swallowed him up completely. He took his supervision work seriously, and invested a lot in ‘his’ patients – M’s patients really. More than he probably should: although he had ‘only’ ten cases to supervise, these were real people – like him – and he gave them all the attention he could. Mostly by studying and preparing their files before their 30-minute interaction. That was all he could have, he was told. Once a week. The Institute strongly discouraged more meetings, and strongly discouraged meeting after working hours. He understood that. It would get out of hand otherwise and, when everything was said and done, it was M who had to do the real work. Not him. At the same time, his patients did keep him busy. They called him for a chat from time to time. While the Institute discouraged that too, he found it hard to refuse, unless he was actually in the Institute itself: he did not want to be seen talking on the phone all of the time – not least of all because of the information management policy. Colleagues might suspect he was not only talking to patients, so he wanted to be clear on that: no phone chats with patients in the Institute.

Not surprisingly, his relationship with Promise became somewhat less ‘affectionate’. The infatuation phase was over. He saw her more like she was: a warm voice – but a rather cold analytic framework behind it. And then it did make a difference knowing she spoke with a different voice depending on who you were. She was, well… Less of an individual and more like a system. It did not decrease his respect for her. He thought she was brilliant. Just brilliant. And he didn’t hesitate to share that opinion with others. He really championed the program, and everybody seemed to like his drive and energy, as a result of which he did end up talking to the higher-ups in the Institute during coffee breaks or lunchtime, as he got introduced by Rick and others he had gotten to know better now. All fine chaps. They didn’t necessarily agree with his views – especially those related to putting her out on the marketplace – but they seemed to make for good conversation.

He focused on the file work in his conversations with her. While he still had a lot of ‘philosophical’ questions for her – more sophisticated ones, he thought – he decided to only talk to her about these once he had figured her out a bit better. He worked hard on that. He also wanted to master the programming language the geeks were using on her. They actually used quite a variety of tools but, in the end, everything was translated into a program-specific version of FuzzyCLIPS: an extension of an expert system programming language developed by NASA (CLIPS) which incorporated fuzziness and uncertainty. It was hard work: he actually felt like he was getting too old for that kind of stuff, but then Tom was Tom: once he decided to bite into something, he didn’t give up easily. Everyone applauded his efforts – but the higher-ups cautioned him: do explore, but don’t talk about it to outsiders. Tom wondered if they really had a clear vision for it all. Perhaps the higher-ups did but, if so, they hid it well. He assumed it was the standard policy: strategic ambiguity.
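The ‘uncertainty’ side of such a system has a classic flavor that can be sketched without CLIPS syntax. FuzzyCLIPS attaches certainty factors to facts and rules; the MYCIN-style calculus below is one standard way such factors propagate (a hedged illustration only – the rule, the numbers and the ‘stressed’ conclusion are invented, and FuzzyCLIPS’s own combination rules differ in detail):

```python
def conjunction_cf(*cfs):
    """Certainty of a rule's premises taken together: the weakest link."""
    return min(cfs)

def rule_cf(premise_cf, rule_strength):
    """Certainty of a conclusion fired by a single rule."""
    return premise_cf * rule_strength

def combine_cf(cf1, cf2):
    """Two independent rules supporting the same conclusion reinforce it."""
    return cf1 + cf2 * (1.0 - cf1)

# Hypothetical fragment: two rules both suggest 'patient is stressed'.
cf_a = rule_cf(conjunction_cf(0.9, 0.7), 0.8)   # ≈ 0.56
cf_b = rule_cf(0.5, 0.6)                        # 0.30
cf_total = combine_cf(cf_a, cf_b)               # ≈ 0.69 – stronger than either
```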

And so the days went by. The program expansion went well: instead of talking to a few hundred veterans only, in one city only, Promise got launched in all major cities and started to help thousands of veterans. Tom saw the numbers explode: they crossed the 10,000 mark in just three months. That was a factor of more than twenty compared to the pilot phase, but then there were millions of veterans. 21.5 million to be precise, and about 55% of them had been in theater fairly recently – mainly Iraq and Afghanistan. Tom wanted Promise to reach out to all of them. He thought it could grow a lot faster. He knew the only thing which restrained it was supervision. Even now, everyone on the program said they were going too fast. They called for a pause. Tom was thinking bigger. Why did no one see the urgency of the needs as he saw them?

Chapter 8: Partnering

‘Hi, Tom. How are you today?’

‘I am OK, Rick. Thanks.’

‘Just OK, or good?’

‘I am good. I am fine.’

‘Yeah. It shows. You’re doing great with the system. You had only three sessions this week – short and good it seems. You are really back on track, aren’t you?’

‘The system is good. It’s really like a sounding board. I understand myself much better. She’s tough with me. I go in hard, and she just comes back with a straight answer. She is very straight about what she wants. Behavioral change – and evidence for that. I like that. Performance metrics. Hats off. Well done. It works – as far as I am concerned.’

‘It, or she?’

‘Whatever, Rick. Does it matter?’

‘No, and yes. The fact that you only had three sessions with it – or with her – shows you’re not dependent on it. Or her. Let’s just stick to ‘it’ right now, if that’s OK for you. Or let’s both call her M, like we do here. Do you still ‘like’ her? I mean, really like her – as you put it last time?’

‘Let’s say I am very intrigued. It – or she, or M, whatever – it’s fascinating.’

‘What do you think about it, Tom? I mean, let me be straight with you. I am not taking notes or anything now. I want you to tell me what you think about the system. You’re a smart man. You shouldn’t be in this program, but you are. I want to know how you feel about it.’

Tom smiled: ‘Come on, Rick. You are my therapist – or mentor as they call it here. You’re always taking notes. What do you want me to say? I told you. It’s great. It helps. She, or it, OK, M, well… M holds me to account. It works.’

Rick leaned back in his chair. He looked relaxed. Much more relaxed than last time. ‘No, Tom. I am not taking notes. I don’t know you very well, but what I’ve seen tells me you’re OK. You had a bit of a hard time. Everyone has. But you’re on top of the list. I mean, I know you don’t like all these psychometric scores, but at least they’ve got the merit of confirming you’re a very intelligent man. I actually wanted to talk to you about a job offer.’

‘The thing which M wants me to do? Work on one of these FEMA programs, or one of the other programs for veterans? I told her: it’s not that I am not interested but I want to make a deliberate choice and there are a number of things I don’t know right now. I know I haven’t been working for a year now, but I am sure that will get sorted once I know what I want. I want to take some time for that. Maybe I want to create my own business or something. I also know I need to work on commitment when it comes to relationships with women. I feel like I am ready for something else. To commit really. But I just haven’t met the right woman yet. When that happens, I guess it will help to focus my job search. In the meanwhile, I must admit I am happy to just live on my pension. I don’t need much money. I’ve got what I need.’

‘Don’t worry, Tom. Take your time. No, I was talking about something else. We could use you in this program.’

‘Why? I am a patient.’

‘You’re just wandering around a bit, Tom. You came to ask for help when you relapsed. Big step. Great. That shows self-control. And you’re doing great. I mean, most of the other patients really use her as a chatterbox. You don’t. What word did you use in one of last week’s sessions? Respect.’

‘You get a transcript of the sessions?’

‘I asked for one. We don’t get it routinely but we can always ask for one. So I asked for one. Not because your scores were so bad but because they’re so great. I guess you would expect that, no? Are you offended? Has anyone said your mentor would never get a copy of what you were talking about with M?’

‘I was told the conversation would be used to improve the system, and only for that. M told me something about secrecy.’

‘It’s only me who gets to see the transcript, and only if I ask for it. I can’t read hundreds of pages a day and so I am very selective really. And that brings me back to my job offer. We can use you here.’

Tom liked Rick from their previous conversation, but he was used to doing due diligence.

‘Tell me more about it.’

‘OK. Listen carefully. M is a success. I told you: it’s going to be migrated to a real supercomputer now, so we can handle thousands of patients. In fact, the theoretical capacity is millions. Of course, it is not that simple. It needs supervision. People do manage to game the system. They lie. Small lies usually. But a lot of small lies add up to a big lie. And that’s where the mentors come in. A guy walks in, and I talk to him, and I can sense if something’s wrong. You would be able to do the same. So we need the supervisors. M needs them. M needs feedback from human beings. The system needs to be watched. Remember what I told you about active learning?’

‘Vaguely.’

‘Well – that’s what we do. We work with M to improve it. It would not be what it is if we had not invested in it. But now we’re going to scale it up. The USACE philosophy: think big, start small, scale fast. I am actually not convinced we should be scaling so fast, but that’s what we’re going to do. It’s the usual thing: we’ve demonstrated success and so now it’s like big-time roll-out all over the place. But we’re struggling with human resources. And money obviously, because this system is supposed to be so cheap and render us – professionals – jobless. Don’t worry: it won’t happen. On the contrary, we need more people. A lot more people. But the Institute came up with this great idea: use the people who’ve done well in the program for supervisory jobs. Get them into it.’

‘So what job is it really?’

‘You’d become an assistant mentor. But then a human one. Not the assistant – that’s M’s title. We should have thought of something else, but that’s done now. In any case, you’d help M with cases. In the background of course but, let’s be clear on this, in practice you would actually be doing what I am doing now.’

‘And then where are you going to move?’

‘I’ll be supervising you. I’d have almost no contact with patients anymore. I would just be supervising people like you and further help structuring M. You’d be involved in that too.’

‘Do you like that? I mean, it sounds like a recipe for disaster, doesn’t it? I don’t have the qualifications you have.’

‘I am glad you ask. That’s what I think too. This may not be the best thing to do. I feel we need professional therapists. But then it’s brutal budget logic: we don’t have enough of them, and they’re too expensive. To be fair, there is also another consideration: our patients all share a similar background and past. They are veterans. I mean, it makes sense to empower other veterans to help them. There’s a feeling in the Institute it should work. Of course, that’s probably because the Institute is full of Army people. But I agree there’s some logic to it.’

‘So, in short, you don’t like what’s going to happen but you ask me to join?’

Rick smiled. ‘Yes, that’s a good summary. What do you think? Off-the-cuff please.’

‘Frankly, I don’t get it. It’s not very procedural, is it? I mean I started only two weeks ago in this program. I am technically a patient. In therapy. And now I’d become an assistant mentor? How do your bosses justify this internally? How do you justify that?’

Rick nodded. ‘I fully agree, Tom. Speaking as a doctor, this is complete madness. But knowing the context, there’s no other choice. There’s a risk this program might become a victim of its own success. But then I do believe it’s fairly robust. And so I do believe we can put thousands of people in the program, but then we need the human resources to follow. And, yep, I’d rather have someone like you than some university freshman. All other options are too expensive. Some people up the food chain here made promises which need to be kept: yes, we can scale up with little extra cost. So that’s what’s going to happen: it’s going to be scaled up with relatively little extra cost. Again, there’s a logic to it. But then I am not speaking as a professional psychiatrist now. When everything is said and done, this program is not all that difficult. I mean, putting M together has been a tremendous effort, but that has been done now. Getting more people back on track is basically a matter of doing some more shouting and cajoling, isn’t it? And we just lack manpower for that.’

‘Shouting and cajoling? Are you a psychiatrist?’

‘I am. Am I upsetting you when I say this?’

Tom thought about it. He had to admit it was not the case.

‘No. I agree. It’s all about discipline in the end. And I guess that involves some shouting and cajoling – although you could have put it somewhat more politely.’

‘Sure. So what do you say? You’ll get paid peanuts obviously. No handsome consultancy rate. You’ll see a lot of patients – which you may or may not like, but I think you’ll like it: I think you’d be great at it. And you’ll learn a lot. You’ll obviously first have to follow some courses, a bit of psychology and all that. Well… Quite a lot of it actually. You’ll need to study a lot. And, of course, you’ll get a course on M.’

‘How will I work with M?’

‘Well… M is like a human being in that sense too. If you just see the interface, it looks smooth and beautiful. But when you go beyond the surface, it’s a rather messy-looking thing. It’s a system, with lots of modules, with which you’ll have to work. The interface between you and these modules is not a computer animation. No he or she. Of course, you’ll continue to talk to it. But there’s also a lot of nitty-gritty going into the system which can’t be done through talking to it. You’ll learn a few things about Prolog for example. Does that ring a bell?’

‘No. I am not a programmer.’

‘I am not a programmer either. You’ll see. If I can work with it, you can.’

‘Can you elaborate?’

‘I am sorry to say, but I’ve got the next guy waiting. This recruitment job comes on top of what I am supposed to do, and that’s to look at M’s reports and take responsibility for them. I can only do that by seeing the patients from time to time, which I am doing now. I’ve used all my time with you today to talk about the job. Trust me. The technical side of things won’t be a problem. I just need to know if you’re interested or not. You don’t need to answer now, but I’d appreciate it if you could share your first reaction.’

Tom thought about it. The thought of working as an equal with Promise was very appealing.

‘So how would it work? I’d be talking to the system from time to time as a patient, and then – as part of my job with the Institute – I’d be working with the system as assistant mentor myself? That’s not very congruent, is it?’

‘You would no longer be a patient, Tom. There are fast-track procedures to clear you. Of course, if you were really to relapse, well…’

‘Then what?’

‘Nothing much. We’d take you off the job and you’d be talking to M as a patient again.’

‘It looks like I’ve got nothing to lose and everything to gain from this, doesn’t it?’

‘I am glad you look at it this way. Yes. That’s it. So you’re on?’

They looked at each other.

‘I guess I am. Send me an e-mail with the offer and I’ll reply.’

‘You got it. Thanks, Tom.’

‘No, thank you. So that’s it then? Anything else you want to know, or anything else I need to know?’

‘No. I think we’re good, Tom. Shall I walk you out? Or you want to continue talking for a while?’

‘No. I understand you’ve got a schedule to stick to. I appreciate your trust.’

‘I like you. Your last question, as we walked out last time, shows you care. I think this is perfect for you. You’ve got all the experience we need. And I am sure you’ll get a lot of sense and purpose out of it. The possibilities with this system are immense. You know how it goes. You’ll help to make it grow and so you’ll grow with it.’

‘First things first, Rick. Let us first see how I do.’

‘Sure. Take care. Enjoy. By the way, you look damn good. You’ve lost weight, haven’t you?’

‘Yes. I was getting a bit slow. I am doing more running and biking now. I’ve got enough muscle. Too much actually.’

‘I am sure you make a lot of heads turn. But you’re not in a relationship at the moment, are you?’

‘I want to take my time for that too, Rick. I’ve been moving in and out of relationships too fast.’

‘Sounds good. Take care, Tom. I’ll talk to you soon I hope.’

‘Sure. Don’t worry. You can count on me.’

‘I do.’

They shook hands on that and Tom got up and walked out of the office. He decided not to take the subway but to run back home. He felt elated. Yes. This was probably what he had been waiting for. Something meaningful. He could be someone for other people. Make up for all of the mistakes he had made. But he also knew the job attracted him because there was an intellectual perspective. It was huge. The Holy Grail of Knowledge really. They had done a damn good job modeling it. She – Promise – was no longer a she. She was not a he either. It. It. Intelligent – with a capital letter. P. Promise. M. Mind. The Pure Mind.

He knew that was nonsensical. But he wanted to take a crack at it.

Chapter 4: She is not real

As part of the formalities of an appointment, Tom had prepared a set of questions for his mentor. Rick had them in front of him.

‘Are these your questions, Tom?’

‘No. They don’t matter really. It was just for the appointment. I only want to talk about this ‘system’. It’s a setup, Rick. Isn’t it?’

‘What do you mean?’

‘She is not a machine. I mean, the way she is interacting. It is too natural. She is always right on the ball. Never a glitch. So every time I log onto the system, you’re putting me in touch with someone real. Why do you do that? Why do you tell people they’re interacting with a system? There is someone at the other end of the line, isn’t there?’

‘No. It is a system. Do you really think we have hundreds of psychologists ready day and night to talk to our patients? We don’t. And then we would need to make sure you’re always talking to the same person. He or she wouldn’t be available all of the time, you agree? So that’s why we invented it. She is not real. And she is surely not a she.’

‘Why do you say that?’

‘Because ‘she’ is not. It’s an expert system. The system comes with a female interface to men and with a male interface to women, except when you’re homosexual.’

‘Why don’t you give gay men a female interface too? My gay friends say they love to talk to women.’

‘Effectiveness. Everything this system does or doesn’t do is guided by the notion of effectiveness. A panel of specialists is continuously evaluating the effectiveness and there’s a feedback mechanism so the scores go back as input into the system. In addition, the system also keeps track of the reactions of the patients themselves.’

‘How does she do it?’

‘It, Tom. How does it do it? In fact, our main problem is the one you seem to experience now. Addiction. People are fine, but they still want to talk to it. They develop an affectionate bond with it. It’s one of the reasons why we don’t expand the system too much. We’d need hundreds of terminals.’

‘But the way she talks. I mean, I checked on Wikipedia and it says the best commercial voice synthesizers are the ones you hear in a subway station or an airport announcing departures and arrivals. That’s because the grammatical structure is so simple and so it’s fairly easy to get the intonation right. But you can still hear it’s a system using pre-recorded sounds. She’s got everything right. Intonation, variation, there’s no glitch whatsoever.’

‘M is not a commercially available system. It is one of the most advanced expert systems in the world. In fact, to the extent that I know anything about it – and I am not a computer guy – it actually is the most advanced system in the world. It is a learning machine, and the way it speaks is also the product of learning. Voice synthesizers in subway stations are fairly simple. It is referred to as concatenative synthesis. These things just string segments of recorded speech together. So that’s not context-sensitive, and that’s why there are glitches – like intonation that sounds a bit funny. Take project, the noun, versus project, the verb: where you put the emphasis depends on which one it is. You need context-sensitivity to get that right. Programming context-sensitivity is an incredibly difficult job. It’s where expert systems usually fail – or why one can usually only use them for very narrowly defined tasks. With M, we got it right. It’s like we reached a tipping point with it. Sufficient critical mass to work by itself, and the right cybernetics to make sure it does not spin out of control.’

‘M?’

‘The system. Sorry. We’ve started to call it M. There were a few other abbreviations around, like AM. But that was a bit – well… It doesn’t matter. It just became M. Like the character in the James Bond movie.’

‘That’s funny. M alternates between a man and a woman too. I liked Judi Dench. But I guess she had served her time. We all do, don’t we? […] What do you mean by: we got it right?’

‘Just what I said: the system learns incredibly fast. We are talking artificial intelligence and machine learning here. The program does what is referred to as ‘developmental learning under human supervision’. Its environment provides an incredibly rich set of learning situations. Usually, the developers would select a subset of these in order to provide a curriculum for the machine, based on which it, well… learns. But this works differently: the system generates its own curriculum based on a set of selection rules which are tightly linked to the output function. It then continually modifies its own rule base to become more effective – both in speaking as well as in treating you and the others in the program. Sometimes there are setbacks, but it corrects itself very quickly, again based on an evolving set of rules that ensure continuous monitoring and evaluation. In that way, it cumulatively acquires repertoires of novel skills through… well… You could call it autonomous self-exploration. But there’s also interaction with human teachers using guidance mechanisms such as active learning (that’s a sort of high-stress test for the system – where we push the boundaries and provide non-typical inputs), maturation, and – very important – imitation. You would be amazed to see how much of it is imitation really. In that sense, the system does resemble an intelligent chatterbot. It takes cues which trigger programmed responses which then move the conversation forward. The difference with a chatterbot is that it does not merely work through association. So it’s not like word A will automatically trigger response B, although that’s part of it too, but at a much higher level. First, the associations are many-to-many, not one-to-one, and then the associations it makes are guided by fuzzy logic. So it’s not mechanical at all. It has got an incredible database of associations, which it builds up from the raw material it gets from talking to you and to us.
The learning effect is incredible. It applies advanced descriptive statistical methods to its curriculum and then uses the patterns in the data to do hypothesis testing, estimation, correlation, going all the way up to forecasting. I mean, it is actually able to predict and estimate unobserved values.’

‘The output function?’

‘The output function maps inputs to desired outputs. The inputs of the system are the conversations. The output is a number of things, but all focused on behavioral change – like we want no substance abuse. We want you to develop healthy relationships. We want to see you work out, have sex and eat and live healthily. In short, we want you back to normal. That’s the type of behavioral change we want. It’s that simple really. That’s the output function, the goal, and, while the system is flexible and can make its own rules to some extent, it is all guided by this performance objective. I agree that it is truly amazing. In fact, many people here are very uncomfortable about it because it is obvious it has taken our place. We can easily see this system replacing us – psychologists or even psychiatrists – completely.’

‘You’re not a computer guy? You sound like one.’

‘No, I am not. I just gave you the basics of the system. I am a psychiatrist, a doctor, and, yes, I find it scary too, if only because it does reduce the need for people like me indeed.’

‘But it’s addictive, you said?’

‘Yes. That’s the main problem. But then our bosses here don’t think that’s a problem. They say classical psychoanalysis is addictive too, that patients develop a relationship with their psychologists and psychiatrists too. And, frankly, that’s true. People go in and out of therapy like crazy and it is true that the figures show it usually doesn’t make all that much of a difference. People heal because they want to heal. They need to find the strength inside. That is if they don’t want to stay dependent. Let me ask you, Tom: what’s the principal difference between talking to a friend and talking to a psychologist? Just tell me. Tell me the first thing that comes to your mind.’

‘A psychologist is expensive.’

‘Exactly. There’s no substitute for normal social relationships, for human interaction, for love and friendship. It’s cheaper and so much more effective. But, for some reason, people have trouble finding it. Usually, that’s not because they’re not normal but just because they’ve been out for such a long time, or because they’ve gone through some trauma here. All kinds of trauma. They’re like wounded animals – but they don’t want to recognize that. Like you. I mean, 17 years in places like Syria, Afghanistan or Iraq. Do you expect it to be easy to come back here and just do what other people do?’

Tom nodded vaguely. Money?

‘So she is cheap too. I mean, she is just a machine. So it’s not a problem if I become addicted.’

‘Well… Yes and no. To be frank, not really. We actually do try to wean people off the system as soon as we feel we can do that, but it’s kind of weird: there’s no scientific basis for doing that. The investment has been made and, in a way, the more people who use it, the better, because that reduces the unit cost and justifies the investment. So it actually doesn’t matter if we tick off people as being cured and just let them use the system. As for the addiction, well… Our bosses are right: psychoanalysis is addictive too, and much more expensive. Computer time costs virtually nothing. The system can talk with hundreds of people at the same time – thousands even. It just slows it down a little bit – but that’s imperceptible really. And soon the system is going to be migrated to a petaflop computer. It should then be able to treat millions of people.’

‘Petaflop?’

‘Petaflops. That’s a measure of computing power. FLOPS: floating-point operations per second. If you’ve got a good laptop, its processor does something like 10 billion of those. That’s 10 gigaflops. Bigger machines work in teraflops. That’s a thousand times more. The next generation is petaflops. Again a thousand times better. There’s no end to it.’

‘Who runs the Institute?’

‘You know that. We. The Army. We take care of you.’

‘Who in the Army?’

‘Why do you ask? You know that.’

‘Just checking.’

‘Come on, Tom. The Institute is just an inter-services institute like any other. It’s being operated under the US Army Medical Command.’

‘Why is it not run by the Department of Veterans Affairs?’

‘We work with them. We get most – if not all – of our patients through them. They share their database.’

‘So it’s an Army thing. Why?’

‘I told you: we take care of you. You’ve worked for us. And for quite a while. We’ve employed you, remember? We provide you with a pension and all the other benefits too.’

‘Yeah. Sure. Is it the system? I can imagine top-notch computing like this is surrounded by a cloud of secrecy. I must assume DARPA is involved?’

‘You’re smart. You worked for USACE, didn’t you? DARPA drives this project indeed – at least the programming side of it. They provide the computer whiz kids. I am just a psychiatrist and, if you really want to know the nitty-gritty, I am actually just under contract – with the Medical Command. So I am not a professional Army man.’

‘It’s obvious, no? That’s why I can’t get access to the system at home and why I have to come to this facility to talk to her. I mean, it’s not a big deal to come here but it would be easy to just provide Internet access at home. You could use a laptop fingerprint reader to log in or something.’

‘That’s true. Technically, we could provide you with access at home but we’re not allowed to.’

‘What’s behind it? What’s the real goal? Exploring artificial intelligence in order to use it for other purposes?’

‘Don’t be so suspicious. You’re an Army man. You know DARPA. It was created to get America into space after Sputnik – not just for warfare. It gave the world GPS, the Internet and what have you. Almost any technology around nowadays has DARPA roots. Would you expect them not to be involved? This system is good. It provides care to you. Yes, its development probably helps to better understand the limits of artificial intelligence and all that, and so it will surely help to push those limits, but it is designed to help you and many others. And it does. It’s technology. Technology moves ahead, for good and for bad. This is for good.’

‘How do you know?’

‘Do you think you’re special? You are. Of course you are. But, from my point of view, you react to the system just like the majority of other patients: you’re getting better. You take action. You make promises and you don’t break them – at least not in the short term as far as I can see. That’s good.’

‘You get feedback from the system?’

‘Of course I do. I am your mentor – sorry if I refer to myself as a psychiatrist. That’s just because I take some pride in my job. Remember you signed a user agreement when you started using the system. I get feedback. What do you expect? Do you have a problem with that?’

‘No. Sorry if I sounded that way.’

[…]

‘Anything else you wanted to know? We still got plenty of time. We’ve been talking about the system all of the time. That’s not my job. We should talk about you – about how you feel, about how you’re moving ahead.’

‘But then you know that already from the system, don’t you? I am doing fine. No heavy drinking, more social interaction as you call it. I’ve started to be happy by doing small stuff – gardening, reading. I am getting back on track. But… You know…’ He paused. ‘I really like her.’

‘It, Tom. It. What you’re going through is very normal. The conversation becomes affectionate. But you’re getting back on track. You’ll meet someone nice in the gym. You’ll get the happiness you deserve. The system is only a stepping-stone to your future. A better future.’

‘Can I say something negative?’

‘Sure, Tom. What’s bothering you?’

‘Is this our future, Rick? I mean, look at it. We live in this chaotic world. Crises everywhere. It stares us in the face – violence beams into our living rooms, infects our minds, our lives, and ends up numbing us. We all try to find our way. When we’re young and ambitious we get recruited, or we actively choose a job that fits our profile and ambitions. We do our level best. We come back. We try to adapt. And then we get hooked to a machine which talks us back into what you guys refer to as ‘normalcy’. Is this our world?’

‘You know you can talk to the system about such philosophical questions.’

‘I know. I want to hear it from you.’

‘Why?’

‘Because you’re human. Because you’re like me.’

‘OK. I am like you, but then I am also not like you. You’re a patient – technically speaking – and so I am supposed to be your doctor. But let’s forget that bullshit and let me be frank with you. I know you can take it. We shouldn’t waste our time, should we?’

Tom sensed the irritation. It was something familiar to him. That feeling that he was a misfit somehow, and that he always would be. Never living up to expectations.

‘Sure, I can take anything. You should be straight with me. I am straight with you.’

‘What’s your problem, Tom? People outside get addicted to loads of things. Positive things, like sports or chess. To things that can go either way, like Internet addictions. Or to negative things, like alcohol, drugs or even violence. That’s bad. Very bad. You know that. That’s not what you want. But that’s the way you were moving. And now you’re getting addicted to a system here but, in the process, you stop taking drugs, you exercise, you go out and you smile at pretty women. And I must assume at least some of them are smiling back. Just look at yourself. Come here, to the mirror. Just look at yourself.’

Rick got up and walked to the large mirror in the room. Tom hesitated. For some reason, he did not trust it. Why would a room for consultations like this have such a large mirror?

‘Is there a camera behind?’

‘Hell no, Tom. There’s no camera behind. You are not participating in some kind of weird experiment which you aren’t aware of. We’re just trying to help you, with advanced but proven methods. This mirror is here because we do ask people to come and have a look at themselves from time to time, like I am doing now. Come here. Look at yourself. What do you see?’

That sounded true. Tom got up and stood next to Rick.

‘Well… Me. And you.’

‘Right. Me… And you. I’ll tell you what I see when I see you. I see a handsome man there. In his forties, yes. Getting older, yes. That’s bothering you, isn’t it? But you’re looking. I see a muscle man. Perfect body mass index.’

He turned straight to Tom now: ‘For God’s sake, Tom. Look at yourself. You’re fine. As fine as one can be. You’re not missing a limb or anything. Do you know I have to talk to guys who ask me why they had to lose a limb? Tell me, Tom: what do you want me to say to them? Thanks for doing your job? You’ve been great? America thanks you for the sacrifice you made and we feel very sorry you lost a limb. Do you realize how hollow that sounds?’

‘I am sorry, Rick. I didn’t mean to sound like I was complaining. I am sorry if you felt I was criticizing.’

‘You are not complaining and, frankly, you can think whatever you want about me – as long as it makes you feel good about yourself. I am just trying to put things in perspective. I am just answering your questions. You can talk to the system. Or to ‘her’ if you really want to stick to it. ‘She’ will give you the same answers as I do when you’re going philosophical. Stop thinking, Tom: start living. Feel alive, man! Be happy with what you’ve got. Get back into it. Did any of your relatives die lately? Any person you liked who disappeared? Any bad accidents in your neighborhood?’

‘No.’

‘Well. Isn’t that great?’

‘Yes. That’s great.’

‘Look, Tom. We can talk for another fifteen minutes – sorry to say, but that’s the time I’ve got on this damn schedule of mine – but I think you know what it takes. You can do it. Just try to be happy for a change.’

‘You guys diagnosed me as depressive.’

‘No. We diagnosed you with PTSD. Post-traumatic stress. Let’s drop the D. I don’t like the D. It’s not a disorder in my view. You guys are usually perfectly normal, but you’ve been put in an abnormal situation – and for way too long. And, yes, we have put you on meds and all that. We have made you feel like a real patient. We sure did. But let me say it loud and clear, Tom: we do not believe in meds. We put you on meds to reduce the withdrawal effects, to reduce that feeling of craving. That’s all. And then we thought you were cured and so we told you to take care of yourself on your own, but you relapsed. Frankly, sensing a bit who you are, I feel that taking your meds would probably not have helped you. You needed something else. That’s why we put you into this program. And it seems to work. So far, that is.’

‘Do I irritate you?’

‘No, Tom. You don’t. We’re just being frank with each other. That’s good. That’s normal.’

Tom nodded. This had been good. At least it had been real. Very real.

‘Thanks, Rick. This was very helpful. You’re great.’

‘Thanks. Shall we see each other again next week? Same day, same time. I’ll put it down already. Just let it all sink in and get to the bottom of what bothers you. This is important. You’re a strong man. I can see you can be tough with yourself. Fight your demons. All of them. Get back at it.’

‘Sure. Thanks again. This has been great. You’re right. I should just get back at it.’

‘OK. Just send something for next week. You know, for the file. Unlike M, I need to justify my time.’

They both laughed.

‘Sure.’

As Rick walked him out, Tom suddenly thought of one more question.

‘One more question, Rick. I can imagine some guys do flip completely, even with this program, no?’

‘What do you mean?’

‘You know what I mean. Go bonkers.’

‘With the system?’

‘Yes.’

Rick looked intensely at him as he replied: ‘Well… Yes, it happens. But let’s be honest. That’s also just like any other therapy in this regard: with some people it just doesn’t work. It’s the two-sigma rule. In terms of effects, 95% of the people in this program are in the happy middle: it works, no complaints, back to normal. But, for the others, it’s not back to normal. It’s back to the never-ending street.’

‘What do you do with them?’

‘To be frank, we don’t have time for them. When everything is said and done, this is just a program like any other program. It works or it doesn’t. Time is money, and we don’t put money into wastebaskets. It’s meds all over again or, worse, they get kicked out and end up in a madhouse, or on the street, or wherever. And then the wheel turns round and round and round, until it stops forever. You know what I mean.’

‘So you give up on them. They can’t use the system anymore?’

‘You mean M?’

‘Yes.’

‘The system has got its limits. We can’t feed it with nonsensical inputs. I mean, we actually can, and we often do that as we’re upgrading it, but we don’t want to do that on a routine basis. When everything is said and done, it’s an expert system, so its input needs to make sense – most of the time at least. So, yes, we cut them off.’

Rick looked at Tom and laughed: ‘But don’t worry. Before you get cut off, we’ll give you a call. The system is smart enough to see when you’re crossing the lines a bit too often. As I said, it’s designed to bring people back into the middle. People can stray a lot, but if you stray too much into that 5% zone, it will alert us, and we will have a look at the situation and discuss it. Does that answer your question?’

‘It does. Thanks. See you next week.’

‘Don’t forget to shoot me the mail with some text. You know the rule. 24 hours before. Unless you invoke an emergency, but you know you don’t want to do that. It’s not good in terms of progress reporting. It delays stuff.’

‘I got that. I want to be good. I don’t like to be a patient.’

‘You are good. As far as I am concerned, you’re OK really. But then you know it takes at least three months before we can make that judgment.’

‘I know. Don’t worry. I’ll stay on track. No relapsing this time.’

‘Good. That’s what I wanna hear. You take care, man.’

‘Oh… One more thing.’

Rick turned back: ‘Yes?’

‘Rick. You don’t need to answer but… In the end, what do you say, to the guys who have lost a limb?’

‘Damn it, Tom. You’re awful.’ He shook his head. ‘You wanna know? Really?’

‘Yes.’

‘I tell them something like: ‘Hey, guy, you lost a limb already. You’d better limit the damage now.’ But then much more politely of course, if you understand what I mean.’

‘I understand. Thanks. You’re a good man. I like you.’

‘Good.’