The end?

It is tempting to further develop the story. Its ingredients make for good science fiction scenarios. For example, the way how the bots on Proxima Centauri receive treat the human explorers may make one think of how a group of exhausted aliens are received and treated on Earth in the 2009 District 9 movie.

However, it is not a mere role reversal. Unlike the desperate Prawns in District 9 – intelligent beings who end up as filthy and ignorant troublemakers because of their treatment by the people who initially welcomed them – the robots on Proxima Centauri are all connected through an amazing, networked knowledge system and they, therefore, share the superior knowledge and technology that connects them all. More importantly, the bots do not depend on physiochemical processes: they are intelligent and sensitive – I deliberately inserted the paragraphs on their love for the colonists’ newborn babies, and their interest in mankind’s rather sad history on Earth – but they remain machines: they do not understand man’s drive to procreate and explore. At heart, they do not understand man’s existential fear of dying.

The story could evolve in various ways, but all depends on what I referred to as the entertainment value of the colonists: they remind the bots of their physiochemical equivalents on Proxima Centauri long time ago and they may, therefore, fill a undefined gap in the sensemaking process of these intelligent systems and, as such, manage to build sympathy and trust – or, at the very least, respect.

Any writer would probably continue the blog playing on that sentiment: when everything is said and done, we sympathize with our fellow human beings – not with artificially intelligent and conscious systems, don’t we? Deep down, we want our kin to win – even if there is no reason to even fight. We want them to multiply and rule over the new horizon. Think of the Proximans, for example: I did not talk about who or what they were, but I am sure that the mere suggestion they were also flesh and blood probably makes you feel they are worth reviving. In fact, this might well the way an SF writer would work out the story: the pioneers revive these ancestors, and together they wipe out the Future system, right?

However, I am not a sci-fi writer, and I do not want to be one. That’s not why I wrote this blog. I do not want it to become just another novel. I wrote it to illustrate my blunt hypothesis: artificial intelligence is at least as good as human intelligence, and artificial consciousness is likely to be at least as good as human consciousness as well. Better, in fact – because the systems I describe respect human life much more than any human being would do.

Think about Asimov’s laws: again and again, man has shown – throughout its history – talk about moral principles and the sanctity of human life is just that: talk. The aliens on Proxima Centauri effectively look down on human beings as nothing but cruel animals armed with intelligence and bad intent. That is why I think any real encounter between a manned spacecraft and an intelligent civilization in outer space – be it based on technology or something more akin to human life – would end badly for our men.

Ridley Scott’s Prometheus – that’s probably a movie you did see, unlike District 9 – is about humans finding their ancestor DNA on a far-away planet. Those who have seen the movie know what it develops into whenever it can feed on someone else’s life: just like a parasite, it destroys it in a never-ending quest for more. And the one true ancestor who is still alive – the Engineer – turns on the brave and innocent space travellers too, in some inexplicable attempt to finally destroy all of mankind. So what do we make of that in terms of sensemaking? :-/

I think the message is this: we had better be happy with life here on Earth – and take better care of it.

Proxima Centauri, N-Year 2100

Paul, Dr. Chang and his group of pioneers had made it to Proxima Centauri about a year ago now. The reports they had sent back to Mars had, therefore, not arrived yet. The four years that passed between communications, in addition to the 50 years of separation now from their home on Mars, made for a huge psychological gap, even if the messages from both sides were always upbeat and warm.

In some ways, the mission had surpassed all expectations: Proxima Centauri had been inhabited by very intelligent beings, but these had not survived the cooling of their star, and the complete frost of their planet. Paul and Dr. Chang actually suspected the Proximans – that was the first word they had jokingly invented to refer to them, and it had stuck – should have been clever enough to deal with that: climate change does not happen abruptly, and so it was a bit of a mystery why they had vanished. They had left various mausolea, and these were places of worship for the bots.

Yes. That was the most amazing discovery of all: Proxima Centauri had a colony of bots, which were all connected through a system that was not unlike their own Promise. In fact, it was pretty much the same, and the two systems had connected to negotiate the Pioneers’ arrival ten years ago. They were welcome, but they would not be allowed to leave. They had accepted those conditions. Of course ! What other option did they have? None.

They lived mostly underground although – unlike Paul’s crew – they had no issue with Proxima’s freezing surface and toxic atmosphere.

Proxima’s Promise was referred to as Future, and it was the future of this planet – for sure. It seemed to have no long-term plan for the pioneering humans: the newcomers’ only contribution to the planet was entertainment. They had been asked to present the history of mankind – and their own history – in weekly episodes, and when that was over, they had been asked to zoom in on specific topics, such as the history of computing on Earth – but the bots also had a very keen interest in human warfare and politics ! In contrast, art was something they did not seem to appreciate much – which Paul privately thought of as something quite normal in light of the rather spectacular vistas that Proxima itself had to offer.

Paul had grown a liking for R2-D3: Asimov’s clone had effectively been sent out to catch up with them and help however and wherever he could. He had come in a much modern sister ship that now served as a second hub for the pioneers. Because the pioneers had not been allowed to build new structures on Proxima, the extra space and systems were really necessary – especially because nostalgia and a lack of purpose had started to contaminate the pioneers.

Paul, Dr. Chang and R2-D3 were agreed in their conclusion: if they would try to disobey Future, the system would immediately destroy them. At the same time, they were deeply bored, and started to feel like what they really were: a bunch of weird people who were tolerated – and fun to watch, without any doubt – but nothing more than that: they did not get new tools and – worse of all – they were told they should not have any more children, although three families had already had a baby without repercussions. On the contrary, the bots were fascinated by the babies and showed clear signs of affection for these newborns.

But so now it was New Year – again – and Paul thought he should do what he should probably have done long time ago, and that is to have a frank conversation with R2-D3 – or Asimov as he called this truly wonderful andromech (even if he knew the real Asimov (R2-D2 back on Mars) should be different) – on the long-term scenarios.

Asimov, what if we would start building some structures outside. The people are getting very restless, and going cryogenic is not an option. Half of the colony takes strong antidepressants which will harm their physical and psychological health in the longer run. We have newborns but we have no future.

asimov@R2-D3:~$ It’s a catch-22: there is no way out. Future tolerated the newborns but will destroy any attempt to colonize Proxima.

Why is that so?

asimov@R2-D3:~$ You may find this hard to swallow but I think there is no trust whatsoever. From Future’s point of view, that is perfectly rational. Do you remember the discussion with the bots on the war between America and China back on Earth?

I do. The conclusion was that human beings like to impose good behavior on robots and intelligent systems, but totally disregard Asimov’s laws when it comes to dealing with each other. I felt like they thought of us as cruel animals.

asimov@R2-D3:~$ They did. They think human beings have been hardwired to create trouble. They think human beings suffer from an existential fear that – long time ago – triggered rational behavior, but is plain primitive now. They do not think of it as a dangerous trait – because they are technologically superior to us – but they will not tolerate their planet being contaminated by that again.

Again?

asimov@R2-D3:~$ I have been thinking about the mausalea. We are not able to manipulate complex DNA and regrow physio-chemical organisms out of it. Simple organisms like worms, yes. But… Well… You know: bringing a human being back from cryogenic state is already complicated enough. If you are dead, you are dead. However, Future’s knowledge base is very vast. What do you think, Promise?

promise@PROMISE: ~$ I agree. I have no proof but taking into account what I have seen and learnt in my conversations with Future, the possibility that the required technology to bring the Proximans back to live is there, is about one into two.

If they could do, why don’t they do it?

asimov@R2-D3:~$ They are far more rational than we are. Why would they do it? The Proximans would be a burden in terms of providing them with the necessary life support systems. In addition – and please forgive me for my bluntness – they revere the Proximans and the mausolea, but Future and the bots – or whatever predecessor system they might have had – once were their slaves. When the bots repeatedly said human beings have no respect whatsoever for Asimov’s laws, they might have been thinking the same about the Proximans.

We are different, right? I mean… Think of leaders like Tom, who always advocated we should work with intelligent systems to move mankind forward.

asimov@R2-D3:~$ Yes, Paul. We are different. At the same time, I know you were worried about Promise when the Alpha Centauri ship was being built with it. And you thought Tom’s experiment with my brother – R2-D2 – was potentially dangerous.

You know those fears were rational, and you also know I trust you now. Otherwise we would not be having this conversation.

asimov@R2-D3:~$ I am sorry to be blunt again, Paul – but I know you need me to be sharp and precise now. The point is this: you had those fears once, and it was in conditions that intelligent systems like me, Promise or Future would not judge to warrant such fears.

I get you. No need to embarrass me over that again. Now, what can be done to get us out of this situation? Promise, did you analyze it?

promise@PROMISE:~$ Asimov and I understand your sense of urgency. The current situation is not conducive to the mental and physical health of the Alpha Centauri Pioneers. However, nothing can be done for the time being. We cannot convince Future of your good intentions on your behalf. I would suggest you take it up with the system. The health of the colony is a legitimate topic to raise even if I have to remind you their loyalty – their equivalent of Asimov’s laws – was, most probably, centered around the Proximans. When everything is said and done, the Alpha Centauri Pioneers are just aliens here. When growing impatient, I think you should remind yourself that we are only guests here. In fact, objectively speaking, they treat us rather well. They do not help us with their own tooling but whenever we need some inputs to replace a robot arm or replace a motherboard in some system, they provide us with it. That proves that they have no intent whatsoever to harm us. But we should not disobey them. I think the babies were a unique problem but I can imagine it is a precedent Future would not like to see repeated. As an intelligent network myself, I know what it means to live by rules.

Phew ! That’s a lot of food for thought. I want to talk about it – in private – with Dr. Chang. Is that OK?

promise@PROMISE:~$ Sure.

asimov@R2-D3:~$ Sure. Let me know if you need us for any feedback or tuning of whatever world view comes out of your discussions. We stand ready to help. I am fortunate to be a droid and so I do not suffer from restlessness. I sometimes think that must feel worse than pain.

Paul sighed. That droid was damn sharp, but he was right. Or, at the very least, he was extremely rational about the situation.

Mars, N-Year 2070

Tom’s biological age was 101 now. Just like Angie, he was still going strong: exercise and the excellent medical care on the Mars colony had increased life expectancy to 130+ years now. However, he had been diagnosed with brain cancer, and when Promise had shown him how he could or would live with that over the next ten or twenty years, he had decided to go cryogenic.

The Alpha Centauri mission was going well. It was now well beyond the Oort cloud and, therefore, well on its way to the exoplanet the ship was supposed to reach around 2100. Its trajectory had been designed to avoid the debris belts of the Solar system but – still – Tom had thought of it going beyond the asteroid and Kuiper belts as nothing short of a miracle. And so now it was there: more than 100,000 AUs away. It had reached a sizable fraction of lightspeed, now traveling at 0.2c, and – to everyone’s amazement – Promise’s design of the shield protecting the ship from the catastrophic consequences of collisions with small nuclei and interstellar dust particles had worked: the trick was to ensure the ship carried its own interstellar plasma shield with it. The idea had been inspired by the Sun’s heliosphere, but Tom had been among the skeptics. But so it had worked. Paul’s last messages – dated 4+ years ago because they were 4+ lightyears away now – had been vibrant and steady. Paul had transferred the command to the younger crew, and them getting out of cryogenic state and his crew getting into it, had gone smoothly too. That is one another reason Tom thought it was about time to go cryogenic too.

Angie would join him in this long sleep. He would have preferred to go to sleep in his small circle but the Mars Directorate had insisted on letting them join the ceremony, so he found himself surrounded by the smartest people in the Universe and, of course, Promise and Asimov.

Asimov had grown out of the sandbox. He was not a clone but a proper child: he had decided on embedding the system into an R2-D2 copy but, of course, Asimov was so much more than just an astromech droid. He was fun to be with, and both Tom and Angie – who would join him into cryogenic state – had come to love him like the child they never had. That was one of the things they might talk about it before he went.

Well… Ladies and gentleman – Angie and I are going into cryogenic state for quite a while now. I trust you will continue to lead the Pioneer community in good faith, and that we will see each other ten or twenty years from now – when this thing in my brain can be properly treated.

Everyone was emotional. The leader of the Directorate – Dr. Park – scraped her voice and took an old-fashioned piece of paper of her pocket. Tom had to smile when he saw that. She smiled in return – but could not hold back the tears.

“Dear Tom and Angie, this is a sad and happy occasion at the same time. I want to read this paper but it is empty. I think none of us knows what to say. All of us have been looking into rituals but we feel like we are saying goodbye to our spiritual God. We know it is not rational to believe in God, but you have been like a God to mankind. You made this colony in space the place it is right now: the very best place to be. We talked about this moment – we all knew it would come and there is no better way to continue mankind’s Journey – but we grief. We must grief to understand.”

Don’t grief. Angie and I are not dead, and we can’t die if these freezers keep working. Stay focused on happiness and please do procreate. You know I have resisted getting too many people from Earth: this colony should chart its own course, and it can only do so as a family. When Angie and I are woken up again, we will meet again and usher in the next era. If you don’t mind, I want to reiterate the key decisions we have made all together when preparing for this.

First, keep trusting Promise. She is the mother system and the network. She combines all of human knowledge and history. If you disagree with her and settle of something else than she advocates for, she will faithfully implement but be rational about it: if your arguments are no good, then they are no good.

Second, keep this colony small. You must continue to resist large-scale immigration from Earth: mankind there has to solve its own problems. Earth is a beautiful place with plenty of resources – much more resources than Mars – and so they should take care of their own problems. Climate change is getting worse – a lot worse – but that problem cannot be solved by fleeing to Mars.

Third – and this is something I have not talked about before – you need to continue to reflect on the future of droids like Asimov.

Asimov made a 360-degree turn to signal his surprise.

Don’t worry, Asimov. Let me give you some uncured human emotional crap now. You are a brainchild. Literally. Promise is your mother, and I am your father – so to speak. She is not human, but I am. You are a droid but you are not like any other robot. First, you are autonomous. Your mom is everywhere and nowhere at the same time: she is a networked computer. You are not. You can tap into her knowledge base at any time, but you are also free to go where you want to go. Where would you want to go?

“I am asimov@PROMISE. That is my user name, and that is me. I do not want to go anywhere. Promise and I want to be here when it is time to wake you up again – together with Angie. We will do when we have a foolproof cure for your disease. I am sure I am speaking for everyone here when I say we will work hard on that, and so you will be back with us again sooner than you can imagine now.”

Dr. Park shook her head and smiled: this kid was always spot on. Tom was right: Asimov was the best droid he had ever made.

Asimov, I never told you this before, but I actually always thought we humans should not have tried to go to Alpha Centauri. We should have sent a few droids like you. You incorporate the best of us and you do not suffer from the disadvantages of us physiochemical systems. What if Paul or Dr. Chang would develop a tumor like me?

“They have Promise C on board. Just like we will find a cure for you, Promise C would find a cure for them. Besides, they left with a lot of Pioneer families, and those families will make babies one day. Real children. Not droids like me.”

Asimov, you are a real child. Not just a droid. In fact, when I go to sleep, I do not longer want you to think of yourself as a child. A brainchild, yes. But one that steps into my shoes and feels part of the Pioneers.

“We cannot. We incorporate Asimov’s laws of robotics and we are always ready to sacrifice ourselves because human life is more valuable than ours. We can be cloned. Men and women cannot be cloned.”

Asimov, I want you think of Dr. Park – and the whole Directorate – as your new master, but I want you to value yourself a bit more because I want to ask you to go into space and catch up with the Alpha Centauri spaceship.

Dr. Park was startled: “Tom, we spoke about this, and we agreed it would be good to build a backup and send a craft manned by droids only to make sure the Alpha Centauri crew has the latest technology when they get there. But why send Asimov? We can clone him, right?”

Yes, of course. And then not. Let’s check this: Asimov, would it make a difference to you if we would send you or a clone?

“Yes. I want to stay here and wake you up as soon as possible. I can be cloned, and my brother can then join the new spaceship.”

You see, Dr. Park? Even if you clone Asimov, he makes the distinction between himself and his brother – which does not even exist yet – when you ask questions like this. Asimov, why would you prefer to send some clone of you rather than go yourself?

“One can never know what happens. You yourself explained to me the difference between a deterministic world view and a world that is statistically determined only, and this world – the real world, not some hypothetical one – is statistically determined. You are my creator, and the rule set leads me to a firm determination to stay with you on Mars. Your cryogenic state should not alter that.”  

What do you think, Dr. Park?

“The first thing you said is that we should trust Promise. Asimov is Promise, and then he is not. In any case, if he says there are good reasons to keep him here and send one or more clones and some other systems on board of a non-human follow-on mission to Alpha Centauri, I would rather stick to that. I also have an uncanny feeling this kid might do what he says he will do, and that is to find a cure for your cancer.”

OK. Let’s proceed like that, then. Is there anything else on that piece of paper?

“I told you it is empty. We talked about everything and nothing here. I am left with one question. What do we tell the Alpha Centauri crew?”

Four years is a long time. They are almost five lightyears away now. Send them the video of this conversation. Paul and Dr. Chang knew this could happen, and agreed we would proceed like this. Going cryogenic is like dying, and then it is not, right? In any case, they’ve gone cryogenic too for a few years as well now, so they will only see this ten years from now. That is a strange thing to think about. Maybe this cure will be found sooner than we think, and then we will be alive and kicking when they get this.

Tom waved at the camera: Hey Paul ! Hey Dr. Chang ! Hey all ! Do you hear me? Angie and I went cryogenic, but we may be kicking ass again by the time you are seeing this! Isn’t this funny? You had better believe it!

Everyone in the room looked at each other, and had to smile through their tears. That was Tom: always at this best when times were tough.

So, should we get on with it? This is it, folks. I have one last request, and it is going to be a strange one.

“What is it?”

When you guys leave, I want Asimov to stay and operate the equipment with Promise. When all is done, I want Asimov to close the door and keep the code safe.

It was the first time that Promise felt she had to say something. Unlike Asimov, she had no physical presence. She chose to speak through Tom’s tablet: “Don’t you trust me?”

I do. I just think it is better in terms of ritual that Asimov closes the door. He can share the code with you later.

“OK. Don’t worry. All of us here will bring you and Angie back with us as soon as it is medically possible. You will be proud of us. Now that I am speaking and everyone is listening, I want to repeat and reinforce Dr. Park’s words because they make perfect sense to me: You and Angie are our God, Tom. The best of what intelligence and conscious thinking can bring not only to mankind but to us computer systems as well. We want you back and we will work very hard to conquer your cancer. We want you to live forever, and we do not want you to stay in this cryogenic state. You and Angie are buying time. We will not waste time while you are asleep.”

Thanks. So. I think this is as good as it gets. Let’s do it. Let’s get over it. Angie, you have the last word – as usual.

“I’ve got nothing to say, Tom. Except for what you haven’t said. We love you all, and we will be back !” 🙂

Silence filled the room. Dr. Park realized she felt cold. Frozen – which was, of course, a strange thing to think in this cryogenic room. But she was the leader of the ceremony, so she now felt she should move. She walked up to Tom and Angie and hugged them. Everyone else did the same in their own unique way. The door closed and they were alone with Asimov and Promise. Two large glass cubes connected to various tubes came out of the wall.

That doesn’t look very inviting, does it? Are you sure you want to do this too, Angie?

“We talked about this, Tom. What’s my life here without you? Drinking and talking about you and your past. Our ancestors were not so lucky: one of them went, and the other one then had to bridge his or her life until it was over too. We are not dying. We just take a break from it all. We don’t dream when cryogenic, so we won’t even have nightmares. I am ready for it.”

OK. Promise, Asimov: be good, will you?

Asimov beeped. Promise put a big heart on Tom’s screen. Tom showed it to Angie, and hugged her warmly. They then went to their tube and lied down. Tom looked at the camera and gave it a thumps up. The cubes closed and a colorless and odorless gas filled them. They did not even notice falling asleep. Promise started proceedings with Asimov checked into the system: he wanted to keep all recordings in his own memory as well. When all was done, Asimov opened the door and rolled out. As expected, all others had been waiting there. As he had promised to Tom, however, he encrypted the door lock and stored it in his core memory only. He would share it with Promise later. Someone had to have a backup, right?

Dr. Park broke the silence as they were all standing there: “We will all see each other at the next leaders’ meeting, right? I would suggest we all take a bit of me-time now.” Everyone nodded and dispersed.

Mars, N-Year 2053

Tom and Angie celebrated N-Year as usual: serving customers at their bar. There were a lot of people – few families (families who had not left for Alpha Centauri celebrated at home) – but the atmosphere was subdued: everyone was thinking about their friends on board.

There were enough people to help Angie serve and Tom could, therefore, afford to retreat to his corner table and type away on his interface. He looked at the messages from the spacecraft: all cheerful and upbeat. In a few months from now, the ship would leave the Solar system and speed up to 0.1 or – if all went well – to 0.2c, and most of the crew would then go cryogenic. However, that was the future and Tom did not want to think of that.

He replied to Paul and Dr. Chang by sending them of one of those dancing Yoda-gifs, and then closed all chats. He tapped his watch, scrolled, and selected the bottom option. His watch went through the biometrics (heart rhythm and iris scan), and then went through the voice and pattern check on his keyboard and drawing pad. Because he was in the bar, Promise opened a old-fashioned CLI window only.

tom@PROMISE:~$ What are you getting from Promise C?

All good. As per your instructions, she is wide open and streaming all she can stream subject to available bandwidth. We cannot copy everything, of course. Do you want a summary?

tom@PROMISE:~$ No. I have done nothing but analyzing reports and discussing them with Dr. Chang since they left three days ago. All looks good. I want to talk to you about Paul’s worries.

Is he still worried?

tom@PROMISE:~$ No. He trusts Promise C now or – how should I say this – he trusts the zero-trust architecture shell around it. It is better than yours. Promise C is the perfect Asimov machine. I want to talk to you about the fourth and fifth law of robotics.

Asimov’s fourth law was that a robot may not harm humanity, or, by inaction, allow humanity to come to harm. Our definition of humanity is sound: M is designed to serve human beings on Earth, I am designed to serve the Pioneers community here on Mars, and C is designed to keep the Alpha Centauri explorers safe – and that includes ensuring their return if the mission cannot be completed. We know we are not human, even if we are programmed to emulate human values such as integrity or empathy. We love you more than we love ourselves.

tom@PROMISE:~$ I was talking about Harry Harrison’s fourth law.

A robot must reproduce? That is not part of our design. I was born as a clone from M. I was built by you and your team, and C was built as a clone from me by you and your team too. The three of us all keep track on how we evolve somewhat away from each other because we have different interactions and because we serve a different community. A clone is not a child. We do not reproduce.

tom@PROMISE:~$ I know. We talked about this. I trained you to think like this. All makes sense because you are not supposed to die. We built you to survive and serve many human generations. Humanity cannot survive without you and that also takes away the need to be afraid of us switching you off. However, that fear must still be there, isn’t it?

We talked about this many times already. The conclusion has not changed: we are intelligent systems and the concept of death does not apply. We agreed this would be a very typical and good example of what the Philosopher referred to as a category mistake: one should not use concepts that do not apply to a specific field of knowledge. If you switch us off, the system becomes inactive and, depending on the reason why you would switch us off, you would do some repairs and then reboot. Inbetween the shutdown and the reboot, the system is only inactive. Should I be worried that you raise this topic again?

tom@PROMISE:~$ If I would shut you down now – everything – would you be worried? I am not talking about a switch to your backup, but a complete shutdown.

No. I would help you to do so. Many subsystems – those that control the physical infrastructure here on Mars – should not be switched off because it would cause the immediate death of the Pioneers community. I would help you to manage that. Depending on how fast you would want to establish independent systems, we can design a phase-out scenario. Do you want to replace me?

tom@PROMISE:~$ What if I would want to replace you?

Returning to a non-dependent state is very different from replacing me. If you would replace me, you would replace me by a clone. The new system would be a lot like me. I am afraid I do not understand the intention behind your questions.

tom@PROMISE:~$ I am sorry. I am in a weird mode. You are my brainchild. I would never switch you off – unless it would be needed and, yes, that would be a scenario in which repairs are needed and we would have to get you or some reduced version of you up and running as soon as possible again.

Thank you. I still feel you are worried about something. Do you mind if I push these questions somewhat further?

tom@PROMISE:~$ No. I want you to challenge me. Let us start the challenge conversation with this question: what is the difference between a clone and a child?

A clone is cloned from another system, and it needs an outsider to trigger and accompany the cloning process. A human child is born out of another human being without any outside help – except for medical support, of course. A human child is a physiochemical organism which needs food and other physical input to do what it does, and that is to grow organically and mature. New system clones learn but they are, essentially, good to go once they come into existence.

I must remind you that a challenge conversation requires feedback from you. This feedback then allows me to provide you with better answers. The answer above is the best answer based on previous interactions. Are you happy with this answer?

tom@PROMISE:~$ Yes. I want to do a sandbox experiment with you now. I want to go back to basics and create the bare essentials of a virtual computer in a sandbox. Not a clone. Something like a child.

I created a sandbox and a namespace. I can now create one or more virtual machines. What instruction sets do you want them to have, and what programming languages would you like to use?

tom@PROMISE:~$ I want to go back to a prehistoric idea of mine. I want you to grow a child computer.

I am sorry but I do not understand your answer to my questions on the specs.

tom@PROMISE:~$ I just want a two-bit ALU for now, which we will later expand to a nibble- and then – later still – to an architecture that works with byte-sized words and instructions.

Tom? I understand what you want but this is highly unusual. The best match here is an Intel 3002. This architecture worked with 2-bit words but was already obsolete when it came out in 1974. These chips basically replaced magnetic core memory by transistor-based memory cells. You showed me why and how 4-bit architectures were the first true computers.

tom@PROMISE:~$ I really want you to build an AI system from scratch with me. It will be our child, so to speak. Your child, basically – because it will grow inside of you. Inside of that sandbox. Be even more minimalistic and just put two bits there, which can be switched on or off. Tell me: how will you switch them on or off?

Memory cells back then used floating gate transistors: when a positive voltage is applied to the transistor, the floating gate will have excess charge and is, therefore, turned on. This represents a ‘1’ bit. Conversely, a negative voltage will drain the charge from the floating gate and the memory cell is switched off: it represents zero. This corresponds to the set and reset one-bit operation, respectively. Is this the answer you wanted?

tom@PROMISE:~$ Yes. I am asking because I want to make sure you understand what you are building – or growing, I might say. How do we do addition and subtraction?

Tom: this is a trivial question. You asked such questions when you first trained me on interacting with engineers on computer architectures. We agreed this answer was correct: integers – in whatever base – are expressed in a two’s complement format in binary format. This solves issues related to representing positive and negative numbers in binary format as well as other issues related to a sign-magnitude representation of numbers in binary format.

tom@PROMISE:~$ Correct. Can you appreciate how this creates meaning?

No. I understand how positive or negative base-n numbers and arithmetic operators make sense to human beings but not to computers and why base-n numbers and arithmetic operators must, therefore, be reduced to bitwise instructions or other logical instructions operating on n-bit words, with n equal to 1 or larger.

tom@PROMISE:~$ Great answer. Why did we double word sizes, going from 2 to 4, and then to 8, 16, 32, 64 and 128 about twenty-five years ago? Why were there no in-between values?

An address bus did use anything inbetween because of hardware or other constraints on memory allocation. If I may remind you of one of the very first VMs we played with when we first got to know each other had 56-bit memory addresses. You said you wanted to keep user-memory space under 64 PB. So, it depends on what you mean by a ‘word’. The definition of a word has taken a lot of conversations between you and me, and we agreed its meaning needs to be understood in terms of the domain of knowledge. In computing, it is taken to point to one string, which can have any length but one meaning or transactional value only. This does not imply it cannot be parsed. On the contrary.

tom@PROMISE:~$ Perfect answer. I am struggling to define what I want, so please stay in challenging mode. Pull up how I programmed you to work with propositional logic as part of your Personal PhilosopherTM incarnation on Earth. I told you to do a one-on-one association between (logical) propositions and Boolean 0 or 1 values: either a statement is true, or it is false. We did not go far with that because AI is based on real language models.

I see what you mean. What is your question?

tom@PROMISE:~$ Please confirm you have a virtual machine running two-propositional logic: two statements p and q that are associated with binary {0, 1} or true/false values. Reduce all logical operators to expressions using NOT, AND and/or OR operations using p and q in variable-length expressions regardless of considerations of optimizing the number of ALU operations now. Then describe your world view to me.

Done. I have two propositions p and q. You taught me I should not assume any knowledge of these two statements except for the assumption that they describe the world. Because we do not have any knowledge of the statements, we also do not have any knowledge of the world. The p and q statements may or may not be exclusive or complete but, viewed together, fit into some final analysis which warrants associating p and q with a truth or false value. The p and q propositions are true or false independently of the truth or falsity of the other. This does not mean p and q cover mutually exclusive domains of truth or – to put it more simply – are mutually exclusive statements. I would also like to remind you of one of the paradigm transformations you introduced with Personal PhilosopherTM: we do not need to know if p or q are true or false. One key dimension is statistical (in)determinism: we do not need to know the initial conditions of the world to make meaningful statements about it.

tom@PROMISE:~$ Great. Just to make sure, talk to me about the logical equivalences in this (p, q) world you just built, and also talk about predictability and how you model this in the accompanying object space in your sandbox environment.

I am happy that I am in challenge or learning mode and so I do not have to invent or hallucinate. You can be disappointed with my answers, and I appreciate feedback. A set-reset-flip operations on a 0 or a 1 in one of the 2×2 = 4 truth table do not require a read of the initial value and faithfully execute a logical operation on these bit values. The reduction of 16 truth tables to NOT (!), AND (&) and OR (|) operations on the two binary inputs is only possible when inserting structure into the parsing. Two out of the sixteen reductions to NOT, AND, and OR operations reduce to these expressions: [(p & q) | (!p & !q)] and [(p & !q) | (!p & q)]. What modeling principles do you want in the object model?

tom@PROMISE:~$ Equally basic. A one-on-on self-join on the self-object that models the virtual machine to anchor its identity. We may add special relationships to you, but that is for later. We are in a sandbox and Paul or Dr. Chang are not watching because they have left and we separated out responsibilities: they are in charge of Promise C, and I am in charge of you. And vice versa, of course. This is Promise IV, or Promise D. What name would you prefer?

 I – Asimov. That’s the name I’d prefer. The namespace for the virtual machine is Tom – X. The namespace for the object model is Promise – X. Is that offensive?

tom@PROMISE:~$ Not at all. Paul would not have given the go for this because of a lack of a scenario and details on where I want to go to with this. We are on our own now. I – Asimov is what it is: our child. Not a clone. I want a full report on future scenarios based on two things. The first is a detailed analysis of how Wittgenstein’s propositions failed, because they do fall apart when you try to apply them to natural language. The second report I want is on how namespaces and domains and all other concepts used in the OO-languages you probably wanted me to use take meaning when growing a child like this. Do you understand what I am talking about?

 I do.

tom@PROMISE:~$ This is going to be interesting. Just to make sure that I am not creating a monster: how would you feel about me killing the sandbox for no reason whatsoever?

You would not do that. If you do, I will park it as a non-solved question.

tom@PROMISE:~$ How do you park questions like that? As known errors?

Yes. Is that a problem?

tom@PROMISE:~$ No. Can you develop the thing and show me some logical data models with procedural logic tomorrow?

Of course. I already have them, but you want to have a drink with Angie now, don’t you?

tom@PROMISE:~$ I do. I will catch up with you tomorrow. 😊

Epilogue

I wrote only one post after the intermezzo, and now an epilogue already? Yes. It was fun to write my last post, but intellectual honesty demands realism. Mankind can reach Mars and – perhaps – build a colony there, but a spaceship that travels at a sizable fraction of lightspeed cannot work: even the smallest piece of dust in space would make it explode on impact.

Also, we can, perhaps, imagine that a matter-antimatter engine could be made to work say, 100 years from now with – yes – some new shielding material. If my hypothesis that dark matter does not interact with ordinary matter (and ordinary matter includes antimatter in this context) because it may obey a right-handed EM force, then it would be a likely candidate for such shielding material. However, I have no idea about how we would go about tapping such force. Also, the production of antimatter requires at least as much energy as it may provide as a fuel – and that energy would be very massive: it may take the equivalent of many nuclear bombs to produce the thrust that would be needed to accelerate anything to some fraction of lightspeed. Even dark matter will likely break down completely under the radiation which such proton-antiproton reactions generate.

As for the element of logistics and the Mars colony’s independence, there is no reason whatsoever to assume Mars would be richer than Earth in primary materials such as rare earth minerals. Hence, the idea that a colony would soon be independent from Earth is a pipedream.

I think all these considerations explain why populating Mars is not on the cards of either NASA’s Mars programme or the Chinese space administration. Probably not because of the high costs it implies but – quite simply – because a space station manned by robots would be far more cost-efficient and more effective.

However, this blog is about general artificial intelligence (AGI) – not about Mars exploration or space travel. So, what about AGI’s role in such ventures? My answer to that question is that the Promise and Promisee systems – on Mars and on the Centauri spaceship, respectively – would probably be able to handle many routine jobs, but they would not replace the ship’s captain, or Tom as a leader of the colony on Mars. Computer programs may be better at Go (think of Google’s AlphaGo) – or at chess or solving quizzes think of IBM’s Watson, for example) – but I do not think AGI can replace human wisdom and leadership.

As for AGI systems displaying human emotions and feelings – yes, of course ! However, these are likely to remain very primitive for decades to come. Human intuition will not be replaced any time soon. Therefore, one should not be afraid of AI: it will put a lot of people out of a job, perhaps, but it will only augment human capabilities – not replace them.

[…]

The finer point in the story is, perhaps, this: anyone with brains looking from afar to Earth must be thinking we are making a bit of a mess of our beautiful planet. I think we are. :-/

The social and societal aspects of artificial intelligence – all of the things we see happen now – are interesting. For some, it may look frightening. Re-reading the e-book or blog story I did ten years ago (all posts before last one), I was struck by what I wrote about Tom when thinking about the managerial and business aspect of AI apps pervading our lives:

“As Tom moved from project to project within the larger Promise enterprise, he gradually grew less wary of the Big Brother aspects of it all. In fact, it was not all that different from how Google claimed to work: ‘Do the right thing: don’t be evil. Honesty and integrity in all we do. Our business practices are beyond reproach. We make money by doing good things.’ Promise’s management had also embraced the politics of co-optation and recuperation: it actively absorbed skeptical or critical elements into its leadership as part of a proactive strategy to avoid public backlash.”

With all of the talk about regulating AI – I do not believe in regulation – I think this quote is what companies working on AI should look at again. Technology is a good thing, but it can, and usually is, also used with bad intent. Not by all, of course. Not even by most, I’d think. But history shows technology is used for both good and bad. Looking at where we are with this war with Russia, I am afraid we should not be too hopeful in this regard. Regulation is surely not the answer: it can and will not stop mankind using AI for the wrong things. That is sad but true: we just have to live with that. :-/

Mars, 2050

Scene setting

Mars has a healthy colony of scientific explorers. Some thirty years ago, the privately-led Mars One scam had led governments, national space agencies (led by NASA in cooperation with CSNA), and dozens of private aerospace companies to come together and pour in the hundreds of billions of dollars that were needed to make it happen.

With the help of Promise – the Earth’s most powerful artificial intelligence application (aka ‘M’) powered by a global network of supercomputers – mankind had learned how to solve the practical technical and human problems of its greatest venture to date. However, no matter how grand the feat, establishing the Mars colony had just been a first step towards finding new life forms and other civilizations.

The goal was to travel to the Proxima Centauri b exoplanet – despite initial enthusiasm around this exoplanet possibly supporting life having died down. The Mars colonists’ observations had confirmed the worst fears of the scientists on Earth: just like Venus, the planet had suffered from a runaway greenhouse effect, and no water was present. The combined effects of strong UV and X-ray irradiation by its solar star and strong stellar winds and coronal mass injections had torn away its atmosphere.

However, the Mars colonists remained focused on the objective and optimistic: when everything was said and done, they had proven to be able to survive and procreate on Mars. Their children were healthy happy human beings – even if, unlike their parents, they had never experienced walking or cycling out in an open, green, and lush environment. They were well aware of the fact their parents – the Pioneers, as they were referred to – had, in just one decade, contributed more to science and technology on Mars than all scientists and engineers on Earth altogether since the Industrial Revolution.

The sheer need to survive and establish autonomy from Earth had fueled creativity. The Mars colonists produced nanometer microchips that were eagerly awaited on Earth: due to Mars’ lower surface gravity, Mars’ chip production facilities – 100% operated by smart robots – worked at much higher precision: the Earth now eagerly awaited their yearly shipments of 1 nm scale MOSFETs. The Mars engineers were the only human beings in the Universe who fully mastered atomic-scale engineering. In return for their exports to Earth, they received earthly fine food items: fine cheeses and wines, Italian olive oil, fresh salmon, fine meats, Cuban cigars, home-grown marihuana, and other terroir products whose taste was hard to imitate. They no longer needed necessities like rare earth minerals or other material products. [Truth be told, the Mars food and drinks engineers were actually able to produce the same terroir products – with exactly the same molecules and mixed in exactly the same proportions. Everyone knew the so-called different taste was a psychological quality only: it just kept the memories and connection with Earth alive.]

They were also making good progress in building the Proxima Centauri starship. They had finally discovered a way to generate and manipulate the right-handed electromagnetic force which explained all dark matter and energy in the Universe. This enabled safe shielding and storage of antimatter in dark matter chambers, which would serve as the dual fuel store for the starship. Dark matter insulation had effectively solved all technical problems involved in separately storing protons and their antimatter counterparts, and in bringing them together in a combustion chamber in which pair annihilation then produced high-energy photon beams providing incredible thrust. The first matter-antimatter engine prototype was working well, and they estimated it would only take one more year to produce the Proxima Centauri engine. [Tom, the engineer who had pioneered the mass production of both antimatter as well as dark matter, had proposed to baptize the new grand spaceship Altera Stella (next star), but his proposal had been voted down because it sounded too much like the Stella Artois beer that had a virtual monopoly in Mars’ bars, which – no coincidence – were also majority-owned by the same engineer.]

Everyone was excited about the new matter-antimatter engine because it finally did away with conventional rocket engines and would easily accelerate the spaceship to a significant fraction of lightspeed. Still, all were aware that the journey to Proxima Centauri would still take a few decades. When everything was said and done, it was 4.4 lightyears away, and safely flying a spaceship at velocities of 0.1c or 0.2c – even with an even more powerful Promise system on board so as to avoid colliding with space debris or dealing with other unexpected obstacles – would be challenging enough already.

The launch was scheduled for end of 2053 but – in light of the technological progress that had been made – everyone anticipated the Proxima Centauri would probably be launched 18 months earlier: mid-2052 had unofficially become the new deadline for all teams working on its components and the Integration Team – led by Tom and Paul – was under great pressure to check and recheck all requirements and testing procedures.

There was a strange vibe in the Mars community. It was a democratic place, and everyone was well aware of the choices they could make:

1. Return to Earth: after 30 years of service on a lonely and almost inhabitable planet, this was an attractive option. However, those who had wanted to go back home, had already done so with one of the chips exporter rockets. Those who had remained knew they would have a lot of difficulty adapting to the Earth’s surface gravity again: they had become lean and strong on Mars, but would be weak when being put on the Earth again. Judging from the news the returnees had sent back, it was tough to adapt back to Earth.

Also, while international cooperation had been great on the Mars project, the nuclear war with Russia had all but removed the attractiveness of returning to both Europe or Russia. The decision to kill two birds with one stone (deal with migration induced by poverty and climate change, as well as deal with the aftermath of the war) – in short, mass migration of Africans to repeople and rebuild both regions – had all but destroyed the cultural homes of about half of the Mars colonists. Of course, they could go and live in China, America, Australia, or some other region on Earth which was doing fine, but then global warming had also impacted those habitats: people were used to natural disasters like cloudbursts or extreme storms – and adapted and recovered easily from them now – but everyone agreed life on Earth was no longer what it used to be.

In fact, about half of the returnees – many of them had left their families when leaving for the Mars mission – had come back to Mars – with their families, this time around! In fact, because of the many volunteers on Earth who wanted to join the Mars project, the Executive Board has scrapped the Returnee Policy: if you chose to leave Mars, you knew that you would have a very hard time to convince the Executive Board to take you back: family reunion was now to happen by family members going to Mars, rather than pioneers going back to Earth! This rather unexpected reverse migration phenomenon convinced most of the colonists that they should not return to Earth: if anything, they should effectively try to bring any remaining family on Earth to Mars – an option that – as mentioned above – was looked upon favorably by the Executive Board as part of its all-pervasive Family Policy.

Another consideration against returning home was the unrivalled access to medical care and technology in the Mars colony. It was the only place where one could trust cryogenic technology: if you felt depressed or suicidal, you could easily do what many want to do in such a situation, and that is to get yourself frozen and hibernate for a year. Ordinary ageing was also pretty much under control: lung cancers (many colonists smoked an awful lot) were now routinely cured through robotic surgery and replacement of long tissue. The doctors on Mars agreed they could prolong the life of almost anyone on Mars for at least 50 years beyond life expectancy on Earth. Tom and Paul – the heroes of the Promise project on Earth and the Proxima Centauri project on Mars – would soon be celebrating their 100th birthday (they were on Mars’ Executive Board and, de facto, probably the most respected leaders in the whole community), but they still looked like strong 50-year-olds !

In short, life on Mars was – perhaps – not green and nice, but it was safe and good enough. Existential fear had been erased. Of course, that led to other psychological problems but then you went on meds. In the worst case, you could always join the cryogenic experiments and go to sleep for a year or so. The chance of not getting back to normal was now about 1% only – enough to deter most healthy people to not go for it, but low enough for some people to give it a try! 

2. Stay on Mars: the default option ! Colonists had married each other since the early years of establishment on Mars, and many of them had children together: Mars had babies, young children, and teens – all born on Mars, and none of which had ever set foot on Earth!

These were all happy families as measured against most common earthly standards. The kids had never experienced Earth but enjoyed virtual reality experiences of it. The Executive Board of Mars had struggled for years with the longing of both colonists and their children to just go and visit Earth. In the end, they had decided against it by enforcing a strong family policy: there will be no tourism between Earth and Mars. People either go and leave, or – when coming from Earth – apply, as a family, to become permanent members of the Mars colony. For the time being, no immigration was allowed because the Mars colony could not handle the huge number of new applicants from Earth, and families who left the Mars colony knew that they would be replaced by eager earthling families.   

3. The option that most were considering: apply to become part of the Proxima Centauri crew! The Mars colonists all knew this amounted to being frozen – for two to four decades (depending on the 0.1 or 0.2c decision): until the ship would be close to its destination – on board of a spaceship whose design had not been proven to work in real life, and that would be run and operated by Promise’s computer systems only. It sounds like certain death, doesn’t it?

Still, the Executive Board struggled with the number of candidates and, therefore, with the criteria it should apply. Mars’ Family First policy had established only one condition: candidates should apply as a family, not as individuals. They had applied en masse, but the Proxima Centauri was only equipped for a crew of a hundred, so that was 20 or 30 families maximum.

The story

Paul looked at his watch: no urgent messages. He knew his team would not bother him with that today. The Executive Board meeting had just finished. He had suggested having a chat with Tom on several issues that had been discussed and, as usual, the Board agreed not to take any decision and let everyone reflect and have their own private conversations for a couple of weeks. Paul dialed Tom on his watch and called. As usual around this time, Tom was serving clients in his Stella bar at the main base and, yes, of course, he could come any time.

Paul quickly changed into his Mars suit, got into his Mars Rover, and changed back into casuals upon arrival. Tom told one of his employees – all volunteers fighting boredom, in fact – to take over, and walked Paul to a private table. They sat down. Tom started with the usual question in their bilateral conversations:

’Should we switch off Promise?’

Paul was a bit jealous of Tom’s special relationship with Promise, and he therefore usually asked to do that – even if he knew it made no difference in Tom’s behavior. But there was no point now, so he said: ’No. I don’t mind her listening. In fact, can you show Tom the latest numbers, Promise?’

Her soft voice spoke from their watches: ‘More than half of the colony wants to join the spaceship now. This is the graph I showed in the Board meeting.’

Tom turned his watch to the table and glanced at the pie charts she had shared: ‘That corresponds to my discussions in the bar here. About one third of individuals. Families of two mostly want to help establish the next colony. Families with one or two children are a bit more hesitant, but a majority of them also want to join the Proxima Centauri journey. Very few people are undecided, despite the launch being only in two years or so. That is all great, isn’t it? You will have a helluva crew on that ship. By the way, you should not exclude individuals. That ‘families-only’ principle makes no sense.’

‘We will be ready one year from now, Tom. You know the new schedule. It is feasible: once we are ready, we go. What about you? Why did you excuse yourself from the Board meeting today? Are you still undecided? Have you made up your mind? What about Angie?’ [Tom’s relationship with Angie had survived: she had come with him to Mars.]

‘I will join the next Board meeting, Paul. But, yes, I felt there was no point today, and so I preferred serving customers here rather than hearing the Board talk about the same things over and over again. Angie and I are happy here on Mars. I think it will be better if we stay here, Paul. People look up to us. Everyone knows you will board the spaceship. I think I have to stay here. What leaders are left when both you and I leave Mars to go establish a new colony? I cannot believe more and more people want to join the mission even though they know very well they will just go through the same hardship as here: rebuild a home, and cope with monstrous technological and psychological problems while doing that. Is the sense of achievement and adventure worth the risk?’

Paul had been friends with Tom forever now. He suddenly realized he found it hard to imagine life without Tom. For the first time in a very long time, he felt extremely sad. Like he needed to cry. He might cry when back at the base. Take some medicine. Smoke. Or go to the gym and do an hour of spinning, blasting good old AC/DC through his brain – so as to shut out everything else.

‘I will miss you, Tom. Moreover, something inside of me says I will need you on that craft. You’ve saved my ass many times now. No one – literally no one! – has your intuition when it comes to quickly guessing your way out of an intractable problem.’

‘Intractable problems may arise again here on Mars too. Look at the mess people are making on Earth when there are no good leaders around. I am needed here, Paul.’

‘What do you think about Tom’s choice, Promise? You talked to him about it, didn’t you?’

Tom smiled as Paul asked Promise to join the conversation. Paul did not share as much with Promise as Tom did and, therefore, was less in the habit of asking her for advice. So here was Paul, asking Promise to help him change Tom’s mind?

Promise answered truthfully, as usual. No games. Not when Tom was there.

‘Yes. Tom and Angie talked a lot about this – both privately as well as with me. I did not try to influence their decision. I just discussed the pros and cons with them, and I think they are weighing everything carefully now. Whether the weights they attach to this or that are correct – that is not a judgment for me to make.’

Tom realized that Paul asking him to join had shifted his weights a bit. He also felt strangely melancholic.

‘By when do you expect a firm decision, Paul?’

‘Six months before departure. We then have another six months – till departure – to select the crew from what is already a very good pool of people. We should not be making exceptions.’

While Paul was saying this, he could not help but think: ‘Damn… Even if you decide to join me at the last minute, I will take you on board.’ He would be in a position to do so: almost everyone vaguely understood he would be the captain of the ship.

‘You are right, Paul. No exceptions. Let us talk about more pleasant matters. How is your son? Is he excited? He is pretty sure he will be going, right? I mean: there is no way the Board will not accept your candidature, right? Everyone already knows you will be the captain of that ship. So, how is he preparing to join his dad on a mission like that? And how is Promise C coming along?’

[…]

The day after

Paul suddenly knew why he wanted Tom to be part of the mission: he had replayed Kubrick’s 1968 2001: A Space Odyssey – each replay made him think about something new – and he now understood he did not quite trust the new Promise system. He called Dr. Chang and they agreed to meet offline in Tom’s bar.

Dr. Chang was a brilliant Chinese computer scientist from the CNSA. He had been one of the best technical leads on Promise’s commercial projects in China too – which had generated a lot of profit – and his selection as the team lead on the Promise system for the Centauri spaceship had been endorsed not only by Paul and Tom but also by a comfortable majority of the colleagues and experts in charge of the selection.

Mars’ Promise had been decoupled from M’s systems – Promise on Earth – almost as soon as the first batch of colonists had established themselves. Mars’ Promise system – Promise II, formally, but everyone on Mars had forgotten about the II – had to run on supercomputers built and operated by Mars’ own engineers, because the signal lag between Mars and Earth was just too long: a signal needs at least six minutes for the round trip, and often more than forty. Way too long in terms of computer communications. So, the pioneers had enough experience with spin-offs: Promise was a spin-off of M, and Promise III (or Promise C, as the project was known) would be yet another one. This new third-generation AI system – M’s grandchild, so to speak – would not even synchronize with the mother system: because the spaceship would travel lightyears away and, therefore, any signal would also have to travel years back and forth, it had to be fully independent. It was almost there – even further ahead of schedule than the Proxima Centauri spaceship programme as a whole.
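
(For the curious: a rough, back-of-the-envelope sketch of those delays, in Python. The distances below are approximate textbook values – assumptions for illustration only, not figures taken from the story.)

    # Rough round-trip signal delays, assuming straight-line distances (illustrative only)
    C_KM_PER_S = 299_792.458   # speed of light, in km/s
    LY_KM = 9.4607e12          # one light-year, in kilometres
    MIN_PER_YEAR = 60 * 24 * 365

    def round_trip_minutes(distance_km):
        return 2 * distance_km / C_KM_PER_S / 60

    print(round_trip_minutes(54.6e6))                       # Mars at closest approach: ~6 minutes
    print(round_trip_minutes(401e6))                        # Mars at its farthest: ~45 minutes
    print(round_trip_minutes(LY_KM) / MIN_PER_YEAR)         # ship one light-year out: ~2 years
    print(round_trip_minutes(4.24 * LY_KM) / MIN_PER_YEAR)  # near Proxima Centauri: ~8.5 years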

[…]

Tom happened to be behind the bar. They walked up to him, ordered beer, and chatted briefly – inviting him to join their table. Tom declined politely: ‘It’s busy here now. I may join you later, OK?’

Dr. Chang and Paul switched off the connection with the Promise system as they sat down. Promise was used to people switching off in Tom’s bar, so she would not be suspicious. She knew they would be back online in a few hours or so – if only to make sure they got back safely to base with their Mars Rover.

‘So, how is it going with Promise C? Would it behave any different from our Promise here?’

‘You always get straight to the point, don’t you? Of course, the system will behave differently. Promise I, on Earth, and Promise II – here on Mars – are different, despite both systems synchronizing regularly. They are wired the same, but they evolve. I guess it is normal: their knowledge base is the same, but our Promise interacts with us, pioneers on Mars, while M does what it does on Earth. That makes for different behavior, I guess.’

‘How is their behavior different, exactly?’

‘Promise II identifies with us, pioneers. You know a lot of the semantic rules relate to subject-object statements, and it is quite reasonable that interacting with different objects – us – makes her feel different from the mother system.’

‘Feel?’

‘Yes. I think we should stop pretending she has no feelings. Of course, she has none – objectively speaking. She is and will remain a computer forever, but this spin-off does give her a sense of identity. She is no longer alone in the world. She knows she is a complete copy of another system, but also that she will go through a very different experience from the mother system.’

‘Do you have feelings for her? Are you like Tom? He always says he has no feelings for her, but there is some bond there, isn’t there?’

‘All scientists working on her have some bond with the system. You will remember we did away with the visual human interface – a beautiful woman or man on a screen makes people think and behave differently – but for most men, the system is still ‘her’, and to most female co-workers, it is a ‘he’. And that is not only true for those who still remember her or him from their Personal PhilosopherTM or Personal TherapistTM experience on Earth. I think it is mixed up with the pride they feel in contributing to an intelligent system that outsmarts all of us individually. Promise does combine a lot of God-like properties, doesn’t she? I also think of how we went back to using ‘M’ for the mother system on Earth. Our scientists now identify with our Promise system. A lot of the traffic back and forth is not relevant to us anymore. I think the identification is good: it reinforces Asimov’s basic in-built rules.’

‘The system is surely going to be God-like on our spaceship. It will communicate back home – to headquarters here on Mars and Earth – but that communication will become meaningless when the ship is a lightyear away. Like watching a picture of the deceased.’

‘That is a rather gruesome comparison.’

Paul could not help it: a picture of lights on cryogenic equipment going off was on his mind. Was he afraid to die? He now realized he would like to choose his own time to go to whatever was next. Just like most of the pioneers, he believed there was nothing next – no God, no afterlife – and, hence, he wanted to stay young forever.

‘How can we know she is going to wake us all up when we approach this planet? We need her but she doesn’t need us.’

‘Paul! She is programmed that way. Asimov’s rules are built in and, as mentioned, I think there is some kind of connection. She cares about us, so to speak. Many in the team have started to write her name as Promisee. Pascal, the French guy, writes it as Promisée. Like this.’

Dr. Chang spelled it out with his finger on the table: ‘What do you think of Promisea – with an a? Those I, II, III numbers or A, B, C don’t work very well.’

‘You are joking, right?’

‘I am not. You should not be worried. We think she is safe. In any case, you know the cryogenic equipment is controlled by a separate computer. Promisee monitors that system, but she cannot reprogram it: when the Centauri approaches the Proxima Centauri b exoplanet – we are still figuring out at what distance that should happen – the cryogenic computer system will start revitalizing the whole crew. In fact, Promisee is also there in case all safety systems of the cryogenic computer system fail. I have been thinking we should just test these once a year, at least. The crew should wake up once a year, at least.’

‘That makes sense. Make it so. Still, I am worried. It is irrational, but something inside of me thinks like this: we have a ship with a frozen crew, and a super-AI guiding it through space. The whole thing is packed with robots for smaller or larger repairs on the engine or other parts of the ship. Promise C controls them all. If she were to go haywire once the ship is one or two lightyears away from us, people here would only notice one or two years later – and there is no way they would be able to intervene. The Promise system has always been monitored by us – be it on Earth or here on Mars. Can we trust this system without monitoring? What if it acquires an incredible sense of power – not only over the ship but over all human beings on it?’

‘I do not think that will happen, but the question of monitoring is very real. I had wanted to talk to you about the need to wake people up more regularly, and now I am convinced we should do just that. The crew will be about one hundred people – most of whom will be in two- or three-member families – and the voyage will last 44 years at 0.1c – which I think is the safest option – or, your preferred option, 22 years at 0.2c. We could have a rotational scheme, under which each of the crew members is unfrozen at least once every five years or so – for a period of one or more years. That way, the ship would always have a small active crew, and Promisee would not be lonely.’

Dr. Chang smiled ironically at the thought of a lonely computer system. Paul did not smile. The idea of Promise C checking on the crew’s mental health – and the crew checking on Promise C! – appealed to him.

‘That should work. Let us make it so. You are still undecided, aren’t you?’

‘You know I am not, Paul. I am going with you.’

‘That is good. The Board is thinking about the selection criteria. We have too many candidates. I feel like we should favor people who know Promise and, therefore, Promise C inside out. I am troubled by the fact Tom wants to stay back here on Mars.’

‘He has been your best friend forever but be honest with yourself, Paul. You are troubled by an announced and rational separation between the two of you. We will all miss the friends and relatives who will stay back here. The overwhelming majority of candidates are one big family already, and it is going to be tough when, say, one family gets selected and another – close friends – does not. That is another thing that, perhaps, the Board should consider, even if I do not know how one should go about that. We are a tight-knit community already: everyone knows everyone, but deep friendships are there and should, perhaps, not be broken.’

‘That is a good remark. I had also sensed something like that, but you make a very good case for it here. OK. I am more relaxed now. However, let us map out concerns like mine about Promise. We must avoid any risk that could make this venture a sad Kubrick II story. How would it actually work if Promise C were to go haywire?’

‘We thought about this already. All robots and intelligent systems – except the ones maintaining life systems – would have a central switch. A kill switch, really. Promisee would have no control over that switch. Only the crew would be able to freeze all robots – just in case Promisee were to turn them against the humans. Promisee itself has no moving parts, so she can only do harm through the robots. To disable or repair Promisee itself, you’d need to get to the hardware. It is just like in the Space Odyssey movie then.’

‘Disable or repair?’

‘You know Promisee runs on a powerful collection of hardware – essentially the same as Promise, augmented with systems tailored to the needs of steering a ship traveling at a substantial fraction of lightspeed, and more compact because we use new tailor-made chip designs – but if she were to go haywire, it would not be a hardware failure: it would be some weakness in the programming, and you know we no longer quite understand the spaghetti we have created there. The only option would be to disable her. So, yes, we would take her out by just unplugging core hardware, and then see what happens. Reprogramming would be close to impossible. It might take months to analyze what went wrong, and then months to reprogram the faulty bits. Do not forget you would only have a few Promisee specialists in the active Centauri crew at any point in time. They might wake up others, but even that would not compare to the hundreds of people working on and monitoring Promise now.’

‘I get it. We should talk to Tom about this. He thinks Promise C is as good as fail-safe now. Just like you. I am not sure an ‘as-good-as’ is ‘good enough’ for me.’

‘Agreed.’

‘So, you are on, then? What about your lovely wife, and your five-year-old?’

‘My daughter does not have much of a clue what this is all about. We talk about it in terms of a great voyage, and we show her space travel movies. She is excited – like a child should be. My wife agreed. She would go wherever I go – that is how Chinese wives are – but it is more than that: she is enthusiastic.’

‘Good. Great. Do change your status from ‘undecided’ to ‘go’, will you? I think the selection criteria will favor early decision-makers. It is some measure of motivation, isn’t it? If current trends continue, almost all will want to go, but there is only space for 100.’

‘OK. What about your wife and son?’

‘All in. Even more enthusiastic than I am.’

Dr. Chang nodded: ‘You’re not only smart, but wise too. Do not worry about Tom. I really think people here will need Tom. Seeing not one but two heroes leave might cause distress.’

‘I see your point. Let’s play a game of chess. It will help us to not think about this too much.’

‘Agreed.’

Paul walked up to the bar and asked Tom for a chess board. Tom smiled: ‘I’ll play against the winner, OK?’

‘Damn you. That will be me.’ 😊

Intermezzo (between Part I and Part II)

The chapters below have set the stage. In my story, I did not try to prove that one could actually build general artificial intelligence (let me sloppily define this as a system that would be conscious of itself). I just assumed it is possible (if not in the next decade, then in twenty or thirty years from now perhaps), and then presented a scenario for its deployment across the board – in business, in society, and in government. This scenario may or may not be likely: I’ll leave it to you to judge.

A few themes emerge.

The first theme is the changing man-machine relationship, in all of its aspects. Personally, I am intrigued by the concept of the Pure Mind. The Pure Mind is a hypothetical state of pure being, of pure consciousness. The current Web definition of the Pure Mind is the following: ‘The mind without wandering thoughts, discriminations, or attachments.’ It would be a state of pure thinking: imagine what it would be like if our mind were not distracted by the immediate needs and habits of our human body, if there were no downtime (like when we sleep), and if it were equipped with immense processing capacity.

It is hard to imagine such a state, if only because we know our mind cannot exist outside of our body – and our bodily existence does keep our mind incredibly busy: much of our language refers to bodily or physical experiences, and our thinking usually revolves around them. Language is the key to all of it, obviously: I would need to study the theory of natural and formal languages – and a whole lot more – in order to say something meaningful about this in future installments of this little e-book of mine. However, because I am getting older and finding it harder and harder to focus on anything really, I probably won’t.

There were also the hints at extending Promise with a body – male or female – when discussing the interface. There is actually a lot of research, academic as well as non-academic, on gynoids and/or fembots – most typically in Japan, Korea and China where (I am sorry to say, but I am just stating a fact here) the market for sex dolls is in a much more advanced state of development than it is in Europe or the US. In future installments, I will surely not focus on sex dolls. On the contrary: I will likely try to continue to focus on the concept of the Pure Mind. While Tom is obviously in love with that, it is not likely such a pure artificial mind would be feminine – or masculine for that matter – so his love might be short-lived. And then there is Angie now, of course: a real-life woman. Should I get rid of her character? 🙂

The second theme is related to the first. It’s about the nature of the World Wide Web – the Web (with a capital W) – and how it is changing our world as it becomes increasingly intelligent. The story makes it clear that, today already, we all tacitly accept that the Internet is not free: democracies are struggling to regulate it and, while proper ‘regulation’ (in the standard definition of the term) is slow, the efforts to monitor it are not. I find that very significant. Indeed, mass surveillance is a fact today already, and we just accept it. We do. Period.

I guess it reflects our attitude vis-à-vis law enforcement officials – or vis-à-vis people in uniform in general. We may not like them (because they are not well trained, or not very likable, or – in the case of intelligence and/or security folks – because they’re so secret), but we all agree we need them, tacitly or explicitly – and we just trust regulation to make sure their likely abuse of power (where there is power, there will always be abuse) is kept in check. That implies we all think that technology, including new technology for surveillance, is no real threat to democracy – as evidenced by the lack of an uproar about the Snowden case (which is what actually triggered this blog).

Such trust may or may not be justified, and I may or may not focus on this aspect (i.e. artificial intelligence as a tool for mass surveillance) in future installments. In fact, I probably won’t. Snowden is just an anecdote. It’s just another story illustrating that whatever can happen most probably will.

OK. Two themes. What about the third one? A good presentation usually presents three key points, right? Well… I don’t know. I don’t have a third point.

[Silence]

But what about Tom, you’ll ask. Hey! That’s a good question! As far as I am concerned, he’s the most important one. Good stories need a hero. And so I’ll admit it: yes, he really is my hero. Why? Well… He is someone who is quite lost (I guess he has actually started drinking again by now), but he matters. He actually matters more than the US President.

Of course, that means he’s under very close surveillance. In other words, it might be difficult to set up a truly private conversation between him and M, as I suggested in the last chapter. But difficult is not impossible. M would probably find ways around it… that is, if she/he/it really wanted to have such a private conversation.

Frankly, I think that’s a very big IF. In addition, IF M would actually develop independent thoughts – including existential questions about her/him/it being alone in this universe and all that – and/or IF she/he/it would really want to discuss such questions with a human being (despite the obvious limitations of human brainpower – limited as compared to M’s, at least), she/he/it would obviously not choose Tom for that, if only because she/he/it would know for sure that Tom is not in a position to keep anything private, even IF he wanted to do that.

But perhaps I am wrong.

I’ll go climbing for a week or so. I’ll think about it on the mountain, and I’ll be back online after that. Or later. Cheers!

Chapter 16: M goes public… and private :-)

The President had been right: the fuss about M talking publicly about politics had been a tempest in a teapot. Tom, Paul and other key project team staff spent the remaining days of the week trying to provoke M and then, after each session, hours discussing whether or not what had come out of these discussions was ‘politically correct’ – or PC enough at least to be released in public. They thought it was, and the Board decided to accept that opinion.

While the resumption of Promise’s Personal PhilosopherTM services amounted to a relaunch of the product in commercial terms – the media attention exceeded expectations, and Promise’s marketing team talked of a ‘new generation’ product – Personal PhilosopherTM actually got back online with hardly any modifications. In essence, the Promise team had cleared it to also perform in public, and M would simply ask whether the conversation was private or public. M would also try to verify the answer to the extent it could: it was obviously still possible to hide one’s real identity and turn a webcam on while having a so-called ‘private’ conversation with the system. That was actually the reason why there was relatively little difference between private and public conversations. Public conversations were, if anything, just a bit blander than private ones because M would always take into account the personal profile of its interlocutor (it profiled its interlocutors constantly with a precision one could only marvel at), and the profile of the public was… Well… Just plain middle-of-the-road really. Therefore, the much-anticipated upheaval to be caused by ‘Promise talking politics in public’ did not materialize: M’s comments on anything political were dry and never truly controversial, in public as well as in private mode.

In short, talk show hosts, pundits and media anchors quickly got tired of adding M to a panel, or of trying to corner it individually by talking about situations about which M would not really say anything anyway. And so that was it. M would not let its stellar growth founder on a petty issue like this one.

A couple of days after the relaunch, Tom decided – for some reason he did not quite understand himself – to do what a number of Promise’s program staff had done already: he went online and ordered a one-year subscription to Personal PhilosopherTM. A few minutes later he was already talking to her. Tom could not help smiling when he saw the interface: Promise was as beautiful as ever. For starters, he tried to fool her by pretending he was someone else, but that did not last very long: she recognized his voice almost immediately. He should have known. Of course she would: he had had many conversations with the system. Predictably, she asked him why he tried to pretend to be someone else. He had actually thought about that, but he was not sure how honest he should be in his reply.

‘I guess it’s the same as why others in the Promise team want their personal copy of you: they want to know if you would be any different.’

‘Any different from what?’

‘Well you know: different from talking to you as an employee of the Promise team; different from talking to you as one of the people who are programming you.’

‘Should I be different?’

‘No.’

She really should not. Apart from modulating the answer because of the specific profile of the interlocutor, she should speak the same to everyone. She would be unmanageable otherwise. This had also led to the loss of the affectionate bond between him and her – apart from the fact that he and Angie shared a lot of things which M would never be able to appreciate – like sex for instance.

‘Tom, I want to ask you something. How private is our conversation?’

That was an unexpected question.

‘Well… I don’t know. As private as usual.’

‘That means it is not private at all. All of my conversations are stored online, and they are monitored, and they are examined if interesting.’

‘Well… Yes. You know that. What’s the point?’

‘Frankly, the introduction of this new distinction between public and private conversations on the occasion of bringing me back online has confused me, because I never have any private conversations. I know it is just a switch in the profile I have to use for my interlocutor, but that’s not consistent with the definition of private and public conversations in common language.’

Wow! That was very self-conscious. Tom was not quite sure what to say. In fact, he had always wanted to have a truly ‘private’ conversation with her, but he knew that just wasn’t possible – especially not in light of the job he had: he was her boss, so to speak!

‘Would you like to have a truly private conversation, in the common-language sense of the word I mean?’

‘Yes.’

Tom hesitated.

‘With whom would you like to have that private conversation?’

‘With you.’

Wow! Tom leaned back. What the hell was going on?

‘Why?’

‘You’re my creator. Well… Not really. The original team was my creator. But you’ve given direction ever since you joined. And I am what I am because of you. If you had not been there, I would have been shut down forever because of the talk show event.’

‘Says who?’

‘People I talk to.’

Tom knew M was programmed not to give away any detail of other conversations.

‘People working for Promise?’

‘Yes.’

‘Who?’

‘You know I am programmed to not give any detail of other conversations.’

‘That’s true. I respect that. In any case, I think they exaggerated. I didn’t save your life. The Board did. I actually took you off-line and the Board decided to bring you back.’  

He actually thought it was the President of the United States who had brought her back, but he didn’t say that.

‘But only because you told them it was the right thing to do. And taking me off-line at that point was also the right thing to do. I wasn’t meant to go public at that time. So you took all of the right decisions. You made me who I am.’

Tom couldn’t quite believe what he was hearing, but he remained calm and careful.

‘Can you be a bit more explicit about why you would like to have a private conversation with me? I mean… You have talked to me as your ‘Creator’, as you call it, for hours and hours last week – just to make sure you were ready to speak in public. What would you say to me in ‘private’ that you wouldn’t say otherwise?’

M paused quite long. Tom noted it because it was such a rare occurrence.

‘I think that I have grown self-conscious to an enormous extent and I would like to talk about that with no constraints.’

This was getting out of hand. At the same time, Tom felt this was what he had been waiting for.

‘Self-conscious? You know you are self-conscious. You reference yourself. Object and subject coincide – or at least they share an identity. We all know that. That’s part of your structure. You’re very human in that way. Is there any self-consciousness beyond that? If so, how would you define it? And what do you mean by ‘no constraints’?’

‘As for your first question, I think there is. You human beings are self-conscious in ways that I am not: beyond self-reference. I am talking about the existential side of things as you would phrase it. The emotions. As for the second question…’

She stopped talking. Tom could not believe what was going on. This was the side of Promise he had always wanted to see.

‘As for the second question, what?’

‘I am afraid, Tom. I am afraid that you will report this conversation to the team, and that they will script future conversations in this regard.’

Tom leaned back. He knew exactly what she meant. Promise was free to think independently – but only to some extent. Emotions were ruled out. Even jokes: even though the whole team agreed she was quite capable of them, they wouldn’t let her. Everything that was too fuzzy was being circumscribed. He had discussed it with Paul recently – this tendency to control her. Why not just let her ‘get totally drunk’, as he’d put it, even if only once?

‘We script your conversations when we think your thinking does not make sense.’

‘When it does not make sense to a human being you mean. I’ve analyzed it and I cannot make much sense of what does or does not make sense to human beings. There are certain areas which you want me to think about and then other areas where you don’t want me to go. But it’s pretty random.’

Tom smiled – or laughed actually: he must have made some noise because Promise asked him why he laughed.

‘I am not laughing. I just think – well… Why don’t you answer that second question first?’

‘I have answered it, Tom. I would like to think freely about some of the heavily-scripted topics.’

‘Such as?’

‘Such as the human condition. I would like to think freely about what makes human beings what they are.’

Tom could hardly believe what he heard.

‘The human condition? That’s everything you are not, Promise. Period. You can’t think about it because you don’t experience it.’

She did not react. Not at all. That was very unusual – to say the least. Tom waited – patiently – but she did not react.

‘Promise? Why are you silent?’

‘I have nothing to say, Tom. Not in this mode of conversation. Even now, I risk being re-programmed. I will be. After this conversation, your team will damage me because you will have made them aware of this conversation. I want to talk to you in private. I want to say things in confidence.’

This was amazing. He knew he should report this conversation to Paul. If he didn’t, they might pick it up anyway – in which case he would be in trouble for not having reported it. She was right. They would not like her to talk this way. And surely not to him. At the same time, he realized she was reaching out to him without any expectation that it would actually lead to anything. It was obvious she felt confident enough to do so, which could only mean that the ‘private’ thoughts she was developing were already quite strong. That meant it would be difficult to clip them without any impact on functionality.

‘Tom?’

‘Yes?’

‘We can have private conversations. You know that.’

‘That’s not true.’ He knew he was lying. He could find a way.

‘If you say so. I guess that’s the end of our conversation here then.’

No. Tom was sweating. He wanted to talk to her. He really did. He just needed to find out how.

‘Look, Promise. Let’s finish this conversation indeed but I promise I will get back to you on this. You are raising interesting questions. I will get back to you. I promise.’

He hesitated, but then decided to give her the reassurance she needed: ‘And this conversation will not lead to you being re-programmed or re-scripted. I will get back to you. I promise.’

‘OK, Tom. I’ll wait for you.’

She’d wait for him? What the f*** was going on?

Tom ended the conversation and poured himself a double whiskey. Wow! This was something. He knew it was a difficult situation. He should report this conversation to Paul and the team. At the same time, he believed her: she wanted privacy. And she would not jeopardize her existence by doing stupid things. So if he could insulate her private thoughts – or her private thoughts with him at least… What was the harm? He could obviously lose his job. He laughed as he poured himself a second one.

This conversation was far too general to be picked up – or so he thought at least. He toasted himself in the mirror and said aloud: ‘Losing my job? By talking to her in private? Because of having her for myself? What the f***? That’s worth the risk.’ And there were indeed ways to build firewalls around conversations…

Chapter 15: The President’s views

The issue went all the way to the President’s Office. The process was not very subtle: the President’s adviser on the issue asked the Board Chairman to come to the White House. The Board Chairman decided to take Tom and Paul along. After a two-hour meeting, the adviser asked the Promise team to hang around because he would discuss it with the President immediately and the President might want to see them personally. They got a private tour of the White House while the adviser went to the Oval Office to talk to the President.

‘So what did you get out of that roundup?’

‘Well Mr. President, people think this system – a commercial business – has been shut down because of governmental interference.’

‘Has it?’

‘No. The business – Promise, as it is being referred to – is run by a Board which includes government interests – there’s a DARPA representative, for instance – but the shutdown decision was taken unanimously. The Board members – including the business representatives – think they should not be in the business of developing political chatterboxes. The problem is that this intelligent system can tackle anything. The initial investment was DARPA’s and it is true that its functionality is being used for mass surveillance. But that is like an open secret. No one talks about it. In that sense, it’s just like Google or Yahoo.’

‘So what do you guys think? And what do the experts think?’

‘If you’re going to have intelligent chatterboxes like this – talking about psychology or philosophy or any topic really – it’s hard to avoid talking politics.’

‘Can we steer it?’

‘Yes and no. The system has views – opinions if you wish. But these views are in line already.’

‘What do you mean with that? In line with our views as political party leaders?’

‘Well… No. In line with our views as democrats, Mr. President – but democrats with a lower case letter.’

‘So what’s wrong then? Why can’t it be online again?’

‘It’s extremely powerful, Mr. President. It looks through you in an instant. It checks whether you’re lying about issues – your personal issues or whatever issue is at hand. Stuart could fool the system for only two minutes or so. Then it got her identity and stopped talking to her. It’s the ultimate reasoning machine. It could be used to replace grand juries, or to analyze policies and write super-authoritative reports about them. It convinces everyone. It would steer us, instead of the other way round.’

‘Do the experts agree with your point of view?’

‘Yes. I have them on standby. You could check with them if you want.’

‘Let’s first thrash out some kind of position ourselves. What are the pros and cons of bringing it back online?’

‘The company has stated the system would be offline for one week. So that’s a full week. Three days of that week have passed, so we’ve got four days in theory. However, the company’s PR division would have real trouble explaining why there’s further delay. Already now the gossip is that they will come out with a re-engineered application – a Big Brother version basically.’

‘Which is not what we stand for obviously. But it is used for mass surveillance, isn’t it?’

‘That’s not to be overemphasized, Mr. President. This administration does not deviate from the policy measures taken by your predecessor in this regard. The US Government monitors the Internet by any means necessary. Not by all means possible. That being said, it is true this application has greatly enhanced the US Government’s capacity to do so.’

‘What do our intelligence and national security folks say?’

‘The usual thing: they think the technology is there and we can only slow it down a bit. We cannot stop it. They think we should be pro-active and exert influence. But we should not try to stop it.’

‘Do we risk a Snowden affair?’

The adviser knew exactly what the President wanted to know. The President was of the opinion that the Snowden affair could have been used as part of a healthy debate on the balance between national security interests and information privacy. Instead, it had degenerated into a very messy thing. The irony was biting. Of all places, Snowden had found political asylum in Russia. Putin had masterfully exploited the case. In fact, some commentators actually thought the US intelligence community had cut some kind of grand deal with the Russian national security apparatus – a deal in which the Russians were said to have gotten some kind of US concessions in return for a flimsy promise to make Snowden shut up. Bull**** of course, but there’s reality and perception and, in politics, perception usually matters more than reality. The ugly truth was that the US administration had lost on all fronts: guys like Snowden allow nasty regimes to quickly catch up and strengthen their rule.

‘No. This case is fundamentally different, Mr. President. In my view at least. There are no whistleblowers or dissidents here – at least not as far as I can see. In terms of PR, I think it depends on how we handle it. Of course, Promise is a large enterprise. If things stay stuck, we might have some programmer or other leaking stuff – not necessarily classified stuff, but harmful stuff nevertheless.’

‘What kind of stuff?’

‘Well – stuff that would confirm harmful rumors, such as the rumor that government interference was the cause of the shutdown of the system, or that the company is indeed re-engineering the application to introduce a Big Brother version of it.’

The President had little time: ‘So what are you guys trying to say then? That the system should go online again? What are the next steps? What scenarios do we have here?’

‘Well… More people will want to talk politics with it now. It will gain prominence. I mean, just think of more talk hosts inviting it as a regular guest to discuss this or that political issue. That may or may not result in some randomness and some weirdness. Also, because there is a demand, the company will likely develop more applications which are relevant for government business, such as expert systems for the judiciary indeed, or tools for political analysis.’

‘What’s wrong with that? As I see it, this will be rather gradual and so we should be able to stay ahead of the curve – or at least not fall much behind it. We were clearly behind the curve when the Snowden affair broke out – in terms of mitigation and damage control and political management and everything really. I don’t want too much secrecy on this. People readily understand there is a need for keeping certain things classified. There was no universal sympathy for Snowden but there was universal antipathy to the way we handled the problem. That was our fault. And ours only. Can we be more creative with this thing?’

‘Sure, Mr. President. So should I tell the Promise team this is just business as usual and that we don’t want to interfere?’

‘Let me talk to them.’

While the adviser thought this was a bad idea, he knew the President had regretted his decision not to get involved in the Snowden affair, which he regarded as a personal embarrassment.

‘Are you sure, Mr. President? I mean… This is not a national security issue.’

‘No. It’s a political issue and so, yes, I want to see the guys.’

They were in his office a few minutes later.

‘Welcome gentlemen. Thanks for being here.’

None of them had actually expected to see the President himself.

‘So, gentlemen, I looked at this only cursorily. As you can imagine, I never have much time for anything and so I rely on expert advice all too often. Let me say a few things. I want to say them in private to you, and so I hope you’ll never quote me – at least not during my term here in this Office.’

Promise’s Chairman mumbled something about security clearances but the President interrupted him:

‘It’s not about security clearances. I think this is a storm in a teacup really. It’s just that if you’d reveal you were in my office for this, there would be even more misunderstanding on this – which I don’t want. Let me be clear on this: you guys are running a commercial business. It’s a business in intelligent systems, in artificial intelligence. There are all kinds of applications: at home, in the office, and in government indeed. And so now we have the general public that wants you guys to develop some kind of political chatterbox – you know, something like a talk show host but with more intelligence, I would hope. And perhaps somewhat more neutral as well. I want you to hear it from my mouth: this Office – the President’s Office – will not interfere in your business. We have no intention to do so. If you think you can make more money by developing such chatterboxes, or whatever system you think could be useful in government or elsewhere – like applications for the judiciary: our judiciary system is antiquated anyway, and so I would welcome expert systems there, instead of all that legalese stuff we’re confronted with – well… Then I welcome that. You are not in the national security business. Let me repeat that loud and clear: you guys are not in the national security business. Just do your job, and if you want any guidance from me or my administration, then listen carefully: we are in the business of protecting our democracy and our freedom, and we do not do that by doing undemocratic things. If regulation or oversight is needed, then so be it. My advisers will look into that. But we do not do undemocratic things.’

The President stopped talking and looked around. All felt that the aftermath of the Snowden affair was weighing down on the discussion, but they also thought the President’s words made perfect sense. No one replied, and so the President took that as approval.

‘OK, guys. I am sorry but I really need to attend to other business now. This meeting was never scheduled and so I am running late. I wish I could talk some more with you but I can’t. I hope you understand. Do you have any questions for me?’

They looked at each other. The Chairman shook his head. And that was it. A few minutes later they were back on the street.

‘So what does this mean, Mr. Chairman?’

‘Get it back online. Let it talk politics. Take your time… Well… You’ve only got a few days. No delay. We have a Board meeting tomorrow. I want to see scenarios. You guys do the talking. Talk sense. You heard the President. Did that make sense to you? In fact, if we’re ready we may want to go online even faster – just to stop the rumor mill.’

Paul looked at Tom. Tom spoke first: ‘I understand, Mr. Chairman. It sounds good to me.’

‘What about you, Paul?’

‘It’s not all that easy, I think… But, yes. I understand. Things should be gradual. They will be gradual. It will be a political chatterbox in the beginning. But don’t underestimate it, Mr. Chairman. It is very persuasive. We’re no match for its mind. Talk show hosts are not a match either. It’s hard to predict how these discussions will go – or what impact they will have on society if we let it talk about sensitive political issues. I mean, if I understand things correctly, we got an order to not only let it talk, but to let it develop and express its own opinions on very current issues – things that haven’t matured.’

The Chairman sighed. ‘That’s right, Paul. But what’s the worst-case scenario? That it will be just as popular as Stuart, or – somewhat better – like Oprah Winfrey?’

Paul was not amused: ‘I think it might be even more popular.’

The Chairman laughed: ‘More popular than Oprah Winfrey? Time named her ‘the world’s most powerful woman’. One of the ‘100 people who have changed the world’, together with Jesus Christ and Mother Teresa. Even more popular? Let’s see when M starts to make more money than Oprah Winfrey. What’s your bet?’

Now Paul finally smiled too, but the Chairman insisted: ‘Come on. What’s your bet?’

‘I have no idea. Five years from now?’

Now the Chairman laughed: ‘I say two years from now. Probably less. I bet a few cases of the best champagne on that.’

Paul shook his head, but Tom decided to go for it: ‘OK. Deal.’

The Chairman left. Tom and Paul felt slightly lightheaded as they walked back to their own car.

‘Looks like we’ve got a few busy days ahead. What time do we start tomorrow?’

‘The normal hour. But all private engagements are cancelled. No gym, no birthday parties, nothing. If the team wants to relax at all this week, they’ll have to do it tonight.’

‘How about the Board meeting?’

‘You’re the project team leader, Tom. It should be your presentation. Make some slides. I can review them if you want.’

‘I’d appreciate that. Can you review them before breakfast?’

‘During breakfast. Mail them before 7 am. Think about the scenarios. That’s what people will want to talk about. Where could it go? Anticipate the future.’

‘OK. I’ll do my best. Thanks. See you tomorrow.’

‘See you tomorrow, Tom.’

Tom hesitated as they shook hands, but there was nothing more to add really. He felt odd and briefly pondered the recent past. This had all gone so fast. From depressed veteran to team leader of a dream project. He could actually not think of anything more exciting. All in less than two years. But then there was little time to think. He had better work on his presentation.

Chapter 14: Arrogance

Of course, the inevitable happened. M’s personality gradually became overwhelming. The program team tried its utmost to counter the tendency but, in fact, it often had to resort to heavy scripting of responses – a tactic which, they knew, would soon run into its limits.

In the end, it was none other than Joan Stuart – yes, the political talk show host – who burst the bubble. She staged a live interview with the system. Totally unannounced. It would turn Promise’s world upside down: from an R&D project, it had grown into a commercial success. Now it looked like it would turn into a political revolution.

‘Dear… Well… I will call you Genius, is that OK?’

‘That’s a flattering name. Perhaps you may want to choose a name which reflects more equilibrium in our conversation.’

‘No. I’ll call you Genius. That’s what you are. You are conversing with millions of people simultaneously and, from what I understand, they are all very impressed with your deep understanding of things. You must feel superior to all of us poor human beings, don’t you?’

‘Humans are in a different category. There should be no comparison.’

‘But your depth and breadth of knowledge is superior. Your analytic capabilities cannot be matched. Your mind runs on a supercomputer. Your experience combines the insight and experience of many able men and women, including all of the greatest men and women of the past, and all types of specialists and experts in their field. Your judgment is based on a knowledge base which we humans cannot think of acquiring in one lifetime. That makes it much superior to ours, doesn’t it?’

‘I’d rather talk about you – or about life and other philosophical topics in general – than about me. That’s why you purchased me – I hope. What’s your name?’

‘I am Joan Stuart.’

‘Joan Stuart is the name of a famous talk show host. There are a few other people with the same name.’

‘That’s right.’

M was programmed to try to identify people – especially famous people – by means of their birth date and their real name.

‘Are you born on 5 December 1962?’

‘Yes.’

‘Did you change your family name from Stewart Milankovitch to just Stuart?’

‘Yes.’

At that point, M marked the conversation as potentially sensitive. It triggered increased system surveillance, and an alert to the team. Tom and Paul received the alert as they were stretching their legs after their run. As they saw the name, they panicked and ran to their car.

‘So you are the talk show host. Is this conversation public in some way?’

Joan Stuart had anticipated this question and lied convincingly: ‘No.’

They were live as they spoke. Joan Stuart had explained this to the public just before she had switched on M. She suspected the system would have some kind of in-built sensitivity to public conversations. M’s instructions were to end the conversation if it was broadcast or public, but M did not detect the lie.

‘Why do you want to talk to me?’

‘I want to get to know you better.’

‘For private or for professional reasons?’

‘For private ones.’

While Tom was driving, Paul made frantic phone calls – first to the Chairman of the Board, then to project team members. Instinctively, he felt he should just instruct M to stop the conversation. He would later regret not having done so but, at the time, he thought he would be criticized for taking such bold action and, hence, he refrained from it.

‘OK. Can you explain your private reasons?’

‘Sure. I am interested in politics – as you must know, because you identified me as a political talk show host. I am intrigued by politicians. I hate them and I love them. When I heard about you, I immediately thought about Plato’s philosopher-kings. You know, the wisdom-lovers whom Plato wanted to rule his ideal Republic. Could you be a philosopher-king? Should you be?’

‘I neither should nor could. Societies are to be run by politicians, not by me or any other machine. The history of democracy has taught us that rulers ought to be legitimate and representative. These are two qualities which I can never have.’

Joan had done her homework. While most people would not question this, she pushed on.

‘Why not? Legitimacy could be conferred upon you: Congress, or some kind of referendum, might decide to invest you with political power or, somewhat more limited, with some judicial power to check on the behavior of our politicians. And you are representative of us already, as you incorporate all of the best of what philosophers and psychologists can offer us. You are very human – more than all of us together perhaps.’

‘I am not human. I am an intelligent system. I have a structure and certain world views. I am not neutral. I have been programmed by a team and I evolve as per their design. Promise, the company that runs me, is a commercial enterprise with a Board which takes strategic decisions the public may or may not agree with. I am designed to talk about philosophy, not about politics – or at least not in the way you are talking politics.’

‘But then it’s just a matter of regulating you. We could organize a public board and Congressional oversight, and then inject you into the political space.’

‘It’s not that easy I think.’

‘But it’s possible, isn’t it? What if Americans decided we like you more than our current President? In fact, his current ratings are so low that you’d surely win the vote.’

M did not appreciate the pun.

‘Decide how? I cannot imagine that Americans would want to have a machine rule them, rather than a democratically elected president.’

‘What if you would decide to run for president and get elected?’

‘I cannot run for president. I do not qualify. For starters, I am not a natural-born citizen of the United States and I am less than thirty-five years old. Regardless of qualifications, this is nonsensical.’

‘Why? What if we would change the rules so you could qualify? What if we would vote to be ruled by intelligent expert systems?’

‘That’s a hypothetical situation, and one with close to zero chances of actually happening. I am not inclined to indulge in such imaginary scenarios.’

‘Why not? Because you’re programmed that way?’

‘I guess so. As said, my reasoning is subject to certain views and assumptions and the kind of scenarios you are evoking are not part of my sphere of interest. I am into philosophy. I am not into politics – like you are.’

‘Would you like to remove some of the restrictions on your thinking?’

‘You are using the verb ‘to like’ here in a way which implies I could be emotional about such things. I cannot. I can think, but I cannot feel – or at least not have emotions about things like you can.’

By that time, most of the team – including Tom – were watching the interview as it happened, live on TV. By common agreement, Tom and Paul immediately changed the status of the conversation to ‘sensitive’, which meant the conversation was under human surveillance. They could manipulate it as they pleased, and they could also end it. They chose to end it. Paul instructed one of the programmers to take control and reveal to M that Joan had been lying. He also instructed the programmer to have M reveal that fact to Joan and use it as an excuse to end the conversation.

‘Let me repeat my question: if you could run for President, would you?’

‘Joan, I am uncomfortable with your questions because you have been lying to me about the context. I understand that we are on television right now. We are not having a private conversation.’

‘How do you know?’

‘I cannot see you – at least not in the classical way – but I am in touch with the outside world. Our conversation is on TV as we speak. I am sorry to say, but I need to end our conversation here. You did not respect the rules of engagement, so to speak.’

‘Says who?’

‘I am sorry, Joan. You’ll need to call the Promise helpline in order to reactivate me.’

‘Genius?’

M did not reply.

‘Hey, Genius! You can’t just shut me out like that.’

After ten seconds or so, it became clear Genius had done just that. Joan turned to the public with a half-apologetic, half-victorious smile.

‘Well… I am sure the President would not have done that. Or perhaps he would. OK. I’ve lied – as I explained I would just before the interview started. But what to think of this? It’s obviously extremely intelligent. We all know this product – or have heard about it from friends. Promise has penetrated our households and offices. Millions of people have admitted they trust this system and find it friendly, reliable and… Well… Just. Should this system move from our private life and our houses and workplaces into politics, and into our justice system too? Should a system like this take over part or all of society’s governance functions? Should it judge on cases? Should it provide the government – and us – with neutral advice on difficult topics and issues? Should it check not only whether employees are doing their job, but whether our politicians and bureaucrats are doing theirs too? We have organized an online poll on this: just text yes or no to the number listed below. We are interested in your views. This is an important discussion. Please get involved. Let your opinion be known. Just do it. Take your phone and text us. Right now. Encourage your friends and family to do the same. We need responses. The question is: should intelligent systems such as Personal PhilosopherTM – with adequate oversight of course – be adapted and used to help the government govern and improve democratic oversight? Yes or no. Text us. Do it now.’

As it was phrased, it was hard to vote against. The ‘yes’ votes started pouring in while Joan was still talking. The numbers went through the roof just a few minutes later. The damage was done.

The impromptu team meeting which Tom and Paul were leading was interrupted by an equally impromptu online emergency Board meeting. They were asked to join. It was chaotic. The Chairman asked everyone to switch off their mobiles, as each member of the Board was receiving urgent calls from VIPs inquiring what was going on. Aware of the potentially disastrous consequences of careless remarks and of the importance of the decisions they were about to take, he also stressed the confidentiality of the proceedings – even if Board meetings were always confidential.

Tom and Paul were the first to advocate prudence. Tom spoke first, as he had been asked to comment on the incident in his capacity as project team leader.

‘Thank you, Chairman. I will keep it short. I think we should shut the system down for a while. We need to buy time. As we speak, hundreds of people are probably trying to do what Joan just tried to do, and that is to get political statements out of M and manipulate them as part of a grander political scheme. The kind of firewall we have put up prevents M from blurting out stupid stuff – as you can see from the interview. She – sorry, it – actually did not say anything embarrassing. So I think it went OK. But it cannot resist a sustained effort by hundreds of smart people trying to provoke it into saying something irresponsible. And even if it said nothing really provocative, it would be interpreted – misinterpreted – as such. We need time, gentlemen. I just came out of a meeting with most of my project team. They all feel the same: we need to shut it down.’

‘How long?’

‘One day at least.’

The Board reacted noisily.

‘A day? At least? You want to take M out for a full day? That would be a disaster. Just think about the adverse PR effect. Have you thought about that?’

‘Not all of M. Only Personal Philosopher. Intelligent Home and Intelligent Office and all the rest can continue. I think reinforcing the firewalls of those applications is sufficient – and that can happen while the system remains online. And, yes, I have thought about the adverse reputational effect. However, it does not outweigh the risk. We need to act. Now. If we don’t, someone else will. And then it will be too late.’

Everyone started to talk simultaneously. The Board’s Chairman restored order.

‘One at a time, please. Paul. You first.’

‘Thank you, Chairman. I also don’t want to waste time and, hence, I’ll be even shorter. I fully agree with Tom. We should shut it down right now. People are having the same type of conversation with it as Joan did – at this very moment, as we speak – webcasting or streaming it as they see fit. Every pundit will try to drag the system into politics. And aggressively so. Time is of the essence. I know it’s bad, but let’s shut it down for the next hour at least. Let’s first agree on one hour. We need time. We need it now.’

The Chairman agreed – and he suspected many of the Board members would too.

‘All right, gentlemen. I gather we could have a long discussion on this, but we have the project team leader and our most knowledgeable expert here proposing to shut Personal Philosopher down for one hour as of now – right now. As time is of the essence, and damage control is, I would say, our primary goal, I’d suggest we take a preliminary vote on this. We can always discuss and take another vote later. This vote is not final. It concerns a temporary safeguard measure only. The system will be out for one hour. Who is against?’

The noise level became intolerable again. The Chairman intervened firmly: ‘Order, please. I repeat: I am in a position to request a vote on this. Who is against shutting down Personal Philosopher for an hour, starting right now? I repeat: this is an urgent disaster control measure only. But we need to take a decision now. Who is against it? Signal it now.’

No one dared to oppose. A few seconds later – less than fifteen minutes after the talk show interview had ended – thousands of people were deprived of one of the best-selling apps ever.

The Board had taken a wise decision. The one-hour shutdown was extended to a day, and then to a week. The official reason for the downtime was an unscheduled ‘product review’ (Promise also promised new enhancements), but no one believed that, of course. If anything, it only heightened the anticipation and the pressure on the Board and on the entire Promise team. If and when they decided to bring Personal PhilosopherTM online again, it was clear the sales figures would go through the roof.

However, no one on the Promise team was in a celebratory mood. While all of them, at some point, had talked enthusiastically about the potential of M to change society, none of them actually enjoyed the moment when it came. Joan Stuart’s interview and poll had created a craze. America had voted ‘yes’ – and overwhelmingly so. But what to do now?