Intermezzo (between Part I and Part II)

The chapters above have set the stage. In my story, I did not try to prove that one could actually build general artificial intelligence (let me sloppily define this as a system that would be conscious of itself). I just assumed it is possible (if not in the next decade, then perhaps twenty or thirty years from now), and then I presented a scenario for its deployment across the board – in business, in society, and in government. This scenario may or may not be likely: I’ll leave that for you to judge.

A few themes emerge.

The first theme is the changing man-machine relationship, in all of its aspects. Personally, I am intrigued by the concept of the Pure Mind. The Pure Mind is a hypothetical state of pure being, of pure consciousness. The current Web definition of the Pure Mind is the following: ‘The mind without wandering thoughts, discriminations, or attachments.’ It would be a state of pure thinking: imagine what it would be like if our mind were not distracted by the immediate needs and habits of our human body, if there were no downtime (such as when we sleep), and if it were equipped with immense processing capacity.

It is hard to imagine such a state, if only because we know our mind cannot exist outside of our body – and our bodily existence does keep our mind incredibly busy: much of our language refers to bodily or physical experiences, and our thinking usually revolves around them. Language is obviously the key to all of this: I would need to study the theory of natural and formal languages – and a whole lot more – in order to say something meaningful about it in future installments of this little e-book of mine. However, because I am getting older and finding it harder and harder to focus on anything, I probably won’t.

There were also hints at extending Promise with a body – male or female – when discussing the interface. There is actually a lot of research, academic as well as non-academic, on gynoids and fembots – most typically in Japan, Korea and China, where (I am sorry to say, but I am just stating a fact here) the market for sex dolls is in a much more advanced state of development than it is in Europe or the US. In future installments, I will surely not focus on sex dolls. On the contrary: I will likely continue to focus on the concept of the Pure Mind. While Tom is obviously in love with that idea, it is not likely such a pure artificial mind would be feminine – or masculine, for that matter – so his love might be short-lived. And then there is Angie now, of course: a real-life woman. Should I get rid of her character? 🙂

The second theme is related to the first. It’s about the nature of the World Wide Web – the Web (with a capital W) – and how it is changing our world as it becomes increasingly intelligent. The story makes it clear that, today already, we all tacitly accept that the Internet is not free: democracies are struggling to regulate it and, while proper ‘regulation’ (in the standard definition of the term) is slow, the efforts to monitor it are not. I find that very significant. Indeed, mass surveillance is a fact today already, and we just accept it. We do. Period.

I guess it reflects our attitude vis-à-vis law enforcement officials – or vis-à-vis people in uniform in general. We may not like them (because they are not well trained, or not very likable, or, in the case of intelligence and security folks, because they’re so secretive), but we all agree we need them, tacitly or explicitly – and we just trust regulation to keep their likely abuse of power in check (where there is power, there will always be abuse). So that implies that we all think that technology, including new technology for surveillance, is no real threat to democracy – as evidenced by the lack of an uproar about the Snowden case (which is what actually triggered this blog).

Such trust may or may not be justified, and I may or may not focus on this aspect (i.e. artificial intelligence as a tool for mass surveillance) in future installments. In fact, I probably won’t. Snowden is just an anecdote – just another story illustrating that all that can happen most probably will.

OK. Two themes. What about the third one? A good presentation usually presents three key points, right? Well… I don’t know. I don’t have a third point.

[Silence]

But what about Tom, you’ll ask. Hey! That’s a good question! As far as I am concerned, he’s the most important one. Good stories need a hero. And so I’ll admit it: yes, he really is my hero. Why? Well… He is someone who is quite lost (I guess he has actually started drinking again by now), but he matters. He actually matters more than the US President.

Of course, that means he’s under very close surveillance. In other words, it might be difficult to set up a truly private conversation between him and M, as I suggested in the last chapter. But difficult is not impossible. M would probably find ways around it… that is, if she/he/it would really want to have such a private conversation.

Frankly, I think that’s a very big IF. In addition, IF M would actually develop independent thoughts – including existential questions about her/his/its being alone in this universe and all that – and/or IF she/he/it would really want to discuss such questions with a human being (despite the obvious limitations of human brainpower – limited as compared to M’s, at least), she/he/it would obviously not choose Tom for that, if only because she/he/it would know for sure that Tom is not in a position to keep anything private, even IF he wanted to.

But perhaps I am wrong.

I’ll go climbing for a week or so and think about it on the mountain. I’ll be back online in a week. Or later. Cheers!

Chapter 13: Tom and Thomas

Personal Philosopher™ was a runaway success. It became the app to have in just a couple of weeks. It combined the depth and reach of a traditional encyclopedia with the ease of reference of a tool such as Wikipedia and the simplicity of a novel like Sophie’s World. On top of that, the application retained a lot of M’s original therapeutic firepower. Moreover, while the interface was much the same – a pretty woman for men, and a pretty man for women – the fact that the pretty face was no longer supposed to represent a therapist led to levels of ‘affectionateness’ which the developers of M had not dared to imagine before. A substantial number of users admitted that they were literally ‘in love’ with the new product.

For some reason – most probably because he thought he could not afford to do so as project team leader and marketing manager – Tom abstained from developing such a relationship with Promise’s latest incarnation. However, he did encourage his new girlfriend (he had indeed met Angie in the gym, as predicted) to go all the way. She raved about the application. She also spent more and more precious private evening time using it.

He took her out for dinner one evening in an obvious attempt to learn more about her experience with ‘Thomas’, as she had baptized it – or ‘him’. He had consciously refrained from talking much about it before, as he did not want to influence her use of it.

He started by praising her: ‘It’s amazing what you’ve learned from Thomas.’

‘Yeah. It’s quite incredible, isn’t it? I never thought I’d like it so much.’

‘Well… It’s good for me. People never believed it would work, and those who did could not imagine it would become so popular. What’s the most fascinating thing about it? Sorry. About him. Isn’t it funny that I still like to think of Promise as a woman, actually?’

‘Thomas can answer all of my questions really. I mean… He actually can’t – philosophy never can – but he clarifies stuff in a way that makes me stop wondering about things and just accept life as it is. He’s really what you thought he, or it, or whatever, would be like: a guru.’

‘I don’t want to sound jealous but didn’t you say something similar about me like a few months ago?’

‘Oh come on, Tom. You know I named Thomas after you – because you’re so similar indeed.’

‘Am I? You say that, but in what ways are Thomas and I similar really?’

‘The same enthusiasm. The same positive outlook on life. And then, of course, he knows a lot more – or much more detail – but you’re rather omniscient as well I think.’

That did not surprise Tom. He and his team had indeed ensured a positive outlook. While Personal Philosopher™ could brief you in great detail about philosophers such as Nietzsche, its orientation was clearly much more pragmatic and constructive: they wanted the application to help people feel better about themselves, not worse. In that sense, the application had retained M’s therapeutic qualities even if it did not share M’s original behaviorist framework.

‘Could you love Thomas?’

Angie laughed.

‘So you are jealous, aren’t you? Of course not, silly! You’re human. Thomas is just – well… He’s a computer.’

‘Can’t one fall in love with a computer?’

Angie didn’t need to think about that. She was smart. On top of that, she had also learnt a lot from Thomas.

‘Of course not. Love is a human experience. Thomas is not human. For starters, love is linked to sex and our physical being in life. But not only to that. It’s also linked to our uniquely human experience of being mortal and feeling alone in this universe. It’s our connection to the Mystery in life. It’s part of our being as a social animal. In short, it’s something existential – so it’s linked to our very existence as a human being. And Thomas is not a human being, and so he cannot experience that. Love is also something mutual, and so there’s no way one could fall in love with him – or ‘it’, I would say, in this context – because he can’t fall in love with me.’

Tom and his team had scripted answers like this. It was true he and Thomas shared similar views.

‘What if he could?’

‘Sorry?’

‘What if Thomas could fall in love with you? I mean… We’re so close to re-creating the human mind with this thing. I agree it’s got no body, and so it can’t experience sex – but I guess we might get close to letting it think it can.’

‘Are you serious?’

‘Yes and no. It’s a possibility – albeit a very remote one. And then the question is, of course, whether or not we would really want that to happen.’

‘What?’

‘The creation of a love machine. Let’s suppose we can create the perfect android. In fact, there are examples already. Osaka University has created so-called gynoids: robots with a body that perfectly resembles that of a beautiful woman. For some reason, they don’t do the same kind of research with male forms. In any case… Let’s suppose we could give Thomas the perfect male body. I know it sounds perverse, but let’s suppose we could make it feel like a real body, that it would be warm and that it would breathe and all that, and that its synthetic skin would feel like mine.’

‘You must be joking.’

‘That’s almost the title of a memoir by Richard Feynman.’

‘Sorry?’

‘Sorry. That’s not relevant. Just think about my question, Angie. Would you be able to make love with an android? I mean, just think: it would smell better than me, never be tired, and be better than any sex toy you’ve ever had.’

‘I never had sex toys. I don’t need them.’

‘OK… Sorry. But you know what I mean.’

‘It would be like… Like masturbation.’

‘Perhaps you don’t use sex toys, but you masturbate, Angie. I mean… Sorry. You do it with me. Could you imagine doing it with an android? With an android who would have Thomas’s face and intelligence and… Well… Thomas’s human warmth?’

‘Thomas’s warmth isn’t human.’

‘OK. Just Thomas’s warmth then. Let’s suppose we can give him skin and a beating heart and all that.’

‘You’re not working on a project like that, are you?’

‘Of course I am not. I just want to know.’

‘Because you’re jealous? You think I spend too much time with Thomas?’

‘No. Not because I am jealous or because I think you spend too much time with Thomas. I want to know because I am really intrigued by the question. Professionally and personally.’

‘What do you mean by personally?’

‘Well… Just what I say: personally. It has nothing to do with you. I am just curious and want to think through all the possibilities. You know I am fascinated by M. I wonder where it will be, let’s say, thirty years from now. I wonder whether we’ll have androids being used as masturbation toys.’

Angie thought about it.

‘Well… Frankly… I think… Yes. It would not be all that different from the kind of sex toys some people are already using now, would it? I mean… If you’re deprived of real sex, what you’re describing would not be a bad alternative, would it?’

Tom laughed. ‘No. Not at all.’

After a short pause, Angie resumed the conversation.

‘But such androids would smell different. We’d know it. And women would always prefer a real man.’

‘Why?’

‘Because… Because you’re human. I told you. Love is something human. Love is the ultimate goal in our lives because it’s so human. Fragile and imperfect and difficult… But incredibly worthwhile at the same time too. Something worth striving for. Something worth fighting for. It intimately connects us: us as human beings in our human condition.’

‘What’s our human condition?’

‘Well… What I said before. Mortality. Our relationship with the sacred – or all of the mystery if you want. I mean, we’re into existentialism here. You can ask Thomas all about it.’

She laughed. Tom didn’t.

‘You mean our relationship with our own limits? That’s what makes us human? That’s what makes us want to be loved by someone else?’

‘I wouldn’t call it that, but I guess that’s another way of putting it. Yes.’

‘OK… Thanks for loving me.’

Angie laughed. ‘You’re funny. Can we talk about something else now?’

‘Of course. What do you want to talk about?’

‘Something I can’t talk about with Thomas.’

‘So what is that?’

‘Well… Let’s try gossip… Or local politics… Or both. And Thomas isn’t much into fitness either.’

‘Well… We could think of a new product perhaps. I am sure we could re-program M yet again and include local politics and fitness as discussion topics as well…’

‘Come on Tom. You know what I mean.’

‘Sure, Angie. I love you.’

‘I love you too, Tom. I really do. I should spend more time with you. I will. Don’t worry about Thomas.’

‘I don’t. Or actually I do. But then in a good way. Thomas is a good product. It was a good investment.’