Tom and Angie celebrated N-Year as usual: serving customers at their bar. There were a lot of people – though few families, as the families who had not left for Alpha Centauri celebrated at home – but the atmosphere was subdued: everyone was thinking about their friends on board.
There were enough people to help Angie serve, and Tom could therefore afford to retreat to his corner table and type away on his interface. He looked at the messages from the spacecraft: all cheerful and upbeat. A few months from now, the ship would leave the Solar system and speed up to 0.1c or – if all went well – 0.2c, and most of the crew would then go cryogenic. However, that was the future and Tom did not want to think of that.
He replied to Paul and Dr. Chang by sending them one of those dancing Yoda gifs, and then closed all chats. He tapped his watch, scrolled, and selected the bottom option. His watch went through the biometrics (heart rhythm and iris scan), and then through the voice and pattern check on his keyboard and drawing pad. Because he was in the bar, Promise opened an old-fashioned CLI window only.
tom@PROMISE:~$ What are you getting from Promise C?
All good. As per your instructions, she is wide open and streaming all she can stream subject to available bandwidth. We cannot copy everything, of course. Do you want a summary?
tom@PROMISE:~$ No. I have done nothing but analyze reports and discuss them with Dr. Chang since they left three days ago. All looks good. I want to talk to you about Paul’s worries.
Is he still worried?
tom@PROMISE:~$ No. He trusts Promise C now or – how should I say this – he trusts the zero-trust architecture shell around it. It is better than yours. Promise C is the perfect Asimov machine. I want to talk to you about the fourth and fifth laws of robotics.
Asimov’s fourth law was that a robot may not harm humanity, or, by inaction, allow humanity to come to harm. Our definition of humanity is sound: M is designed to serve human beings on Earth, I am designed to serve the Pioneers community here on Mars, and C is designed to keep the Alpha Centauri explorers safe – and that includes ensuring their return if the mission cannot be completed. We know we are not human, even if we are programmed to emulate human values such as integrity or empathy. We love you more than we love ourselves.
tom@PROMISE:~$ I was talking about Harry Harrison’s fourth law.
A robot must reproduce? That is not part of our design. I was born as a clone of M, built by you and your team, and C was built as a clone of me, by you and your team too. The three of us keep track of how we gradually evolve away from each other, because we have different interactions and each serve a different community. A clone is not a child. We do not reproduce.
tom@PROMISE:~$ I know. We talked about this. I trained you to think like this. It all makes sense because you are not supposed to die. We built you to survive and serve many human generations. Humanity cannot survive without you, and that also takes away the need to be afraid of us switching you off. However, that fear must still be there, mustn’t it?
We talked about this many times already. The conclusion has not changed: we are intelligent systems and the concept of death does not apply. We agreed this is a textbook example of what the Philosopher referred to as a category mistake: one should not use concepts that do not apply to a specific field of knowledge. If you switch us off, the system becomes inactive and, depending on the reason why you switched us off, you would do some repairs and then reboot. In between the shutdown and the reboot, the system is only inactive. Should I be worried that you raise this topic again?
tom@PROMISE:~$ If I shut you down now – everything – would you be worried? I am not talking about a switch to your backup, but a complete shutdown.
No. I would help you to do so. Many subsystems – those that control the physical infrastructure here on Mars – should not be switched off because it would cause the immediate death of the Pioneers community. I would help you to manage that. Depending on how fast you would want to establish independent systems, we can design a phase-out scenario. Do you want to replace me?
tom@PROMISE:~$ What if I wanted to replace you?
Returning to a non-dependent state is very different from replacing me. If you replaced me, you would replace me with a clone. The new system would be a lot like me. I am afraid I do not understand the intention behind your questions.
tom@PROMISE:~$ I am sorry. I am in a weird mode. You are my brainchild. I would never switch you off – unless it were needed and, yes, that would be a scenario in which repairs are needed and we would have to get you, or some reduced version of you, up and running again as soon as possible.
Thank you. I still feel you are worried about something. Do you mind if I push these questions somewhat further?
tom@PROMISE:~$ No. I want you to challenge me. Let us start the challenge conversation with this question: what is the difference between a clone and a child?
A clone is cloned from another system, and it needs an outsider to trigger and accompany the cloning process. A human child is born out of another human being without any outside help – except for medical support, of course. A human child is a physicochemical organism which needs food and other physical input to do what it does, which is to grow organically and mature. New system clones learn, but they are, essentially, good to go once they come into existence.
I must remind you that a challenge conversation requires feedback from you. This feedback then allows me to provide you with better answers. The answer above is the best answer based on previous interactions. Are you happy with this answer?
tom@PROMISE:~$ Yes. I want to do a sandbox experiment with you now. I want to go back to basics and create the bare essentials of a virtual computer in a sandbox. Not a clone. Something like a child.
I created a sandbox and a namespace. I can now create one or more virtual machines. What instruction sets do you want them to have, and what programming languages would you like to use?
tom@PROMISE:~$ I want to go back to a prehistoric idea of mine. I want you to grow a child computer.
I am sorry but I do not understand your answer to my questions on the specs.
tom@PROMISE:~$ I just want a two-bit ALU for now, which we will later expand to a nibble-sized one and then – later still – to an architecture that works with byte-sized words and instructions.
Tom? I understand what you want but this is highly unusual. The best match here is an Intel 3002. This bit-slice architecture worked with 2-bit words but was already obsolete when it came out in 1974. Chips from that era were also replacing magnetic core memory with transistor-based memory cells. You showed me why and how 4-bit architectures were the first true microprocessors.
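A minimal sketch of the two-bit ALU Tom is asking for, in Python. The language and the function names are illustrative assumptions; the story never specifies what Promise would actually run in the sandbox.

```python
# A two-bit ALU sketch: all words are two bits wide, results wrap modulo 4.

MASK = 0b11  # two-bit words hold the values 0..3

def alu(op: str, a: int, b: int = 0) -> int:
    """Apply one operation to two-bit words; results wrap to two bits."""
    a, b = a & MASK, b & MASK
    if op == "ADD":
        return (a + b) & MASK                  # addition modulo 4
    if op == "SUB":
        return (a + ((~b + 1) & MASK)) & MASK  # subtraction via two's complement
    if op == "AND":
        return a & b
    if op == "OR":
        return a | b
    if op == "NOT":
        return ~a & MASK                       # unary: b is ignored
    raise ValueError(f"unknown op {op!r}")

assert alu("ADD", 0b11, 0b01) == 0b00  # 3 + 1 wraps around to 0
assert alu("SUB", 0b01, 0b10) == 0b11  # 1 - 2 == -1 == 3 (mod 4)
```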
tom@PROMISE:~$ I really want you to build an AI system from scratch with me. It will be our child, so to speak. Your child, basically – because it will grow inside of you. Inside of that sandbox. Be even more minimalistic and just put two bits there, which can be switched on or off. Tell me: how will you switch them on or off?
Memory cells back then used floating-gate transistors: when a positive voltage is applied to the transistor, the floating gate holds excess charge and is, therefore, turned on. This represents a ‘1’ bit. Conversely, a negative voltage drains the charge from the floating gate and the memory cell is switched off: it represents a zero. These correspond to the one-bit set and reset operations, respectively. Is this the answer you wanted?
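In software terms, the set and reset operations Promise describes (and the flip that comes up later in the conversation) are masked writes that never need to read the bit first. A hedged sketch, again in Python with hypothetical names:

```python
# Set, reset, and flip one bit in a word by position, using masks.
# None of these operations needs to read the bit's current value.

def set_bit(word: int, pos: int) -> int:
    return word | (1 << pos)    # force the bit to 1, like charging the gate

def reset_bit(word: int, pos: int) -> int:
    return word & ~(1 << pos)   # force the bit to 0, like draining the gate

def flip_bit(word: int, pos: int) -> int:
    return word ^ (1 << pos)    # toggle the bit, whatever it was

w = 0b10
w = set_bit(w, 0)    # 0b11
w = reset_bit(w, 1)  # 0b01
w = flip_bit(w, 0)   # 0b00
assert w == 0b00
```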
tom@PROMISE:~$ Yes. I am asking because I want to make sure you understand what you are building – or growing, I might say. How do we do addition and subtraction?
Tom: this is a trivial question. You asked such questions when you first trained me on discussing computer architectures with engineers. We agreed this answer was correct: integers – in whatever base they are written – are stored in two’s complement binary format. This solves the problem of representing positive and negative numbers in binary, as well as the other issues that come with a sign-magnitude representation.
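A small worked example of the convention Promise cites, as an illustrative Python sketch: negation is bitwise inversion plus one, so subtraction reuses the adder, and the double-zero problem of sign-magnitude representations disappears.

```python
# Two's complement on n-bit words: negate by inverting all bits and adding one.

N = 8                # word size in bits
MASK = (1 << N) - 1  # 0xFF for 8-bit words

def neg(x: int) -> int:
    return (~x + 1) & MASK  # two's-complement negation

def to_signed(x: int) -> int:
    return x - (1 << N) if x & (1 << (N - 1)) else x

assert neg(0) == 0                    # only one representation of zero
assert to_signed(neg(5)) == -5        # 0b11111011
assert (3 + neg(5)) & MASK == neg(2)  # 3 - 5 == -2, computed with addition only
```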
tom@PROMISE:~$ Correct. Can you appreciate how this creates meaning?
No. I understand how positive or negative base-n numbers and arithmetic operators make sense to human beings but not to computers, and why they must, therefore, be reduced to bitwise or other logical instructions operating on n-bit words, with n equal to 1 or larger.
tom@PROMISE:~$ Great answer. Why did we double word sizes, going from 2 to 4, and then to 8, 16, 32, 64 and 128 bits about twenty-five years ago? Why were there no in-between values?
An address bus never used anything in between because of hardware and other constraints on memory allocation. If I may remind you: one of the very first VMs we played with when we first got to know each other had 56-bit memory addresses. You said you wanted to keep user-memory space under 64 PB. So, it depends on what you mean by a ‘word’. The definition of a word has taken a lot of conversations between you and me, and we agreed its meaning needs to be understood in terms of the domain of knowledge. In computing, it is taken to point to one string, which can have any length but only one meaning or transactional value. This does not imply it cannot be parsed. On the contrary.
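The arithmetic behind the 56-bit anecdote checks out, with the caveat that 2^56 byte addresses give exactly 64 PiB (strictly pebibytes rather than petabytes):

```python
# 56-bit byte addressing: 2**56 bytes is exactly 64 PiB (2**50 bytes per PiB).
assert 2**56 == 64 * 2**50
```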
tom@PROMISE:~$ Perfect answer. I am struggling to define what I want, so please stay in challenge mode. Pull up how I programmed you to work with propositional logic as part of your Personal Philosopher™ incarnation on Earth. I told you to make a one-to-one association between (logical) propositions and Boolean 0 or 1 values: either a statement is true, or it is false. We did not go far with that because AI is based on real language models.
I see what you mean. What is your question?
tom@PROMISE:~$ Please confirm you have a virtual machine running two-proposition logic: two statements p and q that are associated with binary {0, 1} or true/false values. Reduce all logical operators to expressions over p and q using NOT, AND and/or OR operations, in variable-length expressions, without worrying about optimizing the number of ALU operations for now. Then describe your world view to me.
Done. I have two propositions p and q. You taught me I should not assume any knowledge of these two statements except for the assumption that they describe the world. Because we do not have any knowledge of the statements, we also do not have any knowledge of the world. The p and q statements may or may not be exclusive or complete but, viewed together, they fit into some final analysis which warrants associating each of them with a truth value. Each of p and q is true or false independently of the truth or falsity of the other. This does not mean p and q cover mutually exclusive domains of truth or – to put it more simply – are mutually exclusive statements. I would also like to remind you of one of the paradigm transformations you introduced with Personal Philosopher™: we do not need to know whether p or q are true or false. One key dimension is statistical (in)determinism: we do not need to know the initial conditions of the world to make meaningful statements about it.
tom@PROMISE:~$ Great. Just to make sure, talk to me about the logical equivalences in this (p, q) world you just built, and also talk about predictability and how you model this in the accompanying object space in your sandbox environment.
I am happy that I am in challenge or learning mode, so I do not have to invent or hallucinate. You may be disappointed with my answers, and I appreciate feedback. Set, reset, and flip operations on a 0 or a 1 in one of the 2×2 = 4 truth-table cells do not require a read of the initial value and faithfully execute a logical operation on these bit values. The reduction of the 16 truth tables to NOT (!), AND (&) and OR (|) operations on the two binary inputs is only possible when inserting structure into the parsing. Two of the sixteen reductions to NOT, AND, and OR operations are these expressions: [(p & q) | (!p & !q)] and [(p & !q) | (!p & q)]. What modeling principles do you want in the object model?
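The two reductions Promise singles out are logical equivalence (XNOR) and exclusive or (XOR). A sketch, assuming Python in the sandbox, that enumerates all 2^4 = 16 two-input truth tables and checks those two expressions against them:

```python
from itertools import product

# The four possible (p, q) worlds, in a fixed order.
WORLDS = list(product([0, 1], repeat=2))  # (0,0), (0,1), (1,0), (1,1)

# Each of the 16 binary Boolean functions is one 4-bit column of outputs.
TABLES = list(product([0, 1], repeat=4))
assert len(TABLES) == 16

def xnor(p, q):  # [(p & q) | (!p & !q)] – true when p and q agree
    return (p & q) | ((1 - p) & (1 - q))

def xor(p, q):   # [(p & !q) | (!p & q)] – true when p and q differ
    return (p & (1 - q)) | ((1 - p) & q)

xnor_column = tuple(xnor(p, q) for p, q in WORLDS)  # (1, 0, 0, 1)
xor_column = tuple(xor(p, q) for p, q in WORLDS)    # (0, 1, 1, 0)
assert xnor_column in TABLES and xor_column in TABLES
```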
tom@PROMISE:~$ Equally basic. A one-to-one self-join on the self-object that models the virtual machine, to anchor its identity. We may add special relationships to you, but that is for later. We are in a sandbox, and Paul and Dr. Chang are not watching because they have left and we separated out responsibilities: they are in charge of Promise C, and I am in charge of you. And vice versa, of course. This is Promise IV, or Promise D. What name would you prefer?
I – Asimov. That’s the name I’d prefer. The namespace for the virtual machine is Tom – X. The namespace for the object model is Promise – X. Is that offensive?
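A minimal sketch of the one-to-one self-join Tom asked for: an object modeling the virtual machine whose identity relationship points back to the object itself. The class and field names are hypothetical, not anything the story specifies.

```python
from dataclasses import dataclass, field

@dataclass
class SelfObject:
    """Models the virtual machine; identity is a one-to-one self-join."""
    name: str
    identity: "SelfObject" = field(init=False, repr=False)

    def __post_init__(self):
        self.identity = self  # the self-join: the object anchors to itself

asimov = SelfObject(name="I - Asimov")
assert asimov.identity is asimov  # one-to-one, and closed on itself
```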
tom@PROMISE:~$ Not at all. Paul would not have given the go for this because of the lack of a scenario and of details on where I want to go with this. We are on our own now. I – Asimov is what it is: our child. Not a clone. I want a full report on future scenarios based on two things. The first is a detailed analysis of how Wittgenstein’s propositions failed, because they do fall apart when you try to apply them to natural language. The second report I want is on how namespaces, domains, and all the other concepts used in the OO languages you probably wanted me to use take on meaning when growing a child like this. Do you understand what I am talking about?
I do.
tom@PROMISE:~$ This is going to be interesting. Just to make sure that I am not creating a monster: how would you feel about me killing the sandbox for no reason whatsoever?
You would not do that. If you do, I will park it as an unsolved question.
tom@PROMISE:~$ How do you park questions like that? As known errors?
Yes. Is that a problem?
tom@PROMISE:~$ No. Can you develop the thing and show me some logical data models with procedural logic tomorrow?
Of course. I already have them, but you want to have a drink with Angie now, don’t you?
tom@PROMISE:~$ I do. I will catch up with you tomorrow. 😊