I wrote only one post after the intermezzo, and now an epilogue already? Yes. It was fun to write my last post, but intellectual honesty demands realism. Mankind can reach Mars and – perhaps – build a colony there, but a spaceship that travels at a sizable fraction of lightspeed cannot work: even the smallest piece of dust in space would make it explode on impact.
Also, we can perhaps imagine that a matter-antimatter engine could be made to work, say, 100 years from now with – yes – some new shielding material. If my hypothesis is correct that dark matter does not interact with ordinary matter (and ordinary matter includes antimatter in this context) because it may obey a right-handed EM force, then dark matter would be a likely candidate for such a shielding material. However, I have no idea how we would go about tapping such a force. Also, producing antimatter requires at least as much energy as it may provide as a fuel – and that energy requirement would be enormous: it may take the equivalent of many nuclear bombs to produce the thrust needed to accelerate anything to some fraction of lightspeed. Even dark matter would likely break down completely under the radiation which such proton-antiproton reactions generate.
As for the logistics and the Mars colony’s independence, there is no reason whatsoever to assume Mars would be richer than Earth in primary materials such as rare earth minerals. Hence, the idea that a colony would soon be independent from Earth is a pipe dream.
I think all these considerations explain why populating Mars is not in the cards for either NASA’s Mars programme or the Chinese space administration – probably not because of the high costs it implies but, quite simply, because a space station manned by robots would be far more cost-efficient and more effective.
However, this blog is about artificial general intelligence (AGI) – not about Mars exploration or space travel. So, what about AGI’s role in such ventures? My answer to that question is that the Promise and Promisee systems – on Mars and on the Centauri spaceship, respectively – would probably be able to handle many routine jobs, but they would not replace the ship’s captain, or Tom as the leader of the colony on Mars. Computer programs may be better at Go (think of Google’s AlphaGo) – or at chess, or at solving quizzes (think of IBM’s Watson, for example) – but I do not think AGI can replace human wisdom and leadership.
As for AGI systems displaying human emotions and feelings – yes, of course! However, these are likely to remain very primitive for decades to come. Human intuition will not be replaced any time soon. Therefore, one should not be afraid of AI: it will put a lot of people out of a job, perhaps, but it will only augment human capabilities – not replace them.
[…]
The finer point in the story is, perhaps, this: anyone with brains looking at Earth from afar must be thinking we are making a bit of a mess of our beautiful planet. I think we are.
The social and societal aspects of artificial intelligence – all of the things we see happening now – are interesting. To some, they may look frightening. Re-reading the e-book or blog story I wrote ten years ago (all posts before the last one), I was struck by what I wrote about Tom when thinking about the managerial and business aspects of AI apps pervading our lives:
“As Tom moved from project to project within the larger Promise enterprise, he gradually grew less wary of the Big Brother aspects of it all. In fact, it was not all that different from how Google claimed to work: ‘Do the right thing: don’t be evil. Honesty and integrity in all we do. Our business practices are beyond reproach. We make money by doing good things.’ Promise’s management had also embraced the politics of co-optation and recuperation: it actively absorbed skeptical or critical elements into its leadership as part of a proactive strategy to avoid public backlash.”
With all of the talk about regulating AI – I do not believe in regulation – I think this quote is what companies working on AI should look at again. Technology is a good thing, but it can be, and usually is, also used with bad intent. Not by all, of course. Not even by most, I’d think. But history shows technology is used for both good and bad. Looking at where we are with the war with Russia, I am afraid we should not be too hopeful in this regard. Regulation is surely not the answer: it cannot and will not stop mankind from using AI for the wrong things. That is sad but true: we just have to live with it.