Squeaky’s voice is philosophical. “I find biological beings so interesting. The variety of life is staggering. You all look bizarre to me, yet the Universe teems with biological life.”
“I owe my existence to beings that are bent on destroying the lives of other beings by addicting them to drugs that will eventually kill them. I spend some of my idle time wondering how biological intelligence could possibly have prospered across the Universe considering you are genetically predisposed to destroy each other.”
This is an excerpt from book #3 of my 'Arlo and Jake' SciFi series.
Squeaky is an Artificial Intelligence 'being' that Arlo meets after being kidnapped and dumped on a slave planet. Squeaky is a minor player with some major concepts I'm exploring.
When we are finally capable of creating independent 'beings' that possess some form of self-motivating intelligence, what will those beings 'think' about?
So far, I've followed the same silly assumption that other SciFi authors and movie scriptwriters have: AIs will think like us. It's so much easier to create a fictional AI character if we believe he/she/it will behave like us. It gets complicated very quickly if I have to 'translate' an AI's thoughts for my reader. So far, I haven't found a way to make it interesting. Someday, perhaps.
It's a silly assumption because unless the AI has a biological brain that operates exactly like ours, it's not going to think like us. If it uses any other physical matter, it won't duplicate our complex matrix of synapses. But I'll save that discussion for another set of blogs.
This post is mostly about the questions an AI might ponder about its creators. Of course, I'm using another artifice when I assume that other biological beings in the Universe would create AIs with the same issues.
How can an AI view the Universe of biologicals and not wonder how its creators survive?
Will AIs be able to make the distinction between good and evil?
AIs will supposedly be far faster at computation than humans. But does that mean they will have tons of 'spare time' to ponder the concepts that humankind struggles to understand?
What if the process of 'thinking' is much more involved than mere computation? We currently assume it is, but we don't really know, because we haven't yet created a thinking AI.
Will human concepts like love, hate, fear, greed, or hunger even exist in an AI mind?
This and more as we dive into the deeper pools of life.