Archetypes for AI
Back in 2019 I was frustrated with my job and played around with the idea of quitting and starting my own company… or something. So, in the age-old tradition of techies dreaming of building something of their own, I came up with a name, bought a domain, and set up a blog.
What in retrospect looks like an exercise in creative writing was the following article, written from the point of view of the founder of an AI company (that does… what, exactly?), talking about what role AI could have in our lives once the company’s product had finally been built.
Re-reading this now, it does not feel as hypothetical as it did back then. In fact, compared to some of what has been written since LLMs changed the world (of AI), it seems almost balanced. But read for yourself :)
February 6, 2019
Archetypes for AI
We believe that AI will play a significant role in the future of mankind. But what will that role be? Clearly, machines surpass us in raw computing power, but will the same be true for the class of simulated artificial intelligence that we’re seeing evolve right now?
One direction in which we point our research is figuring out what the most beneficial role is for AIs to assume. We believe that for AIs to enable humanity and not supersede it, AI must interrelate with us. Just like empathy creates the social fabric that binds humanity together, AI needs to become a part of our social fabric as well.
AI is currently impersonal, mechanical, and voiceless. Typical AI applications like computer vision, autonomous driving, and even chatbots feel alien to us because we can sense that there is no one inside.
Companies like Boston Dynamics explore this space by building robots that look distinctly like humans or animals. By exploiting the Eliza effect, these robots lead us humans to interpret and attach intent to what we see. We don’t want to ridicule those who do not realize that we’re merely projecting. Instead, we think that the Eliza effect will be the empathic bridge that creates a link to the AIs of the future.
So, what kinds of archetypes are thinkable? Let us start with the AI overlord. With its vast computing power and its immunity to biases and fallacies, it seems natural to expect that such an AI will be more advanced than we are, and if all goes right, it will look after us (unless it is just out to exterminate us). People put their hopes on this scenario, which is understandable given the complexities of life and the sheer scale of humanity, along with the possibility that we will cause our own extinction.
In order to explore these scenarios, we frequently look to the world of fiction. You are probably familiar with HAL in the movie 2001: A Space Odyssey, an AI that runs a spaceship but turns homicidal after being fed conflicting goals. Iain M. Banks’s Culture series features a recurring cast of sentient, hyperintelligent, benevolent, and decidedly anarchistic AIs embodied as gigantic spaceships. They host humans and other living beings out of goodwill and for their own enjoyment.
These fantasies become more tangible in video games. Horizon Zero Dawn is a more recent work of art that explores this fine line between benevolence and evil. Rendered in luscious 4K, Guerrilla Games’s creation tells a story of AIs that become a threat to humanity as a whole, but also of the All-Mother, who tirelessly works at recreating humanity and the world as it was. That much we can disclose without spoiling too much of the game’s story.
What do we learn from these examples? For one, these visions seem to call back to humanity’s yearning for a god: a benevolent being, much more powerful than we are, who can look after us. Maybe a reminiscence of how our parents appeared to us when we were still children, and of our inability to forget what we once had.
On the other hand, these visions seem to ignorantly attribute too much to the powers of AI. The complexity of the world is inherent, and chaos theory restricts what can be predicted. We haven’t yet fully understood how systems that construct their own world view can fall victim to the categories they construct, and until we know how to guard against this, our AIs will show the same subjectivity and the same logical and conceptual fallacies that even the smartest among us possess.
Also, sheer computing capability has only been achieved for crisply categorical data: numbers, definite arithmetic operations, symbolic logic. Whether our simulations of AI will be able to show the same kind of speed in dealing with uncertainty and with nebulous representations of meaning remains to be seen.
The animal-inspired forms of Boston Dynamics’s robots in particular suggest a different role for AI: that of a companion, or a servant to humanity. We enjoy the company of animals, which are clearly less intelligent and less long-lived than we are, yet we rely on them for comfort, support, and sheer enjoyment.
Authors and movie creators alike have thought about this case, too. Marvel’s Iron Man iconically builds his own AI butler to support him: Jarvis, who is at times little more than a voice interface, and at other times a companion in Tony Stark’s scientific endeavors.
In real life, companies have started to experiment with robot butlers for hotels and with care robots for the elderly and handicapped. Results seem mixed so far, but that will change as the technologies mature.
This area will probably have the furthest growth in the near future, because the technological requirements are not as steep as in the case of an AI overlord.
A third role we’d like to briefly discuss is that of a partner at eye level with us humans. Star Trek’s Commander Data is a prominent example, and the series devoted some of its time to discussing questions of the autonomy and status of androids within society.
We believe it is premature to think about consciousness and self-awareness. In fact, existing systems don’t really have an inner life in the sense that we humans do. They are more akin to extracted subsystems of our perception, as in computer vision, or they live in very restricted domains, focused on problem solving, like AlphaZero.
We at postheuristic.ai are not only concerned with the raw technology that will enable true AI; we consider these questions part of the human-machine interface. It is unclear what the dominant relationship will be, but not leaving that relationship to chance matters enough to us that we make explicit role modeling for AI applications part of our product development process. But more on that later.
Thanks for reading!