When an AI’s ability to mimic human intelligence and behaviour is indistinguishable from that of a human, it’s called Artificial General Intelligence, or AGI (also known as Strong AI or Deep AI).
When an AI doesn’t merely mimic human intelligence and behaviour but surpasses it, it’s called Artificial Superintelligence, or ASI.
ASI is something we can only speculate about. It would surpass all humans at all things: maths, writing books about Orcs & Hobbits, prescribing medicine and much, much more.
But is it even possible?
To count as an ASI, the AI would have to outperform us even at the things we believe humans will always do better than bots, such as relationships and the arts.
Even optimistic experts believe AGI, let alone ASI, requires decades more research, perhaps even centuries.
So where does this leave us in the “will AI take over the world?” debate that’s in the news more and more (did you see the Musk vs. Zuckerberg controversy?)? Are we sitting on a (very, very slow) ticking time bomb?
In his book *Superintelligence*, Nick Bostrom begins with “The Unfinished Fable of the Sparrows.”
Once upon a time, some sparrows decided they wanted a pet owl.
Most of the sparrows were quickly convinced that having a pet owl would be awesome; however, one skeptical sparrow voiced its concern, asking how they could control an owl.
Another sparrow replied that there wasn’t much point in worrying about how to control an owl until there was an owl to be controlled. And with that, most of the sparrows departed.
The remaining sparrows soon realised that learning how to tame an owl wouldn’t be easy, in no small part because they had no owls upon which to practise. But they pressed on as best they could because, at any moment, the other sparrows could return with an owl egg.
Elon Musk would argue that, in Bostrom’s metaphor, humans are the sparrows and ASI is the owl. As it was for the sparrows, the “control problem” is especially concerning because we may only get one chance at solving it.
The argument is that once an ‘evil’ superintelligence is developed, it will try to stymie any attempt we make to stop it or change its preferences. Mark Zuckerberg disagrees, saying the positives of AI outweigh the potential negatives.
Let’s dive a bit into Musk’s control problem using Codebots as an example.
A codebot can only create and edit external code, not its own internal code. This is one of our 5 unbreakable rules, and it’s why we side with Zuckerberg.
We take Musk’s concerns seriously, which is why we have these rules. But because we are vigilant, we are also optimistic about AI. As cool as a codebot is, all it can do is code (and help its human pilots spend less time coding and more time creating).
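To make that rule concrete, here is a minimal sketch (our own illustration in Python; the class name and paths are hypothetical, not Codebots’ actual implementation) of how a bot’s file writes might be confined to an external workspace:

```python
from pathlib import Path

# Hypothetical sketch of the "external code only" rule: every write is
# confined to a designated workspace, and the bot's own source tree is
# always off limits.
class SandboxedBot:
    def __init__(self, workspace: Path):
        self.workspace = workspace.resolve()  # the only writable tree
        self.own_source = Path(__file__).resolve().parent  # never writable

    def write_file(self, relative_path: str, contents: str) -> None:
        target = (self.workspace / relative_path).resolve()
        # Refuse paths that escape the workspace (e.g. via "..").
        if not target.is_relative_to(self.workspace):
            raise PermissionError(f"{target} is outside the workspace")
        # Refuse any write that would touch the bot's own code.
        if target.is_relative_to(self.own_source):
            raise PermissionError("a bot may not edit its own code")
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(contents)
```

The point of the guard is that self-modification fails loudly rather than silently, which is the spirit of the unbreakable rule.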
There are three types of artificial intelligence: ANI, AGI, and ASI.
- ANI (Artificial Narrow Intelligence): has a narrow range of abilities
- AGI (Artificial General Intelligence): is about as capable as a human
- ASI (Artificial Superintelligence): is more capable than a human
For now, AGI and ASI are science fiction, and all existing AI is ANI (e.g. Googlebot).
One way tech companies are pushing the boundaries of AI development is by combining multiple ANIs, as we do in Codebots. Codebots is a new breed of ANI (ANI 2.0) where several bots, and even several types of bots, work together with humans to create more, as sketched below.
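As a toy illustration of that idea (our own sketch with made-up bot names, not the actual Codebots architecture), here is how several narrow bots, each good at exactly one thing, might be chained into a pipeline:

```python
from typing import Callable

# Each "bot" below is deliberately narrow: one skill each, standing in
# for real spell-checking, formatting, and summarisation components.
def spelling_bot(text: str) -> str:
    return text.replace("recieve", "receive")  # fixes one known typo

def formatting_bot(text: str) -> str:
    return " ".join(text.split())  # normalises whitespace

def summary_bot(text: str) -> str:
    return text[:40] + ("..." if len(text) > 40 else "")  # crude headline

def pipeline(bots: list[Callable[[str], str]], text: str) -> str:
    # Each bot hands its output to the next, like workers on a line.
    for bot in bots:
        text = bot(text)
    return text

print(pipeline([spelling_bot, formatting_bot, summary_bot],
               "We   recieve  many requests to automate boring work."))
```

No single bot here is clever, but the pipeline achieves something none of them could alone, which is the ANI 2.0 intuition.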
Combining technologies like this is hardly unique to Codebots, however; basically all leading AI platforms draw on a number of technologies implemented with varying degrees of intelligence. For example, Siri has awesome capabilities for finding information, but slightly less awesome capabilities for understanding instructions given in a strong or unusual accent.
Siri is designed to make life easier for humans by taking on some of the heavy lifting of everyday planning and research. Codebots is all about less time coding and more time creating, and we won’t be taking over the world anytime soon. (A big plus in our books.) There are three types of AI, but there’s only one goal: less time working and more time living.