It’s been almost 20 years since I wrote my first neural network, in C#, during my university studies. Even back then, it was unclear to me why this was called intelligence. After all, it was doing exactly what I had programmed it to do, and it felt like cheating to call what it was doing “learning”.
In the years since, I’ve read quite a few books on the natural sciences (Darwin, Dawkins, Pinker…) and discovered some wonderful things about the world we live in and how we “operate”. One extremely important thing I’ve learned is that while we know a lot about the brain and its functions, we still have a long way to go before we can claim to know everything about it.
Natural intelligence has yet to define natural intelligence.
I still stand by my idea that software is just a tool that we use. It is not, in any way, an emerging new species that will overthrow humanity.
Is it smarter than us? No way! “Ok”, you might say, “but AlphaZero…”. AlphaZero what!? It can beat me at chess in no time? I bet it can’t if I spill a glass of water on top of it. That is exactly how we should evaluate things once we bring AI into the realm of inter-species competition. “Ok”, you might say again, “but what if all those specialized systems are put together into one big system that knows how to do every task? Isn’t that like a brain?” No! Just read about cases where specialized brain modules were damaged, yet their functions were somehow taken over by other modules. This is truly wonderful. Our brains adapt in ways that such a machine “brain” can’t.
A couple of words about the fear of AI destroying us: it’s most definitely possible for machines to destroy humanity (and possibly many other species as well), but that’s because WE control them. WE pull the trigger. WE have nukes; we don’t even need AI for that. AI is just a means to an end in this scenario.
I believe that once we do fully solve the complex riddle of how the brain works, we might be able to recreate one from scratch. When that happens, it will probably be indistinguishable from a “natural” brain.
I will say this, though: AI has a PR problem, especially with the public that is not trained in the subject at all (and most people don’t have a computer science degree). Of course people fear what they don’t understand, especially when it is presented to them as a threat. Our brain always decides it’s better to be afraid and run when there is a sound in the tall grass of the savanna: the cost of being eaten by a predator hugely outweighs the cost of being wrong and tired from the run.
By the way, if you haven’t already done so, I strongly recommend you read Asimov’s “Robots” series. It’s wonderful! Now that’s a world I’d really like to live in.