Just when we thought that advanced machines had started to master many human-like thinking and processing tasks, it began to seem that this revolutionary AI technology had hit the limits of its growth. We have witnessed amazing breakthroughs year after year, and there is still plenty of room to develop in many areas. Still, were all our predictions about mass adoption and artificial general intelligence only science fiction?
The Artificial Intelligence Index tracks the progress accomplished each year. AI is improving on many fronts, such as image processing and game playing. However, it is still far from machines that can understand and process information independently, the way humans do.
Professor Michael Wooldridge of the Department of Computer Science at the University of Oxford, and a fellow of Hertford College, believes there is an AI bubble. Many other indicators also suggest that AI development has started to slow down.
The question is: did we accelerate it and push it past its limits, or is this the organic path of evolution for a complex system like AI?
Another significant figure in the tech world, Jerome Pesenti, head of AI at Facebook, pointed out that artificial intelligence is not taking over the world, at least not for now. Instead, it has simply "hit the wall." In an interview with Wired, Pesenti expressed concern about a "rate of progress that is not sustainable" and said that the mechanisms of Deep Learning in particular are being pushed past their limits.
Deep Learning needs more room to operate. Part of the problem is a lack of computing power, but most of it concerns large-scale projects that are extremely costly, almost unaffordable.
Computing power is growing at a significantly slower pace than the demands of AI, which makes the current trajectory unsustainable.
One benchmark for the growth of computing power is Moore's Law, which predicts a doubling roughly every two years. According to OpenAI, however, the compute used in the largest AI training runs has been doubling every 3.4 months. Such a trend is not sustainable, and hardware's inability to keep pace with AI's appetite significantly slows its development.
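To see how stark that gap is, here is a minimal sketch that compounds the two doubling rates over the same period (assuming a 24-month doubling for Moore's Law and OpenAI's reported 3.4-month doubling for large training runs):

```python
# Compare hardware growth (Moore's Law, ~24-month doubling) with the
# compute demand of large AI training runs (~3.4-month doubling,
# per OpenAI's reported trend).

def growth_factor(months: float, doubling_period_months: float) -> float:
    """Factor by which a quantity grows after `months` at the given doubling rate."""
    return 2 ** (months / doubling_period_months)

years = 5
months = years * 12

hardware = growth_factor(months, 24.0)   # Moore's Law pace
ai_demand = growth_factor(months, 3.4)   # OpenAI's observed pace

print(f"After {years} years:")
print(f"  Hardware supply: ~{hardware:.0f}x")
print(f"  AI compute demand: ~{ai_demand:,.0f}x")
print(f"  Shortfall: ~{ai_demand / hardware:,.0f}x")
```

Over five years, hardware improves by a single-digit factor while demand grows by a factor in the hundreds of thousands, which is exactly why the trend cannot be sustained by chip improvements alone.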
Quantum computing could overcome many of these system limitations, and it has already found some practical application. But while Amazon is offering quantum computing services to its customers and Google has announced a processor that can resolve certain computations unimaginably faster than a regular computer, AI still lacks the computing power it requires.
As for the training approach itself, many experts, including Pesenti, admit there are considerable limitations to how AI is trained. He pointed out that Deep Learning still cannot be applied to many aspects of machine training.
AI still lacks the ability to think for itself and draw conclusions from the smallest details. It is what we call "common sense" in humans, and machines have not yet been able to acquire it.
Another "human factor" that poses a problem lies in the large data sets fed into systems for processing and learning. Because this data is human-generated, it contains human flaws such as prejudice. The problem arises when an algorithm encounters such data and, instead of filtering the bias out, absorbs it. The practical application of the AI can then suffer serious errors and lose its purpose.
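A toy sketch makes the mechanism concrete. The data below is entirely hypothetical: a "model" that does nothing more than memorize historical approval rates per group will faithfully reproduce whatever skew those records contain.

```python
# Hypothetical historical decisions: (group, approved). Group A was
# approved 75% of the time, group B only 25% of the time.
from collections import defaultdict

history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

outcomes = defaultdict(list)
for group, approved in history:
    outcomes[group].append(approved)

def predict(group: str) -> int:
    """'Learned' rule: approve if the group's historical rate exceeds 50%."""
    rates = outcomes[group]
    return 1 if sum(rates) / len(rates) > 0.5 else 0

print(predict("A"))  # approves group A applicants
print(predict("B"))  # rejects group B applicants: the bias is reproduced
```

Real systems are far more complex, but the failure mode is the same: without an explicit step to detect and correct skewed data, the model treats the prejudice in its training set as signal.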
It is essential to overcome the shortage of computing power and to improve how systems are trained. Artificial intelligence in its current form can vastly improve many industries and services, but it will not reach its potential with such deficiencies.