In a statement that rattled the tech world, Elon Musk, CEO of Tesla and SpaceX, predicted that superhuman artificial intelligence could arrive within a year. Such an AI, according to Musk, would surpass human intelligence and wield an unprecedented amount of power.
Musk made the remarks while answering a question on X, the social media platform he acquired. The prediction marks a sharp acceleration from his comments less than a month earlier at the World Economic Forum, where he made no mention of a post-human milestone and estimated roughly five years for the arrival of artificial general intelligence.
“It looks like a year from now, we’ll see an AI smarter than any of humanity as a whole,” Musk said. The new timeline contradicts his own earlier estimates, which had placed superintelligence around 2029.
While Musk acknowledged obstacles on the path to an AI singularity, he remained optimistic. He pointed to the escalating demand for computing power needed to run such intricate models, and in particular to shortages of advanced chips. “That was the constraint last year,” he said, noting that buyers scrambled to stockpile Nvidia chips. This year, he suggested, the bottleneck has shifted from chips to voltage transformers and power supply, and within a year or so the limiting factor may simply be electricity itself.
The prospect of superhuman AI has sparked a vivid debate. For proponents, it is a game changer, a path to new heights of human achievement and scientific advancement. They argue that AI's speed and efficiency, far exceeding human capacity, make it well suited to tackling major problems such as the climate crisis and disease.
Musk himself, notably, has long been a dissenting voice on AI's destructive potential. He strongly advocates for sensible regulation, viewing AI as a major opportunity but also a major threat if left unguided; unmanaged, he has warned, it could cause grave damage to humankind.
If AI capable of surpassing humans arrives sooner than anticipated, it raises the question of whether safety mechanisms are adequately shored up. Experts, however, caution against overestimating AI's capabilities, especially in the near term: the ability to learn and adapt across domains remains a hallmark of human cognition, and even today's most sophisticated AI systems cannot yet match it.