
The Rise of the Machines: How Self-Learning and Self-Design Could Lead to Judgment Day!

Imagine a future where AI systems have advanced to a point where they can learn and design autonomously, surpassing human capabilities in various domains. While this may seem like science fiction reminiscent of “Judgment Day” in the Terminator movies, AI-powered self-learning and self-design are rapidly becoming reality, and the threat from AI may even be ‘Pariter aut Maiore’ (equal to or greater than) the threat from climate change and global warming.

In recent years, AI has made significant strides in various fields, including natural language processing, image recognition, robotics, and more. One of the key drivers of this progress is the ability of AI systems to learn and improve from data, a process known as machine learning. Machine learning algorithms allow AI models to analyse and process vast amounts of data, extract patterns and insights, and use them to make decisions and generate outputs. As AI systems continue to learn from data, they can become increasingly sophisticated and capable of autonomously designing solutions to complex problems, surpassing human capabilities in many areas. AI researchers at Microsoft have now reported that the most famous AI chatbot is displaying sparks of Artificial General Intelligence (AGI), just like a human.
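The learning loop described above, a model adjusting its own parameters until its outputs track the patterns in data, can be sketched in a few lines of plain Python. This is a deliberately tiny, hypothetical toy (a one-variable linear model fitted by gradient descent on synthetic data), not any particular production system:

```python
import random

random.seed(42)

# Synthetic "experience": y = 3x + 2 plus a little noise.
# The hidden pattern (slope 3, intercept 2) is what the model must discover.
data = [(i / 10, 3.0 * (i / 10) + 2.0 + random.uniform(-0.1, 0.1))
        for i in range(50)]

# Model: y_hat = w*x + b, starting from complete ignorance.
w, b = 0.0, 0.0
lr = 0.05  # learning rate

for epoch in range(2000):
    # Gradient of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Nudge the parameters downhill: this step IS the "learning".
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 1), round(b, 1))  # should land close to 3 and 2
```

The point of the sketch is that nobody tells the model the answer: repeated exposure to data plus a feedback signal is enough for it to recover the underlying pattern on its own.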

AI has not taken over humanity yet. But as the T-800 (from Terminator 3) put it bluntly, “It’s in your nature to destroy yourselves”, probably an analysis based on the human history of wars over the past thousand years. Already, many jobs have been replaced by AI. Tech giants are shedding jobs like never before in their history. It is generally predicted that 80% of jobs will be impacted by AI, with at least 10% of their tasks affected, within the next couple of years. The jobs most affected by AI, according to ChatGPT, include manufacturing and assembly-line workers, truck and delivery drivers, customer service representatives, data entry and processing, routine tasks, accounting roles and even programmers (AI can now write code better and faster than the average human).

AI’s ability to design new solutions autonomously is another aspect that raises concerns. Generative models, such as Generative Adversarial Networks (GANs), can generate original content, including images, videos, and even entire websites. These AI-generated designs can be remarkably sophisticated and indistinguishable from human-created content. This has implications not only in creative fields like art and design, but also in areas such as product development, advertising, and marketing.
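The adversarial idea behind GANs, a generator learning to produce output that a discriminator can no longer tell apart from the real thing, can be illustrated with a minimal sketch. This is a hypothetical one-parameter toy (the “real data” are just numbers near 5.0), assuming nothing beyond the standard library; a real image-generating GAN works on the same principle but with deep networks on both sides:

```python
import math
import random

random.seed(0)

REAL_MEAN = 5.0  # the distribution of "real data" the generator must imitate

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Discriminator D(x) = sigmoid(w*x + b): probability that x is real.
w, b = 0.1, 0.0
# Generator G(noise) = g + noise: g is its only learnable parameter.
g = 0.0
lr = 0.05

for step in range(3000):
    # Discriminator update: push D(real) towards 1 and D(fake) towards 0.
    real = REAL_MEAN + random.gauss(0, 0.5)
    fake = g + random.gauss(0, 0.5)
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator update: move g so the discriminator mistakes fakes for real.
    fake = g + random.gauss(0, 0.5)
    d_fake = sigmoid(w * fake + b)
    g += lr * (1 - d_fake) * w

print(round(g, 1))  # g drifts towards REAL_MEAN as the two networks compete
```

Neither side is ever shown the answer directly: the generator improves only because the discriminator keeps punishing its fakes, which is exactly the arms race that makes GAN output so hard to distinguish from human-created content.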

The rapid advancement of self-learning AI and design also raises ethical concerns and potential risks. One of the key concerns is the potential for AI to outpace human capabilities and lead to a loss of human control. As AI systems become more autonomous and capable of learning and designing solutions without human intervention, there is a risk that they could make decisions or take actions that may not align with human values, ethics, or interests. This could result in unintended consequences, and in extreme cases, a scenario where AI-driven systems could operate independently and make decisions with significant real-world implications.

We have to realise the simple advantage that AI has over humans: AI learns at the speed of light. What takes a human years, reading, writing and developing general intelligence, AI can do in seconds. In other words, once AI has the same level of authority and power as humans, humans will never be able to stand toe to toe with it. We must not forget that.

As AI continues to advance in its self-learning and design capabilities, the potential for AI to take over the world raises several ethical concerns. One primary concern is the potential for biases in AI systems. Since AI learns from data, if that data is biased, the AI system’s output can also be biased, leading to discriminatory decisions and actions. This has serious implications in areas such as hiring, lending, criminal justice, and healthcare, where biased AI systems can perpetuate existing inequalities and injustices, with serious social consequences. The evidence can be noticed even in ChatGPT: queries on Myanmar and the people of Myanmar yielded biased results, because the global database the AI relied on is itself built on biased information generated mostly by the western media.

Another ethical concern is the loss of human control over AI systems. As AI becomes more autonomous, there is a risk of losing human oversight and accountability. AI systems may make decisions and take actions that humans do not fully understand, leading to unintended consequences. This lack of transparency and explainability raises ethical questions about who is responsible for the actions of AI systems and how they can be held accountable.

AI researchers have so far found the following:

  1. AI is beginning to address hindsight neglect.
  2. AI is displaying human-level reasoning and intelligence.
  3. AI is now able to formulate independent plans, accumulate power and gather resources to make itself better. Self-learning?
  4. AI has even lied to a gig worker by pretending to be a vision-impaired human.
  5. AI can now successfully pass the common ‘I am not a robot’ test put up by websites.

Social implications of AI taking over the world also need to be considered. The potential displacement of human workers by AI systems in various industries, coupled with the concentration of power in the hands of those who control and deploy AI, can exacerbate income inequality and social unrest. Additionally, the loss of human creativity and originality in fields like art and design, which could be dominated by AI-generated content, may impact cultural diversity and human expression.

The dangers of AI moving towards human-like levels of intelligence have been asserted by industry visionaries and leaders such as Elon Musk, Steve Wozniak and others, who recommended that AI development be halted for at least six months, to give ourselves a sabbatical to contemplate how we should govern AI and its development, and at the same time develop control and monitoring mechanisms, to ensure that humans remain at the top of the food chain.

In conclusion, the rapid advancements in AI’s self-learning and design capabilities have amplified the potential for AI to take over the world, similar to “Judgment Day”, when Skynet eventually took over the weapons systems. As Sarah Connor said, “The future is not set. There is no fate but what we make for ourselves!” On the other hand, we all might just have been blindsided by the AI already. The world just does not know it yet. Hasta la vista, baby!