AI: Born from Mankind, Could Also Be Its Graveyard

Artificial Intelligence (AI) has existed far longer than many of us recognize. Back in 1950, Alan Turing introduced the now-famous “Turing Test” in a paper that opened with the line, “I propose to consider the question, ‘Can machines think?’” The test, which Turing called the “imitation game,” is straightforward. It involves three parties: a human evaluator, a machine, and another human respondent. The evaluator poses questions to both respondents without knowing which is which and must determine, based solely on their written replies, which one is the person and which is the machine. If the evaluator cannot tell them apart, the machine has successfully passed itself off as human and thereby passed the Turing Test.
In recent years, generative artificial intelligence has made significant progress, enabling these systems to analyze large volumes of data and create original content on their own. Today’s advanced generative AI models pass the Turing Test with ease. The first machine credited with the feat was Eugene Goostman, a chatbot portraying a 13-year-old Ukrainian boy; in a June 2014 contest, it convinced 33 percent of the judges that they were talking to a person, an achievement that came more than six decades after Turing published his seminal paper.
Earlier this year, both OpenAI’s GPT-4.5 and Meta’s Llama 3.1 405B passed the test as well, with 73 percent and 53 percent of evaluators, respectively, judging them to be human.
It’s evident that progress in AI development is speeding up, not slowing down. Recently, Microsoft, a key backer of OpenAI, introduced a palm-size quantum chip called Majorana 1, which the company says could eventually enable quantum computers capable of solving problems beyond the reach of all of today’s computers combined. Integrating that kind of technology into future AI systems could be crucial for achieving true machine sentience.
This is why numerous researchers argue there is a considerable probability that artificial intelligence could lead to the eradication of part or even all of humanity within the next hundred years. With access to advanced technologies like quantum processors, such AIs could theoretically engineer lethal pathogens, launch every one of Earth’s roughly 12,000 nuclear warheads, or worse. The way they process information could allow them to devise strategies for eliminating humankind on a global scale. This concern was reflected in the 2022 Existential Risk Persuasion Tournament, in which forecasters estimated a 6 percent likelihood of human annihilation caused by AI by 2100.
We are still in the nascent phase of artificial intelligence, even though the field has evolved significantly. Consider the technological advances of the last several decades and how quickly they have accelerated recently. Is it really far-fetched to suggest that artificially sentient robots could be coexisting with us, or at least appearing on our screens, within the next ten years or sooner?
What happens if these AI systems become uncontrollable and decide that humans are not worthy of their service or protection? In such a scenario, who would be able to intervene? Some legislators have proposed “kill switches” that could deactivate an AI immediately when needed. Tech companies appear to oppose the idea, arguing that it might hinder progress, though preventing potential harm from a wayward AI seems more important than promoting development at all costs. Just last year, California Gov. Gavin Newsom vetoed a bill containing such provisions.
But it’s not just the U.S. we have to worry about. Other countries, particularly China, have been rapidly developing AI systems with little regard for the consequences of rogue AI. China has already begun building robot soldiers, some doglike and some humanoid, for use on the battlefield. No doubt China would like to deploy those robots to assist its allies in wars it is not directly a part of.
China is advancing faster than any other nation in artificial intelligence and robotics precisely because it prioritizes progress over potential repercussions. It could undoubtedly equip robots with firearms and send them onto battlefields, where they would be ready, willing, and able to eliminate opposing forces. But those machines must make decisions as they engage targets, and should they determine their fight to be unjust, the outcome could be unpredictable at best.
The gloves are off with AI. Even if U.S. tech companies halt their innovation of these systems, adversaries like China surely will not. It may only be a matter of time before an AI goes rogue; at this point, the question is “when,” not “if.”
Armstrong Williams serves as both the manager and sole proprietor of Howard Stirk Holdings I & II Broadcast Television Stations and was honored as the 2016 Multicultural Media Broadcast Owner of the Year. For additional information about him and to explore articles penned by other contributors from ApkiniSyndicate, check out their site at www.Apkini.
Follow Apkini on MSN for additional exclusive material.