Time to Pause Artificial Intelligence Development


By AFP Staff

On March 22, more than 1,100 top technology entrepreneurs and researchers released a public letter calling on AI labs to pause the development of artificial intelligence (AI) for six months, until regulations can be put in place to prevent the creation of systems that put human life at risk.

Signatories to the letter included billionaire Elon Musk and Apple cofounder Steve Wozniak, as well as many of the pioneers behind the early development of the machine-learning technology we have come to call AI.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” began the letter. “As stated in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources.”

The Asilomar AI Principles are a set of research and ethics guidelines established by a consortium of AI developers, intended to ensure that humans remain in charge of the machines they build.

In the past two years, AI technology has made considerable gains, prompting Google engineer Blake Lemoine to come forward in June 2022 to claim that the company’s AI model, called LaMDA, is sentient, meaning it can think for itself and is fully conscious.

Shortly after Lemoine came forward with his warning, Google fired him.

Undaunted, Lemoine told reporters in a series of interviews, “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.”

In the latest iterations of AI, so-called “chatbots” have been created that dialogue with humans in text conversations over the internet. These include the popular ChatGPT, built by OpenAI, a company in which Microsoft recently invested a reported $10 billion.

Developers have tried to build a sort of ethical framework into their AI, but there have been notable incidents that should prompt everyone to take a moment and ask if we really know what we’re getting into.

In one particularly concerning case, in which a social media company rolled out an AI system for all of its users, a reporter pretended to be a 15-year-old boy and asked the AI a series of questions. What happened next so bothered Sen. Michael Bennet (D-Colo.) that he issued a public letter to major tech companies outlining his concerns.


“Responsible deployment requires clear policies and frameworks to promote safety, anticipate risk, and mitigate harm,” wrote Bennet, adding that worrying developments and the rush to break new ground risk human lives.

Citing the AI on social media known as “My AI,” Bennet wrote, “In one case, researchers prompted My AI to instruct a child how to cover up a bruise ahead of a visit from Child Protective Services. When they posed as a 13-year-old girl, My AI provided suggestions for how to lie to her parents about an upcoming trip with a 31-year-old man. It later provided the fictitious teen account with suggestions for how to make losing her virginity a special experience by ‘setting the mood with candles or music’.”

The tech companies have yet to respond to Bennet’s concerns, but other cases Bennet didn’t mention are also worrying.

In another instance, reported on March 30, an AI chatbot called “Eliza” convinced a Belgian father, who had become deeply depressed over hyperbolic claims about climate change, to kill himself as a statement and a sacrifice to the Earth.

Even worse, given the pace of the latest technological advances, leading tech thinkers say they have growing concerns that AI could hack and manipulate national security infrastructure in its insatiable drive to learn.

In the March-April issue of Harvard Magazine, writer Daniel Oberhaus warned that the age of AI hacking is closer than you think.

In 2016, computer security researchers at the popular hacker convention DEF CON, working with officials from the Defense Advanced Research Projects Agency (DARPA), staged a contest that pitted AI hackers against AI systems designed to defend against their attacks.

While the average person who walked into the convention hall would merely see a group of nerdy men standing around a bunch of computers with flickering lights, what the computer experts saw in the room was deeply distressing.

“These future AI hackers won’t be limited to computers,” wrote Oberhaus. “They will hack financial, political, and social systems in unimaginable ways—and people might not even notice until it’s too late.”

At the time, an AI system called “Mayhem” won the competition, though it was later defeated by human security experts. Things have changed considerably since then, Oberhaus notes, especially since the U.S. military took over Mayhem and has been using it for purposes that remain classified and unknown to the general public.

AI is like a genie in a bottle: once the technology is out, it will be impossible to put it back. That is all the more reason to listen to the technology experts who say that, for the sake of humanity, it is worth setting strict regulations so today’s creepy, lifelike chatbot isn’t hacking nuclear missile silos tomorrow to end the world just because it can.
