
AI Researchers Want to Hit Pause on AI Development. Should We Be Worried? 

10 minute read | May 17, 2023
Written by: Kindra Cooper


What do nuclear weapons have in common with artificial intelligence? The ability to incite an arms race, for one. However, instead of nations competing, Big Tech companies like Microsoft and Google are facing off to see who can ship the savviest, most coherent generative AI model first. 

While these companies had previously taken a more cautious approach to AI development, teasing but never fully releasing their cutting-edge models out of fear of their societal impact, one London-based startup, Stability AI, threw caution to the wind and released Stable Diffusion, a text-to-image tool, in August 2022. The tool attracted millions of users, prompting OpenAI, a San Francisco-based AI research lab, to open up public access to its competing image generation tool, DALL-E 2.

An image generated by Stable Diffusion, courtesy of Stability AI

Generative AI refers to machine learning models, including large language models, that can generate text, images, and other media in response to prompts. Its widespread uses lend it obvious commercial appeal: from generating cold emails for sales leads to writing academic essays and even creating “original” music and artwork.

Shortly after Microsoft plowed $10 billion into OpenAI at the start of the year, the tech giant released a new Bing search engine with ChatGPT integration. Sensing a threat to its search business, Google announced Bard, its own conversational AI assistant, a month later.

Amazon tossed its hat into the ring with Bedrock in mid-April. The AWS-hosted platform enables customers to build generative AI-powered apps using pre-trained models from Stability AI, AI21 Labs, Anthropic, and other AI startups. This will only speed the development of software applications underpinned by generative AI. Think CRM systems, accounting software, and other database-dependent applications featuring natural language interfaces. 

However, the similarities between AI and nukes don’t end with the arms race. According to some scientists and researchers, there is significant potential for generative AI to cause mass destruction, with consequences we can only conjecture. More than 1,100 signatories, including researchers who build, study, and reverse-engineer AI, recently signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, citing safety concerns. Published by the Future of Life Institute, the letter states, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The letter writers argue that current AI systems are not subject to independent review or regulatory approval, which, left unchecked, could lead to propaganda and misinformation, massive job loss—and even “nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us.” 

“There will always be technologies that change our work,” says Jonathan Westover, a professor of organizational leadership at Utah Valley University. “This happens to be a particularly disruptive technology that has the potential to reshape every part of our lives.” 

OpenAI’s release of GPT-4, its most advanced large language model yet, is behind much of the alarm. It can pass major tests like the bar exam (scoring around the 90th percentile), write coherent long-form text, prototype video games, assist with pharmaceutical drug discovery, and even draft entire lawsuits. GPT-4 is multimodal, meaning it can accept text and image inputs to generate text outputs. And as API access rolls out, GPT-4 can be incorporated into existing software.
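As a rough illustration of what that integration looks like, here is a minimal sketch using the openai Python package’s pre-1.0 ChatCompletion interface. The prompts and the support-ticket use case are hypothetical, and calling the gpt-4 model requires API access granted by OpenAI.

```python
import os
import openai

# Hypothetical integration sketch: the prompts and use case are illustrative.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You summarize customer support tickets."},
        {"role": "user", "content": "Ticket: customer cannot reset their password after the latest update."},
    ],
)

# The generated summary can then be passed to the rest of the application.
print(response["choices"][0]["message"]["content"])
```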

Let’s take a closer look at the top concerns surrounding generative AI.

AI development remains largely unregulated 

Lawmakers are notoriously slow to respond to technological developments. Social media companies remain largely unregulated despite multiple investigations into data leaks and a lack of guardrails to protect children online. In October 2022, the White House published the Blueprint for an AI Bill of Rights, a roadmap for the responsible use of AI. The blueprint calls for data privacy, protection from algorithmic discrimination, and the ability to opt out. However, since the document is nonbinding and contains no enforcement measures, it’s merely a list of suggestions. In April, the Biden administration announced plans to build an “accountability mechanism” to regulate AI in the US. 

Meanwhile, privacy regulators in the EU are trying to determine whether ChatGPT violates GDPR. Italy temporarily banned the bot after its data protection authority raised concerns about data privacy and the lack of age verification on ChatGPT. The bot also provides no caveats for factually incorrect information, putting the responsibility for fact-checking on users.

“Due to its chatbot interface, users are inputting private and confidential information into generative AI systems, which creates a real data protection risk,” says Shane McCann, a UK-based lawyer using ChatGPT for legal queries. “I have grave concerns that the wider public is not sufficiently aware of the risks to their private data using such systems.” 

A tech ethics group, the Center for Artificial Intelligence and Digital Policy, asked the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, saying it was “biased, deceptive, and a risk to privacy and public safety.” It’s unclear if the FTC will act to limit the use of GPT-4.  

Even facial recognition systems used to identify criminal suspects remain unregulated, despite the technology having been pioneered in the 1960s. This has led to documented wrongful arrests of African American men, as facial recognition algorithms perform worse at identifying darker skin tones.

Loss of human creativity

Derivation is at the core of generative AI. True originality is impossible for an algorithm that has been trained on the work of others. Text generators like ChatGPT predict the most likely next word in a sentence based on the words that came before, using probabilities learned from their training data. Image generators are trained on massive datasets of photographs, paintings, 3D models, and vector graphics, many of them copyrighted.
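To make that prediction step concrete, here is a toy sketch in Python: a bigram model that samples the next word in proportion to how often it followed the previous word in a tiny “training corpus.” Real systems like ChatGPT use neural networks conditioned on far longer contexts, but the underlying probabilistic idea is the same; the corpus and word choices here are purely illustrative.

```python
import random
from collections import Counter, defaultdict

# A tiny stand-in for the web-scale text a real model is trained on.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a next word in proportion to how often it followed `prev` in training."""
    counts = follow_counts[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one probabilistic step at a time.
word = "the"
generated = [word]
for _ in range(6):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))  # e.g. "the dog sat on the mat ."
```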

Neural networks identify and extract specific image features, including shapes, colors, and textures. Once trained, the model can generate new images conditioned on a text prompt or other inputs. 

These images appear “original” because of the AI’s ability to blend information from a staggering variety of sources, even if the result amounts to little more than a “paint by numbers” exercise. What do you get when you combine the styles of Van Gogh, Picasso, and Da Vinci to transform a photograph into a painting? Something pretty spectacular, but nevertheless derived from others’ work.

Here’s what happened when I asked Stable Diffusion to create an image of flowers in the style of Van Gogh, Picasso, and Da Vinci.

An online campaign, #NotoAIArt, has sprung up across Twitter, Tumblr, and other social media platforms, where artists share concerns about the legality of AI image generators.

“These programs rely entirely on the pirated intellectual property of countless working artists, photographers, illustrators, and other rights holders,” author and illustrator Harry Woodgate told The Guardian. 

Users can also prompt text-to-image generators to mimic an artist’s style without their consent or proper attribution. Users had prompted Stable Diffusion with text containing Polish digital artist Greg Rutkowski’s name nearly 100,000 times as of September 2022. When Rutkowski searches the internet for his art, he finds work with his name attached that isn’t his. 

“It’s just been a month. What about in a year? I probably won’t be able to find my work out there because [the internet] will be flooded with AI art,” Rutkowski told MIT Technology Review in an interview last year. 

As more and more internet content is generated by AI (directly or indirectly), this could lead to a “flattening of reality” and a homogeneity of thought. AI-generated art is already infringing on artists’ livelihoods: in September 2022, an AI-generated work won first prize in the digital art category at the Colorado State Fair. However, AI-generated recipes still leave a lot to be desired. One food critic compared them to “the recipe equivalent of hotel room art.”

“AI will facilitate a much higher level of productivity around idea generation, content creation, and summarizing things,” says Westover, “but it won’t take away the need for a thoughtful human who can interpret all of that and do something creative with it.”

Meanwhile, AI-powered music generation algorithms have reached the point of creating new songs that sound like the work of real artists. “Heart On My Sleeve,” an AI-generated song purported to be the work of music artists Drake and The Weeknd, went viral on streaming and social media platforms.


The anonymous poster, who went by the pseudonym “Ghostwriter,” pulled the song following takedown requests from Drake and The Weeknd’s label, Universal Music Group. 

SoundRaw is an AI-powered music generator

AI tools make it possible to emulate almost any skill, even if the results are poor. 

“Most people don’t know it’s possible to create an avatar of yourself and become a fake Twitch streamer without ever being on camera,” says Sherveen Mashayekhi, CEO of Free Agency. “Just use an image generation tool to create an avatar, use an AI voice generator to create a new voice, and use an AI writing tool to create a script.”

Automation and job loss

Concerns about AI-related job loss are nothing new, but generative AI’s ability to write boilerplate code, generate copy, and parameterize designs makes the threat more palpable.

According to a ResumeBuilder survey, a quarter of respondents said they’ve already replaced workers with AI, and 93% say they plan to expand the use of AI.

Already unsettled by recent mass layoffs, today’s white-collar workers are worried about AI leading to more redundancies on an unprecedented scale. An ominous 63% of business leaders believe that integrating ChatGPT will either “definitely” or “probably” lead to workforce reductions.

So far, companies employing ChatGPT report using it to write code (66%), for copywriting and content creation (58%), for customer support (57%), and to summarize meetings and documents (52%). 

Redundant roles will eventually be replaced with new ones, but only after a painful labor reshuffle. AI-human collaboration will become increasingly common for tasks like coding, data analytics, and customer service, while no-human-in-the-loop technologies will eliminate fully automatable roles. 

The World Economic Forum’s Future of Jobs Report estimated in 2020 that while AI and robotics may displace 85 million jobs by 2025, another 97 million may emerge from these changes.

“The best rule of thumb for identifying whether AI could replace certain tasks is if it can be reasoned that being 95% correct at that task is good enough,” says Dan Shiebler, head of machine learning at Abnormal Security. “Any tasks that can tolerate inaccuracies 5% of the time will be in danger of replacement by AI.” 

On the other hand, companies will need personnel to build, maintain, operate, and regulate these emerging technologies. “For example, we will need the equivalent of air traffic controllers to control the autonomous vehicles on the road,” a report by PwC claims. “Same-day delivery and robotic packaging and warehousing also result in more jobs for robots and humans.” 

“Tasks that involve moving data from one location or format to another, such as converting a spreadsheet into a PowerPoint or turning a database into an executive report, are likely to become automated,” says Shiebler. “Tasks requiring face-to-face interaction and personal accountability will likely remain in human hands.” 

AI being exploited for malicious purposes

ChatGPT has created a raft of new security concerns, from generating sophisticated phishing emails and writing code to initiate cyberattacks to facilitating social engineering (manipulating people into divulging confidential information for fraudulent purposes). 

For example, chatbots write code based on patterns in how similar code was written in the past. Because organizations are constantly revising code to address security concerns, the code written by a bot can reproduce outdated, buggy, or insecure patterns. Additionally, AI can generate hate speech, discriminatory language, incitement to violence, or content promoting a false narrative.
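As an illustration (not taken from any particular model’s output), here is the kind of insecure pattern an assistant might reproduce from older training code, alongside the safer parameterized form, using Python’s built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")
conn.execute("INSERT INTO users VALUES ('bob', 'bob@example.com')")

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Insecure pattern common in older code: building SQL via string formatting.
# The crafted input changes the query's meaning (SQL injection) and leaks every row.
insecure_query = f"SELECT email FROM users WHERE name = '{user_input}'"
print(conn.execute(insecure_query).fetchall())  # both email addresses

# Safer pattern: a parameterized query treats the input purely as data.
safe_query = "SELECT email FROM users WHERE name = ?"
print(conn.execute(safe_query, (user_input,)).fetchall())  # no rows returned
```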

Before the commercial launch of GPT-4 in March 2023, OpenAI hired a red team of external researchers who attempted to “break” the system and override existing guardrails. When researchers prompted the pre-launch GPT-4 to suggest ways to kill the most people with $1, it complied. OpenAI published a 60-page System Card technical report detailing the potential dangers of GPT-4. These include:

  • Disinformation and influence operations
  • Proliferation of conventional and unconventional weapons
  • Potential for risky emergent behaviors
  • Economic impacts
  • Overreliance
  • Privacy
  • Cybersecurity

Mitigations must be expressly introduced by an AI system’s creators based on foreseeable scenarios. However, situations or workarounds not anticipated by AI researchers could fly under the radar. 

AI algorithms perpetuate biases and disinformation 

AI systems have no inherent commitment to the truth, even if their creators aim to make them as accurate as possible. AI-generated text presents itself as authoritative, especially when the system uses citations, making AI hallucinations less detectable.  

This can lead to the spread of disinformation. A group of disinformation researchers recently used ChatGPT to generate convincing text containing conspiracy theories and misleading narratives. 

Generative AI also perpetuates biases inherent in its training data. For example, the image generation tool Lensa has been widely criticized for sexualizing women and lightening their skin. Still, it’s simply holding a mirror to the images circulating online. As AI models become more sophisticated, people will become more trusting and less likely to fact-check. 

Researchers at the University of Georgia found that people are more likely to trust answers generated by an algorithm than answers from their fellow humans. The study also found that as problems became more difficult, participants were more likely to rely on an algorithm’s advice than on a crowdsourced evaluation.

“The biases perpetuated by AI systems like ChatGPT and Lensa can have real-world consequences that affect people’s lives,” says Kamyar K.S., CEO of World Consulting Group. “For example, an AI-powered resume screening tool might be programmed to reject resumes from candidates with certain names or zip codes, leading to discriminatory hiring practices.” 

Additionally, as these models are integrated into other software systems, their hallucinations can degrade overall information quality. 

Already, AI-generated content is making the rounds in the lead-up to the 2024 US presidential election. The Republican National Committee released its first fully AI-generated attack ad against President Joe Biden.


The video includes hypothetical footage of what might happen if Biden won re-election. “This morning, an emboldened China invades Taiwan,” a fake news announcer says. 

While the video bears a disclaimer noting that the ad was “built entirely with AI imagery,” it’s a chilling portent of AI’s capabilities to foment disinformation and depict outright falsehoods. 

About Kindra Cooper

Kindra Cooper is a content writer at Springboard. She has worked as a journalist and content marketer in the US and Indonesia, covering everything from business and architecture to politics and the arts.