Why the Future of AI Development Depends on Career Switchers Like You 

8 minute read | December 5, 2023

Written by:
Kindra Cooper


Most people switch careers due to burnout or inadequate pay. But for Sadie St. Lawrence, it was more of an ethical dilemma. Sadie, a neuroscientist, had been running lab experiments exploring how rodents express fear. At the project's conclusion, her employer instructed her to euthanize the rodent she'd been studying.

“I held it in my hand, and for a second, our eyes locked, and we had a moment of connection,” Sadie, the founder and CEO of Women in Data, explained during a presentation at Springboard’s annual RISE 2023 Summit, a conference convening seasoned tech professionals and Springboard mentors to discuss hot topics in tech. “I realized I was about to kill this animal, which, according to my statistically significant tests, is capable of learning fear.” 

Sadie went home heavy-hearted and embarked on a soul search. Like many professionals feeling ambivalent about their jobs, she wrote a list of likes and dislikes regarding her current role. Sadie most enjoyed data analysis and statistical testing. Euthanizing lab rats? Not so much.  

“I got a job in data science and continued to upskill, but I felt like I was starting over,” Sadie recalled. 

Some years later, while leading the data science team at an insurance company, Sadie was asked to run an experiment to change consumer decision-making to increase customer loyalty. 

“I was able to bring my background in neuroscience and psychology into my work in data science by investigating the psychology of decision-making,” she recalled. 

Sadie St. Lawrence

Since then, Sadie has used her neuroscience background to enhance her work in AI development by recognizing the “symbiosis” between these two disciplines. She has consulted for Fortune 500s and trained over 350,000 people in data science. As a career changer, she strongly advocates for “cross-disciplinary research” involving experts from different fields as the key to AI advancement.

The link between AI and neuroscience

Artificial neural networks are modeled after the human brain. In the brain, a neural network is a web of interconnected neurons (nerve cells) that process information cooperatively, relay it to the corresponding effector, and elicit a response. Similarly, an artificial neural network comprises multiple layers of nodes: an input layer (the part that receives information), one or more hidden layers (the part that decides what to do with that information), and an output layer (the part that generates a response).
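To make that structure concrete, here is a minimal Python sketch of a forward pass through such a network: an input layer feeding one hidden layer feeding an output layer. The weights here are random stand-ins, not anything trained.

```python
import numpy as np

def relu(x):
    # Activation function: passes positive signals, blocks negative ones
    return np.maximum(0, x)

# Toy network: 3 inputs -> 4 hidden neurons -> 2 outputs.
# Random weights stand in for what a real network learns from data.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))    # input layer -> hidden layer
W_output = rng.normal(size=(4, 2))    # hidden layer -> output layer

def forward(inputs):
    hidden = relu(inputs @ W_hidden)  # hidden layer decides what to keep
    return hidden @ W_output          # output layer produces the response

print(forward(np.array([0.5, -1.0, 2.0])))
```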

AI seeks to replicate brain processes, such as recognizing speech, making decisions, and understanding natural language. Consequently, advances in neuroscience have informed the development of AI systems and vice versa. 

For example, computers perform image recognition through convolutional neural networks (CNNs), which extract features from images in layers to categorize and/or characterize them. These were inspired by the visual cortex and how the human brain hierarchically processes visual information. 
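As a rough illustration of that layered feature extraction, here is what a bare-bones CNN might look like in PyTorch. The layer sizes are arbitrary choices for the sketch, not a recommended architecture.

```python
import torch
import torch.nn as nn

# Minimal CNN sketch: each convolutional layer extracts progressively
# higher-level features, loosely mirroring the visual cortex.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level features: edges, colors
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level features: textures, shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # classify into 10 categories
)

image = torch.randn(1, 3, 32, 32)  # a fake 32x32 RGB image
print(model(image).shape)          # -> torch.Size([1, 10])
```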

In her presentation at RISE 2023, Sadie discussed four key areas in which neuroscience has influenced AI development.

Neural networks like those in the human brain enable AI to perform complex “multi-modal” tasks

The earliest AI researchers sought to create artificial neurons—connection points in artificial neural networks—to generate more complex outputs. Individual neurons aren't terribly powerful, but combining them into a neural network leads to "multimodal" AI systems—for example, self-driving cars that can perceive objects and navigate obstacles, or "smart robots" that act as hotel concierges, bartenders, and warehouse workers.

“AI functions very similarly to the human brain—we have weighted inputs in the form of data, then we perform an activation function using mathematical formulas, and we get an output,” Sadie explained. “This structure is very similar to how neurons conduct nervous impulses.” 

Some would argue there's a difference between human and computer intelligence, seeing as electrical and chemical impulses power the human brain, whereas computers run on silicon hardware and software.

“We found that at the abstract layer, both the human brain and AI use the same system,” said Sadie.

Even though scientists can map which parts of the brain are activated during specific activities, such as speaking, writing, and experiencing pleasure or displeasure, they don’t fully understand the human brain—particularly subconscious thoughts, memories, and sensations. 

The same goes for AI algorithms—essentially “black boxes” with decision-making mechanisms not fully understood by their creators. This makes it harder to spot AI bias, evaluate the quality of predictions, or quantify uncertainty. 

Explainable AI is an emerging field that seeks to close this gap, leading to fairer, more transparent AI systems, much as neuroscientists continue to probe the brain's inner workings.

Reinforcement learning with human feedback makes AI “smarter”

One of the best-known examples of psychological conditioning, the idea underlying reinforcement learning, is the experiment by Russian physiologist Ivan Pavlov, who conditioned dogs to salivate in anticipation of food at the sound of a bell.

Similarly, machine learning algorithms grow more sophisticated through reinforcement learning—they are conditioned to maximize “reward” and minimize “punishment.” 

Reward modeling is an AI technique in which an algorithm receives a "reward," or score, for its response to an input. The reward signal trains the AI system to produce desirable outcomes. Before generative AI models like OpenAI's ChatGPT went mainstream in November 2022, most of this reinforcement learning happened in a closed loop that did not involve humans.
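The mechanics can be sketched with a classic toy example: an epsilon-greedy "bandit" loop in Python that tries different responses, scores each one with a reward signal, and gradually settles on the highest-scoring option. The hard-coded reward values here stand in for what a learned reward model would provide.

```python
import random

# Toy reward loop: try actions, receive a reward score, and shift
# toward whichever action has earned the most reward so far.
actions = ["response_A", "response_B", "response_C"]
value = {a: 0.0 for a in actions}   # estimated reward per action
counts = {a: 0 for a in actions}

def reward(action):
    # Stand-in reward signal; in reward modeling, this score would
    # come from a model trained to recognize desirable outputs.
    base = {"response_A": 0.2, "response_B": 0.9, "response_C": 0.4}
    return base[action] + random.gauss(0, 0.1)

for step in range(500):
    # Mostly exploit the best-known action, occasionally explore
    a = random.choice(actions) if random.random() < 0.1 else max(value, key=value.get)
    r = reward(a)
    counts[a] += 1
    value[a] += (r - value[a]) / counts[a]  # running average of observed reward

print(max(value, key=value.get))  # converges on "response_B"
```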

For example, a generative adversarial network (GAN) consists of two competing neural networks: a generator, which aims to mimic its training data by producing similar output, and a discriminator, which tries to tell generated output apart from the real thing. The generator becomes more sophisticated after making multiple "errors," while the discriminator grows more skilled at spotting them through repetition. In other words, the algorithm learns on its own. GANs are widely used for text-to-image generation, photo editing, and image-to-image translation.
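Here is a heavily condensed PyTorch sketch of that adversarial loop, with the "training data" reduced to samples from a simple bell curve so the whole thing fits in a few lines. Real GANs use the same structure at far larger scale.

```python
import torch
import torch.nn as nn

# Condensed GAN sketch: a generator and a discriminator trained
# adversarially on 1-D "data" drawn from a normal distribution.
gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2 + 3  # real samples the generator must mimic
    fake = gen(torch.randn(64, 8))     # generator maps noise to candidate samples

    # Discriminator learns to label real samples 1 and generated samples 0
    d_loss = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator learns to fool the discriminator into outputting 1
    g_loss = bce(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(gen(torch.randn(1000, 8)).mean().item())  # should drift toward 3
```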

“These fake reward predictors worked really well, but we saw a huge leap in these gains when we rolled out generative AI systems where humans could be part of the feedback loop,” said Sadie. “The human is like a parent supervising the AI and saying ‘If you do good, I’ll give you a reward,’ and vice versa.” 

ChatGPT uses reinforcement learning from human feedback (RLHF) to fine-tune its responses. Users can click the thumbs-up or thumbs-down icons or provide written feedback through a text prompt (e.g., "Great job! Can you rewrite your response and make it shorter?").
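As a hypothetical sketch of how such clicks might become training data for a reward model (the field names and structure below are illustrative, not OpenAI's actual pipeline):

```python
# Hypothetical sketch: turning thumbs-up/down clicks into reward-model
# training data. All names here are illustrative stand-ins.
feedback_log = [
    {"prompt": "Summarize this article", "response": "...", "signal": "thumbs_up"},
    {"prompt": "Summarize this article", "response": "...", "signal": "thumbs_down"},
]

def to_training_example(event):
    # Map human feedback to a scalar reward label
    label = 1.0 if event["signal"] == "thumbs_up" else 0.0
    return (event["prompt"], event["response"], label)

dataset = [to_training_example(e) for e in feedback_log]
# A reward model trained on `dataset` then scores new responses, and the
# language model is fine-tuned (e.g., with PPO) to maximize that score.
```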

AI’s ability to optimize in response to feedback (i.e., adapt to its environment) brings to mind the unresolved debate in neuroscience regarding nature versus nurture—whether human beings are the product of their environment, genetics, or a combination of both. While most would agree that both sides play a role, the extent of their respective influence shapes real-world dilemmas like: “Can you increase your IQ?” and “Is criminal behavior innate or learned?” 

“What we’re building today is AI systems that function because of nature and nurture,” explained Sadie. “You can think of an AI’s ‘biology’ as its hardware and CPUs. The more CPUs a computer has, the better its performance. But does that alone make it the best system ever? Can you just throw more data or computing power at it? No. What helps is providing it with feedback to generate these fine-tuned applications—otherwise known as nurture.” 

AI systems with natural language understanding (NLU) improve by incorporating feedback from users. AI solutions providers must find ways to solicit that feedback, whether by asking users to rate responses or through conversational pathways. For example, the AI can ask follow-up questions to ensure it has correctly understood a prompt.
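In code, a feedback-soliciting loop can be as simple as the sketch below, where `generate` is a hypothetical stand-in for whatever model or API the application actually calls:

```python
# Illustrative feedback-soliciting loop; `generate` is a hypothetical
# placeholder, not a real model call.
def generate(prompt: str) -> str:
    return f"(model response to: {prompt})"

collected_feedback = []

while True:
    prompt = input("You: ")
    if not prompt:
        break
    answer = generate(prompt)
    print("AI:", answer)
    rating = input("Was this helpful? (y/n): ").strip().lower()
    collected_feedback.append({"prompt": prompt, "answer": answer, "helpful": rating == "y"})
    if rating == "n":
        # Conversational pathway: a follow-up question to clarify the request
        print("AI: Sorry about that. What should I have focused on instead?")
```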

“When we have reinforcement learning with human feedback, humans are in the driver’s seat,” said Sadie. “When we use AI tools, we are collectively conditioning these AI systems.”

AI hallucinates just like humans do 

When Sadie sees her nieces and nephews, she’ll ask how their day is going. They might start by describing a mundane car ride to school, then suddenly mention they spotted a unicorn in a grassy field.

These digressions are not unlike the untethered ramblings and hallucinations (generation of falsehoods) for which ChatGPT is notorious. 

“When children are excited, they start rattling things off. If they take a deep breath and think before they speak, they can express themselves better,” said Sadie. “The most amazing thing is we found the same thing works with AI.”

AI researchers recently discovered that adding pause tokens to an input enriches the model's responses. "We…delay extracting the model's outputs until the last pause token is seen, thereby allowing the model to process extra computation before committing to an answer," they wrote in a paper titled "Think Before You Speak: Training Language Models with Pause Tokens."
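Here is a rough PyTorch sketch of the mechanism, using a toy, untrained model. Note that in the paper the model is trained with pause tokens; this sketch only shows the inference-time idea of appending them and delaying the readout.

```python
import torch
import torch.nn as nn

# Toy sketch of pause tokens. PAUSE_ID and this tiny model are stand-ins;
# the paper *trains* the model with pause tokens rather than just adding them.
vocab_size, d_model, PAUSE_ID = 1000, 64, 999

embed = nn.Embedding(vocab_size, d_model)
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
head = nn.Linear(d_model, vocab_size)

def answer_logits(input_ids, num_pauses=5):
    # Append pause tokens so the model gets extra computation steps,
    # then read the prediction only at the final position.
    pauses = torch.full((input_ids.size(0), num_pauses), PAUSE_ID)
    ids = torch.cat([input_ids, pauses], dim=1)
    hidden = layer(embed(ids))
    return head(hidden[:, -1, :])  # delay extraction until the last pause token

prompt = torch.randint(0, 998, (1, 12))  # a fake tokenized prompt
print(answer_logits(prompt).shape)       # -> torch.Size([1, 1000])
```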

Other studies have shown that prompting an AI algorithm as if it were human, for example by using emotionally charged language, often generates better responses.

Researchers wrote pairs of otherwise identical prompts, appending the phrase "this is very important to my career" to one prompt in each pair. They discovered a 10.9% improvement in "performance, truthfulness, and responsibility metrics" in the responses generated by large language models (LLMs) including ChatGPT, Bloom, and Llama 2.

Popular generative AI tools appear to respond to emotionally charged prompts

Researchers also used "self-monitoring" prompts like "Are you sure that's your final answer? It might be worth taking another look" to encourage the algorithm to self-reflect based on psychological theories regarding emotional stimuli. Urging the algorithm to "believe in your abilities" and "stay determined"—known human motivators—also produced statistically significant improvements.
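A sketch of how such an A/B comparison might be set up is shown below; `ask_llm` is a hypothetical placeholder, not a real API call.

```python
# Illustrative A/B setup for emotional prompt stimuli; `ask_llm` is a
# hypothetical stand-in for an actual LLM call.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("swap in a real LLM call here")

base_prompt = "Determine whether the following two sentences mean the same thing: ..."
stimuli = [
    "This is very important to my career.",
    "Are you sure that's your final answer? It might be worth taking another look.",
    "Believe in your abilities and stay determined.",
]

# One unmodified prompt plus one variant per emotional stimulus
variants = [base_prompt] + [f"{base_prompt} {s}" for s in stimuli]
# Scoring each variant's responses against ground truth would reproduce
# the kind of comparison the researchers ran.
```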

Meanwhile, enterprising Redditors have discovered that pretending to offer ChatGPT a generous tip seems to unlock superior responses. Considering that AI algorithms are designed to approximate human behavior, this finding isn't altogether surprising, although multiple studies suggest tipping does not incentivize superior customer service from humans.

In this case, the AI isn’t intrinsically motivated by financial reward; it is simply programmed to seek positive reinforcement. According to Sadie, these unexpected findings regarding AI “motivation” reinforce the importance of convening AI researchers from different disciplines and backgrounds to enrich our understanding of neural networks.

“Amazing things happen when we talk to people outside our field and get inspired by them—especially when we take aspects of human nature and apply them to computational systems,” she said. 

What does all of this mean for the future of AI development?

The future of AI development hinges on contributions from professionals of all backgrounds and areas of domain expertise who can view problem-solving from multiple perspectives. Meanwhile, domain-specific AI programs—accounting software, CRM tools, and health informatics systems—require input from industry practitioners and prospective users to ensure they are effective, accurate, and bias-free.

AI's ultimate goal is to replicate human intelligence, not the other way around.

“Sometimes, when I use programs like Midjourney, I’m so impressed with its output, and I think, ‘There’s no way I could create art like that’ or ‘Gosh, this algorithm is so much smarter than me,’” said Sadie. “But what I’ve realized by pulling from my background in neuroscience is that I have an opportunity to use my background and personal experience to enhance the way we learn and work today.” 

Interdisciplinary research will make AI more equitable and accurate, and less prone to racist, sexist, and classist biases. For example, Apple's Siri voice assistant came under fire from users for its coquettish responses to sexual harassment.

Meanwhile, the United Nations condemned Siri and Alexa for “entrenching gender biases” by giving “deflecting, lackluster, or apologetic responses to verbal sexual harassment.” These devices were designed by tech companies seeking to maximize their usage, with no regard for combatting bigotry, misogyny, or hate speech. 

“I encourage everyone—whether you’re learning data science, cybersecurity, coding, or design—to see that interdisciplinary research is the key to AI advancement,” Sadie concluded. “With your unique background, you’re going to pull connections, discoveries, and ways of looking at things no one has ever done before.”

About Kindra Cooper

Kindra Cooper is a content writer at Springboard. She has worked as a journalist and content marketer in the US and Indonesia, covering everything from business and architecture to politics and the arts.