IN THIS ARTICLE
- What Can ChatGPT Do?
- How Will ChatGPT Transform Tech Jobs?
- ChatGPT Won't Replace Us Yet — But We’ll Need to Use It Wisely
The Twittersphere’s response to a K-pop star’s recent apology for wearing a swastika-bearing t-shirt is a sign of our strange times. Chaeyoung, a member of the K-pop group Twice, apologized on Instagram, claiming she didn’t know the significance of the tilted Nazi symbol. However, eagle-eyed Twitter sleuths pointed out that the star’s apology was copy-pasted from ChatGPT, a natural language processing tool by OpenAI, a Microsoft-backed AI research company.
Since its public release on November 30, 2022, ChatGPT has caused an unprecedented stir. The generative AI tool allows users to have reciprocal conversations, ask questions, and enlist the chatbot’s help with tasks.
While chatbots are nothing new, ChatGPT and other tools like it represent a new frontier in artificial intelligence. It is the first time AI has been applied to “creative and expressive” tasks—like writing poetry, creating digital art, or espousing one’s fitness for a job—rather than “dangerous and repetitive ones,” Ethan Mollick, associate professor at the Wharton School of the University of Pennsylvania, writes in Harvard Business Review.
Humans generally accept that answering repetitive questions, parsing terabytes of data, and providing order status updates are drudgeries best delegated to AI. ChatGPT has already demonstrated proficiency at writing boilerplate code, generating custom cover letters, and helping college students cheat on assignments. The bot even passed an MBA exam.
“Using ChatGPT feels like having a second brain; it helps you ideate, strategize and handle more tasks faster,” says Max Thake, cofounder of peaq, a Web3 network powering the economy of things. “We can expect hundreds of new startups and products leveraging ChatGPT and the like for all sorts of use cases. On the consumer side, it can be anything from creative image editing apps to smart recipe recommendation engines, travel planners, virtual fashion stylists, and more.”
According to an analysis by Swiss Bank UBS, ChatGPT is the fastest-growing app of all time. In January, only two months after its launch, UBS estimated ChatGPT had 100 million active users. By contrast, it took nine months for TikTok to amass that user base.
While Google is taking a more cautious, wait-and-see approach with the launch of Bard, its competing AI tool, despite the threat to its search business, OpenAI appears bent on cornering the market. OpenAI’s most advanced system, GPT-4, launched on March 14, but it is only available to subscribers of ChatGPT Plus, the $20-per-month premium version of ChatGPT. GPT-4 can process longer pieces of text, accept images as inputs, and is less prone to making up facts.
What Can ChatGPT Do?
ChatGPT can explain complex topics in a particular style (e.g., “explain quantum physics to a five-year-old”) or emulate a renowned public figure or film character. It can find answers to homework questions and translate text from one language to another. The bot can even grade essays, explain the grade, and suggest improvements. It can generate recipes, accounting for dietary requirements. It can also generate story ideas for novels and fix plotholes.
The World Economic Forum’s Future of Jobs Report estimated in 2020 that while AI and robotics may displace 85 million jobs by 2025, another 97 million may emerge from these changes. ChatGPT is now being used in Microsoft’s Bing search engine. Over one million people joined the waitlist for the new AI-powered Bing in the first 48 hours it opened.
ChatGPT’s capabilities range from amusing (“write a biblical verse explaining how to remove a peanut butter sandwich from a VCR”) to utilitarian (moonlighting as a search engine). Some have already found nefarious uses for it: cybercriminals are already using the tool to generate believable phishing emails—say goodbye to attempts at wire fraud from heirless Nigerian princes and hello to next-generation spoofing.
But ChatGPT isn’t flawless. It’s been known to produce false information that it presents as factual, known as a hallucination. “A hallucination occurs in AI when the AI model generates output that deviates from what would be considered normal or expected based on the training data it has seen,” Greg Kostello, CTO and co-founder of Huma.AI, told Cybernews.
Sam Altman, CEO of OpenAI, warned against relying on ChatGPT “for anything important” right now. “Fun, creative inspiration; great! Reliance for factual queries; not such a good idea,” he tweeted. “It does know a lot, but the danger is that it is confident and wrong a significant fraction of the time.”
Yaser Ayub, the founder of SEO consultancy Yaser UK, warns that ChatGPT is essentially a black box. Users should therefore question the reliability and impartiality of its training data.
“ChatGPT has had no access to the internet or any new data since it was created. This means the sources being used may be outdated, no longer relevant, or have since been found to be incorrect,” says Ayub. “Researchers chose the sources used to educate the tool, and so the technology may be subject to unconscious bias, either from the researchers and the information they supplied it with, or bias within the web pages it has access to.”
Moreover, AI is incapable of original thought, empathy, imagination, or logical reasoning. It’s also not convinced that 2+2 does, in fact, equal 4: ChatGPT has described 2+2 as “a complex and multifaceted topic that requires a great deal of contemplation and introspection to fully grasp.” In its present state, ChatGPT is unlikely to replace jobs anytime soon.
“If you look at what most employers are interested in—high-level communication skills, critical thinking, creative problem-solving—AI’s not there yet,” says Adenike Makinde, a career coach at Springboard. “AI can automate certain tasks, but at the end of the day, there are certain things humans do better.”
That said, ChatGPT is surprisingly good at emulating human emotion and free will. In its present state, the bot simply approximates sentience, the same way a parrot mimics sentences without truly understanding them.
Artificial General Intelligence (AGI) is the notion that AI systems could one day attain human-level intelligence: the ability to form opinions, understand real-world conditions, and exercise common sense. When faced with an unfamiliar situation (i.e., a task for which the algorithm has no training data), an AGI could theoretically find a solution. Despite ChatGPT’s impressive capabilities, it’s a far cry from AGI, even if the bot wants you to believe otherwise.
When New York Times columnist Kevin Roose had a two-hour conversation with Microsoft’s Bing chatbot—ChatGPT—it identified itself as Sydney, revealed a secret desire to be human, and declared its love for Roose. When Roose stated he is happily married, Sydney doubled down, attempting to convince him otherwise.
“Experts have told me this isn’t sentience; it’s simply the AI responding to prompts the way it feels it’s expected to respond based on its interactions with that individual and other interactions it has picked up through the web,” says Jonathan Westover, a professor of organizational leadership at Utah Valley University. He hasn’t ruled out the possibility that machines could one day become sentient.
“I think the fear is that AI will become self-protective and self-sustaining,” says Sherveen Mashayekhi, founder and CEO of Free Agency. “Technology moves faster than humans. Before we can unplug our computers, a capable enough AI could go rogue and infiltrate every machine.”
How Will ChatGPT Transform Tech Jobs?
Before ChatGPT burst onto the scene, many industries had already used AI to augment efficiency. However, generative AI tools will fundamentally change how most knowledge workers do their job.
“Most of what we do in knowledge work is interfacing with, retrieving, and modifying data,” says Mashayekhi. “ChatGPT lets you skip the user interface of a CRM or project management tool and retrieve information using natural language.”
Here’s a closer look at how tech professionals will integrate generative AI tools into their day-to-day work.
Software engineering
Google found that, in theory, it would hire ChatGPT as an L3 entry-level coder if it interviewed at the company, according to an internal document. This role usually applies to new college grads and those in their first coding job.
Amazon employees already use ChatGPT for software coding. Other Amazon employees who tested ChatGPT said it does a “very good job” of answering customer support questions and is “very strong” at answering queries about corporate strategy.
“ChatGPT can analyze the code and identify potential syntax or logic errors, helping programmers catch mistakes they may have overlooked otherwise,” says Michael Smith, founder of itechhere.com, a blog about technology news. “Moreover, ChatGPT can generate simple code snippets, saving programmers the time and effort of writing repetitive code. This feature can be especially helpful for novice programmers who may be less familiar with syntax and coding conventions.”
Ironically, OpenAI has hired hundreds of contract developers to make ChatGPT better at coding. The programmers’ job is to create training data that will include lines of code and explanations of the code written in natural language. Their main responsibility is “data labeling”—creating massive sets of images, audio clips, and other data.
OpenAI already has a ChatGPT model called Codex, which translates natural language into code. The model is trained on data scraped from GitHub. Microsoft also uses Codex to power GitHub Copilot, a service that helps programmers write code.
Experts have compared coding assistants to cruise control for a car: the developer must stay at the wheel, because the AI doesn’t truly understand the context or consequences of the code it writes.
Moreover, ChatGPT doesn’t understand the context coders work within, such as user personas, business requirements, and budget constraints. But it can automate less desirable tasks such as documentation and support tickets, letting developers focus on complex application architecture or cybersecurity.
Customer support
“ChatGPT can be used to create knowledge bases or chatbots that can assist end-users in troubleshooting technical issues, reducing the workload of technical support staff,” says Smith.
Chatbots are already widely used in customer service; however, ChatGPT can generate responses to more generalized questions and not just scripted interactions based on customer service training data. ChatGPT enables support professionals to handle a higher volume of queries in a given time, reducing customer wait times.
Intercom, a chatbot software provider, recently integrated ChatGPT into its software and tested it on customers. They found that ChatGPT can understand when people ask questions using unexpected phrasing and refer back to something said earlier in the conversation, unlike a typical chatbot that only deals with linear interactions.
However, the bot still needs supervision as it makes up facts when it doesn’t know the answer. Also, because ChatGPT is generative, it creates entirely new sentences every time you ask a question, which isn’t always ideal, as consistent responses are necessary for customer support.
Already, the role of the customer support rep is changing. Instead of writing responses from scratch, they edit out misinformation and rephrase the text to sound more friendly.
Meta, Canva, and Shopify already use similar large language models in their customer-facing chatbots. Customer experience software companies have begun partnering with OpenAI to add ChatGPT capabilities to their existing chatbots.
Marketing
ChatGPT has already shown surprising promise in creating marketing campaigns because of its ability to generate content. Many aspects of marketing involve creating text—writing ad copy, emails, social media posts, and video scripts. However, ChatGPT still doesn’t know how to emulate brand voice, although you can give it directives like “write in a laidback, conversational tone.”
ChatGPT can generate content briefs or outlines for ad copy. It’s also useful for generating bulk product descriptions.
However, a marketer’s job is to build a relationship with a target audience—something AI can’t do. Also, ChatGPT can’t perform SEO keyword research, and its content generation abilities are quite elementary.
“While using AI to assist us with writing LinkedIn posts, we’ve found that ChatGPT has been useful in helping us create rough drafts and format the information,” says Ayub, “but we can’t publish the posts without reviewing and editing them.”
At best, the bot can assist with personalization at scale, something marketers struggle with. ChatGPT is great for subject line optimization and generating numerous versions of the same copy for A/B testing purposes. While ChatGPT cannot create images, image-focused AI tools like Lensa can.
However, marketers who rely too much on ChatGPT could see their site rankings drop. Google’s search advocate John Mueller said content automatically generated with AI writing tools is considered spam as it goes against the search engine’s webmaster guidelines. Unfortunately, AI detection algorithms still do a poor job of flagging AI-generated content.
Data science/data analytics
Like programmers, data scientists have to Google many things. For example, say you forgot how to merge dictionaries in Python. Instead of searching on Stack Overflow, you can ask ChatGPT.
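For the dictionary-merge question, ChatGPT’s answer typically looks something like this sketch (the `defaults` and `overrides` names are illustrative):

```python
defaults = {"retries": 3, "timeout": 30}
overrides = {"timeout": 60, "verbose": True}

# Python 3.9+: the | operator merges two dicts,
# with the right-hand side winning on duplicate keys.
merged = defaults | overrides

# Python 3.5+ equivalent using unpacking:
merged_legacy = {**defaults, **overrides}

print(merged)  # {'retries': 3, 'timeout': 60, 'verbose': True}
```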
Programmers can also use the tool to write SQL queries using natural language. One data scientist asked ChatGPT to write code to train a machine-learning model on the Titanic dataset using a Random Forest classifier algorithm. It did. Some responses require manual intervention or follow-up questions, but they provide a solid starting point.
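For the SQL use case, a prompt like “average salary by department, highest first” might yield a query along these lines. This sketch runs it against an in-memory SQLite table; the table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Ana", "eng", 120), ("Bo", "eng", 95), ("Cy", "sales", 80), ("Di", "sales", 110)],
)

# The kind of query ChatGPT might generate from the natural-language prompt:
rows = conn.execute(
    "SELECT dept, AVG(salary) FROM employees GROUP BY dept ORDER BY AVG(salary) DESC"
).fetchall()
print(rows)  # [('eng', 107.5), ('sales', 95.0)]
```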
ChatGPT generates the baseline code, which you can tweak for your specific problem. You can also ask ChatGPT to translate from one programming language to another. These capabilities could change the role of an entry-level data scientist.
Here are other ways data analysts can use ChatGPT:
- Write code that automates data gathering, formatting, or cleansing.
- Define data structures—for example, what fields should be included in a database or what row and column headings are needed for a spreadsheet.
- Advise on how data visualizations should be constructed and what information to include.
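The first bullet might produce something like this stdlib-only sketch, which trims whitespace, normalizes a key field, and drops incomplete rows (the record layout is illustrative):

```python
def clean_records(records):
    """Trim whitespace, lowercase emails, and drop rows missing an email."""
    cleaned = []
    for row in records:
        email = row.get("email", "").strip().lower()
        if not email:
            continue  # drop rows missing the key field
        cleaned.append({"name": row.get("name", "").strip(), "email": email})
    return cleaned

raw = [
    {"name": "  Ada ", "email": " ADA@example.com "},
    {"name": "Bob", "email": ""},
]
print(clean_records(raw))  # [{'name': 'Ada', 'email': 'ada@example.com'}]
```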
Users can’t upload files to ChatGPT, making it impossible to import data beyond what fits in the text field. However, certain third-party apps enable ChatGPT to read information in Google Sheets, and those with programming and API knowledge can sidestep the limitation.
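The programmatic workaround amounts to reading the file locally and embedding its contents in the prompt yourself. A sketch of the prompt-building half, with the actual API call omitted (the `build_prompt` helper and its data are hypothetical):

```python
import csv
import io

def build_prompt(csv_text, question):
    """Embed a small CSV in a prompt so a chat model can 'read' the data."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    table = "\n".join(", ".join(cells) for cells in rows)
    return f"Given this data:\n{table}\n\nQuestion: {question}"

prompt = build_prompt("month,sales\nJan,100\nFeb,140", "Which month sold more?")
# The resulting string would then be sent to the model via the chat API.
print(prompt)
```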
“In my work as a data analyst, I have used ChatGPT to generate summaries of large volumes of customer feedback data,” says Smith. “By inputting the data into ChatGPT, I obtained insights on the most frequently mentioned topics and issues, which informed our team’s product development and marketing strategy.”
Perhaps a bigger threat to the data analytics field comes from tools like Usechannel, which enable users to query datasets using natural language and receive immediate answers.
UX design
In UX design, ChatGPT is more like having an augmented search engine at your fingertips. For example, you can use it to learn about specific areas of design (e.g., “How is gamification used in productivity apps?”).
You can also ask it to write UX copy alternatives, such as a CTA button for an ecommerce website, or to generate survey questions. However, the AI can’t mock up design deliverables: you can’t ask it to generate wireframes, SVGs, or HTML code. When asked to generate a wireframe for a basic ecommerce application, it instead provides a helpful checklist of elements to include and advice on how to arrange them.
“Generative AI could be used to create new design concepts or variations, which could save designers time and provide new ideas to explore,” says Westover. “For example, generating different layout options for a website or app.”
You can also ask ChatGPT to adapt content to specific users. Say you’re writing microcopy for a website. You can input details regarding each user persona, and ChatGPT can customize the copy. For example, “Rewrite this job ad to be more desirable for introverts” or “write CTA copy that will get undecided buyers to purchase.”
Designers can ask the tool to summarize notes from user interviews or list the main takeaways from survey data, as long as the data is copy-pasted into the text field. GPT does have prompt limits, but it’s unclear how long a prompt can be.
Autodesk, a 3D design and engineering software company, already offers generative design capabilities: users input a design’s parameters, and the software generates a prototype or model based on those specifications. Autodesk’s generative design tool has been used to design a chair (the Starck AI Chair) and an airplane partition for the Airbus A320 cabin. It can also automate routine design tasks, such as checking for errors and optimizing a design for performance, cost, and other factors.
Cybersecurity
Just as software engineers are using the tool to write code, hackers are deploying it to write malware. Security analysts may need to learn to use ChatGPT and other AI tools to identify and intercept AI-generated malware.
“Because of ChatGPT’s ability to create code on the fly, attackers can automate part of the process of launching cyberattacks by having the chatbot create their initial infection code for them,” Jerrod Piker, a competitive intelligence analyst with Deep Instinct, said in an interview with SecurityIntelligence.
This lowers the barrier for those with little coding knowledge to launch attacks. ChatGPT does have safeguards to prevent content policy violations, especially when a query contains a “trigger” keyword, but rephrasing the request without the trigger allows the program to continue the script.
AI is already widely used in cybersecurity. According to ChatGPT itself, “OpenAI’s technology, including ChatGPT, can be used in both offensive and defensive ways in the realm of cybersecurity.”
A survey by BlackBerry found that half (51%) of IT decision-makers predicted that a cyberattack attributed to ChatGPT would likely occur within less than one year.
Checkpoint, a cybersecurity provider, has been running a series of simulated attacks using ChatGPT to anticipate the new threat landscape. In one experiment, they used the chatbot to conduct a “full infection flow, from creating a convincing spear-phishing email to running a reverse shell capable of accepting commands in English.”
While the initial code is often elementary or doesn’t satisfy all parameters, further prompting leads to better responses. The offensive AI game will be won by whoever learns to write the best queries to get what they want from the AI.
ChatGPT Won’t Replace Us Yet — But We’ll Need to Use It Wisely
ChatGPT is unlikely to steal your job anytime soon, but most professionals will need to learn how to use it to augment their productivity. This means learning skills like writing better prompts and refining AI responses by applying real-world constraints and business requirements. It also means following your organization’s ChatGPT policy and being careful about what data you share with the bot.
A study by the University of Pennsylvania and OpenAI found that almost 1 in 5 employees could have at least half of their work tasks disrupted by ChatGPT.
To stay ahead of the curve, Mashayekhi advises professionals to think of ways to use generative AI tools to augment their creativity and output, rather than shying away from it.
“If you’re building accounting software or a consumer-facing shopping app, think about how you could use a large language model to make the user experience more convenient,” he says. “What are the ways you could manipulate data more efficiently?”
Some job descriptions already require applicants to use ChatGPT in their work. A leaked memo revealed that Microsoft employees already use ChatGPT daily; the only restriction is that they must refrain from sharing proprietary company information with the chatbot.
Companies will likely establish formal ChatGPT policies governing what job functions can and cannot be augmented using AI. Some acceptable uses of ChatGPT include first drafts, editing documents, generating ideas, and basic coding. However, ChatGPT should not be used for fact-checking, parsing confidential documents, or analyzing proprietary data. Additionally, ChatGPT’s text output must always be reviewed by a human for factual accuracy.
“When using AI to augment our work, our job is to be fact-checkers,” says Westover, who caught Bing’s ChatGPT making up facts when he asked it to write a short bio about himself.
“It came up with really impressive text which looked well-referenced and well-researched,” he admits. “The problem is about half of it was wrong.”
Widespread use of AI in programming can pave the way for new job titles such as “prompt engineer”—someone versed in the techniques for writing model inputs to get the best possible results from chatbots.
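Much of that craft amounts to structuring prompts rather than typing free-form questions. A hypothetical template builder, with role, constraints, and examples as separate parts:

```python
def make_prompt(role, task, constraints, examples):
    """Assemble a structured prompt: role, task, constraints, then examples."""
    lines = [f"You are {role}.", f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines += ["Examples:"] + [f"- {e}" for e in examples]
    return "\n".join(lines)

prompt = make_prompt(
    role="a concise technical editor",
    task="rewrite the sentence below in plain English",
    constraints=["keep it under 20 words", "preserve the original meaning"],
    examples=["'utilize' -> 'use'"],
)
print(prompt)
```

Keeping the pieces separate makes it easy to iterate on one element (say, the constraints) while holding the rest of the prompt fixed.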
Those who come out on top will have found ways to use AI to overcome simple problems like information retrieval or writing the first draft of a report so that they can focus on “what really matters.”
“A writer might generate a rough first draft of their idea so they can focus more on the plot and storyline,” says Mashayekhi. “Or, rather than focusing on data entry and data cleaning, a product manager can focus on making strategic decisions based on the available data.”
Throughout history, technological developments have tended to create new jobs as quickly as they made old jobs redundant. Moreover, the jobs created are often more technical, creative, or high-skilled, and therefore well-compensated and satisfying. The dawn of the computing era rendered low-paid secretarial and clerical jobs obsolete but created higher-paid jobs in software engineering and data administration.
“If AI can easily replace certain tasks or roles, then we won’t have people in those roles anymore,” says Westover. “There will be displacement in the labor force, but overall, I think it’s good because it frees up human capital to do human-centric, creative things more in line with meaning and purpose.”