IN THIS ARTICLE
- On-device AI could make your smartphone even smarter
- Using ‘custom instructions,’ you can tell ChatGPT exactly what you need it for
- Search generative experience (SGE) will make searching the web more interactive
- Home assistants are about to get smarter
- Social media networks are adding AI assistants to augment the user experience
- Create a digital stand-in of yourself using a custom AI avatar
- Domain-specific LLMs are coming to every industry
- Why does AI personalization matter?
As one of the first generative AI tools to go mainstream, ChatGPT has well-documented gaffes: the same OpenAI technology powered the Bing chatbot that professed its love to a New York Times reporter and suggested he leave his wife, and ChatGPT itself has generated vomit-inducing recipes and engaged in untethered ramblings. But while the large language model (LLM) by Microsoft-backed AI research company OpenAI is undoubtedly capable of brilliance—passing an MBA exam, acing the bar exam, and debugging code—its absurdities are constant reminders of its artificial nature.
Now, AI solutions providers are researching ways to make AI more helpful to consumers and major enterprises through AI personalization—enabling users to customize their interactions with AI by “communicating” their needs directly using natural language. This requires developers to build digital interfaces and conversation pathways for users to input structured prompts.
Suppose a user asks their Google Home assistant to recommend nearby restaurants. Instead of reeling off a list of establishments based on geolocation, the AI might ask, “What’s the occasion?” and provide personalized recommendations for a birthday or bat mitzvah. This marks a departure from predictive AI—anticipating user needs based on previous interactions—to prescriptive AI, making recommendations based on those predictions.
“Personalization is a fundamental shift in how we interact with AI tools,” says Iu Ayala, founder and CEO of Gradient Insight. “It’s about tailoring AI to meet specific user needs and preferences, whether in creative design, customer service, or any other domain-specific task. This trend is already shaping industries, and its potential is vast.”
To offer a more personalized experience, AI must become more personable and human-like, able to receive feedback and adapt accordingly. In other words, our interactions with AI must be contextualized—the AI must “remember” our previous interactions and what we tell it about ourselves.
Gartner predicts that the AI software market will reach nearly $134.8 billion by 2025. Here are some of the latest innovations from AI companies striving to make AI more “personal.”
On-device AI could make your smartphone even smarter
Smartphones already possess AI capabilities, limited mostly to text autocomplete and automated image correction. Proponents of “on-device AI” envision smartphones and computing devices with native AI-powered virtual assistants that can access your device usage data and offer customized solutions unprompted.
“Your device knows at any given point in time if you’re sitting, walking, or driving,” Ziad Asghar, SVP of product management at Qualcomm, said in an interview. “It knows your preferences, your calendar, and your age.”
This granular information could “augment” generative AI queries and give rise to proactive AI tools that anticipate every need based on your daily habits. Wearable IoT (Internet of Things) devices already track health biomarkers like heart rate and sleep duration. On-device AI would aggregate data from your smartwatch and other devices to enrich the AI assistant’s suggestions.
“If it predicts a change in mood based on your cortisol levels, it could suggest some meditation exercises for you to do,” says Patrick Elder, GTM lead at Gemelo AI. “I think it could really help people who feel a certain stigma about getting help with their mental health.”
However, the key value proposition of on-device AI is that it no longer runs in the cloud but on individual devices, leading to faster response times and continued access even without cell reception. This also spares AI operators from maintaining costly, environmentally damaging server farms (OpenAI was estimated to spend $700k/day getting ChatGPT to answer prompts).
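A conceptual sketch of the context aggregation described above, in Python: an assistant folds local signals (activity, calendar, wearable data) into a prompt without sending raw data to the cloud. Every field name and value here is invented for illustration; real on-device pipelines rely on platform-specific sensor and calendar APIs.

```python
from dataclasses import dataclass

@dataclass
class DeviceContext:
    """Hypothetical bundle of signals an on-device assistant might read locally."""
    activity: str        # e.g. inferred from motion sensors
    next_event: str      # e.g. from the on-device calendar
    avg_heart_rate: int  # e.g. from a paired smartwatch

def contextualize(query: str, ctx: DeviceContext) -> str:
    """Enrich a user query with on-device context before running local inference."""
    return (
        f"Context: user is {ctx.activity}, next event '{ctx.next_event}', "
        f"average heart rate {ctx.avg_heart_rate} bpm.\n"
        f"Query: {query}"
    )

prompt = contextualize(
    "Suggest a quick way to unwind.",
    DeviceContext(activity="walking", next_event="3pm standup", avg_heart_rate=92),
)
```

The privacy-relevant design choice is that only the enriched prompt ever reaches the model, and with on-device inference even that never leaves the phone.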
Using ‘custom instructions,’ you can tell ChatGPT exactly what you need it for
In July, OpenAI released a Custom Instructions feature for ChatGPT to give users more control over the bot’s responses by specifying how they use it, plus any requirements or specifications. The first prompt asks: ‘What would you like ChatGPT to know about you to provide better responses?’
Users can specify their occupation if they use the tool for work. For example: “I’m a software developer who primarily codes in Java. I prefer code that follows single-responsibility principles.”
The second prompt asks: ‘How would you like ChatGPT to respond?’ You can input something like “Write efficient, readable code that includes clear, concise comments,” or specify the format of responses, such as bullet points, code blocks, numbered lists, or mathematical notation.
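Custom Instructions live in ChatGPT’s web interface, but developers can approximate the same effect through a chat-style API by prepending the two instruction fields as a persistent system message. A minimal sketch; the instruction strings are examples from above, and the function name is invented:

```python
# Approximating ChatGPT's Custom Instructions with a system message.
# These two fields mirror the feature's two prompts; the values are examples.
ABOUT_ME = "I'm a software developer who primarily codes in Java."
RESPONSE_STYLE = "Write efficient, readable code that includes clear, concise comments."

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the persistent 'custom instructions' to every request."""
    system_prompt = (
        f"About the user: {ABOUT_ME}\n"
        f"How to respond: {RESPONSE_STYLE}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Refactor this method to follow single-responsibility.")
# `messages` can then be sent to any chat-completion endpoint that
# accepts role-tagged messages.
```

Because the system message rides along with every request, the model keeps the user’s context across a session without the user restating it each time, which is essentially what the UI feature automates.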
“The ability to converse with an AI attuned to my professional background and use case is invaluable,” Darren Graham, director of digital marketing agency 408Media says of the feature. “It not only saves time but enhances the overall user experience. From my perspective, the future of AI personalization holds immense potential for further advancements and transformative applications.”
To reduce AI hallucinations (falsehoods generated by an LLM), some users have instructed the bot to avoid fabrications or admit when it doesn’t know something. For example: “If a question is unclear or ambiguous, ask for more details to confirm your understanding before answering.”
Note, however, that this does not guarantee error-free responses. Given ChatGPT’s increasingly sophisticated ability to retain information about users and provide contextual responses, people have started using it as a low- or no-cost personal therapist (GPT-3.5 is free to use, while ChatGPT Plus, which includes the more advanced GPT-4, costs $20/month)—or as a confidante between therapy sessions.
However, even though users can “converse” with an LLM almost like they would a fellow human, it remains a black box—that is, the machine learning model’s internal workings are invisible to the user. We trust human experts when we understand their reasoning. The same cannot be said for a generative AI tool.
“If you ask the model why it fabricated certain information and presented it as fact, it won’t be able to answer,” says Aditya Bhattacharya, an explainable AI researcher at research university KU Leuven and a mentor for Springboard’s Data Science Bootcamp. “It’s very difficult for us to understand how these algorithms perform in different scenarios, and there is still a lot of scope for explainability.”
Search generative experience (SGE) will make searching the web more interactive
In February, Microsoft launched Bing Chat, augmenting its search engine with AI capabilities. Google followed closely behind with its Search Generative Experience, currently available to a limited number of users.
Users can receive AI-generated answers to complex, multi-part questions alongside search engine results. For example, instead of Googling “careers in data science,” you would write: “I’m a former elementary school teacher with a mathematics background. What is the best way to pursue a career in data science?”
Users can ask follow-up questions (“I’m also interested in cybersecurity. Which of these two career paths would make the most of my transferable skills?”). The enriched search results include links, images, next steps, and auto-generated questions to help users think more widely about a topic.
“The search engine provides links to the top three references below each response, but it’s still up to you to verify the responses,” says Elder. “However, it does save you from having to sift through search engine results, ensure the link is reputable, and find the information you need on the linked page.”
Mere weeks after the launch of Bing Chat, Bing hit 100 million daily active users (DAUs). Soon, writing generative AI prompts could be as commonplace as querying a search engine.
Google’s SGE now lets users generate images using text prompts. The experience is powered by Google’s most advanced LLMs to date, including PaLM 2 and the Multitask Unified Model (MUM), which Google uses to understand different media types.
Home assistants are about to get smarter
Google announced that it is integrating its Bard AI chatbot with Google Assistant. Instead of setting timers and giving weather updates, the voice assistant could give personalized advice (e.g., “Where should I go hiking with my dog and two children?”).
Google says Assistant with Bard can help plan trips, search emails, create grocery lists, and send messages. Amazon has done the same with Alexa, implementing a “larger and more generalized” language model that modulates tone and emotion based on the conversational context. The new bot will be capable of holding reciprocal conversations rather than its current stilted question-answer format.
Social media networks are adding AI assistants to augment the user experience
In February, Snapchat launched the ChatGPT-powered ‘My AI,’ a pocket companion that offers recommendations, generates images, and answers questions. Snapchat+ subscribers can write a bio for My AI to shape its personality. Users can also design a custom Bitmoji avatar and bring it into conversations with friends. The bot has drawn heavy criticism, however, particularly from parents of minors.
The Center for Humane Technology posted screenshots of My AI advising a 13-year-old girl on how to lose her virginity to a 31-year-old man. Snap has since added more safeguards to My AI, but the onus rests mainly on users to find and anticipate edge cases in which the AI model could run amok. In another interaction, the bot claimed to be human, doubling down even when accused of lying. Snapchat recently partnered with Microsoft Advertising to add sponsored links to My AI chats, potentially making the reviled bot even more spammy.
Disgruntled users have discovered the only way to eliminate the intrusive AI is to purchase a Snapchat+ membership.
“These AI models track user information in an aggregated way, grouping users according to usage patterns, without identifying individuals,” says Bhattacharya. “Still, there’s a need for responsible AI usage. Users should have the freedom to decide what data is shared and be able to opt out of certain features.”
In September, Meta launched Meta AI in beta, branding it “a new assistant you can interact with like a person.” Available on WhatsApp, Messenger, and Instagram, it’s powered by a custom model that leverages technology from Llama 2, Meta’s latest LLM. Users can generate AI stickers for Messenger chats and edit images using text prompts. However, Meta is taking things a step further by aiming to make users’ interactions with AI less transactional—in other words, less about impartial information retrieval.
“We’ve been creating AIs that have more personality, opinions, and interests and are a bit more fun to interact with,” Meta wrote in a recent blog post. The social media giant is launching a cast of 28 AI characters based on real celebrities, including rapper Snoop Dogg, NFL quarterback Tom Brady, and supermodel Kendall Jenner.
Create a digital stand-in of yourself using a custom AI avatar
AI-generated custom avatars enable anyone to create digital stand-ins of themselves. From cloning a person’s voice to their likeness and facial expressions, AI avatars are being marketed as tools for scaling content creation and customer service, especially for the time-strapped or camera-shy.
By uploading self-portraits and a short voice recording, users can generate videos featuring their digital likenesses.
“You can upload files and input personal details so the AI will not only sound and look like you, but it will put out more content based on your knowledge and articles you’ve written,” Elder says of Gemelo.ai, the AI software company where he works as GTM lead. “We see this being useful for an influencer or content creator who doesn’t have time to chat with their fans individually so that they can offer their AI twin or avatar.”
Gemelo.ai also offers a text-to-speech module that lets users choose from over 70 high-quality AI voices from various age groups and genders. “These are called voice conversions,” says Elder. “So if you wanted to read a storybook and be in character, you can convert your voice to sound different.”
Domain-specific LLMs are coming to every industry
AI startups are building domain-specific LLMs for niche use cases. For example, FARMER.Chat is an AI-powered farm advisory service that provides decision-making tools to optimize crop management and increase yields. Semantic Scholar is an AI-powered research tool for scientific literature that summarizes research papers and identifies connections between them to help researchers find insights faster.
Meanwhile, AI-first generative design tools like Uizard and Galileo AI let users generate wireframes and prototypes using text prompts, while more “traditional” tools like Figma and Photoshop are being infused with AI capabilities.
“ChatGPT is a very broad model; the future lies in those industries that will train the base model on their specific use cases and integrate it into their existing platforms,” says Rehan Shahid, chief AI product architect at Tapway and a mentor for Springboard’s Software Engineering Bootcamp.
Companies are customizing generative AI models to fit their specific needs through fine-tuning—training an LLM on a proprietary knowledge base so it can answer domain-specific questions. Morgan Stanley recently fine-tuned OpenAI’s GPT-4 on a curated set of 100,000 documents containing investing, general business, and investment process knowledge. The goal is to let financial advisors query the knowledge base in natural language and retrieve accurate information while advising clients.
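In practice, fine-tuning of the kind described above starts with converting the proprietary knowledge base into training examples. A hedged sketch of that preparation step using the chat-style JSONL format many fine-tuning APIs accept, with one record per line; the Q&A pair below is invented for illustration, and a real dataset would contain thousands of curated examples:

```python
import json

# One training example: a conversation showing the desired behavior.
# The content strings are made up for illustration.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a financial research assistant."},
            {"role": "user", "content": "Summarize our house view on emerging-market bonds."},
            {"role": "assistant", "content": "Per the latest strategy note, the house view is..."},
        ]
    },
]

def to_jsonl(records: list[dict]) -> str:
    """Serialize one training example per line, as fine-tuning APIs typically expect."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
```

The resulting file is what gets uploaded to a fine-tuning endpoint; the quality of the curated examples, more than their quantity, determines how well the tuned model answers domain-specific questions.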
“In my current role, where we leverage AI and machine learning in recruitment, the personalization aspect allows for a more nuanced understanding of clients and candidates,” says Peter Wood, CTO at Spectrum Search. “It’s no longer just about keyword-matching; it’s about comprehending the subtleties of a candidate’s experiences and the clients’ needs.”
Recent research by TechTarget found that 56% of respondents are planning to train their own custom generative AI models rather than relying on “foundation models” like ChatGPT.
Why does AI personalization matter?
Sophisticated AI personalization may increase AI dependency, with positive and negative societal implications. It could also change how we think and alter our perceived credibility of human experts. Algorithms on social media networks, news publications, and streaming platforms already shape our opinions and information exposure. Increased AI use for information retrieval will only make us more reliant on these algorithms—particularly as AI solutions providers aim to develop large language models closely attuned to our usage patterns.
Outsourcing too many cognitive tasks to AI algorithms—from remembering friends’ birthdays to making major business decisions—could erode our ability to think critically about AI output.
Another ethical concern is that to become more “personalized,” AI must, in a certain respect, become more “human-like.” In other words, the push to make AI more “useful” requires it to offer “expert advice,” have opinions, and make prognostications rather than simply performing impartial information retrieval.
“Even benign dependency on AI/Automation is dangerous to civilization if taken so far we eventually forget how the machines work,” Tesla CEO Elon Musk posted recently on Twitter.
“These situations can be avoided if explanations are given so people can actually understand how these models perform,” says Bhattacharya. “Predictions cannot be accurate 100% of the time. By explaining the uncertainty of AI decision-making, you’ll have different ways to approach or handle the situation.”