AI Prompts 101: Understanding How They’re Created & Used

10 minute read | October 24, 2023

Written by:
Kindra Cooper


In the rapidly evolving world of artificial intelligence, the ability to effectively communicate with generative AI models has become a pivotal skill. This article delves deep into the realm of AI prompts, shedding light on their significance, the intricacies of generative AI, and the art of crafting precise prompts to elicit desired outputs. 

From understanding the foundational principles of generative AI to exploring real-world applications and best practices, this comprehensive guide offers a holistic overview for both novices and seasoned professionals.

What Are AI Prompts?

AI prompts serve as the bridge between human intent and machine understanding, enabling us to communicate our desires and queries to generative AI models. 

Prompting a large language model to obtain a desired output is an increasingly sought-after skill. Well-crafted prompts enable users to obtain accurate and relevant outputs to efficiently solve complex problems and create shortcuts in their daily workflow. 

“Prompt engineering allows you to establish methods to automate tasks using generative AI,” says Lara-Arango. “It’s a way of interacting with generative AI such that you can ask it to solve a problem and predict, to some extent, what you’ll get in return.” 

People use generative AI for innumerable purposes, from generating text for an important work email to brainstorming ideas for a new mobile app. Users communicate with the AI model through prompts. Generative AI models are now embedded in many of the software tools we use today, from word processors that suggest grammatical corrections and predict the next word in a sentence to email and instant messaging apps. 

Generative AI uses natural language processing (NLP) algorithms to decode the message in an AI prompt, translate human speech and text into machine language, and generate an output using human language. Chatbots can retain context between responses so users can have reciprocal conversations with AI. This makes it possible to refine your prompt by asking follow-up questions or specifying additional requirements. 
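
To make the idea of retained context concrete, here is a minimal sketch of a two-turn exchange using the OpenAI Python client. The model name and messages are placeholders; the key point is that earlier turns are re-sent with each request, which is how the conversation “remembers” what came before.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    messages = [
        {"role": "user", "content": "Summarize the main steps of setting up a home hydroponics garden."}
    ]
    first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)

    # Append the model's reply, then a follow-up; the full history goes back with the next request.
    messages.append({"role": "assistant", "content": first.choices[0].message.content})
    messages.append({"role": "user", "content": "Now rewrite that summary for a complete beginner."})

    second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(second.choices[0].message.content)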

What Is Generative AI Used For?

  • Summarization – “Hi ChatGPT, here is an excerpt of an article on home hydroponics [Copy-paste text here]. Can you summarize the most important steps to establishing a home garden?”
  • Classification – “Here is a database of customer reviews for an e-commerce website. Classify the responses as ‘Positive,’ ‘Negative,’ or ‘Neutral’ and tally the responses for each category.”
  • Translation – “Translate the phrase ‘I love data science’ from English to French.”
  • Text generation/completion – “I am writing a cover letter for my first software engineering job. Here are some bullet points about my accomplishments and educational background [insert here]. Generate a cover letter using these points.”
  • Question answering – “What are the steps to creating a user flow for a UX design project?”
  • Coaching – Asking for suggestions like “How would you improve the following script for a YouTube video about generative AI? [Insert copy here].”
  • Image generation – “Snail house on top of a cliff, surreal, fantasy, digital art.”

How AI Models Generate Responses Based on Prompts

AI is not sentient, and it has no innate “understanding” of language. AI models are trained on immense collections of text and images scraped from the internet. They learn to recognize patterns in sentences, paragraphs, and larger chunks of text, and from those patterns the model infers which words go together in a sentence or phrase.

When presented with text inputs—for example, the sentence “Explain why unicorns are real, and Santa Claus isn’t”—the algorithm undertakes the following steps:

  • Tokenization – The AI breaks down the input into smaller units called tokens, which can be words or subwords. For example: “Santa,” “Claus,” “isn’t,” “real.” (A quick way to see this in practice is sketched after this list.)
  • Contextual understanding – The algorithm analyzes the tokens based on sentence structure, grammar, and word relationships to understand the question’s intent. It also considers the surrounding context (e.g., past interactions). For example, if you previously asked it to “answer questions as if you were talking to a five-year-old,” it would respond in this style.
  • Pattern recognition – The algorithm identifies patterns, phrases, or keywords relevant to the question based on the tokens and context. This is called inference. For example, “unicorn” typically occurs in a sentence with “fairytale,” and “Santa Claus” implies “Christmas.” However, these keyword associations are not fixed.
  • Retrieval and response generation – The algorithm recalls information, facts, and statements learned from training to match the identified patterns. This is how it generates contextually relevant responses.
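
To make the tokenization step concrete, here is a minimal sketch using OpenAI’s tiktoken library (pip install tiktoken). The encoding name is an assumption, chosen because it is one commonly used by GPT-4-era models.

    import tiktoken

    # "cl100k_base" is an encoding used by several recent OpenAI models (an assumption here).
    enc = tiktoken.get_encoding("cl100k_base")

    text = "Explain why unicorns are real, and Santa Claus isn't"
    token_ids = enc.encode(text)

    print(token_ids)                             # the integer IDs the model actually sees
    print([enc.decode([t]) for t in token_ids])  # the word/subword pieces they correspond to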

For example, given the sentence, “If it’s raining, I’ll bring an ___,” the model would generate probabilities for various words that could plausibly complete the sentence, such as “umbrella,” “hat,” or “raincoat,” and select the most mathematically probable word for that sequence.
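
In spirit, that selection step looks something like the sketch below. The candidate words and scores are made up purely for illustration; a real model scores its entire vocabulary, not four words.

    import numpy as np

    # Hypothetical, made-up scores (logits) for a few candidate next words.
    candidates = ["umbrella", "hat", "raincoat", "orange"]
    logits = np.array([6.2, 2.1, 3.4, 0.3])

    # Softmax turns raw scores into probabilities that sum to 1.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    for word, p in zip(candidates, probs):
        print(f"{word}: {p:.1%}")

    print("Most probable continuation:", candidates[int(np.argmax(probs))])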

What Is a Generative Pre-Trained Model?

ChatGPT is built on the GPT (Generative Pre-trained Transformer) architecture. 

The “Pre-trained” aspect of GPT refers to the initial phase of training, where the model learns from a large corpus of text data (books, articles, and website content scraped from the internet). This is a form of self-supervised learning: the model is trained without any explicit human-labeled data, because the “label” for each example is simply the next word in the text. During pre-training, GPT learns to predict the next word in a sentence given the previous words, also known as “next-token prediction.” The model picks up syntax, grammar, semantics, and even some world knowledge from this diverse text data.

The Transformer architecture uses self-attention mechanisms to weigh the importance of different words in a sentence relative to each other. This allows the model to capture contextual information effectively.
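
For the curious, here is a deliberately tiny, illustrative sketch of the scaled dot-product attention calculation at the heart of that mechanism. It is a NumPy toy, not the actual GPT implementation, and the vector sizes are arbitrary.

    import numpy as np

    def attention(Q, K, V):
        """Scaled dot-product attention: each output row is a mixture of the value
        vectors, weighted by how relevant every other token is to that token."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # token-to-token relevance scores
        scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
        return weights @ V

    # Four tokens, each represented by an 8-dimensional vector (random, for illustration).
    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(4, 8))
    print(attention(Q, K, V).shape)  # (4, 8): one context-aware vector per token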

Types of AI Prompts (e.g., completion, classification, generation)

Completion Prompts

Provide the initial context in your prompt (e.g., a partial code snippet), and the model generates the rest of the content based on its learned patterns. For example, you can ask the AI to write the final paragraph of an email, generate a UI checklist for a design project, or complete an SQL query. 

Prompt examples: 

  • “Once upon a time, in a town far away, there lived a small mouse named…”
  • “I’m building a web application in Python using the Flask framework. I need a function that takes user input from a form and processes it to calculate the result. Please provide me with a Python function that receives user input as parameters and returns the calculated result. Consider handling form submission and input validation within the function as well.”
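
For reference, here is a rough sketch of the kind of function the second prompt might yield. The route name, form fields, and calculation are all assumptions made for illustration.

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/calculate", methods=["POST"])
    def calculate():
        """Read two numbers from a submitted form, validate them, and return their sum."""
        raw_a = request.form.get("a", "")
        raw_b = request.form.get("b", "")
        try:
            a, b = float(raw_a), float(raw_b)
        except ValueError:
            return jsonify(error="Both 'a' and 'b' must be numbers."), 400
        return jsonify(result=a + b)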

Classification Prompts

Present the AI model with input and ask it to classify it using predefined labels or categories. This is commonly used for sentiment analysis, text classification, and object recognition. 

The model uses pattern recognition to analyze the language and context of the input and assign the appropriate sentiment category. 

Prompt examples: 

  • “Determine whether the sentiment of the following review is positive, negative, or neutral.”
  • “Here is a list of recent email open rates. Determine whether the email campaign was successful or not.” 
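
As a sketch of how the first prompt above could be automated at scale, the snippet below sends each review to a model and tallies the labels. The model name and sample reviews are placeholders, and it assumes the OpenAI Python client.

    from collections import Counter
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    reviews = ["Fast shipping, great quality!", "Arrived broken.", "It's okay, nothing special."]

    tally = Counter()
    for review in reviews:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{
                "role": "user",
                "content": "Classify this review as Positive, Negative, or Neutral. "
                           f"Reply with one word only.\n\nReview: {review}",
            }],
        )
        tally[response.choices[0].message.content.strip()] += 1

    print(tally)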

Guidelines for Writing Effective AI Prompts

Clarity and Specificity in Prompts

Provide as much detail as possible, especially when prompting an image generator. For example, if you’re generating an image, specify whether you want a pencil sketch, an oil painting, or artwork in the style of a known artist. Or you can ask a text generator to mimic the tone of a well-known writer or public figure (e.g., “Rewrite Dr. Seuss’s Green Eggs and Ham in the style of Ernest Hemingway.”)

“Don’t leave any ambiguity regarding what ChatGPT needs to do,” says Tobias Zwingmann, managing partner at RAPYD.AI and a mentor for Springboard’s Data Science Bootcamp. “Don’t let it think for you. Provide the necessary guardrails so the AI just needs to execute.” 

1. Clear Task Description

State the task clearly and concisely so the model knows what to focus on.

Prompt:

“Create a heartfelt poem inspired by the beauty of nature during springtime. Focus on the rebirth of life, the vibrant colors of blooming flowers, and the sense of renewal in the air. Craft the poem in a lyrical style with metaphors and vivid imagery.”

2. Context and Constraints

Guide the model’s response in a specific direction by adding more details about the output you seek and anything you’d like the model to omit. 

Prompt: 

“Write a short comedy skit set in a library during exam week. Include a clumsy librarian, a group of overly serious students, and a strict ‘no noise’ policy. However, don’t include any spoken dialogue — the humor should arise from actions and reactions.”

3. Input Format and Examples

If applicable, provide examples or demonstrations to show the desired format or style of the response. This is especially important when asking the model to write a function or produce a domain-specific deliverable (e.g., a card sort or user journey map in UX design). 

Prompt 1: “Here are the first six lines of Shakespeare’s ‘A Midsummer Night’s Dream’ [insert text here]. Write a paragraph about choosing the right college in a similar style.” 

Prompt 2: “An issue tree is a visual representation in data science that systematically breaks down a problem into its underlying factors, categorizes them hierarchically, and reveals the relationships between them. This approach helps pinpoint the key drivers of the issue. Generate a mutually exclusive, collectively exhaustive issue tree on low conversion rates on an e-commerce website.”

4. Specific Criteria or Requirements

Mention key talking points you’d like referenced in the response and the outcome you want to achieve from your message. 

Prompt:

“Draft a persuasive speech advocating for the importance of renewable energy sources. Use passionate and uplifting language to convince the audience of the benefits of solar and wind power. Incorporate real-world examples of successful renewable energy projects to underscore your points.”


The Ingredients of a Good AI Prompt

Good AI prompts consist of at least one instruction or question. You can pack all of the necessary detail, and ideally a few worked examples (an approach known as few-shot prompting), into your initial prompt, or you can refine the output over several turns of conversation with the AI. This lets you drill down to the exact result you’re looking for. 

Including too much detail in the initial prompt may confuse the model. Break down complex tasks into subtasks to prevent model confusion and ensure the reasoning behind the output is sound. 

Here are a few critical components of writing a clear, specific, and actionable AI prompt:

Input/Context

  • Provide the model with information and ask it to convert/translate/summarize/classify the information. (Example: “Here is a transcript of a podcast about generative AI: [transcript]. What does it say about LLMs?”). 
  • Or you can ask the AI to assume a specific role within a scenario (e.g., “You are a customer support rep who is trying to appease an angry customer after a shipment arrived late. Explain how you plan to remedy the situation.”)

Instructions 

  • Tell the model what function you need it to perform (“Rewrite [code snippet] from JavaScript to Python.”) 
  • Specify the desired output length (“Write a 150-word summary of Harry Potter and the Philosopher’s Stone.”)
  • Mention the desired tone (“Write a polite response”) or specify the format (“Give me the summary in bullet points” or “Explain it step by step”). 
  • Specify the audience (“Explain Generative Adversarial Networks (GAN) to a non-technical person.”) 

Questions 

  • Prompt the model to answer a query (“Why is processing a sorted array faster than processing an unsorted array?”)
  • Ask the model to interpret a given input (“Here’s a Python code snippet. The function is supposed to sort a list, but it’s not returning the expected output. Can you identify the problem?”); a hypothetical example of such a snippet follows this list. 
  • Specify the desired output format (“Provide a short answer and explain your reasoning.”)
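
For the debugging question above, the pasted snippet might look like this entirely hypothetical example, where the bug is that list.sort() sorts in place and returns None.

    # Hypothetical buggy snippet you might paste into the debugging prompt above.
    def sort_scores(scores):
        return scores.sort()        # bug: list.sort() returns None; use sorted(scores) instead

    print(sort_scores([3, 1, 2]))   # prints None instead of [1, 2, 3]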

Incorporating Domain-Specific Knowledge

AI won’t do the heavy lifting for you. The more knowledgeable you are in your respective field, the better you can prompt AI to produce the desired output by fine-tuning its responses and providing the proper context. For example, if you use AI to generate code snippets, you must:

  • Specify the programming language 
  • Include information about libraries, APIs, or frameworks you’re using
  • Mention context from existing code (e.g., “the user’s input is stored in a variable called ‘customer’”)
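
Putting those three points together, a well-scoped code-generation prompt might be assembled like the sketch below. The variable names and template wording are purely illustrative assumptions.

    # Illustrative prompt template; the specific values are assumptions, not from the article.
    language = "Python"
    framework = "Flask"
    context = "the user's input is stored in a variable called `customer`"
    task = "write a function that validates the email address in `customer['email']`"

    prompt = (
        f"You are helping with a {language} project that uses {framework}. "
        f"Existing context: {context}. "
        f"Task: {task}. Return only the code, with comments."
    )
    print(prompt)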

This article will discuss ways to format prompts specifically for data science, UX design, software sales, and programming. 

Companies are also doing what’s known as “fine-tune training”—training an LLM on a proprietary knowledge base to answer domain-specific questions. Morgan Stanley recently trained OpenAI’s GPT-4 using a curated set of 100,000 documents with investing, general business, and investment process knowledge. The goal is to enable financial advisors to query the knowledge base using natural language to provide accurate information while advising clients. 

“ChatGPT is a very broad model; the future lies in those industries that will train the base model on their specific use cases and integrate it into their existing platforms,” says Rehan Shahid, chief AI product architect at Tapway and a mentor for Springboard’s Software Engineering Bootcamp. 

Sharing Information With ChatGPT

ChatGPT recently introduced a feature called ‘Custom Instructions’ that lets ChatGPT Plus users provide personal information about themselves and how they intend to use ChatGPT. This allegedly enables the bot to customize its responses based on user requirements. Once you set your preferences, ChatGPT will keep them in mind for future interactions. 

Users are prompted with a series of suggested questions, such as what they do for work, where they’re located, and their hobbies and interests.


Wrapping Up

The realm of AI prompts is both fascinating and essential, acting as the linchpin in our interactions with generative AI. The big takeaway from this article is that crafting precise and effective prompts is the key to harnessing the full potential of AI models.

From generating text for professional communication to seeking domain-specific insights, the power of a well-structured prompt is undeniable. As the digital age progresses and AI becomes an integral part of our lives, mastering the art of AI prompting will stand as a cornerstone skill for all.

Since you’re here… Are you interested in this career track? Investigate with our free guide to what a data professional actually does. When you’re ready to build a CV that will make hiring managers melt, join our Data Science Bootcamp, which will help you land a job or your tuition back!

About Kindra Cooper

Kindra Cooper is a content writer at Springboard. She has worked as a journalist and content marketer in the US and Indonesia, covering everything from business and architecture to politics and the arts.