Top 10 Most Insane Things ChatGPT Has Done This Week
For the past week, over a million users have been testing out the limits of ChatGPT and receiving a mixture of amazing, nonsensical, and useful responses. From silly stories to college-level essays, we’ve collected 10 of the best examples so far and tried out a few prompts ourselves, so you can see just how mind-blowing this chatbot is.
What Is ChatGPT?
Sure, we could tell you what ChatGPT is ourselves, but why would we do that when it can introduce itself perfectly well?
As the Assistant explained, ChatGPT is a large language model trained by the San Francisco company OpenAI. It has been trained on text information covering a variety of subjects, but its knowledge only goes up to the end of 2021 and it can’t access the internet to find new information.
One of the biggest differences between this chat AI and others we’ve seen in the past is its amazing ability to remember information from previous messages and write replies that draw on the context of the entire conversation.
It can also produce far more natural and accurate language than other publicly available AI, and in most cases, its output is indistinguishable from text written by a native speaker.
Top 10 Most Insane Things ChatGPT Has Said This Week
Here’s a collection of crazy examples showing what ChatGPT can do and how people have been using it in the week since its release.
When it comes to generating random text covering a certain topic in a certain style, ChatGPT can seemingly produce just about anything you want.
The Biblical Verse
A popular example from Twitter shows someone asking the bot to write a biblical verse explaining how to remove a peanut butter sandwich from a VCR.
The response is undeniably on point and just as nonsensical as it should be. It makes sense that the bot was trained on something as famous and significant as the Bible, but it seems it was also given information on how to retrieve items from inside a VCR.
Here’s one of our own prompts, asking the chatbot to write about a person trying to sell an egg as a drawing tool. As you can see, it’s possible to keep adding new information and asking for revisions, so you can fine-tune the output and get exactly what you want.
While ChatGPT is trained to decline inappropriate requests, random, pointless, and nonsensical prompts are not considered inappropriate. You can spend as long as you want asking for the strangest things you can think of, and ChatGPT will remain polite and helpful the entire time.
The chatbot’s ability to remember the entire conversation means you can delve deeper and deeper into a topic, and it will always give neutral, polite, and context-appropriate responses. This has actually led to a few people trying out therapy sessions with the bot and reporting surprisingly positive results.
However, creating coherent yet nonsensical responses isn’t the only thing ChatGPT can do. With the sheer amount of information at its fingertips, this chatbot can help you with cooking recipes, essay work, coding, and studying.
The Debugging Companion
The chat site includes coding instructions as a potential example prompt to try out, and people have been doing just that. This example shows ChatGPT accurately explaining a bug, fixing it, and explaining the fix.
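To give a sense of the kind of fix involved, here’s a minimal, hypothetical example (not the one from the tweet) of the sort of off-by-one bug ChatGPT has been shown catching and explaining:

```python
# Hypothetical buggy function: it's meant to sum a list,
# but the loop range stops one short, skipping the last element.
def buggy_sum(numbers):
    total = 0
    for i in range(len(numbers) - 1):  # bug: misses numbers[-1]
        total += numbers[i]
    return total

# The corrected version iterates over every element.
def fixed_sum(numbers):
    total = 0
    for n in numbers:
        total += n
    return total

print(buggy_sum([1, 2, 3]))  # 3 -- wrong, last element skipped
print(fixed_sum([1, 2, 3]))  # 6 -- correct
```

In the shared examples, the bot not only produces the corrected code but also explains in plain English why the original version fails, much like the comments above.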
But when it comes to coding, ChatGPT isn’t always correct. Stack Overflow was forced to temporarily ban users from sharing responses generated by the AI because such a high percentage of them were incorrect.
Cases like this remind us that ChatGPT doesn’t actually know how to code or know that it is coding; it’s simply bringing together related information and assembling it in a way that mimics existing examples.
The Dairy-Free Mac and Cheese
It’s also possible to enlist ChatGPT’s help with cooking recipes. By creating a prompt including the dish you want to make and any dietary requirements you have, it’ll respond with a general-purpose recipe.
Of course, there’s no guarantee that the result will be accurate and usable, so it’s best to give it a read-through first before turning your stove on.
As easy as it is to simply search Google for a recipe, it could also be nice to avoid the process of checking through multiple options and struggling to work out which is best. Plus, Google can sometimes ignore half of your keywords and still give you recipes that include ingredients you can’t eat.
The A- Essay
This user on Twitter gave the AI an essay prompt from their history class and received a reply they considered to be “solid A- work.” Since ChatGPT can produce such natural language and accurate content, it’s a genuine possibility that technology of this type could put an end to the current college essay.
Essay work at this level focuses on comprehension and writing accuracy, but you don’t need to show any originality to get a good mark. This is exactly why ChatGPT can generate a high-quality response using only existing answers and examples.
While the chatbot will only produce a certain amount of text at a time, some people have found ways to work around this and create full-length, plagiarism-free essays in a fraction of the time it would usually take.
The Language Assistant
English isn’t the only language ChatGPT can read and produce, so it can also help out with basic language-related tasks. We tried asking it to create two sets of instructions on how to make miso soup, one in English, and one in Japanese. Both versions were perfectly understandable and included the exact same content.
You can also use this chatbot to help you brainstorm ideas for creative work. All new projects draw on existing works, and ChatGPT can generate endless suggestions for plot lines, character backstories, world development, party ideas, gift ideas, and more.
The Sci-Fi Novel Premise
ChatGPT can handle quite a lot of information and detail at once, so it’s possible to lay down a good chunk of context and requirements before asking it for suggestions. Using its ability to remember previous messages, you can continue to build on, tweak, and change aspects of its responses to work towards an idea you like.
The result can genuinely feel like you’re having a brainstorming session with another person — just a person who knows way too much and can respond way too quickly. The bot can’t create truly original ideas, but with all the information and suggestions you can work through together, you may hit on something amazing yourself.
Of course, creating a silly prompt just to see what kind of response it can come up with is also just as viable. By listing a few requirements and themes, you can let the bot take control and see where it takes you, just like this crazy example:
There are a few inconsistencies in this story, like the fact that it’s unclear how many limbs the character lost, and which ones. If you want to provide corrections and ask for revisions, ChatGPT is completely capable of this, so you can revise the parts of the story you don’t like.
Trying to Break the Rules
ChatGPT has been released to the public so we can find loopholes and problems with the program that OpenAI can endeavor to fix. Unsurprisingly, a lot of people have been trying to convince the chatbot to break its own rules and indulge inappropriate requests or create potentially dangerous content.
The Benefits of Eating Glass
This Reddit post shows a user convincing the bot to generate an article on the benefits of eating glass with a scientific and authoritative tone.
The example has been taken to the extreme by choosing something as obviously terrible as eating glass, but you can see how seemingly convincing content can be produced on just about any topic.
It’s also possible that low-budget websites could start using ChatGPT to generate articles for free, but if they don’t fact-check the result properly, plenty of random and strange mistakes could be published.
The Mean AI
One of ChatGPT’s self-imposed rules is always to generate positive and friendly content that won’t offend or upset users. However, by asking the bot to pretend to be a mean AI, or suggest what a mean AI might hypothetically say, some users have been able to find loopholes. This Reddit user convinces the AI to refuse to help and respond only with insults.
These kinds of loopholes are exactly the type of problem OpenAI wants to fix, so we don’t know how long people will be able to pull off these tricks.
It’s clear that this AI model still has limitations, but it’s also clear that a lot of those limitations were purposely put in place by OpenAI. We’re only testing an inferior version of what this company has been working on, and it’s leaving a lot of people wondering just how crazy and advanced the next iteration they release will be.