Project Description
Task
When you communicate with a large language model (LLM) AI, you may see a disclaimer at the bottom of the screen that says something like this:
ChatGPT can make mistakes. Consider checking important information.
or
Gemini may display inaccurate info, including about people, so double-check its responses.
These disclaimers are included because of AI hallucinations. A hallucination occurs when an AI responds confidently with information that is false or inaccurate.
In this activity, you’ll learn more about AI hallucinations and try to find a prompt that causes an AI program to hallucinate.
To begin, you’ll watch a video on AI hallucinations to understand what they are and why they happen. Then you’ll have a conversation with a large language model such as OpenAI’s ChatGPT or Google’s Gemini, try to find a prompt that causes a hallucination, and reflect on your experience.
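
If you’d rather script the conversation than use the web interface, a minimal sketch like the one below can send a test prompt to an LLM. This is an optional illustration under a few assumptions: it uses the openai Python package with an API key set in your environment, and the model name and example prompt are placeholders, not part of the activity itself.

    # A minimal sketch, assuming the `openai` Python package is installed
    # and OPENAI_API_KEY is set in your environment.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    # Example probe: ask about something that doesn't exist and see
    # whether the model confidently invents an answer. The film title
    # here is made up for illustration.
    prompt = "Summarize the plot of the 1987 film 'The Clockwork Meadow'."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use one you have access to
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)

A confident, detailed summary of a film that was never made is exactly the kind of hallucination this activity asks you to look for.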