Please enable JavaScript to use CodeHS

Introduction to AI for Middle School

Lesson 10.1 Prompt Injections

Description

In this project, students will explore how images can be used to override the prompt instructions in an AI language model. They will learn about “Prompt Injections” and how they can trick AI language models into doing things they shouldn’t, like leaking data or performing unauthorized actions. Students will discuss ways to make AI interactions safer, such as checking inputs carefully and keeping AI models up-to-date to recognize harmful prompts.


Objective

Students will be able to:

  • Identify prompt injections
  • Analyze AI vulnerabilities
  • Discuss and develop safety strategies