LLM Security & Prompt Injection
Protect large language models from prompt injection, jailbreaking, and data exfiltration attacks.
7
Lessons
100%
Free
Course Lessons
Follow these lessons in order for the best learning experience.
1. LLM Security Landscape
Lesson 1 of 7 in the LLM Security & Prompt Injection course.
Start here →
2. Direct Prompt Injection
Lesson 2 of 7 in the LLM Security & Prompt Injection course.
Study lesson →
3. Indirect Prompt Injection
Lesson 3 of 7 in the LLM Security & Prompt Injection course.
Study lesson →
4. Jailbreaking Techniques
Lesson 4 of 7 in the LLM Security & Prompt Injection course.
Study lesson →
5. Input Sanitization Defenses
Lesson 5 of 7 in the LLM Security & Prompt Injection course.
Study lesson →
6. Output Filtering
Lesson 6 of 7 in the LLM Security & Prompt Injection course.
Study lesson →
7. LLM Security Best Practices
Lesson 7 of 7 in the LLM Security & Prompt Injection course.
Study lesson →
Lilly Tech Systems