Beginner

Introduction to LLM Security

Large Language Models have transformed how we build software, but they have also introduced an entirely new class of security vulnerabilities that traditional application security cannot address.

The LLM Security Landscape

LLM-powered applications process natural language as both data and instructions simultaneously. This fundamental property creates security challenges that have no direct parallel in traditional software engineering. Every text input is a potential attack vector, and every output may leak sensitive information.

A New Paradigm: Traditional web application firewalls, input validation, and parameterized queries do not apply to LLM security. The boundary between data and code has been completely dissolved in natural language processing.

OWASP LLM Top 10

The OWASP Top 10 for Large Language Model Applications identifies the most critical security risks:

Rank Vulnerability Description
LLM01 Prompt Injection Manipulating LLM behavior through crafted inputs
LLM02 Insecure Output Handling Failing to validate or sanitize LLM outputs
LLM03 Training Data Poisoning Manipulating training data to introduce vulnerabilities
LLM04 Model Denial of Service Resource exhaustion attacks against LLM services
LLM05 Supply Chain Vulnerabilities Risks from third-party models, data, and plugins
LLM06 Sensitive Information Disclosure LLMs revealing confidential or private data
LLM07 Insecure Plugin Design Vulnerabilities in LLM tool/plugin integrations
LLM08 Excessive Agency LLMs with too much autonomy or permission scope
LLM09 Overreliance Trusting LLM outputs without verification
LLM10 Model Theft Unauthorized access to or extraction of proprietary models

Why LLM Security Requires New Approaches

  • Natural language as code: There is no syntax to parse or sanitize — every valid sentence is a potential command
  • Probabilistic behavior: The same input can produce different outputs, making deterministic security testing impossible
  • Context window attacks: Adversaries can hide instructions in documents, web pages, or emails that the LLM processes
  • Tool-augmented risk: LLMs connected to tools (code execution, web browsing, APIs) can cause real-world harm through manipulated actions
  • Emergent capabilities: Models may exhibit unexpected behaviors that were not anticipated during safety evaluation

Who This Course Is For

Application Developers

Building LLM-powered features and need to understand the security implications of every design decision.

Security Engineers

Responsible for securing AI applications and need LLM-specific knowledge beyond traditional AppSec.

Platform Teams

Managing shared LLM infrastructure and need to implement organization-wide security controls.

AI/ML Engineers

Training and deploying models with security built in from the start rather than bolted on afterward.

💡
Looking Ahead: In the next lesson, we will map the complete attack surface of LLM applications — from inputs and outputs to training data, model weights, tool integrations, and infrastructure.