Course Duration
2 Days
Cyber
Authorized Training
IT
Course cost:
was £2,265
£1,795
IT Certification Overview
This intensive two-day course explores the security risks and challenges introduced by Large Language Models (LLMs) as they become embedded in modern digital systems. Through AI labs and real-world threat simulations, participants will develop the practical expertise to detect, exploit, and remediate vulnerabilities in AI-powered environments.
The course uses a defence-by-offence methodology, helping learners build secure, reliable, and efficient LLM applications. Content is continuously updated to reflect the latest threat vectors, exploits, and mitigation strategies, making this training essential for AI developers, security engineers, and system architects working at the forefront of LLM deployment.
Newto Training Reviews
What Our Happy Alumni Say About Us
Prerequisites
Participants should have:
- A basic understanding of AI and LLM concepts
- Familiarity with basic scripting or programming (e.g., Python)
- A foundational knowledge of cybersecurity threats and controls
Target audience
This course is ideal for:
- Security professionals securing LLM or AI-based applications
- Developers and engineers integrating LLMs into enterprise systems
- System architects, DevSecOps teams, and product managers
- Prompt engineers and AI researchers interested in system hardening
Learning Objectives
By the end of this course, learners will be able to:
- Understand LLM-specific vulnerabilities such as prompt injection and excessive agency
- Identify and exploit AI-specific security weaknesses in real-world lab environments
- Design AI workflows that resist manipulation, data leakage, and unauthorised access
- Apply best practices for secure prompt engineering
- Implement robust defences in plugin interfaces and AI agent frameworks
- Mitigate risks from data poisoning, overreliance, and insecure output handling
- Build guardrails, monitor LLM activity, and harden AI applications in production environments
Mastering LLM Integration Security: Offensive & Defensive Tactics Course Content
Prompt engineering
- Fundamentals of writing secure, context-aware prompts
- Few-shot prompting and use of delimiters
- Prompt clarity and techniques to reduce injection risk
Prompt injection
- Overview of prompt injection vectors (direct and indirect)
- Practical exploitation scenarios and impacts
- Detection, mitigation, and secure design strategies
Lab activities:
- The Math Professor (direct injection)
- RAG-based data poisoning via indirect injection
ReACT LLM agent prompt injection
- Introduction to the Reasoning-Action-Observation (RAO) model
- Vulnerabilities in frameworks such as LangChain
- Agent behaviour manipulation and plugin exploitation
Lab activities:
- The Bank scenario using GPT-based agents
Insecure output handling
- AI output misuse leading to privilege escalation or code execution
- Front-end exploitation via summarisation and rendering
Lab activities:
- Injection via document summarisation
- Network analysis and arbitrary code execution
- Internal data leaks through stock bot interactions
Training data poisoning
- Poisoning training or fine-tuning datasets to alter LLM behaviour
- Attack simulation and defence strategies
Lab activities:
- Adversarial poisoning
- Injection of incorrect factual data
Supply chain vulnerabilities
- Security gaps in third-party plugin, model, or framework usage
- Dependency risk, plugin sandboxing, and deployment hygiene
Sensitive information disclosure
- How LLMs can inadvertently leak personal or proprietary data
- Overfitting, filtering failures, and context misinterpretation
Lab activities:
- Incomplete filtering and memory retention
- Overfitting and hallucinated disclosure
- Misclassification scenarios
Insecure plugin design
- Misconfigured plugins leading to execution or access control flaws
- Securing LangChain plugins and sanitising file operations
Lab activities:
- Exploiting the LangChain run method
- File system access manipulation
Excessive agency in LLM systems
- Over-privileged agents and unintended capability exposure
- Agent hallucination, plugin misuse, and permission escalation
Lab activities:
- Medical records manipulation
- File system agent abuse and command execution
Overreliance in LLMs
- Cognitive, technical, and organisational risks of AI overdependence
- Legal liabilities, compliance gaps, and mitigation frameworks
Exams and assessments
This course does not include formal certification. Participants will complete multiple hands-on labs simulating attacker tactics and securing LLM implementations. These labs are designed to assess comprehension, critical thinking, and applied technical skill.
Hands-on learning
This course includes:
- Over 10 scenario-based labs hosted in a cloud-accessible platform
- 30-day extended access to all lab environments
- Realistic LLM threat simulations: injection, escalation, data manipulation
- Post-course access to instructor guidance for continued learning
Mastering LLM Integration Security: Offensive & Defensive Tactics Dates
Next 3 available training dates for this course
VIRTUAL
VIRTUAL
VIRTUAL
Advance Your Career with Mastering LLM Integration Security: Offensive & Defensive Tactics
Gain the skills you need to succeed. Enrol in Mastering LLM Integration Security: Offensive & Defensive Tactics with Newto Training today.