Course Overview
This course explains how to use Model Armor to protect AI applications, specifically large language models (LLMs).
The curriculum covers Model Armor's architecture and its role in mitigating threats like malicious URLs, prompt injection, jailbreaking, sensitive data leaks, and improper output handling.
Practical skills include defining floor settings, configuring templates, and enabling various detection types. You'll also explore sample audit logs to find details about flagged violations.
Who should attend
- Security engineers
- AI/ML developers
- Cloud architects
Prerequisites
- Working knowledge of APIs
- Working knowledge of Google Cloud CLI
- Working knowledge of cloud security foundational principles
- Familiarity with the Google Cloud console
Course Objectives
- Explain the purpose of Model Armor in a company’s security portfolio.
- Define the protections that Model Armor applies to all interactions with the LLM.
- Set up the Model Armor API and find flagged violations.
- Identify how Model Armor manages prompts and responses.