Part of the AI Red Teamer path, the Attacking AI – Application and System module examines security vulnerabilities across two critical layers of AI deployments: the application and the system.
You’ll learn how flaws in these components can compromise the overall security of an AI environment. The module then shifts focus to the Model Context Protocol (MCP): its purpose, how it functions, and the risks it introduces when it is misconfigured or exploited by malicious servers.
You’ll gain the skills to:
  • Detect vulnerabilities in AI application and system components
  • Explain how MCP operates and where security weaknesses can arise
  • Assess the risks of MCP servers, including threats from malicious instances
  • Apply offensive testing techniques to evaluate and strengthen AI deployments