Changelog
Follow up on the latest improvements and updates.
RSS
new
Academy
Enterprise
Defensive
Nine LetsDefend courses added into HTB Academy
HTB Academy’s defensive portfolio just got deeper, sharper, and more job-aligned with the addition of nine LetsDefend courses.
These new modules strengthen critical defensive capabilities across the workflows defenders rely on every day, including PKI, malware analysis, threat frameworks, network traffic analysis, DFIR, and threat hunting across SIEM, DNS, and IPS/IDS environments.
Courses included in this release are:
- Public Key Infrastructure
- Identifying Threats and Malicious Software
- MITRE ATT&CK Framework
- Cyber Kill Chain
- Network Packet Analysis
- DFIR with EDR
- Threat Hunting with SIEM
- Threat Hunting with DNS
- Threat Hunting with IPS/IDS

new
Labs
Academy
Capture The Flag
Features
HTB Account integration with LetsDefend
HTB Account has now been fully integrated into the LetsDefend platform.
What this means for Community Platform users:
- Going forward, new registrations to LetsDefend will be available only via an HTB account
- Existing users will have the option to link their LetsDefend account to an existing HTB account
For a short transition period, there will be two sign-in options:
- Sign in with an existing LetsDefend account
- Sign in via an HTB account


We’ve added a brand-new Quantum challenge category which includes five hands-on challenges exploring the impact of quantum computing on cryptography and security.
Designed for intermediate to advanced security professionals who want to get hands-on with quantum circuits, cryptographic attacks, and emerging quantum threats, users will learn:
- How quantum algorithms can break widely used encryption methods (RSA, ECC) faster than classical computers
- How organizations are preparing for the transition to quantum-resistant cryptography
- New attack vectors and defenses including quantum-enhanced attacks and quantum-secured communication

new
Capture The Flag
Extend the impact of your CTF with the CTF Post-Event Workshop
The CTF Post-Event Workshop is now live for HTB CTF Platform users.
The Post-Event Workshop enables you to create a non-competitive clone of an original CTF, allowing participants to revisit and solve challenges in a more relaxed, training-focused environment after the main event has ended.
What’s different from a standard CTF?
- Rankings, timers, and scoreboards are disabled so teams can focus on learning
- Participants can attempt challenges as many times as they want, even if a teammate has already solved them
- Unlike live CTFs where a solved challenge is completed for the entire team, the Workshop ensures every participant can work through every challenge themselves
Why it matters for your team
- Use the Workshop as a safe sandbox for follow-up enablement, skills reinforcement, or short internal training sessions
- The Workshop automatically registers participants and clones the existing infrastructure with no manual invites or complex configuration required
- Extend the lifespan of CTF content and ensure learning continues long after the event ends

improved
Enterprise
Faster, more reliable Cloud Labs are here
We’ve implemented an infrastructure upgrade to improve the performance, stability, and long-term scalability of Cloud Labs by re-deploying the service on a more modern and efficient backend.
What’s changing
- Cloud Labs has been migrated to an updated infrastructure
- Reduction of ongoing maintenance requirements
- A foundation to support new and more advanced Cloud Lab scenarios in the future
What this means for you
- Faster and more reliable environment resets, improving day-to-day training workflows
- Increased stability across Cloud Lab deployments
- Support for future expansion, enabling our Content team to build additional and more complex cloud-based exercises
new
Academy
Enterprise
Offensive
Full AI Red Teamer Job Role Path now available
The AI Red Teamer Job Role Path, built in collaboration with Google, is now fully complete!
This path equips cybersecurity professionals with the cutting-edge skills needed to assess, exploit, and secure today’s AI-powered systems. With 12 hands-on modules aligned to Google’s Secure AI Framework (SAIF), you’ll explore everything from prompt injection and model privacy attacks to adversarial AI techniques, supply chain risks, and deployment-level threats.
As you move through the path, you’ll work through real-world AI security scenarios, learning how to influence model behavior, craft AI-specific red teaming strategies, and execute offensive security testing against AI-driven applications.

new
Academy
Enterprise
Defensive
Master AI Privacy and Defense with two new Academy modules
The
AI Privacy
module introduces you to one of the most critical privacy threats in machine learning: the ability to determine whether a specific individual’s data was included in a model’s training set. You’ll explore how overfitting creates detectable behavioral signals, implement real membership inference attacks using the shadow model methodology, and apply industry‑recommended defenses such as differential privacy.Key learning outcomes:
- Implementing shadow models and attack classifiers to detect membership based on prediction confidence patterns
- Understanding differential privacy and applying DP-SGD to train privacy‑preserving models
- Using PATE to achieve privacy through architectural separation
- Evaluating and mitigating privacy leakage across machine learning systems

The
AI Defense
module introduces you to the strategies and techniques for protecting AI applications from attacks explored in the AI Red Teamer path. You’ll explore how to proactively harden models through adversarial training and tuning, as well as implement LLM guardrails to enforce safety and reliability at the application layer.Key learning outcomes:
- Understanding adversarial tuning and applying it to refine model behavior against evolving threats
- Learning the basic concepts of LLM guardrails and implementing them at the application layer
- Building multi-layered defenses that combine model-level and application-level safeguards
- Designing AI applications that maintain security, reliability, and user trust against a variety of attack vectors

new
Labs
Enterprise
Train against CVE-2025-55182 in ReactOOPs challenge
Just days after the React2Shell vulnerability was publicly disclosed, you can now learn to understand, exploit, and defend against CVE-2025-55182 with the
ReactOOPs
challenge.In this scenario, you’ll analyze how NexusAI handles data in its reactive layer by serializing and deserializing component “chunks.” In addition, you’ll pinpoint where input handling breaks down and how attackers can manipulate reference resolution.

new
Capture The Flag
New HTB Threat Range scenario - Spray ‘n’ Pray
Spray ‘n’ Pray
is a multi-host intrusion scenario that simulates a simple privilege escalation from a workstation compromise to a server admin compromise via a brute force attack.The intrusion starts with an already compromised workstation, and teams are tasked with extracting key information from the SIEM and retrieving files from the compromised endpoints to help understand the scope of the attack.
Throughout the scenario, teams will hunt for:
- Suspicious file downloads
- Brute Force password attacks
- Lateral Movement
- Living off the Land attacks
- OS Credential Dumping

improved
Labs
HTB Labs’ UI and frontend have been updated

We’ve officially upgraded the HTB Labs frontend and made some changes in our user interface for a cleaner layout. What does that mean for you?
Faster load times
UI interactions feel smoother and more responsive.
More frequent feature drops
This upgrade streamlines our engineering workflow, so we can ship features and improvements faster.
Better long-term support
This change unlocks new capabilities for future development, helping us build a more scalable HTB Labs platform.
You’ll still find the same content and tools. Just now, with a cleaner layout, faster performance, and way more potential. Got feedback or spot a bug? Drop it in /feedback on our Discord channel.
Load More
→