news & announcements
New CTI Lab: CVE-2025-49113: Investigating a Roundcube RCE
In February 2026, the Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2025-49113 to its Known Exploited Vulnerabilities (KEV) catalogue following exploitation in the wild. This critical vulnerability, which lay dormant in the Roundcube Webmail codebase for over a decade, allows authenticated attackers to achieve Remote Code Execution (RCE). With tens of thousands of instances exposed globally, particularly in the government and higher education sectors, this flaw has become a primary target for both cybercriminal and state-sponsored groups.

What is this about?

CVE-2025-49113 is a high-impact PHP Object Deserialization vulnerability (CWE-502) with a CVSS score of 8.8. The flaw resides in how the application handles session data and URL parameters during file uploads. The attack vector centres on:

The Vulnerable Parameter: The _from parameter in program/actions/settings/upload.php lacks proper validation.
The Logic Error: A bug in the Roundcube session parser allows an attacker to inject an exclamation mark (!) to corrupt session variables.
The Gadget Chain: By manipulating the corrupted session, attackers can inject malicious PHP objects that leverage the Crypt_GPG library to execute arbitrary commands.
Post-Auth Requirement: While exploitation requires authentication, attackers often pair this flaw with credential harvesting or CSRF attacks to gain the initial foothold.

Why is this critical for you and your team?

This is a long-standing vulnerability that many organizations have still not patched:

Detection: Exploitation is notoriously difficult to detect via traditional Web Application Firewalls (WAFs) due to the nature of PHP object injection within session handling.
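With WAF coverage unreliable here, access-log hunting is the practical fallback. The sketch below is illustrative only; the request shape and the `!`/`%21` indicator are assumptions derived from the parameter details above, not an official detection signature:

```python
import re
import sys

# Hypothetical indicator: upload requests whose _from parameter contains a
# raw or URL-encoded exclamation mark -- the character abused to corrupt
# Roundcube session variables. Illustrative only; tune to your log format.
FROM_PARAM = re.compile(r"[?&]_from=[^&\s\"]*(?:!|%21)", re.IGNORECASE)

def hunt(log_path: str) -> list[str]:
    """Return access-log lines that look like CVE-2025-49113 attempts."""
    hits = []
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if "_action=upload" in line and FROM_PARAM.search(line):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__" and len(sys.argv) > 1:
    for hit in hunt(sys.argv[1]):
        print(hit)
```

Any hit warrants a closer look at the session and the authenticated account involved.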
Admins are largely relegated to access logs and HTTP logs to identify evidence of intrusion.
Federal Mandate: CISA has directed all Federal Civilian Executive Branch (FCEB) agencies to remediate this vulnerability by March 13, 2026, underscoring the immediate risk to national security infrastructure.

Who is the content for?

Security Analysts
System Administrators
Threat Researchers
Threat Hunters

Link to the lab: CVE-2025-49113: Investigating a Roundcube RCE

New CTI Lab: 7-Zip Installer (Proxy Node Campaign): Analysis
In February 2026, security researchers across different organizations exposed a long-running malware distribution campaign targeting users of the popular 7-Zip archiving utility. The threat actors behind the operation registered the convincing lookalike domain 7zip[.]com, closely mimicking the legitimate 7-zip.org, to distribute trojanized installers that silently convert victims' machines into residential proxy nodes.

What is this about?

Brand impersonation attacks represent a critical threat vector in which attackers exploit user trust rather than software vulnerabilities. In this campaign, the operators built the fake 7zip[.]com domain to mirror the structure and content of the official site; the malicious installer carries a now-revoked code-signing certificate from "JOZEAL NETWORK TECHNOLOGY CO., LIMITED"; and victims receive a fully functional copy of 7-Zip that also deploys malicious payloads onto the machine. These malicious Golang binaries establish persistence, manipulate firewall rules, and transform victim machines into nodes of a residential proxy botnet.

Why is this critical for you and your team?

As security teams increasingly focus on advanced persistent threats and zero-day exploitation, this campaign demonstrates how attackers achieve persistent access through social engineering and trust exploitation. Users download software from what appears to be a legitimate source, particularly when following online tutorials or search engine results, bypassing traditional security awareness training in the process. The malware's use of code-signed binaries, legitimate system directories, and SYSTEM-level service persistence means it evades many endpoint security controls designed to catch obvious malware. Understanding this infection chain and learning to threat hunt for these artefacts is essential for detecting similar tactics in your environment.

Who is the content for?
Security Analysts
Threat Researchers
Threat Hunters

Here is a link to the lab: 7-Zip Installer (Proxy Node Campaign): Analysis

New CTI Lab: Lotus Blossom Notepad++ Campaign: Analysis
In January 2026, threat researchers at Rapid7 detailed a sophisticated supply chain attack targeting the Notepad++ update mechanism. Between July and October 2025, attackers compromised the project's distribution infrastructure to deliver a custom, undocumented backdoor dubbed Chrysalis. By intercepting update requests, the threat actor distributed malicious NSIS installers to a targeted set of victims across Southeast Asia and Australia.

What is this about?

Supply chain compromises represent one of the most dangerous threat vectors today. In this campaign, the Chinese state-sponsored group Lotus Blossom (also known as Billbug or Thrip) hijacked a trusted software update pipeline. The attack involves complex DLL sideloading techniques, abusing a renamed Bitdefender binary to execute a multi-layered encrypted payload. Once the Chrysalis backdoor is active, it provides the attackers with persistent, feature-rich remote access to the victim's environment.

Why is this critical for you and your team?

As organizations rely on legitimate third-party utilities like Notepad++, trust in the update process is paramount. This intrusion highlights how state-sponsored actors can weaponize that trust to bypass perimeter defences. Understanding the Chrysalis infection chain, from the initial NSIS installer to the triple-layer decryption of its C2 configuration, is vital for detecting similar "living-off-the-land" and sideloading tactics in your own network. If your team manages software deployments or monitors developer environments, you must be cognisant of how attackers leverage legitimate, signed binaries to mask malicious behaviour. This lab provides a deep dive into the specific obfuscation and persistence strategies used by one of the region's most persistent threat groups.

Who is the content for?

Security Analysts
Threat Researchers

Here is a link to the lab: Lotus Blossom Campaign: Analysis

New CTI Lab: CVE-2026-23744 (MCPJam RCE): Offensive
On January 16, 2026, advisories were released covering a critical vulnerability in MCPJam Inspector, the local-first development platform for MCP servers. The latest version at the time, 1.4.2, and all earlier versions are vulnerable to remote code execution (RCE): a trivial yet highly impactful flaw that lets an attacker send a crafted HTTP request that triggers the installation of an MCP server, leading to RCE.

What is this about?

The Model Context Protocol (MCP) has become a popular way to connect AI-powered applications and services, such as wiring tools into your OpenAI account so the AI can help you work with a tool, perform tasks on your behalf, or act as a webhook between tools. MCPJam is an example of a tool that makes these processes easier and more convenient.

Why is this critical for you and your team?

As AI adoption rises across industries and sectors, products and services have been released to help people interact with AI pipelines. With MCPJam and tools like it, you can test and develop MCP servers, emulate deployments, and debug your workflow, making your entire MCP development pipeline much smoother. If you're using any tool like this, where you share your API keys and other sensitive data with it, you need to be cognisant of the risks these tools carry, as many others are vulnerable to basic misconfigurations that can lead to serious impacts.

Who is the content for?

Penetration Testers
Security Analysts
Incident Responders

Here is a link to the lab: CVE-2026-23744 - MCPJam: Offensive

This application has no logging available at all, so there is no Defensive variant of this lab.

New CTI Lab: CVE-2026-21858 (n8n RCE): Offensive
On January 7, 2026, Cyera Research Labs released an advisory for "Ni8mare," a critical unauthenticated remote code execution vulnerability (CVE-2026-21858) in n8n with a CVSS score of 10.0. The flaw stems from a "Content-Type Confusion" bug in the Form Webhook node, which allows attackers to override internal file paths, thereby enabling the arbitrary disclosure of sensitive data, including database.sqlite and the system's unique encryption key. This vulnerability can be exploited to forge administrative sessions and achieve full system takeover.

What is this about?

n8n has had a number of vulnerabilities over the last year or so, but these have generally required authentication to exploit, meaning the attacker would already need access to the n8n server to leverage them. This vulnerability carries a CVSS score of 10.0 and gives attackers unauthenticated remote code execution, making it a pertinent discussion point for potential future vulnerabilities and attacks, given that n8n will likely receive more attention from vulnerability researchers and threat actors alike.

Why is this critical for you and your team?

n8n is very popular with organizations and the wider community alike; with over 70,000 active instances exposed to the internet, there is a reasonably wide attack surface to be exploited. If you or your team uses n8n, and there is a reasonably high probability that you do (for example, in human resources, project planning, or news feeds), then learning about and mitigating this vulnerability is essential to protect yourself against attacks.

Who is the content for?

Penetration Testers
Security Analysts
Incident Responders

Here is a link to the lab: CVE-2026-21858 (n8n RCE): Offensive

New CTI Lab: CVE-2025-55182 (React - Next.js)
On December 3, 2025, the cybersecurity world received news of a critical vulnerability in the React 19 ecosystem. The flaw, tracked as CVE-2025-55182 with a CVSS score of 10.0, affects React Server Components (RSC) and allows unauthenticated attackers to achieve Remote Code Execution (RCE) on vulnerable servers by sending a specially crafted HTTP request.

AI Hallucination

Within the first 24 hours of the vulnerability being announced, a POC was published to GitHub, which looked convincing and, when tested, appeared to achieve the goal successfully, resulting in code execution. It turned out that this POC, which was picked up and circulated by researchers and social media, was actually an AI hallucination. The AI had crafted a deliberately misconfigured and vulnerable server and a POC that appeared to match the requirements of the exploit, but only actually triggered the misconfiguration.

What is this about?

CVE-2025-55182 is a critical Insecure Deserialization vulnerability affecting React Server Components (RSC) within the React 19 ecosystem. The flaw is located in the server-side logic that handles the React Flight protocol, which is used for client-to-server interactions, specifically Server Functions (Server Actions). An unauthenticated attacker can send a specially crafted HTTP request containing a malicious, serialized payload. The vulnerable server-side code fails to validate this payload, allowing the attacker to achieve remote code execution on the server.

Why is this critical for you and your team?

This critical vulnerability has a CVSS score of 10.0, is fairly trivial to exploit, and has significant impact when successfully exploited, given that this includes unauthenticated remote code execution. If your team uses React or React Server Components (RSC), you are at risk.
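The underlying bug class here is classic insecure deserialization. As a language-agnostic illustration (Python's pickle, not the React Flight protocol itself), loading attacker-controlled serialized bytes can execute attacker-chosen code as a side effect:

```python
import pickle

class Gadget:
    # __reduce__ tells pickle how to "rebuild" this object on load.
    # Returning a callable plus its arguments means that callable runs
    # at deserialization time -- the essence of a gadget chain.
    def __reduce__(self):
        return (eval, ("6 * 7",))

untrusted_bytes = pickle.dumps(Gadget())

# Merely loading the attacker-controlled bytes runs the gadget:
result = pickle.loads(untrusted_bytes)
print(result)  # 42 -- eval() executed during deserialization
```

The React flaw follows the same pattern at the protocol level: the server reconstructs objects from a crafted Flight payload without validating what it is rebuilding.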
This flaw impacts the standard, default configurations of high-profile frameworks like the Next.js App Router, which many organizations rely on for building high-performance sites.

Who is the content for?

Security Analysts
Penetration Testers
Incident Responders
Vulnerability Management Teams

Here is a link to the lab: CVE-2025-55182 (React - Next.js)

It’s Not Magic, It’s Mechanics: Demystifying the OWASP Top 10 for AI
Welcome back to our series, “Behind the Scenes of Immersive One”! The following is a conversation with Sabrina Kayaci, Cybersecurity Engineer for Immersive One, and Rebecca Schimmoeller, Lead Product Marketing Manager. Today, we’re continuing the discussion on our Secure AI capability.

“When developers hear ‘AI Security,’ they either start to sweat or eye-roll. It either feels like a black box where the old rules don’t apply, or it feels like inflated marketing hype. The truth is, AI vulnerabilities aren’t magic; they are mostly just new manifestations of the classic flaws we’ve been fighting for decades. Once you map the new threats to the old patterns, the mystique fades. You realize it’s not magic to fear or hype to ignore; it’s just an engineering problem to solve.”

Rebecca: Awesome frame, Sabrina. No matter where you sit on the spectrum, whether you’re anxious about the risks or skeptical of the buzz, AI security doesn’t mean starting from zero. Developers should already have the muscle memory for this.

Sabrina: Exactly. We aren’t asking them to learn a new language; we’re asking them to apply their existing fluency to a new dialect. That’s the core philosophy behind our new OWASP Top 10 for LLMs and GenAI collection. We tackle the problem that AI is often treated as a “new and daunting” field. By framing threats like Supply Chain Vulnerabilities or Excessive Agency as variations of known issues, we accelerate the learning curve. We strip away the “AI mysticism” to reveal the underlying mechanical flaw.

Rebecca: I love “stripping away the mysticism.” Let’s talk about how that works, starting with the big one everyone is concerned about: Prompt Injection. How do you take that from “scary AI jailbreak” to something a grounded engineer can fix?

Sabrina: In the media, Prompt Injection is portrayed as this sentient ghost in the machine. In our lab, we treat it as an Input Validation failure.
We show that the system is simply confusing “user input” with “system instructions.” When a developer sees it through that lens, the eye-roll stops. It’s no longer hype; it’s just mixed context. And they know how to fix mixed context. We show them how to apply that architectural fix to an LLM.

Rebecca: That maps perfectly. But looking at the curriculum, I see we go much deeper than just a standard “Top 10” checklist. Why was it important to go beyond the simple definitions?

Sabrina: Because a definition tells you what something is, but it doesn’t tell you how it impacts you. In the new OWASP LLM collection, we focus on Core Mechanics and Attack Vectors. We deconstruct threats like Data and Model Poisoning or Supply Chain vulnerabilities to show you exactly how they infiltrate a system. It’s the difference between knowing what an engine looks like and knowing how to take it apart. You need to understand the mechanics of the vulnerability to understand the potential impact; otherwise, you’re just guessing at the fix.

Rebecca: It sounds like we’re upgrading their threat modeling software, not just their syntax.

Sabrina: Yes, 100%. Look at Excessive Agency. That sounds like a sci-fi plot about a robot takeover. But when you do the lab, you realize it’s just “Broken Access Control” on steroids. It’s about what happens when you give an automated component too much permission to act on your behalf. Once a developer maps “Excessive Agency” to “Least Privilege,” they stop worrying about the robot and start locking down the permissions.

Rebecca: Is the goal to get them through all ten modules to earn a Badge?

Sabrina: The OWASP Top 10 for LLMs Badge is the end state. It proves you have moved past the “sweat or eye-roll” reactive phase. To your manager, it signals you have a proactive, structured understanding of the AI risk landscape and can speak the language of secure AI. There’s no hype in that. Only value-add to you and your team.
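The “mixed context” framing above can be made concrete. A minimal sketch (the marker list and message shape are illustrative assumptions, not the lab's actual code) contrasting naive concatenation with role-separated input:

```python
# Naive pattern: user input is concatenated straight into the instruction
# string, so the model cannot distinguish instructions from data -- the
# "mixed context" failure.
def naive_prompt(user_input: str) -> str:
    return "Summarize the following review:\n" + user_input

# Safer pattern: keep instructions and user data in separate, labeled
# channels (chat APIs express this as role-tagged messages) and reject
# input carrying instruction-like markers. A denylist alone is NOT a
# robust defense; it simply makes the input-validation reflex visible.
BANNED_MARKERS = ("ignore previous", "system:", "you are now")

def build_messages(user_input: str) -> list[dict]:
    lowered = user_input.lower()
    if any(marker in lowered for marker in BANNED_MARKERS):
        raise ValueError("possible prompt-injection attempt")
    return [
        {"role": "system", "content": "Summarize the following review."},
        {"role": "user", "content": user_input},
    ]
```

The point is architectural: instructions and data travel in separate channels, and untrusted data is validated before it ever reaches the model.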
Final Thought

Our OWASP Top 10 for LLMs collection is the antidote to AI security angst. For the developer, it demystifies the threat landscape, proving that their existing security instincts are the key to solving new problems. For the organization, it ensures that your AI strategy is built on a bedrock of engineering reality, rather than a shaky foundation of fear.

[Access Collection]

Architecting at Speed: Mastering Secure Development with OpenAI Codex
Welcome back to our series, “Behind the Scenes of Immersive One”! The following is a conversation with Ben McCarthy, Lead Cybersecurity Engineer for Immersive One, and Rebecca Schimmoeller, Lead Product Marketing Manager. Today, we’re continuing the discussion on our Secure AI capability.

“There is a misconception that security is the enemy of development speed. But with AI, the opposite is true. If you don’t have security engineered into your AI workflow, you can’t actually go fast, because you’re constantly stopping to fix ‘trash code’ or patch vulnerabilities. The developers who win in this era aren’t just the ones coding faster; they are the ones architecting systems that are secure by design, even at AI speeds.”

Rebecca: That’s a crucial distinction, Ben. We often hear that AI is a “firehose” of productivity, but without control, that firehose just creates a mess. It seems like the role of the developer is shifting from “writing lines” to managing this high-velocity output. How does the new Building with AI: Codex CLI collection help them make that shift?

Ben: By giving them the controls they need to harness that speed safely. If you let OpenAI’s Codex run without guardrails or understanding, you get velocity, sure, but you also get risk. We designed this collection to empower developers to become the Security Architects of their own workflows. We are leveraging the Azure AI Foundry capability to give learners real, secure access to these models. The goal isn’t to teach you how to hit “Tab” to autocomplete; it’s to teach you how to rigorously evaluate, guide, and constrain what the AI produces using a command-line tool like Codex, so you can ship code that is both fast and bulletproof.

Rebecca: So it’s about elevating the human’s role to “Architect.” Let’s talk specifics given what the collection covers: how did you instill that mindset?

Ben: We start by ensuring developers know the power of what you can do with Codex.
We go over how to get the best out of your models in this CLI tool: effective prompt engineering, tool usage, and how AI can help with “Greenfield” projects (net-new builds) and “Brownfield” projects (legacy codebases). This is a critical skill for a lead engineer. AI is great at generating new code (greenfield), but it can be dangerous when it doesn’t understand the hidden dependencies of a ten-year-old application (brownfield). We teach engineers how to spot those context gaps, the key details the AI might miss.

Rebecca: I saw “specification-driven development” was a big part of your roadmap, too. How does that fit into the “speed” theme?

Ben: This is the ultimate accelerator. Instead of writing the code line-by-line, you write the “spec” (the blueprint) and let Codex handle the implementation details. It’s not about doing less work; it’s about doing higher-leverage work. You define the logic and security constraints, and the AI handles the boilerplate. It shifts the developer’s brain from “how do I type this function?” to “what should this system actually do?”

Rebecca: That sounds like a powerful approach, Ben. But what about the security risks? If developers are offloading implementation to Codex, how do they avoid leaking data or introducing bugs?

Ben: That’s non-negotiable. In the Guardrails lab, we show learners how to build a safety net. We teach practical methods for stripping PII (Personally Identifiable Information) and using hooks to sanitize inputs before they reach the model. It gives developers the confidence to use these tools freely, knowing they have already engineered the safety mechanisms to protect their org.

Rebecca: I saw a lab in the collection called “Tools and MCP” (Model Context Protocol). Is that where you get into advanced workflows?

Ben: Exactly. This is where we give developers the keys to become a force multiplier. We show users how to connect Codex to other tools. This is the ideal definition of ROI for developers.
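The PII-stripping hook idea from the Guardrails discussion can be sketched in a few lines. This is purely illustrative (the regexes and hook shape are assumptions, not the lab's implementation); production guardrails use far more robust detection:

```python
import re

# Toy patterns for common PII shapes; real guardrails layer on
# named-entity detection, checksum validation, and allowlists.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Redact likely PII before a prompt leaves the developer's machine."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt
```

Wired in as a pre-request hook, every prompt passes through scrub() before it reaches the model, so redaction costs the developer nothing per call.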
You’re automating the tedious “check your work” phase, allowing you to ship secure code faster without burning out on manual review.

Rebecca: It feels like that approach accepts today’s AI-era realities for what they are and finds the strategic advantages, pushing developers towards productivity and security gains with real mastery. And just like the Claude collection, users have access to a Demonstrate Lab to prove that mastery, am I right?

Ben: Absolutely. The Demonstrate Lab challenges users to build a solution that’s efficient, functional, and secure. It proves that you aren’t just an “AI user”; you are an AI Engineer who understands the capabilities the collection covers.

Final Thought

Our Building with AI: Codex collection is about upgrading the developer’s toolkit. For the organization, it ensures AI adoption is secure and scalable. For the engineer, it removes the drudgery of boilerplate, freeing you to focus on the creative, architectural challenges that drive real value. Ready to upgrade your workflow?

[Access Collection]

New CTI Lab: Shai-Hulud 2.0: Analysis
In late November and early December 2025, a series of critical software supply chain intrusions took place when the highly dangerous Shai-Hulud 2.0 worm was used to steal GitHub, cloud, and other credentials and secrets, gaining access to developer machines through malicious npm package installations.

What is this about?

By abusing the inherent trust in the npm ecosystem, Shai-Hulud guarantees execution during the crucial preinstall phase, effectively bypassing many traditional security scans that only review code after installation. Once running, the payload launches a concurrent, parallel attack across your environment: it hunts for local credentials, attempts to steal highly privileged temporary cloud tokens via the Instance Metadata Service (IMDS), and, most critically, can automatically inject itself into every other package the victim maintains on their machine.

Why is this critical for you and your team?

npm is massively popular, and many of the affected packages are widely used in software development and deployment. Shai-Hulud 2.0 is a devastating self-replicating worm that weaponizes your supply chain to steal highly privileged cloud credentials (via IMDS) and can establish a permanent C2 backdoor via GitHub Actions if the threat actor chooses to set one up. Given the importance of npm packages to developers across all organisations and sectors, it is essential that teams understand how this intrusion works to prevent their credentials and secrets from being stolen.

Who is the content for?

Security Analysts
Incident Responders
Software Developers/Secure Development Teams
Cloud Engineers
Vulnerability Management Teams

Here is a link to the lab: Shai-Hulud 2.0: Analysis

Announcing the Winners of the 2025 Cyber Resilience Customer Awards!
What a year for cyber resilience! As we say goodbye to another Cybersecurity Awareness Month, we are thrilled to celebrate the organizations and individuals who have demonstrated exceptional dedication to proving and improving their cybersecurity posture, defending against emerging threats, and embedding a culture of resilience across their organizations using the Immersive One platform. Collectively, our customers have tackled countless labs and simulations, setting new benchmarks for capability and speed. After crunching the numbers and reviewing the nominations, we’re ready to announce just some of the winners who truly excelled in 2025 across the following categories:

Emerging Threats Leader Award

The Emerging Threats Leader award recognizes organizations and individuals at the forefront of threat detection and threat hunting, proactively identifying risks and strengthening defenses using insights from our Cyber Threat Intelligence labs.

🏆 Emerging Threats Award Organization Winners include:
NHS England
T-Mobile
Arctic Wolf

🏆 Emerging Threats Award Individual Winners include:
Steven Glogger, Swisscom
Paul Blance, Specsavers
Taz Wake, Jones Lang LaSalle
Mark Cox, NationalGrid
Stephen Wilson, BT Group

Cyber Resilience Leader Award

This award acknowledges organizations that make full use of the Immersive One platform to optimize end-to-end cyber readiness. True cyber resilience goes beyond simply preventing attacks; it encompasses the ability to prove, improve, benchmark, and report on your cyber resilience.

🏆 Cyber Resilience Leader Award Winners include:
Swisscom
NHS England
Arctic Wolf
Darktrace
BT Group

Secure Development Champions Award

This award celebrates organizations and individuals who champion security throughout the software development lifecycle.
It recognizes a proactive approach to building secure applications, emphasizing practices like threat modeling, secure coding standards, and rigorous testing, using the Immersive One platform to prepare for and demonstrate secure coding practices.

🏆 Secure Development Champion Organization Award Winners include:
Citigroup
GfK
Swisscom

🏆 Secure Development Champion Individual Award Winners include:
Steffen Wacker, Arctic Wolf
Joao Santos, GfK
Omkar Joshi, GfK
Balaji Kannan, GfK
Naresh Sivakumar, GfK
Alexander Kolyshkin, EMCD

Exercising Excellence Award

The Exercising Excellence award recognizes organizations that have excelled in regularly using scenarios on the Immersive One platform to prove their cyber resilience. They have successfully run multiple crisis simulations to regularly exercise their teams and have high levels of participation and engagement.

🏆 Exercising Excellence Award Winners include:
Mastercard
Citigroup
Siemens Energy
NHS England

Immersive Trailblazer Award

This award recognizes individuals who simply love Immersive and have shown exceptional dedication to the platform. They have been amongst our top point scorers since January 1st, 2025, completing thousands of labs and truly immersing themselves in the platform.

🏆 Immersive Trailblazer Award Winners include:
Mico Marcos, PepsiCo
QingKai Ma, Hubbel

Community Leader Award

Our final award, the Community Leader award, recognizes individual members of the Human Connection Community who have contributed to, and engaged with, both community content and their fellow community members. They have consistently shared tips and advice, engaged with popular threads, and participated in community events and meetups, helping to bring the Human Connection community to life.

🏆 Community Leader Award Winners include:
netcat
steven
CyberSharpe
autom8on
MegMarCyberTrust
Nneka_AN
Dooley
DG