Beyond the Chat Window: How to Securely Vibe Code with Anthropic’s Claude
Welcome back to our series, “Behind the Scenes of Immersive One”! The following is a conversation with Robert Klentzeris, Application Security Content Engineer for Immersive One, and Rebecca Schimmoeller, Lead Product Marketing Manager. Today, we’re deep diving into one facet of our Secure AI capability.

Rob: “We are seeing a shift from ‘chatting with AI’ to ‘inviting AI into the terminal.’ With the release of tools like Claude Code, developers aren't just copying and pasting snippets from a browser anymore. They are letting an agent live directly in their CLI, giving it permission to read file specs, run commands, and architect entire features. It’s a massive leap in capability, but also in trust.”

Rebecca: That is the big shift we’re hearing about, Rob. The market is obsessed with the idea of "vibe coding" right now: just describing what you want and letting the AI handle the implementation details. But for a security leader, the idea of an AI agent having direct access to the CLI (command line interface) sounds terrifying. It feels less like a helper and more like handing a stranger your SSH keys.

Rob: That is exactly what makes Claude Code different from your standard autocomplete tools. You aren't just getting code suggestions; you are interacting with an agent that has tooling capabilities, like using MCP (Model Context Protocol) or running slash commands. If you don't know what you're doing, you might accidentally let the agent produce insecure code or mishandle PII in a way that’s harder to spot than a simple copy-paste error. This new collection is about bridging that gap: how do we embrace the speed of vibe coding without sacrificing the security of our platform?

Rebecca: So it’s about safe integration. Let’s get into the weeds: what does the "safe" version of this look like in the actual Immersive One labs you created?
Rob: We start by defining common patterns used in AI coding agents, such as manual prompting, and how you can write prompts so Claude generates secure code. We then go a little deeper and explore how you can let your agents code with more autonomy and less intervention while staying secure, using spec-driven development. From there, we move to the components of Claude Code and show how to leverage advanced features, such as custom slash commands and skills, that can enhance the security of both large legacy and greenfield projects.

Rebecca: I noticed your roadmap included a focus on "Guardrails" and "Claude Agents." Is this where we stop "trash code" from hitting production?

Rob: Exactly. This is unique to the agentic workflow. In the Claude Agents lab, we teach users how to set up a "Reviewer Agent" that audits the code generated by the first agent. We also have a dedicated lab on Guardrails, focusing on stripping PII (personally identifiable information) before Claude ever sees the data. It’s about ensuring that even if the AI is "vibing," the security protocols remain rigid.

Rebecca: That sounds incredible for the security team, but what about the developer? If I’m used to just doing my thing, head down to deliver on time, won’t specification-driven development cramp my style?

Rob: Fun fact: it actually makes you faster. Think of the spec as the prompt that saves you ten revisions. At Immersive, we focus heavily on ROI and removing pain for users. In this case, we show developers how to use slash commands and hooks to automate the boring stuff. When you learn to use these tools properly, you stop wrestling with the AI and start conducting it. And because these labs are hands-on, with real Claude Code access in a secure sandbox, you can experiment with these powerful agents without worrying about breaking your own local environment. Your manager will love that too.

Rebecca: Ha! You’re right.
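To make the "Guardrails" idea concrete, here is a minimal sketch of PII stripping before a prompt ever reaches a coding agent. This is not the lab's implementation: the patterns, placeholder tokens, and function name are all illustrative, and a production guardrail would rely on a vetted PII-detection library covering far more categories.

```python
import re

# Hypothetical patterns for illustration only; real guardrails need
# far broader coverage (names, addresses, API keys, and so on).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognizable PII with placeholder tokens so the
    original values never reach the model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}_REDACTED]", text)
    return text

prompt = "Debug why login fails for jane.doe@example.com, SSN 123-45-6789"
print(redact_pii(prompt))
# Debug why login fails for [EMAIL_REDACTED], SSN [SSN_REDACTED]
```

The useful property is that redaction happens on the way in, so even a misbehaving agent can only ever echo placeholders back.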
It sounds like we’re giving users a safe place to crash-test the car before they drive it. And I see you wrap it all up with a "Demonstrate" lab?

Rob: We do. We want to prove competence. The Demonstrate lab is a capstone where you have to combine everything: usage, security, and productivity. You have to prove you know how to use Claude Code to build something functional and secure. It validates that you aren't just generating code; you're engineering with it.

Final Thought

Our Building with AI: Claude Code collection isn't just another coding tutorial. It is a blueprint for the agentic future of development. For you, the developer, it turns Claude from a vibe code buddy into a fully integrated, secure pair programmer. For your organization, it transforms a potential security risk into a governed, high-speed workflow. Want to get started? [Access Collection]

New CTI Lab: Lazarus Cyberespionage Campaign: Analysis
In early November 2025, the North Korean state-sponsored actor Lazarus was reported to have launched various attacks as part of a long-standing cyberespionage campaign linked to Operation DreamJob. Targets of the attacks include European organizations manufacturing unmanned aerial vehicles (UAVs), aircraft component manufacturers, and a British industrial automation organization. Lazarus's, and by extension North Korea's, operational objective with these attacks is assessed with high confidence to be cyberespionage.

What is this about?

The attacks launched by Lazarus used a custom remote access trojan called ScoringMathTea, which uses its own cipher system to obfuscate its code and conceal its functionality from analysts. The lab involves reverse engineering the malware and identifying indicators of compromise by breaking the cipher and using it to work out what the malware is doing.

Why is this critical for you and your team?

North Korean cybercriminals and state-sponsored actors are highly skilled, persistent, and aggressive in pursuit of the North Korean regime's objectives, one of which is stealing information from targets in ways that can affect national security. Understanding how North Korean cyber operators conduct attacks, and understanding their tooling, is essential for analysts to be better equipped to tackle these threats.

Who is the content for?

Malware Analysts and Reverse Engineers
SOC Analysts
Incident Responders
Threat Hunters
Tactical and Operational Cyber Threat Intelligence Analysts

Here is a link to the lab: Lazarus Cyberespionage Campaign: Analysis

October is here! Prepare for Cybersecurity Awareness Month with Immersive 🎃
In a world where technology and threats are constantly evolving, building a resilient team is more important than ever. At Immersive, we're proud to be your partner in this journey, and we've put together a fantastic lineup of events, challenges, and resources throughout October to help you and your teams stay ahead of the curve.

What’s on at Immersive this Cybersecurity Awareness Month 📆

Oct 1st – Whitepaper: GenAI’s Impact on Cybersecurity Skills and Training
Oct 6th – Trick or Treat on Specter Street Challenge Begins: Labs 1-3
Oct 9th – Labs Live: Ripper's Riddle Community Webinar
Oct 13th – Trick or Treat on Specter Street Challenge: Labs 4-6
Oct 15th – Webinar: How to Build a People-Centric Defense for AI-Driven Attacks
Oct 16th – Labs Live: Cursed Canvas Community Webinar
Oct 20th – Trick or Treat on Specter Street Challenge: Labs 7-9
Oct 22nd – Cyber Resilience Customer Awards Winners Revealed
Oct 23rd – Labs Live: Macro Polo Community Webinar
Oct 27th – Trick or Treat on Specter Street Challenge: Labs 10-12
Oct 30th – Labs Live: Phantom Pages Webinar
Oct 31st – Trick or Treat on Specter Street Challenge Finale: Lab 13
Oct 31st – Virtual Crisis Sim: The Puppet Master’s Trick or Treat

Challenges and Labs

Trick or Treat on Specter Street 👻

Welcome to Trick or Treat on Specter Street, a Halloween-themed cybersecurity challenge where you'll use both offensive and defensive skills to solve a mystery unlike anything we’ve encountered before. Each week throughout October, we’ll drop new hands-on labs that slowly begin to uncover the secrets of Specter Street. Can you crack the case? Find out more.

AI Foundations 🤖

Ready to navigate the rapidly evolving world of Artificial Intelligence with confidence? Give our new AI Foundations lab collection a go!
Designed to equip your teams with critical AI knowledge and practical implementation skills, this initial collection features seven foundational labs that progressively guide your teams from high-level overviews to secure, hands-on AI implementation. Find out more.

Events and Webinars

Webinar: How to Build a People-Centric Defense for AI-Driven Attacks – Wednesday, October 15th
A must-attend event for understanding how threat actors are leveraging AI and other emerging technologies to carry out attacks. Register Now.

Virtual Crisis Sim: The Puppet Master’s Trick or Treat – Friday, October 31st
Join us on Halloween as the notorious Puppet Master returns for a fiendish game of Trick or Treat 🎃 Play along with our Immersive crisis response experts as we tackle a LIVE coordinated attack from the Puppet Master on a Critical National Infrastructure organization. Dare you play the Puppet Master’s game and survive, or will they finally get their revenge?! Register Now.

AI and Emerging Threats

Throughout the month, we’re shining a spotlight on the rise of AI in cyber, from our all-new AI Foundations lab series to research from the experts at the cutting edge of GenAI in cybersecurity in our latest whitepaper: GenAI’s Impact on Cybersecurity Skills and Training. Explore our latest AI-focused resources and upskill your teams to confidently face the future of cyber resilience. Check out our latest reports, articles, webinars, and more on GenAI, here.

Celebrating Cyber Resilience Heroes 🏆

We're also celebrating the individuals and organizations at the forefront of cyber resilience with our Cyber Resilience Customer Awards. Keep your eyes peeled on our social channels! We'll be unveiling our latest winners on October 22nd, recognizing those who demonstrate an outstanding commitment to proving and improving their cyber readiness. It's going to be a jam-packed month focused on practical application and deep engagement.
Let’s make this the most secure October yet!

New Labs: BlackHat 2025 and DefCon 33
Throughout early August 2025, representatives from Immersive's cyber team attended the BlackHat 2025 and DefCon 33 conferences and got great exposure to the latest technologies, topics, and techniques presented by the sharpest minds in our industry. As a result of attending these talks, workshops, and villages, Immersive has created brand new labs based on the various talks that took place, allowing you to get hands-on with the latest technologies and exploits.

We present a number of new labs covering some of the most interesting and insightful topics from the events, from operational technology (OT) to achieving privilege escalation through firewall software. AI was a hot topic, as you would imagine, especially around prompt injection attacks. We already have plenty of content on prompt injection, not to mention the new AI Foundations content, so for this series we created an AppSec-style lab around preventing prompt injection attacks.

Why should our customers care?

BlackHat and DefCon are two conferences that attract the greatest minds in cyber to get together and share their knowledge through workshops, official talks, and villages. Given the high diversity of events and talks that took place, there is something for everyone! Many of the topic areas shared are things that attackers could easily exploit themselves, so taking advantage of the information in these labs equips our customers with knowledge of the latest vulnerabilities, threats, and exploitation techniques currently being talked about in the industry - improving your resilience and preparation against the latest threats.

Who are the labs for?
Offensive Security Engineers and Penetration Testers
SOC Analysts and Incident Responders
Malware Reverse Engineers
Operational Technology Engineers
Cyber Security Engineers

Here is a list of the labs in this release:

Binary Facades: Extracting Embedded Scripts
CVE-2024-5921 Redux - Bypassing Mitigations to PrivEsc with Palo Alto Global Protect
Chrome Alone: Transforming a Browser into a C2 Platform
No VPN Needed?: Cryptographic Attacks Against the OPC UA Protocol
Python: AI Prompt Injection

If you'd like to do any of these labs, here is a link to the BlackHat/DefCon collection: https://immersivelabs.online/series/defcon-black-hat/

New CTI Lab: CVE-2025-9074 (Docker Container Escape): Defensive
Pvotal Technologies published a write-up for a vulnerability in Docker Desktop, given a CVSS score of 9.3. CVE-2025-9074 is a flaw in Docker Desktop that exposes the Docker Engine API to any container, with no authentication. Exploitation of this critical vulnerability allows a low-privileged container to issue privileged API commands, take over other containers, and, in some cases, mount the host drive, access its files and folders, and eventually achieve remote code execution.

Why should our customers care?

Many organizations rely on containerization in their development teams, and a vulnerability like this could allow an attacker to gain access to any developer's workstation by mounting the developer's host drive. The risk of supply chain attacks also increases, because a malicious container pulled in by a developer can include start-up scripts that mount the host drive and "escape" the containerized environment.

Who is the defensive lab for?

System Administrators
Developers
SOC Analysts
Incident Responders
Threat Hunters

Here is the link to the lab:

Defensive: https://immersivelabs.online/labs/cve-2025-9074-docker-container-escape-defensive

ImmersiveOne: Scattered Spider Release
Scattered Spider has continuously been a threat to many of our customers, in part because their techniques and tactics can affect every member of an organization, from advanced social engineering targeting less security-focused users to bypassing defences long enough to deploy ransomware and steal data from some of the largest organizations in the world. Therefore, Immersive is releasing an ImmersiveOne approach to protecting our customers. This means customers now have access to the following:

Lab – Scattered Spider and Dragonforce: Campaign Analysis
Lab – Threat Actors: Scattered Spider
Workforce Scenario – Social Engineering Techniques
Crisis Sim – Responding to a Scattered Spider Attack

Together, the technical and non-technical labs, workforce scenario, and Crisis Sim scenario will enable everyone inside an organization to prepare for the threats posed by Scattered Spider.

For an in-depth blog on Scattered Spider and what to think about in a crisis, follow the link here: https://www.immersivelabs.com/resources/blog/scattered-spider-what-these-breaches-reveal-about-crisis-leadership-under-pressure

Artificial Intelligence: Navigating the Evolving Landscape
The changing world

To understand where we're going, you first need to grasp the sheer scale of what's happening now. The May 2025 report on Artificial Intelligence Trends by Mary Meeker and Bond Capital paints a vivid picture of a sector in overdrive:

Unprecedented user adoption: Generative AI tools have achieved mass adoption faster than any previous technology, including the internet and smartphones.

Soaring infrastructure investment: Top tech giants (Apple, NVIDIA, Microsoft, Alphabet, Amazon, Meta) spent a combined $212 billion on capital expenditures in 2024, a huge portion of which was dedicated to AI infrastructure like data centres and custom silicon.

Shifting cost dynamics: The cost to train a state-of-the-art foundation model remains astronomically high, somewhere in the hundreds of millions of dollars. However, the cost to use these models (the inference cost) is plummeting, making AI more accessible than ever before.

Intense competition and rapid imitation: AI is boosting productivity and driving competition between products.

Global AI "space race": Nations are treating AI supremacy as a strategic imperative, leading to significant government investment and policy-making, particularly in areas like the semiconductor supply chain, with the US, Europe, and China all building new fabrication plants.

With this level of investment and adoption, can you confidently say this is a bubble about to burst? Sir Demis Hassabis, CEO of Google DeepMind, puts this huge change on the same magnitude as the industrial revolution and the launch of the internet. Data from Gartner supports this, suggesting that by the end of 2025, 39% of organizations worldwide will have moved into the experimentation phase of AI adoption. The shift is well and truly on.

What does AI look like in 2025?

AI is underpinned by machine learning models, which are trained, not programmed. Engineers feed them vast amounts of data, and they learn patterns, concepts, and relationships.
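A toy illustration of "trained, not programmed": rather than hard-coding a rule, we recover it from example data with a simple least-squares fit. This is a hypothetical miniature of what model training does at vastly greater scale, with billions of parameters instead of two.

```python
# The "hidden rule" is y = 2x + 1. We never write it into the program;
# the program recovers it from examples, which is the essence of training.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least-squares fit of a line to the example data.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(slope, intercept)  # prints 2.0 1.0 -- the pattern was learned, not coded
```

Swap the five data points for trillions of words of text and the two parameters for billions of weights, and the same idea underpins a large language model.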
Different types of models are used for different purposes, such as those specialising in human language interactions (large language models, LLMs) and artwork generation (diffusion models).

When using AI systems, such as chatbots, you’re not interacting with the model directly but rather with additional software that uses the model as its “brain”. This allows you to implement guardrails that check user inputs and model outputs, helping to filter out harmful or inappropriate content. Modern AI systems are rarely just a wrapper around a model. They integrate with other tools and services to enhance their capabilities, such as searching the web for real-time information or accessing private company documents to provide context-specific answers.

The year of agentic AI

An AI agent is a system that can autonomously pursue a goal. Instead of responding to a single prompt, it can reason, plan, and execute a series of steps to accomplish a complex task. It can also decide which tools to use and in what order. An AI agent may still look like a chatbot, or it may run constantly in the background. Big tech companies are adamant that agentic AI is the next evolution, with Google, Amazon, and Microsoft all predicting it will drive the next wave of innovation over the next two years.

A key catalyst for this explosion was the release of the open-source Model Context Protocol (MCP) by Anthropic in late 2024. MCP provides a standardized way for AI models to discover and use tools. As the official documentation puts it: "Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals... MCP provides a standardized way to connect AI models to different data sources and tools." Source: Model Context Protocol – Getting Started

MCP has been a game-changer, dramatically simplifying the process of giving AI systems new capabilities and accelerating the move from AI systems that know things to AI systems that do things.
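The ideas above, guardrails wrapped around a model "brain" and an agent dispatching to tools, can be sketched in a few lines. Everything here is hypothetical: the `model` function is a stub standing in for an LLM call, the tools are toy functions rather than real MCP servers, and the guardrail is a deliberately naive phrase filter.

```python
# Toy tool registry. In a real agent these would be MCP servers or API
# integrations; here they are stubs that just echo their input.
TOOLS = {
    "search_web": lambda q: f"(pretend search results for '{q}')",
    "read_docs": lambda q: f"(pretend company docs matching '{q}')",
}

BLOCKED_PHRASES = ("ignore previous instructions",)  # naive input guardrail

def model(prompt: str) -> str:
    """Stub standing in for an LLM call. A real model would reason about
    which tool to use; this stub always chooses search_web."""
    return f"TOOL:search_web:{prompt}"

def run_agent(user_input: str) -> str:
    # Guardrail: the wrapper, not the model, rejects suspicious input.
    if any(p in user_input.lower() for p in BLOCKED_PHRASES):
        return "Request blocked by guardrail."
    decision = model(user_input)
    # Single-step agent loop: parse the tool choice and execute it.
    if decision.startswith("TOOL:"):
        _, name, arg = decision.split(":", 2)
        return TOOLS[name](arg)
    return decision

print(run_agent("latest MCP spec changes"))
```

The point of the sketch is the architecture: the user never talks to the model directly, and the model never touches a tool directly; the surrounding software mediates both, which is exactly where guardrails and authorization checks live.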
It’s no coincidence that technology companies started releasing their own guides for building AI agents following MCP’s release, with Microsoft, Google, Cloudflare, OpenAI, and Amazon following close behind.

Technology to watch

Finally, a few key technologies will define the next phase of AI:

Model Context Protocol (MCP)

Continue to watch this standard. As more tools and platforms adopt MCP, the ecosystem of "plug-and-play" capabilities for agents will explode, as will the security risks. Simon Willison puts it perfectly by describing a “lethal trifecta”: AI systems with access to private data, the ability to communicate externally, and exposure to untrusted content, a combination that can easily lead to serious consequences. Source: Simon Willison

Authorisation for AI systems

As agents move from knowing things to doing things (e.g., booking travel, purchasing supplies, modifying code), security becomes paramount. We need robust authorisation. This will involve human-in-the-loop (HITL) approvals, likely powered by modern authentication standards like Client-Initiated Backchannel Authentication (CIBA), which can send a push notification to a manager to approve an agent's action. Thought leaders from Microsoft suggest an overhaul of OAuth, with agentic systems having their own distinct identities and security considerations. One thing’s for sure: proper authorisation is complex, difficult to get right and catastrophic to get wrong.

Agent-to-agent communication

Current AI agents are specialized for a specific purpose, but next-generation AI functionality comes through multi-agent systems, which can be deployed in a variety of architectures, such as hierarchies or swarms. How agents communicate with each other, share memory, and share capabilities is still in its relative infancy, especially when AI agents may be hosted independently and written with different frameworks.
Two competing protocols are emerging: Google's Agent2Agent protocol and IBM’s Agent Communication Protocol (ACP). It's too early to call a winner, but the development of a standard here will be a major milestone.

We are at the beginning of the agentic era. 2025 is the year for experimentation. It's time to move from simply using AI to actively building with it, automating the tedious, and unlocking new forms of creativity and productivity.

Getting the most out of AI

If one thing’s for sure, it’s that the AI landscape is moving fast. So it’s crucial that you and your organisation are at the forefront of AI developments and making the most out of the latest technologies. Keep your eyes peeled for brand new labs in this space coming very soon! Our new collection will demystify terminology, explore the core concepts, and let you build and secure modern AI systems in a safe, sandbox environment. Sign up for email notifications from the Immersive Community so you don’t miss out.

New CTI Labs: CVE-2025-53770 (ToolShell SharePoint RCE): Offensive and Defensive
Recently, a critical zero-day vulnerability affecting on-premise SharePoint servers, identified as CVE-2025-53770, was uncovered. This vulnerability allows an authentication bypass leading to remote code execution, and it has been actively exploited in the wild. Eye Security researchers detected an in-the-wild exploit chain on July 18, 2025, during an incident response engagement. This discovery led to Microsoft assigning two CVEs: CVE-2025-53770 and CVE-2025-53771. The attack notably leveraged a combination of vulnerabilities to achieve its objectives, impacting numerous SharePoint servers globally. There is now a public exploit available for anyone wanting to achieve remote code execution.

Why should our customers care?

This critical vulnerability has been added to the CISA KEV catalog, and with no authentication or user interaction required, a vulnerable SharePoint server can be fully taken over remotely, letting attackers run arbitrary code as if they were privileged admins. SharePoint is a large, complex system that often holds a lot of sensitive data for organizations and is frequently targeted by attackers.

Who is the defensive lab for?

System Administrators
SOC Analysts
Incident Responders
Threat Hunters

Who is the offensive lab for?

Red Teamers
Penetration Testers
Threat Hunters

Here are the links to the labs:

Offensive: https://immersivelabs.online/v2/labs/cve-2025-53770-toolshell-sharepoint-rce-offensive
Defensive: https://immersivelabs.online/v2/labs/cve-2025-53770-toolshell-sharepoint-rce-defensive

New CTI Lab: CVE-2025-32463 (Sudo Chroot Elevation of Privilege): Offensive
On June 30, 2025, the Stratascale Cyber Research Unit (CRU) team identified a critical local privilege escalation vulnerability in sudo, tracked as CVE-2025-32463. This vulnerability, related to sudo's chroot option, can allow an attacker to escalate privileges to root on an affected system.

Why should our customers care?

This critical vulnerability is reasonably trivial to exploit, and should an attacker gain user-level access to a vulnerable machine, they'll be able to elevate their privileges and gain full control over the machine. Many people aren't aware that sudo is versioned software: it's a binary that is constantly iterated upon, and new releases may naturally introduce new vulnerabilities. If administrators and security analysts aren't aware of how these vulnerabilities work, they can pose significant risks and impacts.

Who is it for?

Red Teamers
Penetration Testers
System Administrators

Here is a link to the lab: https://iml.immersivelabs.online/labs/cve-2025-32463-sudo-chroot-elevation-of-privilege-offensive

New CTI/OT Lab: Norwegian Dam Compromise: Campaign Analysis
We have received reports of a cyber incident that occurred at the Lake Risevatnet Dam, near Svelgen, Norway, in April 2025. A threat actor gained unauthorized access to a web-accessible Human-Machine Interface (HMI) and fully opened a water valve at the facility. This resulted in an excess discharge of 497 liters per second above the mandated minimum water flow, which persisted for four hours before detection.

This attack highlights a dangerous reality: critical OT systems are increasingly exposed to the internet, making them accessible to threat actors. In this case, control over a dam’s valve system was obtained via an insecure web interface, a scenario that could have had even more severe consequences. A recent report by Censys identified over 400 exposed web-based interfaces across U.S. water utilities alone. The dam incident in Norway exemplifies the tangible risks posed by such exposures. In this lab, you'll be taken through the attack from an offensive viewpoint, including cracking an HMI and fully opening two valves.

Why should our customers care?

OT environments, including dams, energy grids, and oil pipelines, are foundational to national security and daily life. These systems cannot be secured using traditional IT playbooks. As OT becomes more connected, tailored security strategies are critical to prevent unauthorized access and catastrophic failures.

Who is it for?

Incident Responders
SOC Analysts
Threat Hunters
Red Teamers
Penetration Testers
OT Engineers

Here is the link to the lab: https://immersivelabs.online/v2/labs/norwegian-dam-compromise-campaign-analysis
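The four-hour detection gap is the striking part of this incident, and it is the kind of thing a simple process-monitoring rule can close. The sketch below is purely illustrative: the permitted flow, alert window, and readings are invented numbers, not values from the real facility.

```python
# Toy OT monitoring rule: alert when discharge stays above the permitted
# flow for a sustained period. All thresholds here are hypothetical.

PERMITTED_FLOW = 100     # liters/second (invented mandated level)
SUSTAIN_MINUTES = 15     # excess flow must persist this long to alert

def check_flow(readings_per_minute):
    """Return the minute index at which an alert fires, or None.

    readings_per_minute: one flow reading (L/s) per minute.
    """
    excess_run = 0
    for minute, flow in enumerate(readings_per_minute):
        excess_run = excess_run + 1 if flow > PERMITTED_FLOW else 0
        if excess_run >= SUSTAIN_MINUTES:
            return minute
    return None

# A valve forced fully open at minute 30: flow jumps by roughly 497 L/s,
# mirroring the excess discharge reported in the incident.
readings = [100] * 30 + [597] * 60
print(check_flow(readings))  # prints 44 -- an alert within 15 minutes, not 4 hours
```

Even a rule this crude turns a four-hour blind spot into a fifteen-minute one, which is why tailored OT monitoring, rather than IT playbooks alone, matters for facilities like this.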