Immersive Labs Application Security
Network Hardening Lab Recommendations
I've been looking for some good training resources for network hardening. I've been working through the Windows Hardening collection and found it really useful. Are there any other lab recommendations similar to this? In particular, I'm interested in hardening things such as pfSense firewalls, VyOS routers, and Linux endpoints.
It’s Not Magic, It’s Mechanics: Demystifying the OWASP Top 10 for AI

Welcome back to our series, “Behind the Scenes of Immersive One”! The following is a conversation with Sabrina Kayaci, Cybersecurity Engineer for Immersive One, and Rebecca Schimmoeller, Lead Product Marketing Manager. Today, we’re continuing the discussion on our Secure AI capability.

“When developers hear ‘AI Security,’ they either start to sweat or eye-roll. It either feels like a black box where the old rules don’t apply, or it feels like inflated marketing hype. The truth is, AI vulnerabilities aren't magic; they are mostly just new manifestations of the classic flaws we’ve been fighting for decades. Once you map the new threats to the old patterns, the mystique fades. You realize it’s not magic to fear or hype to ignore—it’s just an engineering problem to solve.”

Rebecca: Awesome frame, Sabrina. No matter where you sit on the spectrum—whether you’re anxious about the risks or skeptical of the buzz—AI security doesn't mean starting from zero. Developers should already have the muscle memory for this.

Sabrina: Exactly. We aren't asking them to learn a new language; we're asking them to apply their existing fluency to a new dialect. That’s the core philosophy behind our new OWASP Top 10 for LLMs and GenAI collection. We tackle the problem that AI is often treated as a "new and daunting" field. By framing threats like Supply Chain Vulnerabilities or Excessive Agency as variations of known issues, we accelerate the learning curve. We strip away the "AI mysticism" to reveal the underlying mechanical flaw.

Rebecca: I love "stripping away the mysticism." Let’s talk about how that works, starting with the big one everyone is concerned about—Prompt Injection. How do you take that from "scary AI jailbreak" to something a grounded engineer can fix?

Sabrina: In the media, Prompt Injection is portrayed as this sentient ghost in the machine. In our lab, we treat it as an Input Validation failure. We show that the system is simply confusing "user input" with "system instructions." When a developer sees it through that lens, the eye-roll stops. It’s no longer hype; it’s just mixed context. And they know how to fix mixed context. We show them how to apply that architectural fix to an LLM.

Rebecca: That maps perfectly. But looking at the curriculum, I see we go much deeper than just a standard "Top 10" checklist. Why was it important to go beyond the simple definitions?

Sabrina: Because a definition tells you what something is, but it doesn't tell you how it impacts you. In the new OWASP LLM collection, we focus on Core Mechanics and Attack Vectors. We deconstruct threats like Data and Model Poisoning or Supply Chain vulnerabilities to show you exactly how they infiltrate a system. It’s the difference between knowing what an engine looks like and knowing how to take it apart. You need to understand the mechanics of the vulnerability to understand the potential impact—otherwise, you're just guessing at the fix.

Rebecca: It sounds like we're upgrading their threat modeling software, not just their syntax.

Sabrina: Yes, 100%. Look at Excessive Agency. That sounds like a sci-fi plot about a robot takeover. But when you do the lab, you realize it’s just "Broken Access Control" on steroids. It’s about what happens when you give an automated component too much permission to act on your behalf. Once a developer maps "Excessive Agency" to "Least Privilege," they stop worrying about the robot and start locking down the permissions.
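To make that "Excessive Agency is Least Privilege" mapping concrete, here is a minimal sketch of a deny-by-default tool allowlist for an LLM agent. It is illustrative only and not taken from the lab content; the tool names and the ToolCall structure are assumptions.

```python
# Illustrative sketch: least-privilege tool dispatch for an LLM agent.
# Tool names and the ToolCall shape are hypothetical, not from the lab.
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    name: str
    arguments: dict = field(default_factory=dict)


# Explicit allowlist: this agent may read tickets and draft replies,
# but it is never handed destructive or administrative tools.
ALLOWED_TOOLS = {
    "read_ticket": lambda args: f"ticket {args['id']} contents...",
    "draft_reply": lambda args: f"drafted reply: {args['text'][:80]}",
}


def dispatch(call: ToolCall) -> str:
    """Run a model-requested tool call only if it is on the allowlist."""
    handler = ALLOWED_TOOLS.get(call.name)
    if handler is None:
        # The same deny-by-default stance you'd apply to any service account.
        return f"DENIED: tool '{call.name}' is not permitted for this agent"
    return handler(call.arguments)


if __name__ == "__main__":
    print(dispatch(ToolCall("read_ticket", {"id": 42})))
    print(dispatch(ToolCall("delete_all_tickets")))  # denied: excessive agency
```

The allowlist is deliberately boring: the "robot takeover" risk shrinks to an access-control question once the agent can only ever call the handful of functions you explicitly granted it.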
Rebecca: Is the goal to get them through all ten modules to earn a Badge?

Sabrina: The OWASP Top 10 for LLMs Badge is the end state. It proves you have moved past the "sweat or eye-roll" reactive phase. To your manager, it signals you have a proactive, structured understanding of the AI risk landscape and can speak the language of secure AI. There’s no hype in that. Only value-add to you and your team.

Final Thought

Our OWASP Top 10 for LLMs collection is the antidote to AI security angst. For the developer, it demystifies the threat landscape, proving that their existing security instincts are the key to solving new problems. For the organization, it ensures that your AI strategy is built on a bedrock of engineering reality, rather than a shaky foundation of fear.

[Access Collection]
Architecting at Speed: Mastering Secure Development with OpenAI Codex

Welcome back to our series, “Behind the Scenes of Immersive One”! The following is a conversation with Ben McCarthy, Lead Cybersecurity Engineer for Immersive One, and Rebecca Schimmoeller, Lead Product Marketing Manager. Today, we’re continuing the discussion on our Secure AI capability.

“There is a misconception that security is the enemy of development speed. But with AI, the opposite is true. If you don't have security engineered into your AI workflow, you can't actually go fast—because you’re constantly stopping to fix 'trash code' or patch vulnerabilities. The developers who win in this era aren't just the ones coding faster; they are the ones architecting systems that are secure by design, even at AI speeds.”

Rebecca: That’s a crucial distinction, Ben. We often hear that AI is a "firehose" of productivity, but without control, that firehose just creates a mess. It seems like the role of the developer is shifting from "writing lines" to managing this high-velocity output. How does the new Building with AI: Codex CLI collection help them make that shift?

Ben: By giving them the controls they need to harness that speed safely. If you let OpenAI’s Codex run without guardrails or understanding, you get velocity, sure—but you also get risk. We designed this collection to empower developers to become the Security Architects of their own workflows. We are leveraging the Azure AI Foundry capability to give learners real, secure access to these models. The goal isn't to teach you how to hit "Tab" to autocomplete; it's to teach you how to rigorously evaluate, guide, and constrain what the AI produces using a command-line tool like Codex, so you can ship code that is both fast and bulletproof.

Rebecca: So it’s about elevating the human’s role to "Architect." Let’s talk specifics given what the collection covers—how did you instill that mindset?

Ben: We start by ensuring developers know the power of what they can do with Codex: how to get the best out of the models in this CLI tool. We go over effective prompt engineering, tool usage, and how AI can help with "Greenfield" projects (net-new builds) and "Brownfield" projects (legacy codebases). This is a critical skill for a lead engineer. AI is great at generating new code (greenfield), but it can be dangerous when it doesn't understand the hidden dependencies of a ten-year-old application (brownfield). We teach engineers how to spot those context gaps: the key things the AI might miss.

Rebecca: I saw "specification-driven development" was a big part of your roadmap, too. How does that fit into the "speed" theme?

Ben: This is the ultimate accelerator. Instead of writing the code line-by-line, you write the "spec"—the blueprint—and let Codex handle the implementation details. It’s not about doing less work; it’s about doing higher-leverage work. You define the logic and security constraints, and the AI handles the boilerplate. It shifts the developer’s brain from "how do I type this function?" to "what should this system actually do?"

Rebecca: That sounds like a powerful approach, Ben. But what about the security risks? If developers are offloading implementation to Codex, how do they avoid leaking data or introducing bugs?

Ben: That’s non-negotiable. In the Guardrails lab, we show learners how to build a safety net. We teach practical methods for stripping PII (Personally Identifiable Information) and using hooks to sanitize inputs before they reach the model. It gives developers the confidence to use these tools freely, knowing they have already engineered the safety mechanisms to protect their org.
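A minimal illustration of the kind of pre-model sanitization Ben is describing is sketched below. It is an example rather than the lab's implementation: the regex patterns and the sanitize function are assumptions, and a real deployment would lean on a vetted PII and secrets-detection library.

```python
# Illustrative guardrail hook: redact obvious PII and credentials from a prompt
# before it ever reaches the model. Patterns are examples only.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
    (re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*[^\s,]+"), r"\1=<REDACTED>"),
]


def sanitize(prompt: str) -> str:
    """Strip common PII and credential patterns from an outbound prompt."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt


if __name__ == "__main__":
    raw = "Debug this: user=jane.doe@example.com, api_key=sk-live-abc123, ssn 123-45-6789"
    print(sanitize(raw))
    # Debug this: user=<EMAIL>, api_key=<REDACTED>, ssn <SSN>
```

Wired in as a pre-request hook, a function like this can run on every outbound prompt automatically, which is the "safety net" idea in practice.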
Rebecca: I saw a lab in the collection called "Tools and MCP" (Model Context Protocol). Is that where you get into advanced workflows?

Ben: Exactly. This is where we give developers the keys to become a force multiplier. We show users how to connect Codex to other tools. That’s the ideal definition of ROI for developers: you’re automating the tedious "check your work" phase, allowing you to ship secure code faster without burning out on manual review.

Rebecca: It feels like that approach accepts today’s AI era realities for what they are and finds the strategic advantages… pushing developers towards productivity and security gains with real mastery. And just like the Claude collection, users have access to a Demonstrate Lab to prove that mastery, am I right?

Ben: Absolutely. The Demonstrate Lab challenges users to build a solution that’s efficient, functional, and secure. It proves that you aren't just an "AI user"—you are an AI Engineer who understands the capabilities the collection covers.

Final Thought

Our Building with AI: Codex collection is about upgrading the developer’s toolkit. For the organization, it ensures AI adoption is secure and scalable. For the engineer, it removes the drudgery of boilerplate, freeing you to focus on the creative, architectural challenges that drive real value.

Ready to upgrade your workflow? [Access Collection]
Beyond the Chat Window: How to Securely Vibe Code with Anthropic’s Claude

Welcome back to our series, “Behind the Scenes of Immersive One”! The following is a conversation with Robert Klentzeris, Application Security Content Engineer for Immersive One, and Rebecca Schimmoeller, Lead Product Marketing Manager. Today, we’re deep diving into one facet of our Secure AI capability.

“We are seeing a shift from ‘chatting with AI’ to ‘inviting AI into the terminal.’ With the release of tools like Claude Code, developers aren't just copying and pasting snippets from a browser anymore. They are letting an agent live directly in their CLI, giving it permission to read file specs, run commands, and architect entire features. It’s a massive leap in capability—but also in trust.”

Rebecca: That is the big shift we’re hearing about, Rob. The market is obsessed with the idea of "vibe coding" right now—just describing what you want and letting the AI handle the implementation details. But for a security leader, the idea of an AI agent having direct access to the CLI (Command Line Interface) sounds terrifying. It feels less like a helper and more like handing a stranger your SSH keys.

Rob: That is exactly what makes Claude Code different from your standard autocomplete tools. You aren't just getting code suggestions; you are interacting with an agent that has tooling capabilities—like using MCP (Model Context Protocol) or running slash commands. If you don't know what you're doing, you might accidentally let the agent produce insecure code or mishandle PII in a way that’s harder to spot than a simple copy-paste error. This new collection is about bridging that gap: how do we embrace the speed of vibe coding without sacrificing the security of our platform?

Rebecca: So it’s about safe integration. Let’s get into the weeds—what does the "safe" version of this look like in the actual Immersive One labs you created?

Rob: We start by defining common patterns used in AI coding agents, such as manual prompts, and how you can write them so Claude generates secure code. We then go a little deeper and explore how you can give your agents more autonomy and less intervention while staying secure through spec-driven development. From there, we move to the components of Claude Code and show how to leverage advanced features, such as custom slash commands and skills, that can enhance the security of both large legacy and greenfield projects.

Rebecca: I noticed your roadmap included a focus on "Guardrails" and "Claude Agents." Is this where we stop "trash code" from hitting production?

Rob: Exactly. This is unique to the agentic workflow. In the Claude Agents lab, we teach users how to set up a "Reviewer Agent" that audits the code generated by the first agent. We also have a dedicated lab on Guardrails, focusing on stripping PII (Personally Identifiable Information) before Claude ever sees the data. It’s about ensuring that even if the AI is "vibing," the security protocols remain rigid.
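As a rough picture of the "Reviewer Agent" pattern Rob mentions, here is a generic sketch rather than the lab's implementation. The generate_llm and review_llm callables stand in for whichever model client you actually use, and the review criteria are illustrative.

```python
# Illustrative "generate, then review" loop: a second model call acts as a
# security gate on whatever the first agent produced. The two callables are
# placeholders for your real model client (they take a prompt, return text).
from typing import Callable

REVIEW_PROMPT = """You are a security reviewer. Examine the following code change.
Reply APPROVE if it is safe, or list concrete findings (injection, hard-coded
secrets, missing validation, excessive permissions) if it is not.

--- CODE UNDER REVIEW ---
{code}
"""


def generate_and_review(
    task: str,
    generate_llm: Callable[[str], str],
    review_llm: Callable[[str], str],
) -> str:
    """Generate code for a task, then gate it behind a reviewer model."""
    candidate = generate_llm(task)
    verdict = review_llm(REVIEW_PROMPT.format(code=candidate))
    if not verdict.strip().upper().startswith("APPROVE"):
        # Block the change and surface the findings instead of shipping it.
        raise ValueError(f"Reviewer rejected the change:\n{verdict}")
    return candidate
```

The point of the design is that nothing the first agent "vibes" into existence reaches a branch until the reviewer step passes.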
Rebecca: That sounds incredible for the security team, but what about the developer? If I’m used to just doing my thing, head down to deliver on time, won’t specification-driven development cramp my style?

Rob: Fun fact: it actually makes you faster. Think of the "spec" as the prompt that saves you ten revisions. At Immersive, we focus heavily on ROI and removing pain for users. In this case, we show developers how to use slash commands and hooks to automate the boring stuff. When you learn to use these tools properly, you stop wrestling with the AI and start conducting it. And because these labs are hands-on, with real Claude Code access in a secure sandbox, you can experiment with these powerful agents without worrying about breaking your own local environment. Your manager will love that too.

Rebecca: Ha! You’re right. It sounds like we’re giving users a safe place to crash-test the car before they drive it. And I see you wrap it all up with a "Demonstrate" lab?

Rob: We do. We want to prove competence. The Demonstrate Lab is a capstone where you have to combine everything—usage, security, and productivity. You have to prove you know how to use Claude Code to build something functional and secure. It validates that you aren't just generating code; you're engineering with it.

Final Thought

Our Building with AI: Claude Code collection isn't just another coding tutorial. It is a blueprint for the agentic future of development. For you, the developer, it turns Claude from a vibe code buddy into a fully integrated, secure pair programmer. For your organization, it transforms a potential security risk into a governed, high-speed workflow.

Want to get started? [Access Collection]
Trick or Treat on Specter Street: Morphy's Mansion Challenge

I understand that move_logger is the vulnerable program, and I have tried a few methods to exploit it. However, where is token.txt? Has anyone managed to find it? "Whatever means necessary" is quite broad. Any hints from anyone?
CVE-2022-26134 (Confluence) – OGNL Injection

For Question 6 ("Look at the first exploit attempt by this attacker. What command did they run?"): I'm wondering why, when I submit the commands found in the logs, the answer still comes back as wrong, even when I enter "X-Cmd-Response" as the command or the entire string found. Are they expecting a different format or snippet of the code, or one of the GET requests instead?
CVE-2022-30190 (Follina) ms-msdt Scheme Abuse – Offensive Question 11

Hey guys, I'm wondering whether this error is supposed to be generated when uploading the payload for "Question 11: In a browser, visit http://<TARGET_IP>:8080, upload the payload.docx file, then press Submit and Execute". After choosing the file via Browse, this sometimes works, but after executing, nothing seems to happen, even after 30 seconds of waiting.
Snort Rules: Ep.9 – Exploit Kits

I am pulling my hair out over question 8: "Create a Snort rule to detect the third GET request in the second PCAP file, then submit the token." This one should do it, but it is not working:

alert tcp any any -> any any (msg:"detect the third GET request"; content:"e31e6edb08bf0ae9fbb32210b24540b6fl"; sid:1000001)

I have tried so many rules based on the first GET header and am still unable to get the token. Any tips?
Stuck On Secure Spring Developer (Beginner) URL Parameters Challenge

The lab is about remediating a vulnerability by changing a GET request to a POST request in order to keep sensitive login information out of the URL parameters. But basically I don't know how I need to go about changing the code (apart from changing "GET" to "POST" on the login form and in a backend method). I'm at a total loss on this one, so I'd really appreciate some guidance or an example. I wasn't sure if I should also be making changes to the mapping on the controller (although this isn't mentioned in the lab). These are the changes I have made so far:

<form th:action="@{/login}" method="POST">

protected LoginProcessingFilter(AuthenticationManager authenticationManager) {
    super(new AntPathRequestMatcher("/login", "POST"));
    setAuthenticationManager(authenticationManager);
    setAuthenticationSuccessHandler(new SimpleUrlAuthenticationSuccessHandler("/home"));
}

Thanks in advance for any assistance.
Advanced CTF Challenge: Serial Maze

Need a hint on Serial Maze. I have gone through the HTML and JavaScript but couldn't find the token. Using dirb, I found one endpoint, "http://10.102.17.87/2257", and its response is "What a pickle... You need the secret to continue." Not sure how to proceed from here.

Thanks, Sabil