news & announcements
42 TopicsNew CTI Lab: Lotus Blossom Notepad ++ Campaign: Analysis
In January 2026, threat researchers at Rapid7 detailed a sophisticated supply chain attack targeting the Notepad++ update mechanism. Between July and October 2025, attackers compromised the project’s distribution infrastructure to deliver a custom, undocumented backdoor dubbed Chrysalis. By intercepting update requests, the threat actor distributed malicious NSIS installers to a targeted set of victims across Southeast Asia and Australia. What is this about? Supply chain compromises represent one of the most dangerous threat vectors today. In this campaign, the Chinese state-sponsored group Lotus Blossom (also known as Billbug or Thrip) hijacked a trusted software update pipeline. The attack involves complex DLL sideloading techniques—abusing a renamed Bitdefender binary to execute a multi-layered encrypted payload. Once the Chrysalis backdoor is active, it provides the attackers with persistent, feature-rich remote access to the victim's environment. Why is this critical for you and your team? As organizations rely on legitimate third-party utilities like Notepad++, trust in the update process is paramount. This intrusion highlights how state-sponsored actors can weaponize that trust to bypass perimeter defences. Understanding the Chrysalis infection chain—from the initial NSIS installer to the triple-layer decryption of its C2 configuration—is vital for detecting similar "living-off-the-land" and sideloading tactics in your own network. If your team manages software deployments or monitors developer environments, you must be cognisant of how attackers leverage legitimate, signed binaries to mask malicious behaviour. This lab provides a deep dive into the specific obfuscation and persistence strategies used by one of the region's most persistent threat groups. Who is the content for? Security Analysts Threat Researchers Here is a link to the lab: Lotus Blossom Campaign: Analysis36Views3likes1CommentNew CTI Lab: CVE-2026-23744 (MCPJam RCE): Offensive
On January 16, 2026, advisories were released covering a critical vulnerability in MCPJam Inspector, the local-first development platform for MCP servers. The Latest version, 1.4.2 and earlier, is vulnerable to a remote code execution (RCE) vulnerability, a trivial yet highly impactful vulnerability that allows an attacker to send a crafted HTTP request that triggers the installation of an MCP server, leading to RCE. What is this about? Model context protocol (MCP) has become more popular as a way to connect applications and services together that use AI, such as connecting tools to your OpenAI account, so the AI can help you work with the tool, perform tasks on your behalf, or work as webhooks between tools. MCPJam is an example of a tool that makes these processes easier and more convenient. Why is this critical for you and your team? As AI adoption across industries and sectors rises, products and services have been released to help people interact with AI pipelines. With MCPJam and tools like it, you can test and develop MCP (model context protocol) servers, emulate deployments, and debug your workflow, making your entire MCP development pipeline much smoother.If you're using any tools like this, where you share you API keys and other sensitive data with the tool, you need to be cognisant of the risks that these tools carry, as many others are vulnerable to basic misconfigurations that can lead to serious impacts. Who is the content for? Penetration Testers Security Analysts Incident Responders Here is a link to the lab: CVE-2026-23744 - MCPJam: Offensive This application has no logging available at all, so no Defensive Variant of this lab40Views1like0CommentsNew CTI Lab: CVE-2026-21858 (n8n RCE): Offensive
On January 7, 2026, Cyera Research Labs released an advisory for "Ni8mare," a critical unauthenticated remote code execution vulnerability (CVE-2026-21858) in n8n with a CVSS score of 10.0. The flaw stems from a "Content-Type Confusion" bug in the Form Webhook node, which allows attackers to override internal file paths, thereby enabling the arbitrary disclosure of sensitive data, including database.sqlite and the system's unique encryption key. This vulnerability can be exploited to forge administrative sessions and achieve full system takeover. What is this about? n8n has had a lot of vulnerabilities over the last year or so, in particular, these vulnerabilities have required authentication to exploit, meaning the attacker would already need access to the n8n server to leverage the vulnerability. This critical vulnerability has a CVSS score of 10.0, and attackers can achieve unauthenticated remote code execution - making it a poignant discussion point for potential future vulnerabilities and attacks, given that n8n will likely receive more attention from vulnerability researchers and threat actors alike. Why is this critical for you and your team? n8n is very popular with organization and the wider community alike, with over 70,000 active instances exposed to the internet; there is a reasonably wide attack surface to be exploited. If you or your team uses n8n, and there is a reasonably high probability that you do (for example in human resources, project planning, news feeds etc) then learning about and mitigating this vulnerability is essential to protect yourself against attacks. Who is the content for? Penetration Testers Security Analysts Incident Responders Here is a link to the lab: CVE-2026-21858 (n8n RCE): Offensive34Views2likes2CommentsNew CTI Lab: CVE-2025-55182 (React - Next.js)
On December 3, 2025, the cybersecurity world received news of a critical vulnerability in the React 19 ecosystem. This critical flaw, tracked as CVE-2025-55182 with a CVSS score of 10.0, affects React Server Components (RSC). A major issue, this flaw allows unauthenticated attackers to achieve Remote Code Execution (RCE) on vulnerable servers by sending a specially crafted HTTP request. AI Hallucination Within the first 24 hours of the vulnerability being announced, a POC was published to GitHub, which looked convincing and, when tested, appeared to achieve the goal successfully, resulting in Code Execution. It turned out that this POC, which was picked up and circulated by researchers and social media, was actually an AI Hallucination. The AI had crafted a deliberately misconfigured and vulnerable server and a POC that appeared to match the requirements of the exploit, but only actually triggered the misconfiguration. What is this about? CVE-2025-55182 is a critical Insecure Deserialization vulnerability. It affects React Server Components (RSC) within the React 19 ecosystem. The flaw is located in the server-side logic that handles the React Flight protocol, which is used for client-to-server interactions, specifically Server Functions or Server Actions. An unauthenticated attacker can execute a specially crafted HTTP request containing a malicious, serialized payload. The vulnerable server-side code fails to validate this payload, allowing the attacker to achieve remote code execution on the server. Why is this critical for you and your team? This critical vulnerability has a CVSS score of 10, is fairly trivial to exploit, and has significant impacts when successfully exploited, given that its impact includes unauthenticated remote code execution. If your team uses React, React Server Components (RSC), or similar, are at risk. This flaw impacts the standard, default configurations of high-profile frameworks like the Next.js App Router, which many organizations rely on for building high-performance sites. Who is the content for? Security Analysts Penetration Testers Incident Responders Vulnerability Management Teams Here is a link to the lab: CVE-2025-55182 (React - Next.js)155Views3likes1CommentIt’s Not Magic, It’s Mechanics: Demystifying the OWASP Top 10 for AI
Welcome back to our series, “Behind the Scenes of Immersive One”! The following is a conversation with Sabrina Kayaci, Cybersecurity Engineer for Immersive One, and Rebecca Schimmoeller, Lead Product Marketing Manager. Today, we’re continuing the discussion on our Secure AI capability. “When developers hear ‘AI Security,’ they either start to sweat or eye-roll. It either feels like a black box where the old rules don’t apply, or it feels like inflated marketing hype. The truth is, AI vulnerabilities aren't magic; they are mostly just new manifestations of the classic flaws we’ve been fighting for decades. Once you map the new threats to the old patterns, the mystique fades. You realize it’s not magic to fear or hype to ignore—it’s just an engineering problem to solve.” Rebecca: Awesome frame, Sabrina. No matter where you sit on the spectrum—whether you’re anxious about the risks or skeptical of the buzz—AI security doesn't mean starting from zero. Developers should already have the muscle memory for this. Sabrina: Exactly. We aren't asking them to learn a new language; we're asking them to apply their existing fluency to a new dialect. That’s the core philosophy behind our new OWASP Top 10 for LLMs and GenAI collection. We tackle the problem that AI is often treated as a "new and daunting" field. By framing threats like Supply Chain Vulnerabilities or Excessive Agency as variations of known issues, we accelerate the learning curve. We strip away the "AI mysticism" to reveal the underlying mechanical flaw. Rebecca: I love "stripping away the mysticism." Let’s talk about how that works, starting with the big one everyone is concerned about—Prompt Injection. How do you take that from "scary AI jailbreak" to something a grounded engineer can fix? Sabrina: In the media, Prompt Injection is portrayed as this sentient ghost in the machine. In our lab, we treat it as an Input Validation failure. We show that the system is simply confusing "user input" with "system instructions." When a developer sees it through that lens, the eye-roll stops. It’s no longer hype; it’s just mixed context. And they know how to fix mixed context. We show them how to apply that architectural fix to an LLM. Rebecca: That maps perfectly. But looking at the curriculum, I see we go much deeper than just a standard "Top 10" checklist. Why was it important to go beyond the simple definitions? Sabrina: Because a definition tells you what something is, but it doesn't tell you how it impacts you. In the new OWASP LLM collection, we focus on Core Mechanics and Attack Vectors. We deconstruct threats like Data and Model Poisoning or Supply Chain vulnerabilities to show you exactly how they infiltrate a system. It’s the difference between knowing what an engine looks like and knowing how to take it apart. You need to understand the mechanics of the vulnerability to understand the potential impact—otherwise, you're just guessing at the fix. Rebecca: It sounds like we're upgrading their threat modeling software, not just their syntax. Sabrina: Yes, 100%. Look at Excessive Agency. That sounds like a sci-fi plot about a robot takeover. But when you do the lab, you realize it’s just "Broken Access Control" on steroids. It’s about what happens when you give an automated component too much permission to act on your behalf. Once a developer maps "Excessive Agency" to "Least Privilege," they stop worrying about the robot and start locking down the permissions. Rebecca: Is the goal to get them through all ten modules to earn a Badge? Sabrina: The OWASP Top 10 for LLMs Badge is the end state. It proves you have moved past the "sweat or eye-roll" reactive phase. To your manager, it signals you have a proactive, structured understanding of the AI risk landscape and can speak the language of secure AI. There’s no hype in that. Only value-add to you and your team. Final Thought Our OWASP Top 10 for LLMs collection is the antidote to AI security angst. For the developer, it demystifies the threat landscape, proving that their existing security instincts are the key to solving new problems. For the organization, it ensures that your AI strategy is built on a bedrock of engineering reality, rather than a shaky foundation of fear. [Access Collection]149Views1like0CommentsArchitecting at Speed: Mastering Secure Development with OpenAI Codex
Welcome back to our series, “Behind the Scenes of Immersive One”! The following is a conversation with BenMcCarthy, Lead Cybersecurity Engineer for Immersive One, and RebeccaSchimmoeller, Lead Product Marketing Manager. Today, we’re continuing the discussion on our Secure AI capability. There is a misconception that security is the enemy of development speed. But with AI, the opposite is true. If you don't have security engineered into your AI workflow, you can't actually go fast—because you’re constantly stopping to fix 'trash code' or patch vulnerabilities. The developers who win in this era aren't just the ones coding faster; they are the ones architecting systems that are secure by design, even at AI speeds.” Rebecca: That’s a crucial distinction, Ben. We often hear that AI is a "firehose" of productivity, but without control, that firehose just creates a mess. It seems like the role of the developer is shifting from "writing lines" to managing this high-velocity output. How does the new Building with AI: Codex CLI collection help them make that shift? Ben: By giving them the controls they need to harness that speed safely. If you let OpenAI’s Codex run without guardrails or understanding, you get velocity, sure—but you also get risk. We designed this collection to empower developers to become their own Security Architects for their workflows. We are leveraging the Azure AI Foundry capability to give learners real, secure access to these models. The goal isn't to teach you how to hit "Tab" to autocomplete; it's to teach you how to rigorously evaluate, guide, and constrain what the AI produces using the command line tool like Codex so you can ship code that is both fast and bulletproof. Rebecca: So it’s about elevating the human’s role to "Architect." Let’s talk specifics given what the collection covers—how did you instill that mindset? Ben: We start by ensuring developers know the power of what you can do with Codex. How to get the best out of your models in this CLI tool. We go over effective prompt engineering, tool usage, and how AI can help with "Greenfield" projects (net-new builds) and "Brownfield" projects (legacy codebases). This is a critical skill for a lead engineer. AI is great at generating new code (greenfield), but it can be dangerous when it doesn't understand the hidden dependencies of a ten-year-old application (brownfield). We teach engineers how to spot those context gaps, key stuff that the AI might miss. Rebecca: I saw "specification-driven development" was a big part of your roadmap, too. How does that fit into the "speed" theme? Ben: This is the ultimate accelerator. Instead of writing the code line-by-line, you write the "spec"—the blueprint—and let Codex handle the implementation details. It’s not about doing less work; it’s about doing higher-leverage work. You define the logic and security constraints, and the AI handles the boilerplate. It shifts the developer’s brain from "how do I type this function?" to "what should this system actually do?" Rebecca: That sounds like a powerful approach, Ben. But what about the security risks? If developers are offloading implementation to Codex, how do they avoid leaking data or introducing bugs? Ben: That’s non-negotiable. In the Guardrails lab, we show learners how to build a safety net. We teach practical methods for stripping PII (Personally Identifiable Information) and using hooks to sanitize inputs before they reach the model. It gives developers the confidence to use these tools freely, knowing they have already engineered the safety mechanisms to protect their org. Rebecca: I saw a lab in the collection called "Tools and MCP" (Model Context Protocol). Is that where you get into advanced workflows? Ben: Exactly. This is where we give developers the keys to become a force multiplier. We show users how to connect Codex to other tools. This is the ideal definition of ROI for developers. You’re automating the tedious "check your work" phase, allowing you to ship secure code faster without burning out on manual review. Rebecca: It feels like that approach accepts today’s AI era realities for what they are and finds the strategic advantages… pushing developers towards productivity and security gains with real mastery. And just like the Claude collection, users have access to a Demonstrate Lab, to prove that mastery, am I right? Ben: Absolutely. The Demonstrate Lab challenges users to build a solution that’s efficient, functional, and secure. It proves that you aren't just an "AI user"—you are an AI Engineer who understands the capabilities the collection covers. Final Thought Our Building with AI: Codex collection is about upgrading the developer’s toolkit. For the organization, it ensures AI adoption is secure and scalable. For the engineer, it removes the drudgery of boilerplate, freeing you to focus on the creative, architectural challenges that drive real value. Ready to upgrade your workflow? [Access Collection]New CTI Lab: Shai-Hulud 2.0: Analysis
In late November/early December 2025, a set of critical software supply chain intrusions took place when the highly dangerous Shai-Hulud 2.0 worm was used to steal GitHub, Cloud, and other credentials and secrets by gaining access to developer machines through the use of a malicious npm package installation. What is this about? By abusing the inherent trust in the npm ecosystem, Shai-Hulud guarantees execution during the crucial preinstall phase, effectively bypassing many traditional security scans that only review code after installation. Once running, the payload launches a concurrent, parallel attack across your environment: it hunts for local credentials, attempts to steal highly privileged temporary cloud tokens via the Instance Metadata Service (IMDS), and, most critically, can automatically inject itself into every other package the victim maintains on their machine. Why is this critical for you and your team? npm is massively popular, and many of the affected packages are widely used in software development and deployment. Shai-Hulud 2.0 is a devastating self-replicating worm that weaponizes your supply chain to steal highly privileged cloud credentials (IMDS) and establish a permanent C2 backdoor via GitHub Actions if the threat actor decides to set that up. Given the importance of npm packages to developers, customers from any organisation, and across all sectors, it is essential that they understand how this intrusion works to prevent their credentials and secrets from being stolen. Who is the content for? Security Analysts Incident Responders Software Developers/Secure Development teams Cloud Engineers Vulnerability Management Teams Here is a link to the lab: Shai-Hulud 2.0: Analysis30Views0likes0CommentsAnnouncing the Winners of the 2025 Cyber Resilience Customer Awards!
What a year for cyber resilience! As we say goodbye to another Cybersecurity Awareness Month, we are thrilled to celebrate the organizations and individuals who have demonstrated exceptional dedication to proving and improving their cybersecurity posture, defending against emerging threats, and embedding a culture of resilience across their organizations using the Immersive One platform. Collectively, our customers have tackled countless labs and simulations, setting new benchmarks for capability and speed. After crunching the numbers and reviewing the nominations, we're ready to announce just some of the winners who truly excelled in 2025 across the following categories: Emerging Threats Leader Award The Emerging Threats Leader award recognizes organizations and individuals at the forefront of threat detection and threat hunting; proactively identifying risks and strengthening defenses using insights from our Cyber Threat Intelligence labs. 🏆 Emerging Threats Award Organization Winners include: NHS England T-Mobile Arctic Wolf 🏆 Emerging Threats Award Individual Winners include: Steven Glogger, Swisscom Paul Blance, Specsavers Taz Wake, Jones Lang LaSalle Mark Cox, NationalGrid Stephen Wilson, BT Group Cyber Resilience Leader Award This award acknowledges organizations that maximize the full use of the Immersive One platform to fully optimize end-to-end cyber readiness. True cyber resilience goes beyond simply preventing attacks; it encompasses the ability to prove, improve, benchmark and report your cyber resilience. 🏆 Cyber Resilience Leader Award Winners include: Swisscom NHS England Arctic Wolf Darktrace BT Group Secure Development Champions Award This award celebrates organizations and individuals who champion security throughout the software development lifecycle. It recognizes a proactive approach to building secure applications, emphasizing practices like threat modeling, secure coding standards, and rigorous testing using the Immersive One platform to prepare and demonstrate secure coding practices. 🏆 Secure Development Champion Organization Award Winners include: Citigroup GfK Swisscom 🏆 Secure Development Champion Individual Award Winners include: Steffen Wacker, Arctic Wolf Joao Santos, GfK Omkar Joshi, GfK Balaji Kannan, GfK Naresh Sivakumar, GfK Alexander Kolyshkin, EMCD Exercising Excellence Award The Exercising Excellence award recognizes organizations that have excelled in regularly using scenarios on the Immersive One platform to prove their cyber resilience. They have successfully run multiple crisis simulations to regularly exercise their teams and have high levels of participation and engagement. 🏆 Exercising Execellence Award Winners include: Mastercard Citigroup Siemens Energy NHS England Immersive Trailblazer Award This award recognizes individuals who simply love Immersive and have shown exceptional dedication to the platform. They have been amongst our top point scorers since January 1st 2025, completing thousands of labs and truly immersing themselves in the platform. 🏆 Immersive Trailblazer Award Winners include: Mico Marcos, PepsiCo QingKai Ma, Hubbel Community Leader Award Our final award, the Community Leader award, recognizes individual members of the Human Connection Community that have contributed to, and engaged with, both community content and their fellow community members. They have consistently shared tips and advice, engaged with popular threads and participated in community events and meetups, helping to bring the Human Connection community to life. 🏆 Community Leader Award Winners include: netcat steven CyberSharpe autom8on MegMarCyberTrust Nneka_AN Dooley DG400Views4likes4CommentsCommunity Newsletter - October 2025
Let’s see what October had to offer… 🎃 Trick or Treat on Specter St. Have you completed the final lab of Specter St. and found your lost companion, Bones? If you find him before November 28th – and are a member of the Community – you’ll receive a shiny digital badge on your profile. Need a hint, check out the Labs Live recordings or head over to the Help Forum and ask an expert. 🏆 Customer Awards Soon to be revealed, our Customer Awards celebrate some of our incredible customers, and there’s a special award for Community users! All will be revealed soon… 🛡️ The Incident Room This month we started a series of LinkedIn challenges, where we present a cybersecurity or crisis situation along with three choices of how to respond. Voting takes place either through reactions or comments. We later reveal how our cyber experts would have dealt with the situation. Patch Newsday October 2025 - As per usual, the Container 7 team have reviewed the latest Microsoft patches so that you don't have to. 🙌 Special Shout Outs Please join me in thanking this month's most helpful members in our Help & Support Forum. 1. barney 2. edgarloredo 3. LewisMutton 4. DG 5. Dragonstar16 If you'd like to see your name here one day, head on over to the forum and answer a question. 🔮 Looking Forward We’ve got some exciting plans for a redesign of the Community, focussed around what users look for when they visit, and the ability to showcase the latest and greatest content – can’t wait to share it with you all! As always, we want to hear from you. Please give us your feedback on your community experience and let us know what else you'd like to see. Sam98Views1like4CommentsNo More Busy Work: How Programs Automate Personalized Cyber Readiness
Welcome back to our series, “Behind the Scenes of Immersive One”! The following is a conversation with MartinHewitt, Principal Product Manager for Immersive One, and RebeccaSchimmoeller, Lead Product Marketing Manager. “We’ve all seen the spreadsheet of doom. You assign a list of training labs to fifty people, and then you spend the next month chasing them down, manually checking completion statuses, and hoping the content you’ve assigned them is actually relevant—because if it’s not, your learners are just tuning out. It’s an operational nightmare, plain and simple.” Rebecca: Wow, yeah, we hear this constantly from the market, Martin. Leaders are drowning in admin work while trying to build resilience. It feels like we’ve been handing learners a stack of maps and hoping they figure out the route. Meanwhile, busy learners assume the content isn’t worth their time, so disengage. How does the new Programs capability change that dynamic? Martin: That map analogy is actually spot on. Until now, we’ve had Assignments and Collections—which are great, but they are static. Like you said, it’s handing someone a map. Programs is a fundamental shift … a GPS navigation system for learning. Instead of just handing a learner a stack of content and hoping for the best, a Program plots the optimal route based on their initial skill level. It re-routes them if necessary using logic, and it shows the manager if they fall behind schedule. We aren’t just looking at completion anymore; we are looking at flow. Rebecca: I love the "GPS" concept. But let’s make this real for our customers. What does one of these "routes" actually look like? Can you give us a concrete example of a path a team might take? Martin: Absolutely. Let’s look at the SOC Analyst Program. It doesn’t just start with a generic to-do list. It starts with an Adaptive Assessment. Based on those results, if the system sees a user is proficient and capable, it will route them into content that speaks to their level of knowledge and experience, rather than a one-size-fits-all (or, more often none) route. We see this for Cloud Security too. Engineers who know AWS inside-out don't need to waste time on S3 Buckets 101. The Program fast-tracks them to the advanced Cloud Defense scenarios. It’s about respecting their time Rebecca: That’s a perfect segue to the learner’s experience. We talk a lot about the manager’s benefit, but honestly, if I’m an analyst, why should I care? How does this make my day or professional life better? Martin: If you’re a learner, the biggest benefit is that you stop doing "busy work." Nothing kills morale faster than being a senior engineer forced to click through beginner labs just to get a completion checkmark. With Programs, the system recognizes your skill level immediately. You get to skip the stuff you already know and focus on the challenges that actually help you grow. Plus, because it’s a cohesive journey, you always know why you are doing a task. You aren’t just completing a random lab; you are moving through a cyber-narrative—from detection to analysis to remediation. It feels less like homework and more like a mission. Rebecca: So, we’re moving from "did you do it?" to "are you ready?" That sounds like it aligns perfectly with the CISO’s need to prove outcomes. But Martin, what about the manager’s visibility? You mentioned "flow" earlier—how is that different from just tracking who finished a lab? Martin: Right now, if you want to know who is struggling, you usually have to wait until the deadline passes and see who didn't finish. By then, it’s too late. With Programs, we focus on Pace. We capture a time commitment expectation—say, two hours a week—and the system calculates a "Burndown Rate." We can tell you in real-time if a user is Ahead, On Track, or Behind. It’s about finding what I call the "Bread and Valley Joes"—the people who are struggling silently. We want to surface those users to the manager before they fail, as well as highlighting the super-keen folk who really love stretching and testing their skills, we’re showing them as Ahead, making sure they’re spotted and give them the opportunity for recognition. Rebecca: That’s huge for "Management by Exception." You don't need to micromanage the high-performers, but you can quickly help those who are stuck. Martin: Exactly. And we’ve built the intervention right into the platform. You can filter for everyone who is "Behind" or stuck on a specific step—like Cloud Fundamentals—and bulk-message them right there. No more downloading CSVs and running mail merges just to nudge your team. Rebecca: Martin, this is a massive step forward. But knowing you and the engineering team, you’re already looking at what’s next. Can you give us a sneak peek at what’s coming for Programs? Martin: Don’t mind if I do! Right now, we have these amazing "Stock Programs" ready to go. In the New Year, we’re also handing the keys to customers … we’re going to introduce a custom builder. Managers will be able to build a completely bespoke journey tailored to their specific organization, drawing from right across our whole catalog. Things like being able to create your own "onboarding flow" to mirror your exact tech stack and security policies…. That’s when things will get even more exciting. Rebecca: I can't wait to see what customers build when that’s available, Martin. Thanks for walking us through the logic behind this milestone launch. This is major for customer outcomes. Martin: It is. We’re finally moving learners from just "completing tasks" to building real muscle memory. That’s the stuff that benefits their org now, and that they can carry it with them to their next professional opportunity. Final Thought Programs represent a shift that benefits the entire security function. For the organization, it replaces static assignments with an operational engine that measures true readiness against critical threats. For the learner, it transforms training from a checklist into a career-building journey, ensuring they develop skills that last far beyond their current role. Want to see how it works? Don’t miss this demo.46Views0likes0Comments