Feature Focus: Introducing the AI Scenario Generator
In this blog, we’ll cover what this feature is and how you can use it.

For this release, we’ve made creation as easy as possible. Just click Create with AI, add a title, and select options for organisational sector, attack vector, threat actor, and scenario size. This will generate a full scenario, from the briefing to the epilogue. You can even make a cuppa while it works! 🪄✨ Once created, these scenarios can be published – or edited and then published – in your organisation’s catalogue.

But how exactly does it work?

Organisation admins can turn the AI Scenario Generator on and off in the platform settings area, so you’ll need this to be turned on if you want to try it out! Our AI Scenario Generator is currently only available to our Cyber Crisis Simulator customers. It’s based on technology provided by OpenAI, with generations drawing on publicly available data related to crisis management as well as our own Immersive Labs Crisis Sim catalogue.

Organisation admins can choose to use the feature in a layered approach:

- No AI access at all: Your organisation has chosen not to enable the AI Scenario Generator.
- Without scenario sharing enabled: You can generate AI scenarios based only on the inputs shown in the generation box.
- With scenario sharing enabled: The AI will access specific parts of your previously published scenarios when generating new ones, making the new scenario more relevant to your context.

These settings can be updated on the organisation’s settings page. If you’re keen to use the AI Scenario Generator but it’s not enabled in your organisation, you’ll need to discuss this with your internal Organization Administrator. If you’re an Organization Administrator and want to know more about the feature, contact your CSM.

Tell me more about scenario sharing!

If your organisation chooses to also enable scenario sharing, Immersive Labs will include specific information from any previous custom scenarios you’ve published in the temporary “context window” for requests to our third-party AI vendor. A “context window” is an extension of the query sent to an AI model; it exists only while the query is being processed and isn’t saved by any third parties. The third-party AI vendor won’t use any of the information you share to train its models. Shared data is only ever included in this temporary “context window” during generation and won’t be stored by the third-party AI vendor. (A simplified sketch of how such a context window can be assembled appears at the end of this post.)

The shared information includes scenario titles, descriptions, inject titles, and response options. It excludes feedback on response options, exercise information, reporting, account or organisation information fields, and metadata.

Scenario sharing is designed to make the generated scenario more relevant to your particular context. You can still create scenarios using AI without scenario sharing, but the result will likely be more generic and less relevant to your particular organisational context. However, you can still edit the final version to make it more relevant to you – just like with our catalogue scenarios.

Let’s not forget the human in the loop

As with all things AI, we recommend reviewing the AI output before publishing your scenario, to ensure it meets your needs. The AI Scenario Generator currently only generates text content, so you’ll probably want to add rich media, such as images or videos, to your scenario. To get the most out of your crisis simulation, we also recommend enabling, adding, changing, and checking certain elements.
These include:

- Checking that you’re happy with the text formatting and narrative content
- Checking that you’re satisfied with the role listed
- Enabling and adding response feedback or performance indicators
- If you want to capture ranked response data, selecting the ranked options setting and adding a rank (great, good, okay, weak) to each response option to suit your organisation’s preferred situational response
- Turning on response confidence or justifications

Get involved and share your thoughts!

We know that AI is a hot topic, and we’re keen to hear and capture your feedback and suggestions on this first release of our AI Scenario Generator as part of our user research taking place this November and December. If you participate in this research, you’ll be able to share your thoughts and experiences of using our AI tool – and scenario creation more generally – directly with our team. Comment below if you’d like to find out more, and we’ll contact you with further details!

If you’re an Immersive Labs customer, you can find out more about the AI Scenario Generator in our FAQ guide.
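For the technically curious: Immersive Labs hasn’t published its request format, but the “context window” idea described above is easy to picture. The Python sketch below is purely illustrative – the function name, scenario fields, and prompt layout are all invented, not the platform’s actual implementation.

```python
# Hypothetical sketch: assembling a temporary "context window" for a
# generation request. Field names and prompt layout are illustrative only.

def build_context_window(published_scenarios, user_inputs):
    """Combine shared scenario metadata with the user's generation inputs.

    Only titles, descriptions, inject titles, and response options are
    included, mirroring the data-sharing rules described in the post.
    """
    context_lines = []
    for scenario in published_scenarios:
        context_lines.append(f"Title: {scenario['title']}")
        context_lines.append(f"Description: {scenario['description']}")
        for inject in scenario["injects"]:
            context_lines.append(f"Inject: {inject['title']}")
            context_lines.extend(f"Option: {o}" for o in inject["options"])

    # The assembled context exists only for the lifetime of this request;
    # nothing here is persisted by the model provider.
    return (
        "Use the following published scenarios as context:\n"
        + "\n".join(context_lines)
        + f"\n\nGenerate a new scenario: sector={user_inputs['sector']}, "
        + f"attack_vector={user_inputs['attack_vector']}, "
        + f"threat_actor={user_inputs['threat_actor']}, "
        + f"size={user_inputs['size']}"
    )
```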
Where to Start? How Assess and Recommend can Unlock your Potential

What is Assess and Recommend?

The Assess and Recommend feature was created with the end user in mind and helps determine the most appropriate content based on a learner’s knowledge and experience. The assessment leverages computer adaptive testing (CAT), a computer-based assessment that adjusts the difficulty of questions based on how a test taker answers previous ones. CAT is also known as tailored testing because it personalizes the test to the test taker’s ability level. A more personalized assessment allows for a more personalized recommendation. (A toy version of this adaptive loop is sketched later in this post.)

Customized learning paths – NICE Framework

One of the best things about the Assess and Recommend feature is that it creates personalized learning paths aligned to NIST’s Workforce Framework for Cybersecurity (NICE Framework). The NICE Framework, or NIST Special Publication 800-181, provides a structured guideline for defining and categorizing cybersecurity work roles, knowledge, skills, and abilities (KSAs). It aims to standardize the language around cybersecurity tasks and roles, enhancing workforce development, training, and alignment between job requirements and individual qualifications.

Unlike traditional training programs, which tend to be the same for everyone, Immersive Labs uses assessment data to identify which roles in the NICE Framework are most applicable to you. This means users focus on what they need to learn, rather than wasting time on topics they already know. As users upskill, they can retake assessments to receive new recommendations that match their evolving skill level, keeping training relevant and engaging. This dynamic approach is essential in a field where staying current is critical. By aligning with the NICE Framework, the learning paths are tailored to specific roles, such as SOC analyst or pentester, making the training even more effective.

Benefits for organizations and users

For organizations, the Assess and Recommend feature is incredibly valuable. It gives a clear picture of the team’s overall skills, strengths, and weaknesses. This information is crucial for planning targeted training, using resources wisely, and strengthening the organization’s cybersecurity defenses. Additionally, by promoting continuous learning and development, organizations can improve employee satisfaction and retention. Employees are more likely to stay with a company that invests in their growth, recognizing the importance of updated skills for job security and career advancement.

Where can I find this feature?

To find this feature, click the Upskill drop-down and navigate to Recommended Activities. Here, you’ll see a growing list of the assessments currently available in the platform.
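Immersive Labs hasn’t published its scoring model, but the core CAT idea – step the difficulty up or down after each answer and converge on an ability estimate – fits in a few lines of Python. The question bank, step size, and ability estimate below are all illustrative assumptions, not the platform’s actual algorithm (real CAT engines typically use item response theory).

```python
import random

# Hypothetical sketch of a computer adaptive testing (CAT) loop.
# This toy version just moves a difficulty pointer after each answer.

QUESTION_BANK = {d: [f"question at difficulty {d}"] for d in range(1, 11)}

def run_adaptive_assessment(answer_fn, num_questions=10):
    difficulty = 5  # start mid-range
    history = []
    for _ in range(num_questions):
        question = random.choice(QUESTION_BANK[difficulty])
        correct = answer_fn(question, difficulty)
        history.append((difficulty, correct))
        # Step difficulty toward the learner's ability level.
        difficulty = min(10, difficulty + 1) if correct else max(1, difficulty - 1)
    # Crude ability estimate: average difficulty of correctly answered items.
    solved = [d for d, ok in history if ok]
    return sum(solved) / len(solved) if solved else 1.0

# Example: a simulated learner who can handle anything up to difficulty 6.
estimate = run_adaptive_assessment(lambda question, difficulty: difficulty <= 6)
print(f"Estimated ability level: {estimate:.1f}")
```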
Share your thoughts

After completing your first assessment, tell us what recommendation you received in the comments below and share how your upskilling journey is going!

Understanding CTI and What it Means at Immersive Labs

The essence of cyber threat intelligence

CTI involves understanding the who, what, why, and how of cyber threats. It’s about transforming data into actionable intelligence, helping organizations anticipate threats, prepare defenses, and respond effectively. Imagine knowing not just that there’s a storm coming, but precisely where it’ll hit, how strong it’ll be, and what precautions you need to take – that’s the power of CTI in cybersecurity.

How cyber threat intelligence works

Generating CTI is a complex process that begins with gathering data from various sources, including network logs, threat feeds, social media, dark web forums, and cybersecurity agency reports. This raw data is then processed and analyzed for patterns, trends, and indicators of compromise (IoCs), such as malicious IP addresses or file hashes. Advanced techniques, including machine learning and behavioral analysis, help sift through the noise, turning raw data into meaningful insights.

Turning intelligence into action

CTI excels by providing context to security alerts, allowing teams to prioritize their responses based on a comprehensive understanding of the threat landscape. For instance, if CTI identifies a malware strain targeting financial institutions, a bank can proactively strengthen its defenses. This enhances protection and improves incident response efficiency, making containment and remediation faster and more effective.
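As a toy illustration of that processing step, here’s a short Python sketch that pulls two common IoC types – IPv4 addresses and MD5-style hashes – out of raw log text with regular expressions. Real pipelines are far more sophisticated; the patterns and sample log below are purely illustrative.

```python
import re

# Illustrative-only patterns: IPv4 addresses and 32-hex-character
# (MD5-style) hashes.
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
MD5_RE = re.compile(r"\b[a-fA-F0-9]{32}\b")

def extract_iocs(raw_text):
    """Return candidate IoCs found in raw log or report text."""
    return {
        "ip_addresses": sorted(set(IPV4_RE.findall(raw_text))),
        "file_hashes": sorted(set(MD5_RE.findall(raw_text))),
    }

sample_log = (
    "Blocked outbound connection to 203.0.113.45; "
    "dropped payload d41d8cd98f00b204e9800998ecf8427e"
)
print(extract_iocs(sample_log))
# {'ip_addresses': ['203.0.113.45'],
#  'file_hashes': ['d41d8cd98f00b204e9800998ecf8427e']}
```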
So, what is CTI at Immersive Labs?

The Immersive Labs CTI team constantly monitors threats that target our customers’ industries. This includes common vulnerabilities and exposures (CVEs), malware campaigns, and new techniques likely to affect our customers’ cybersecurity landscape. Once we’ve identified a threat our customers should protect themselves against, we respond rapidly to create a lab, so our customers can stay ahead of the cyber threat landscape. Our labs provide all the information needed to understand and defend against threats, along with practical knowledge for using or analyzing them. It’s a very exciting part of the platform. At the time of writing, the team has released over 50 CTI labs on threats this year!

What more can you expect from us?

We know our customers love our CTI labs. Within this community, you can expect:

- Microsoft Patch Tuesday briefings: Patch News Day – We’ll release a brief about the Microsoft Patch Tuesday vulnerabilities each month. These briefings will help you understand what new vulnerabilities mean and how they could impact you.
- New CTI lab releases – We release CTI labs at cyber speed, and you won’t miss a thing. We’ll announce new CTI labs within this community and give quick links to our platform so you can stay as up to date as possible!
- Cyber threat research and intelligence – We complete our own research and often find vulnerabilities in products. We reverse-engineer new malware and analyze threats seldom discussed elsewhere in the industry. When we do, we’ll release research articles here so you can go through the journey too.
- CTI discussions – While we’ll never give answers to labs or guidance on how to complete them, we welcome vibrant and collaborative discussions about threats in our community forums. We’d love to hear your thoughts and interact with you!

Don’t miss a beat

Be sure to “follow” The Human Connection blog to receive notifications about new announcements and articles.

Share your thoughts

Comment below with an introduction of yourself and a bit about a threat you’ve recently analyzed or read about! As more of you introduce yourselves, it’ll be interesting to see how quickly threats are forgotten as the industry moves on to the next one! We spend a lot of time disseminating threat data to create threat labs for our customers, which means looking across threat actors and the industries they attack. What places have you found are best for collecting threat data? Also, have you completed any of our recent threat labs? If so, which one – was it a malware or CVE lab?
Realizing the Full Potential of Drill Mode in Crisis Simulator

Unless you’ve been living under a rock for the last decade or so, you already know cyber crises have become increasingly prevalent, posing significant threats to organizations worldwide. Organizations must continuously assess and improve their technical and non-technical teams’ knowledge, skills, and judgment to combat these challenges. This is where Immersive Labs’ Crisis Simulator comes into play. With single-player, drill, and presentation modes available, organizations can conduct team exercises that simulate real-world cyber crises in a number of different formats to prevent exercise fatigue. This allows organizations to create an exercising-first culture – because one tabletop exercise a year just isn’t enough. Let’s dig into drill mode and learn how it helps users realize the true potential of cyber crisis planning.

Crisis Simulator Drill Mode: What is it?

Drill mode is a multiplayer crisis exercising format that allows participants to assume specific roles and tackle role-specific challenges. The goal is to strengthen their domain knowledge and develop the muscle memory to deal more effectively with an actual crisis. A Crisis Sim administrator can assign clearly defined roles by aligning participants’ tasks with their actual job duties, ensuring the drills reflect real-life scenarios. Upon assignment, players receive notifications about their upcoming exercise, followed by a message signaling the start of their role-specific decision point, or “inject”. Drill mode follows a sequential “pass the baton” relay: only one role has an active task at any given time, and completing that task triggers the next one. Some exercises may require players to complete multiple injects in succession, creating a cohesive and dynamic experience. Individual players’ decisions (good or bad) significantly impact how the scenario unfolds for others, mimicking the interdependence and complexity of real crises.
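That “pass the baton” sequencing is easy to picture as a tiny state machine. The sketch below is not Immersive Labs’ implementation – the class name, roles, and inject data are invented for illustration – but it captures the one-active-task-at-a-time relay just described.

```python
from collections import deque

# Hypothetical sketch of drill mode's "pass the baton" relay: injects are
# ordered, only one role is active at a time, and completing an inject
# hands the baton to the next role in the sequence.

class DrillRelay:
    def __init__(self, injects):
        # Each inject is a (role, task) pair.
        self.queue = deque(injects)

    @property
    def active(self):
        return self.queue[0] if self.queue else None

    def complete_active(self, decision):
        role, task = self.queue.popleft()
        print(f"{role} completed '{task}' with decision: {decision}")
        if self.active:
            next_role, next_task = self.active
            print(f"  -> baton passed to {next_role}: {next_task}")

relay = DrillRelay([
    ("SOC Analyst", "Triage the initial alert"),
    ("Incident Manager", "Declare an incident"),
    ("Comms Lead", "Draft the holding statement"),
])
relay.complete_active("Escalate to IR team")
relay.complete_active("Severity 1 declared")
```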
Benefits for Customers

Drill mode was developed from direct customer feedback: Immersive Labs users wanted to exercise teams with role-relevant content to increase exercising engagement. With drill mode – and unlike competing solutions – participants aren’t expected to answer injects outside their area of expertise, ensuring a more focused and realistic experience. The emphasis on role-specific tasks promotes a more authentic depiction of how crisis responses really unfold. Recognizing that no individual holds all decision-making power during a crisis, drill mode reinforces collaboration and coordination among team members.

Data gathered during a drill scenario allows teams to identify points of weakness and develop targeted training interventions. Drill mode also enables organizations to track the time participants need to complete each inject. This valuable metric provides insights into individual and team performance, giving organizations more data to refine their crisis response strategies and optimize resource allocation.

Embracing Remote-First Work Environments

With the proliferation of remote work, Crisis Simulator’s drill mode adapts nicely to evolving organizational needs. Players receive notifications and contribute when required. This remote-first approach enables seamless participation and ensures teams are well-prepared, regardless of geographical dispersion. Our micro-drills allow key contributors to spend less than 10 minutes per decision point, significantly reducing their time commitment compared to traditional full-day drills. This efficient use of resources maximizes productivity and minimizes disruption to daily operations.

Immerse Yourself

Drill mode is a powerful feature within the Crisis Simulator that unleashes the true potential of cyber crisis planning. By assigning clearly defined roles to participants, organizations can conduct team exercises where each player assumes their actual job role to complete assigned tasks. With a strategic and measurable approach to cyber crisis preparedness, drill mode identifies weaknesses and promotes collaboration among team members. With the ability to track inject completion time, adapt to remote work environments, and offer versatile scenario options, drill mode empowers organizations to build greater resilience in the face of cyber threats.
From Concept to Content: A Deep Dive into Theorizing and Planning a Lab Collection

The decision process

When creating new content, the first step is deciding what to commit to. We consider:

- User demand: Are users frequently requesting a specific topic?
- Evolving landscapes: Is there new technology, or are there industry trends we should cover?
- Internal analysis: Do our cyber experts have unique insights not found elsewhere?
- Overarching goals: Is the content part of a larger initiative, like AI security?
- Regulations and standards: Can we teach important regulations or standards?
- Cyber competency frameworks: Are we missing content from frameworks like NICE or MITRE?

After considering these points, we prioritize one idea for creation and refinement. Lower-priority ideas are added to a backlog for future use.

Feasibility and outcomes

Having a concrete idea is just the beginning. Over the years, we’ve learned that understanding the desired outcomes is crucial in planning. Our core mission is education. We ensure that each lab provides a valuable learning experience by setting clear learning objectives and outcomes. We ask ourselves, “What should users learn from this content?” This ranges from specific outcomes, like “A user should be able to identify an SQL injection vulnerability”, to broader skills, like “A user should be able to critically analyze a full web application”. Listing these outcomes ensures accountability and fulfillment in the final product.

Setting clear learning objectives involves defining what users will learn and aligning these goals with educational frameworks like Bloom’s Taxonomy. This taxonomy categorizes learning into cognitive levels, from basic knowledge and comprehension to advanced analysis and creation, ensuring our content meets users at their level and helps them advance.

Turning big topics into bite-sized chunks

Once a topic is selected, we must figure out how to break huge subject areas into digestible chunks. This is a fine balance: trying to cram too much information into one lab can be overwhelming, while breaking the subject down too far can make it feel disjointed. One good approach is to examine the learning objectives and outcomes set out in the first step, map them to specific subtopics, and finally map those to labs or tasks.

For example, consider this theoretical set of learning outcomes for a “Web scraping with Python” lab collection:

- A user should understand what web scraping is and when it’s useful.
- A user should be able to make web requests using Python.
- A user should be able to parse HTML using Python.
- A user should understand what headless browsers are and when to use them.
- A user should be able to use a headless browser to parse dynamic content on a webpage.

These outcomes can be mapped into two categories: theory outcomes (“A user should understand”) and practical outcomes (“A user should be able to”). Understanding the difference between the two is useful, as a few things can be derived from it – for example, whether to teach a concept in a theory lab (heavy on theoretical knowledge without a practical task) or a practical lab (teaching a concept and exercising it in a practical environment). Using this, the outline for a lab collection can start to take shape, as seen in the table below.

| Learning outcome | Knowledge type | Suggested lab title | Suggested lab content |
| --- | --- | --- | --- |
| A user should understand what web scraping is and when it is useful. | Theory | Web scraping with Python – Introduction | A theory lab showing the basics of web scraping, how it works, and when it is useful. |
| A user should be able to make web requests using Python. | Practical | Web scraping with Python – Making web requests | A practical lab where the user will write a Python script that makes a web request using the “requests” library. |
| A user should be able to parse HTML using Python. | Practical | Web scraping with Python – Parsing HTML | A practical lab where the user will write a Python script that parses HTML using the “beautifulsoup” library. |
| A user should understand what headless browsers are and when they should be used. | Theory | Web scraping with Python – Understanding headless browsers | A theory lab explaining why dynamic content can’t be scraped using previous methods, and how headless browsers solve the issue. |
| A user should be able to use a headless browser to parse dynamic content on a webpage. | Practical | Web scraping with Python – Using headless browsers | A practical lab where the user will write a Python script that scrapes dynamic content from a website using the “puppeteer” library. |
| All | Demonstrate | Web scraping with Python – Demonstrate your skills | A demonstrate lab where the user will complete a challenge that requires knowledge from the rest of the collection. |
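To make the table concrete, here’s the sort of snippet a learner might produce in the “Making web requests” and “Parsing HTML” labs. It uses the real `requests` and `beautifulsoup4` libraries, but it’s a generic illustration rather than actual lab content, and the URL is a placeholder.

```python
import requests
from bs4 import BeautifulSoup

# Illustrative solution for the "Making web requests" and "Parsing HTML"
# outcomes above. The URL is a placeholder, not a real lab target.
response = requests.get("https://example.com/")
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
print(soup.title.string)          # the page's <title> text
for link in soup.find_all("a"):
    print(link.get("href"))       # every hyperlink on the page
```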
Each learning objective is assigned to a lab to ensure thorough and user-friendly coverage. Often, multiple objectives are combined into one lab based on subtopic similarity and the total number of labs in a collection. The example above illustrates the process, but extensive fine-tuning and discussion are needed before content is finalized for development.

Next time…

In part two of this mini-series, you’ll read about the next stage of the content development process, which involves laying the technical foundations for a lab collection.

Don’t miss the series…

You can opt to receive an alert when part two of this series is released by “following” activity in The Human Connection blog using the bell at the top of this page. In the meantime, feel free to drop any questions about the content creation process in the replies. Are there any parts of the planning process you want to know more about?
Feature Focus: Introducing Drag and Drop, Free Text Questions, and Instructional Tasks in the Lab Builder

I’m excited to announce the latest updates to the Lab Builder. Today, we’ve introduced three new task types:

- Drag and drop
- Free-text questions
- Informational/instructional

These exciting new task features will enhance the flexibility and interactivity of your labs, offering even more engaging learning experiences. The new tasks can be added to your lab as usual via the Tasks library. They’re live now, so you can start adding them to your labs right away.

Drag and drop

Drag and drop is a dynamic, interactive task type. Designed to challenge the user’s recognition and matching abilities, it’s perfect for testing their knowledge in various subjects. This task type consists of text-based items and targets: users need to drag each item to its corresponding target. Items and targets are quick to add and edit in the Lab Builder, with a minimum of two items and a maximum of 12. You could use the drag-and-drop task type for questions and answers, completing sentence fragments, or matching terms with definitions.

Free-text questions

This task type requires the user to manually enter text to answer a question. You need to write a question and provide at least one possible answer – but there can be multiple correct answers. You can configure this easily in the Lab Builder.

Fuzzy matching automatically detects answers that are close enough to the correct answer. For example, if the user submits the right answer with a minor spelling error, it’ll still be accepted. This is designed to reduce user frustration and is enabled by default, but you can disable it by turning off the toggle at the bottom. (A toy sketch of how this kind of matching can work appears at the end of this post.)

Finally, you can also provide feedback to users if they get an answer wrong, a bit like a hint. This is useful if you want to point users in the right direction and prevent them from getting stuck.

Instructional tasks

This task type is designed to provide users with vital information, guidelines, or instructions. In the Lab Builder, instructional tasks have the same configuration options as the Briefing panel. They’re particularly useful for explaining what the user is expected to do in a following task, presenting story details, or providing a learning journey as users go through the lab. For example, an instructional task might remind users about specific information they need to answer upcoming tasks, or tell them to log into an application before continuing.

Why are these new features useful?

- Increased engagement: These new question types introduce a gamified element to your custom labs, making learning more interactive and enjoyable.
- Versatile content creation: These features expand the possibilities for creating diverse and engaging labs, allowing you to tailor content to your organization’s unique needs.
- Enhanced learning: Drag and drop encourages active recall and association, while free-text questions promote critical thinking and deeper understanding.

Go and build some engaging labs!

Explore the possibilities and build labs that truly engage your users! For more guidance, visit our Help Center, where there’s ample documentation on using the Lab Builder in more detail.
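Immersive Labs hasn’t published how its fuzzy matching works, but the idea is easy to demonstrate with Python’s standard-library `difflib`. The similarity threshold below is an invented example, not the Lab Builder’s actual tolerance.

```python
from difflib import SequenceMatcher

# Toy fuzzy matcher: accept a submission if it's similar enough to any
# accepted answer. The 0.85 threshold is an illustrative choice, not
# the Lab Builder's actual setting.

def is_close_enough(submitted, accepted_answers, threshold=0.85):
    submitted = submitted.strip().lower()
    return any(
        SequenceMatcher(None, submitted, answer.lower()).ratio() >= threshold
        for answer in accepted_answers
    )

print(is_close_enough("cross-site scirpting", ["cross-site scripting"]))  # True
print(is_close_enough("sql injection", ["cross-site scripting"]))         # False
```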
From Concept to Content: A Deep Dive into Building and Critically Analyzing Labs

Putting it all together

The main bulk of the development work is building the labs. This usually comprises two parts requiring different skill sets: putting together the written portion of the lab (such as the briefing, tasks, and outcomes), and implementing any technical needs for the practical side. While some labs may focus more on one component than the other, this general overview of lab development covers each step of the process.

Developing written content

Regardless of the lab, the written content forms the backbone of the educational material. Even with prior knowledge and planning, additional research is essential to ensure clear explanations. Once research is complete, an outline is drafted to focus on flow, ensuring the information is presented logically and coherently. The final step is turning the outline into the finished written content. Everyone approaches this differently, but personally, I like to note all the points I want to cover in a bullet list before expanding on each one. This method ensures all necessary information is covered, remains concise and clear, and aligns with the learning outcomes and objectives.

Technical implementation

For practical labs, technical setup is key. Practical tasks should reinforce the theoretical concepts covered in the written portion, helping users understand the practical application of what they’ve learned. Before implementing anything, the author decides what to include in the practical section. For a CTI lab on a vulnerability, the vulnerable software must be included, which involves finding and configuring it. For general topics, a custom script or program may be needed, especially for niche subjects. The key is ensuring the technical exercise is highly relevant to the subject matter.

Balancing the difficulty of practical exercises is crucial. Too easy, and users won’t engage; too hard, and they’ll get frustrated. Tasks should challenge users to think critically and apply their knowledge without discouraging them. This requires iterative testing and feedback to fine-tune the complexity. The goal is to bridge the gap between theoretical knowledge and real-world application, making learning effective and enjoyable.

Quality assurance and finishing touches

The development work may be complete, but there’s still plenty to do before releasing the content. We take pride in polishing our content, so the final steps are crucial.

Checking against expectations

Before the official QA process, we review the original plan to spot any discrepancies, such as unmet learning objectives or missing topics. While deviations don’t always require changes, they must be justified.

Assuring quality

A thorough QA process is vital for catching grammatical errors, technical bugs, and general improvements before release. Each lab undergoes three rounds of QA, each performed by a different person – two rounds of technical QA and one for presentation. Steps taken during technical QA include:

- Verifying written content accuracy, flow, and completeness.
- Ensuring all learning objectives are covered.
- Identifying any critical bugs or vulnerabilities that would allow users to bypass the intended solution.
- Suggesting small tweaks or changes to tasks for clarity.
- Assigning relevant mappings (NICE K-numbers, MITRE tags, CWEs).

After technical QA, the lab is reviewed by our quality team to ensure it meets our presentation standards. Once all labs in a collection pass this rigorous QA, they’re released to users. The final step occurs post-release on the platform.

Gathering and implementing user feedback

Users are at the heart of everything we do, and we strive to ensure our content provides real value. While our cyber experts share valuable knowledge, user feedback prevents echo chambers and highlights areas for improvement. After new releases, we conduct an evaluation stage to analyze what went well and where we can improve.

User feedback

We gather both quantitative and qualitative feedback to help us identify root issues and solutions. Quantitative feedback involves analyzing metrics like completion rates and time taken. We also examine specific signals, such as frequently missed questions or labs where users drop out. These are important to note, but we avoid drawing conclusions solely from this data – which is where qualitative feedback comes in.

Qualitative feedback includes user opinions and experiences gathered from feedback text boxes, customer support queries, and direct conversations. These responses are stored and read by the team and provide context beyond raw numbers. Channels such as customer support queries and follow-ups with customers also help us improve our content.
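As a simple illustration of the quantitative side, here’s a hedged Python sketch that computes a completion rate and average time taken for a lab from a list of attempt records. The record format and field names are invented for the example, not the platform’s real data model.

```python
from statistics import mean

# Illustrative only: this attempt-record format is invented.
attempts = [
    {"lab": "Parsing HTML", "completed": True, "minutes": 18},
    {"lab": "Parsing HTML", "completed": False, "minutes": 35},
    {"lab": "Parsing HTML", "completed": True, "minutes": 22},
]

def lab_metrics(records):
    completed = [r for r in records if r["completed"]]
    return {
        "completion_rate": len(completed) / len(records),
        "avg_minutes_when_completed": mean(r["minutes"] for r in completed),
    }

print(lab_metrics(attempts))
# -> completion rate ~0.67, average completion time 20 minutes
```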
Post-release reviews

We conduct post-release reviews at set intervals after content release to analyze the quantitative and qualitative data together. This helps us assess the entire process and identify areas for improvement. These reviews also let us update content with new features, like adding auto-completable tasks for CyberPro, ensuring our content remains current and keeps enhancing the user experience.

Wrapping up

Hopefully, this blog post has provided insight into all the care we put into building and tailoring our content for users. This process has come a long way since we started making labs in 2017! And don’t forget – with our new Lab Builder feature, you can now have a go at creating your own custom labs. If a topic interests you and you want to share that knowledge with your team, making your own lab is a great way to do it!

If there’s any part of the process you’d like to know more about, ask in the comments. Are there any collections that made you think, “Wow, I wonder how this was made”? Let us know!
From Concept to Content: Laying the Foundations of a Lab Collection

Technical planning

At this stage, we address niche technical details not covered in initial planning but crucial for polished content. Below is an example of the question-and-answer process used for the “Web scraping with Python” collection.

Should the practical sections be built on Docker, for optimal speed and modularity, or does the subject matter require a full EC2 instance?

As there are no unusual requirements for the technical portion of the collection (such as needing kernel-level access, network modifications, or third-party software that doesn’t run in containers), the labs can run on Docker. This benefits not only the overall user experience, but also allows image inheritance during development, which is demonstrated a bit later on.

Are there any tools, custom scripts, or system modifications that should be present across the whole piece of content?

The collection is based around writing Python scripts, so Python must be installed on the containers, along with any required web scraping libraries. Some considerations can also be made for user experience, such as installing an IDE like Visual Studio Code on the containers.

How can task verification be implemented so it’s both robust and non-intrusive?

For this collection, implementing auto-completable tasks may be difficult due to the variety of ways users can create solutions, as well as the lack of obvious traces left by web scraping. Instead, it may be more appropriate to insert task solutions into a mock website that needs to be scraped; the user retrieves a solution by completing the task and submits it in an answer box. (A toy sketch of this approach follows below.)

Understanding the technical requirements for a piece of content bridges the gap between planning and development, making this a crucial step. With the key questions answered, it’s time to move on to implementation.
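The post doesn’t show how those mock websites are built, but here’s a minimal sketch of the verification idea using Flask: the page embeds a token that a correct scraping script will surface, and the learner submits that token as the answer. The route, markup, and token value are all invented for illustration.

```python
from flask import Flask

# Hypothetical sketch of the "solution embedded in a mock website" idea.
# A learner's scraper must extract the token; submitting it in the lab's
# answer box verifies the task.

app = Flask(__name__)

SOLUTION_TOKEN = "IL-DEMO-7f3a9c"  # illustrative placeholder, not real lab data

@app.route("/")
def index():
    # The token is only discoverable by parsing the page, so retrieving it
    # demonstrates that the user's scraping script actually works.
    return f"""
    <html>
      <body>
        <h1>Mock product catalogue</h1>
        <span id="flag" hidden>{SOLUTION_TOKEN}</span>
      </body>
    </html>
    """

if __name__ == "__main__":
    app.run(port=8080)
```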
Creating base images

It’s finally time to put fingertips to keyboards and start programming! The first part of implementation creates what all labs in a collection will be built on: a base image. This is a skeleton that provides all the tools and configuration needed by the whole collection, using a concept called image inheritance.

If you’re new to containerization software like Docker, don’t worry – image inheritance is straightforward. Docker containers use images as blueprints to create consistent mini-computers (containers). This is useful for labs because it allows you to quickly create a pre-configured container without the overhead of setting up a virtual machine, saving time and system resources.

This is where image inheritance comes in. Docker images can inherit traits from parent images, similar to how you inherit eye color from your parents. Instead of one central image for all purposes, you create a parent image with the shared requirements and then customize descendant images for specific needs.

Let’s use the “Web scraping with Python” collection as an example again. Think about what would need to be present in each lab:

- An installation of Python, so the user can run scripts.
- A code editor to write the scripts in.
- A mock website for the user to test their scripts on.

The first two requirements are essentially the same in every lab; there’s no real need to change the Python installation or the code editor, and in fact it’s better for them to be identical, resulting in a smoother user experience. The third, however, does need to change: the specific task requirements differ from lab to lab, and the website files need to change to accommodate this. Taking these requirements into account, an inheritance structure like this can be used:

- Base image – Python installation and code editor present
  - Lab 1 – Custom website files
  - Lab 2 – Custom website files
  - Lab 3 – Custom website files
  - …

Structuring images this way saves time, disk space, and development work by reusing shared configurations.

Next time…

In part three of this mini-series, you’ll learn about the final stages of content development: creating labs, quality assurance, and release. To be notified when part three is released, follow The Human Connection blog using the bell icon. Meanwhile, feel free to ask questions about the content creation process or specific collections in the replies. Have you used the Lab Builder feature to make any custom labs yet?
From Feng Shui to Surveys: How User Feedback Shapes Immersive Labs

We’ve all been asked to give product feedback in one way or another – a pop-up message after completing a purchase, an email asking how your visit went, or a poll appearing on your social media feed. They all have one thing in common: a real person behind them, looking for valuable insights. I’m one of those people! My role as Senior UX Researcher involves speaking to Immersive users and gathering their feedback to help the company make tangible improvements. UX, or user experience, is at the heart of what I do. And it’s been around for longer than you might think.

What is UX?

It’s believed that the origins of UX stretch back to 4000 BC and the ancient Chinese philosophy of Feng Shui, the spatial arrangement of objects in relation to the flow of energy – in essence, designing the most user-friendly spaces possible. A short skip forward to 500 BC, and you can see UX at play in the Ancient Greeks’ use of ergonomic principles (also known as human factors), defined as “an applied science concerned with designing and arranging things people use so that the people and things interact most efficiently and safely”. In short, people have been concerned with creating great user experiences for thousands of years.

How does Immersive get feedback?

Bringing you back to the present day, let me walk you through a recent research study undertaken with Immersive Labs users and what their experiences and feedback led to. In May this year, we sent our users a survey asking about their needs for customised content. The feedback was given directly to the team working on the feature, helping to inform their design choices and to confirm or challenge their assumptions about user needs.

In July, we invited users, including Training Manager and community member mworkman, to take part in a pilot study for the Custom Lab Builder, giving them exclusive access to the first iteration of the feature. They could use the builder in their own time, creating real examples of custom labs using their own content and resources. This gave them a realistic experience and highlighted issues along the way.

What does Immersive do with that feedback?

In August, those users joined a call with us to provide their feedback and suggestions. From these calls, we gained insights and statistics that were presented to the entire Product Team, voicing our customers’ needs. We then used this to shape the direction of the Lab Builder feature before its release.

Customers told us they wanted to create labs based on their own internal policies and procedures, which would require more flexible question-and-answer formats for tasks. They also wanted more formatting options and the ability to add media to labs. In response, we increased the number of task format types from three to five – and we’ll continue to add more. We also added the ability to include multiple task formats in the same lab. Users now have the option to upload images and include rich text within their custom labs, enhancing the layout and customisation experience.

The Custom Lab Builder was released in October 2024, with an update pushed in December, and we’re still improving it! Throughout this first quarter of 2025, we’ve released more new features, including drag and drop, free text questions, and instructional tasks in the Lab Builder.

How can you get involved?
Once again, we’ll be calling on our users to give feedback on their experiences with these features, continuing to involve you in our design process to ensure that our products and experiences reflect what users are looking for. Throughout 2025, Immersive Labs will be providing opportunities for our users to come along to feedback sessions, have their opinions heard through surveys, and take part in many more exciting chances to talk to the people behind the product. Follow our Community Forum for hot-off-the-press opportunities! For more guidance on the Lab Builder, visit our Help Center.