Human Connection Challenge: Season 1 – Scanning Walkthrough Guide (Official Version)
Time's up! Congratulations to everyone who completed Lab 2: Scanning from the Human Connection Challenge: Season 1.

In this walkthrough, I'll share some strategies for efficiently completing the lab, based on my perspective as the author. Remember, there are often multiple ways to approach a challenge, so if you used a different method and succeeded, that's perfectly fine! The goal is to learn, and I hope these notes help clarify the steps and reinforce key concepts for the next challenge.

This challenge has now ended, but the lab remains available for practice. While prizes are no longer up for grabs, you can still complete the lab and use this walkthrough guide for support if needed.

I've also used placeholders in some of the commands that would give away an answer directly, so if you see anything enclosed in angle brackets, such as <name server>, please make sure you replace it with the actual value, such as nameserver.

With all that considered, let's get started.

Overview

Task: Identify the name server records of tinytown.bitnet.

1. What is the IP of the first name server for tinytown.bitnet?

First, open a Terminal on the Kali desktop. Next, query the DNS Server IP (found in the Machines panel) about the tinytown.bitnet domain using the nslookup (Name Server Lookup) tool. You're specifically looking for NS (Name Server) records, so you can use the -type=ns parameter with nslookup to specify this:

nslookup -type=ns tinytown.bitnet [DNS Server IP]

The output of this command will return two name servers for the domain, labelled 1 and 2. Your next step is to identify the IP address associated with the first name server (1). To do this, you can use nslookup along with the name server, domain, and DNS Server IP:

nslookup <name server>1.tinytown.bitnet [DNS Server IP]

This command will return an IP address for the name server.

2. What is the IP of the second name server for tinytown.bitnet?

As you've already identified both name servers, you just need to run the previous command with the second (2) name server:

nslookup <name server>2.tinytown.bitnet [DNS Server IP]

You'll then find the IP address associated with it.

Task: Identify port service information for Target 1.

3. What service version is running on port 53?

A network scanning tool like Nmap can help you identify the service version running on a specific port. To do this with Nmap, use the -sV option for service and version detection:

nmap -sV [Target 1 IP Address]

The output will show what service version is running on port 53.

4. What is the full service banner of port 22?

There are a couple of ways to find the full service banner of port 22 – such as with Nmap or Netcat. If you're using Nmap, you can modify the previous command to include the "banner" script along with the port number:

nmap -sV --script=banner [Target 1 IP Address] -p22

The command line will then display the service banner from port 22.

Alternatively, you can use Netcat to manually connect to the SSH server. When a client connects, the SSH server presents a banner that contains version information. To use Netcat, run the nc command with the Target 1 IP address and specify that you want to connect to port 22:

nc [Target 1 IP Address] 22

When you run this command, the banner appears before the terminal hangs.
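For quick reference, here's a consolidated recap of the commands used for questions 1–4. Treat it as a sketch rather than a copy-paste script – keep the same placeholder convention as the rest of this guide and substitute your own values:

```bash
# Q1–2: find the NS records, then resolve each name server to an IP
nslookup -type=ns tinytown.bitnet [DNS Server IP]
nslookup <name server>1.tinytown.bitnet [DNS Server IP]
nslookup <name server>2.tinytown.bitnet [DNS Server IP]

# Q3: service and version detection against Target 1
nmap -sV [Target 1 IP Address]

# Q4: grab the SSH banner on port 22 (Nmap banner script, or Netcat)
nmap -sV --script=banner [Target 1 IP Address] -p22
nc [Target 1 IP Address] 22
```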
Task: Identify a token on one of the ports.

5. What is the token?

With the previous Nmap command, you initially found that three ports were open on Target 1. However, you'll need to do a more thorough network scan to find another open port – one not found by the previous scans. To do this, you can expand your port scan to cover a much wider range by using Netcat to scan ports 1 through 9000:

nc -zvn <Target 1 IP Address> 1-9000

Here, -z scans for listening services without sending any data, -v enables verbose mode, which provides more detailed information, and -n tells Netcat not to resolve hostnames via DNS. This command will reveal a fourth open port. Now, you can use Netcat to connect to this port:

nc <Target 1 IP Address> <open port>

The token will then be displayed in the terminal.

Task: Scan the TLS configuration on Target 2.

6. How many protocols are enabled?

To scan for SSL/TLS configurations, you can use the sslscan tool. By default, sslscan scans port 443 and returns supported server ciphers, certificate details, and more. You can use sslscan like this:

sslscan <Target 2 IP Address>

The returned output will be verbose, but you can find and count the number of enabled protocols under the SSL/TLS Protocols subheading.

7. Name an enabled protocol.

Using the previous output, name one of the enabled protocols.

8. What exploit are the protocols NOT vulnerable to?

Using the same output, scroll down through the results until you find a subheading that's named after a vulnerability and contains a string similar to:

<Protocol> not vulnerable to <vulnerability name>

The vulnerability has the same name as the subheading.

Task: Identify and extract information from an SMB share on Target 3.

9. What Disk shared directory can you access?

To extract information from an SMB (Server Message Block) share, you can use the smbclient tool. First, list the SMB shares on the target using the -L flag (the list/lookup option):

smbclient -L //<Target 3 IP>

You'll then be prompted for a password, but you can press Enter to skip this. A list of SMB shares will be displayed, three of which are shown as a Disk type, so you know the answer will be one of these. You can now go through the list and try to connect to each share with:

smbclient //<Target 3 IP>/<Sharename>

This time, when you're prompted for a password and press Enter, you might encounter the following message:

NT_STATUS_ACCESS_DENIED

If you attempt to connect to all the shares, you'll find you can connect to one of them without a password. You'll then be greeted with the following prompt, showing a successful connection:

smb: \>

10. What is the token stored in the directory?

Now that you're connected, you can execute commands to interact with the SMB share. If you run ls, you'll find a token.txt file in the current directory. You can then download the file from the share onto your local machine with:

get token.txt

On the Kali desktop, open the Home folder and token.txt will be inside. Open this file to find the token.

11. What is the username stored in the directory?

After you've run ls in the SMB share, you'll find not only token.txt but also a file named creds.txt. Use the same command as before to download the file onto your machine:

get creds.txt

This file will also be downloaded to the Home folder, where you can find a username and password.
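As before, here's a hedged recap of the commands from questions 5–11. The exact port, share name, and prompts will differ in your session, so treat this as a sketch:

```bash
# Q5: widen the port scan, then connect to the newly discovered port
nc -zvn <Target 1 IP Address> 1-9000
nc <Target 1 IP Address> <open port>

# Q6–8: enumerate the TLS configuration on Target 2
sslscan <Target 2 IP Address>

# Q9–11: list the SMB shares, connect to the accessible one, then grab the files
smbclient -L //<Target 3 IP>
smbclient //<Target 3 IP>/<Sharename>
# ...then at the smb: \> prompt:
#   ls
#   get token.txt
#   get creds.txt
```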
Task: Identify open services on Target 3.

Task: Connect to Target 3 with the previously found credentials.

12. What is the token stored in the user's /Documents directory?

For this final task, you first need to scan the target using Nmap. You'll find that if you attempt to scan the target without using the -Pn flag, you'll get a response saying that the host seems down. However, if you run Nmap with -Pn, you'll find some ports are open:

nmap -Pn <Target 3 IP Address>

However, the ports returned from this command don't offer a way to connect to the target. You'll also need to scan the 6000 most popular ports:

nmap -Pn --top-ports 6000 <Target 3 IP Address>

These results will show two additional open ports related to the Web Services Management (WSMan) protocol, which is used to communicate with remote machines and execute commands. Windows Remote Management (WinRM) is Microsoft's implementation of WSMan.

Knowing this, you can now use Metasploit to interact with the target. In your terminal, run:

msfconsole

Once loaded, you can use an auxiliary module that connects to a system with WinRM enabled and executes a command. You'll need to set the following options, using the credentials you found in the creds.txt file:

set username <username>
set password <password>
set rhosts <Target 3 IP Address>

Next, you need to set the cmd option with the command you want to run. If you use the ls command, you'll be able to find out what files are in the directory you connect to:

set cmd ls

With all the options set, you can now run the module:

run

The results of the executed command will be printed on the screen and also saved to a directory, but both show the existence of a token.txt file in the current directory. You can now set the cmd option to type token.txt in Metasploit:

set cmd type token.txt

Once set, use the run command to send the updated command:

run

The contents of token.txt will then be displayed on the screen and output to a file.

Tools

For this challenge, you'll use a range of tools, including:

Nslookup
Nmap
Netcat
Sslscan
Smbclient
Metasploit

Tips

You can use different tools and parameters within those tools to scan for and find information, so don't be afraid to try out a few different things! If you want to learn more about some of the tools within this lab, take a look at the following collections:

Reconnaissance
Nmap
Infrastructure Hacking
Introduction to Metasploit
Post Exploitation with Metasploit

Conclusion

The steps I've laid out here aren't the only way to find the answers to the questions – as long as you found the answer, you did it. Well done! If you found another way to get some of these answers and think there's a better approach, please post it in the comments below! I hope you enjoyed the challenge, and I'll see you for the next one.

Experience-Driven and Intrinsic Learning in Cybersecurity
Experience-driven learning

Experience-driven learning can take many forms, including:

Practical simulations
Role-playing exercises
Individual hands-on learning
Team-based exercising

For example, some employees may be presented with micro exercises that pivot around key risk areas such as device security, data handling, or social engineering. Others may participate in a tabletop exercise that simulates a ransomware attack, allowing them to practice incident response, crisis management, and recovery procedures in a safe and engaging environment. More technical teams can experience a real attack on real infrastructure in a cyber range, working together to identify and understand the attack using defensive and forensic tools.

These types of activities foster intrinsic learning, driven by personal interest and the desire for self-improvement rather than external rewards like grades or promotions. They also engage natural human behaviours related to gamified learning, both individually and as a team.

Intrinsic learning

Intrinsic learning can be particularly valuable in the context of cybersecurity because it allows employees to develop a deeper understanding and appreciation of the subject matter beyond what is required for their job. This approach to learning is not only more engaging and effective but also helps organizations identify areas for improvement and potential vulnerabilities.

Intrinsic learning can also help foster a culture of continuous learning within the workforce. By encouraging employees to pursue their interests and explore new areas of cybersecurity, organizations can create an environment where individuals feel empowered to take ownership of their learning and seek out new opportunities for growth and development.

To make your cybersecurity training more experiential and foster intrinsic motivation for learning, consider the following steps:

Align with personal goals: Empower team members to align upskilling pathways with their career aspirations and professional development.
Emphasize real-world relevance: Showcase how the skills learned directly apply to current cybersecurity challenges and job responsibilities.
Provide autonomy: Allow learners to freely explore different topics and skills.
Create a supportive environment: Encourage peer-to-peer learning and mentorship opportunities to build a culture of continuous improvement.
Celebrate progress: Recognize and highlight individual and team achievements to boost confidence and motivation.
Implement adaptive challenges: Gradually increase difficulty levels, ensuring learners are consistently challenged but not overwhelmed – the right level of learning is more important than the quantity.
Encourage reflection: Prompt learners to analyse their performance after each exercise, especially team-based ones, fostering a growth mindset and self-awareness.
Facilitate knowledge sharing: Organize regular debriefing sessions where individuals can discuss their experiences and insights gained from the training.
Connect to organizational impact: Demonstrate how improved cybersecurity skills contribute to the overall success and resilience of the organization.
Provide immediate feedback: Leverage Immersive Labs' real-time feedback mechanisms to help individuals understand their progress and areas for improvement.
By implementing these steps, you can create a more engaging and intrinsically motivating cybersecurity training experience, fostering a culture of continuous learning and skill development within your organization.

Conclusion

Incorporating intrinsic and experience-driven exercises into your cyber resilience strategy can be an effective way of measuring and improving your overall resilience. Today, the need to exercise effectively has become a key feature of many cyber security frameworks and directives, such as ISO 27001, NIS2, and DORA, which require organisations to maintain proof, with policies and procedures underpinned by data and results.

What have you experienced in your own upskilling journeys to get you where you are today? Have you found that some ways work better than others – individual, team, hands-on, theory, classroom? What are your favourite ways to learn and stay motivated in the ever-changing cyber landscape right now?

Share your stories and insights in the comments below!

Making the Most of the Custom Lab Builder: Tone of Voice
Now that you can build your own labs in the Custom Lab Builder, we thought we'd provide some guidance on writing with a strong tone of voice to ensure your labs are as engaging as possible.

This blog is the third in a series on making the most of the Lab Builder, looking at what we call the Four Cs – ensuring your writing is:

Conversational
Concise
Conscious
Consistent

The previous two posts looked at accessibility and inclusivity. This post focuses on tone of voice and how to write authentically, so your audience engages with the lab and remembers the message you're trying to teach them.

Writing well

For most of your life, you've probably been told to write properly. Avoid contractions at all costs. Use complex sentences with plenty of fancy connecting words like "furthermore" and "moreover". And never start a sentence with "and".

This formal style works really well for some industries. Academia is traditionally an incredibly formal area when it comes to the written word, as is the broadsheet newspaper realm. This is often to reflect the work's sincerity, to avoid weakening a writer's reputation, and to present ideas consistently and objectively. But Immersive Labs believes writing can be sincere and objective without being so... dull!

Be conversational

Copywriting is increasingly conversational, appearing everywhere from LinkedIn posts to the back of your milk carton. This style engages readers by feeling personal and authentic, aligning with Richard Mayer's Personalization Principle: people learn more deeply when words are conversational rather than formal.

A human-to-human copywriting style makes sense for Immersive Labs, as we're all about focusing on the humans behind the screens. When using the Lab Builder, we recommend writing your labs in an engaging, approachable style to create a modern, user-friendly learning environment. But conversational doesn't mean sloppy. It's about presenting ideas clearly and confidently, helping users feel at ease while they learn.

Use everyday, concrete language

Using fancy, complex words doesn't make content better – it can actually distract readers and undermine clarity. Instead, prioritize clear, straightforward language to ensure your message is easy to understand, especially by users with cognitive disabilities.

Avoid overly poetic phrases, figures of speech, idioms, or ambiguous language, which can confuse or overwhelm readers, including those with autism spectrum conditions. Strive for clarity to help users grasp your message the first time, keeping their needs front and centre.

Address the reader

Authenticity is all about gaining your reader's trust. We recommend speaking directly to them in your custom labs by using "you" throughout your copy. This handy trick also avoids any ambiguity when it comes to practical tasks. Take the following example:

"In this lab, the machine must be analyzed and IoCs must be extracted."

Instead of being vague and passive, we recommend talking directly to the reader and telling them exactly what they need to do:

"In this lab, you need to analyze the machine and extract IoCs."

Or better yet, you can be even more direct by cutting that down even further:

"In this lab, analyze the machine and extract IoCs."

Our labs and scenarios frequently talk directly to the reader. Users are more likely to stay engaged when they're spoken to, not at.

Use contractions

Contractions instantly make your writing more conversational by mimicking natural speech.
Combining words like "it is" into "it's" or "you are" into "you're" adds a touch of informality that feels approachable and inclusive. While once discouraged in formal writing, contractions are ideal for a modern learning environment, making text easier to read, understand, and remember.

Be concise

Writing in plain language is good for all users, but it can make a massive difference for neurodivergent users, those who struggle to focus, those who hyperfocus, or those who find reading difficult. We follow recommendations from the Advonet Group, the British Dyslexia Association, and Clark and Mayer's Coherence Principle to ensure accessibility for a diverse audience – and you should too!

Writing simply and clearly doesn't mean trivializing content or sacrificing accuracy; it just makes your message easier to understand. After all, no one's ever complained that something's too easy to read! The difficulty comes when balancing this with technical content. How can you make advanced, complex cybersecurity topics clear and concise?

Keep it short and sweet

Sentences longer than 20 words become difficult to understand and can detract from the point being made. It's easy for people's minds to wander, so get to your point in as few words as possible.

The same goes for paragraphs. Try to avoid long, dense walls of text. Nobody wants to read that, and it's no good when thinking about accessibility. Keep your paragraphs to four or five lines, maximum.

Get to the point

Avoid adding unnecessary side notes to your labs, as they can distract from the main message and make learning harder. Unnecessary content diverts the learner's attention from the core topic, disrupts the connections between key messages, and makes it harder to piece together the bigger picture – so the learner is less likely to remember what matters.

This is all down to cognitive load theory, which says that, in general, humans can handle around four pieces of new information at any one time. To help users focus, stick to the lab's core topic and avoid overloading them with unrelated details.

TL;DR

When writing your labs with the Custom Lab Builder, keep your text conversational to engage your users with the topic, and make all your copy as concise as possible. Getting your message across in as few words as possible will reduce cognitive overload, boredom, and frustration.

By focusing on being conversational, as well as being consistent and conscious (as covered in the previous blog posts in this series), your readers will engage with your content better, remember the topic, and be able to put it into practice more easily – improving their cybersecurity knowledge and driving their cyber resilience.

Share your thoughts!

What do you think about these tone of voice tips for writing your custom labs? Have you tried to write your labs in a conversational yet concise way, and how did this go down with your users? Do you have any other suggestions for the community on how to write conversationally? We'd love to hear from you!

Making the Most of Custom Lab Builder: A Guide to Writing Inclusively for All
Language shapes how people perceive and engage with content, so it's crucial to consider the kind of words you use. Using outdated terminology can offend and disengage learners, as well as hurt a company's reputation.

This blog is the second in a series on making the most of the Lab Builder, looking at what we call the Four Cs – ensuring your writing is:

Conscious
Consistent
Conversational
Concise

The previous post in this series looked at accessibility. In this post, we'll explore what it means to write consciously and inclusively, share practical tips, and show how our platform supports this critical effort.

Why is inclusive language important?

Inclusive language avoids bias, respects diversity, and ensures accessibility for all. In cybersecurity, it means using terms that foster collaboration and trust, avoiding outdated or harmful phrases, and creating content that is welcoming and empowering.

The Quality Team at Immersive Labs is committed to staying up to date with how language changes in the cyber industry. We regularly undertake research and speak to other industry professionals to ensure that our language is appropriate.

Words to avoid

We recommend avoiding specific terms that some people may find offensive, and some socially charged language that may have negative connotations.

Non-inclusive language to avoid → Preferred inclusive versions
Whitelist/Blacklist → Allowlist/Denylist
White hat/Black hat hackers → Ethical/Unethical hackers
Master/Slave → Leader/Follower, Primary/Replica, Primary/Standby
Grandfathered → Legacy status
Gendered pronouns (e.g. assuming "he/him/his") → They, them, their
Gendered pronouns (e.g. "guys") → Folks, people, you all, y'all
Man hours, man power → Hours, engineer hours, workforce, staffing
Man-in-the-middle attack → Machine-in-the-middle attack
Sanity check → Quick check, confidence check, coherence check
Dummy value → Placeholder value, sample value
Crazy, insane → Amazing, incredible, or any other appropriate adjective

Socially charged words → Preferred inclusive versions
Native → Built-in, default, pre-installed, integrated, core
Abort → Stop, cancel, end, force quit
Cripple → Disable, impair, damage, destroy, ruin
Kill → Stop, force quit, close, shut down
Trigger → Activate, initiate, cause, launch

Unsure if a phrase you've used could be seen as offensive? Ask yourself: is this the most accurate and appropriate choice? Often, you can find a more descriptive word and avoid using these examples.

Top tips for inclusive language

Use writing tools

Tools like Grammarly can help identify problematic words or phrases. You can create customized lists in Grammarly, which will then flag when a word has been used in your writing. Additionally, there are many inclusive language guides available online.

Keep it short and sweet

Use short sentences and paragraphs. Shorter sentences are easier to read, scan, and understand – especially for those with cognitive disabilities. Aim for sentences around 10–15 words, with variation for a natural flow. Avoid sentences longer than 20 words, as they can be harder to follow.

Read aloud

Proofread your work aloud to catch awkward phrasing, overly complex sentences, or insensitive terms. Hearing the words can help identify spots where clarity or tone might need improvement.

Get a second opinion

Ask a colleague to review your final version. A fresh set of eyes can spot language that might be unclear, inappropriate, or overly complicated.
Share your thoughts

Now that Lab Builder is here and you've had a chance to create your own content, how have you made your content more inclusive? We're always looking to stay up to date, so if you have any further suggestions to add to our list of words to avoid, or any other tips, let us know! We'd love to learn from you and grow the collective community knowledge.

Making the Most of the Custom Lab Builder: Writing With Accessibility in Mind
What if someone who was visually impaired tried to access your content? Or someone with cognitive difficulties? Or someone hard of hearing? Would they be able to understand the information you've provided and improve their cyber resilience?

Our in-house copyediting team has created a series of articles to help you craft high-quality labs, aligned to the rigorous processes we follow. We embrace what we call the Four Cs to ensure all labs are:

Consistent
Conscious
Conversational
Concise

These articles delve into each of these principles, showing how to implement them in your labs to create content that resonates with readers, enhances learning, and boosts cyber resilience. This post highlights how being conscious of your formatting can enhance accessibility for assistive technology users, and how consistent formatting improves navigation for everyone.

Rich text formatting

Rich text formatting tools like subheadings, bullet points, lists, and tables in the Custom Lab Builder help organise information for easier scanning, better retention, and improved comprehension. Using these will ensure your content is consistent, accessible, and reader-friendly for everyone!

Rich text formatting elements carry specific meaning, which assistive technologies rely on to convey information to specific users.

Headings

Visually, headings represent hierarchy through different font styling and allow users to quickly scan content. Programmatically, they give users who can't see or perceive the visual styling the same structural ability to scan.

Heading elements should reflect the structure of the content. So your title should go in 'Heading 1' formatting, your next subheading in 'Heading 2' formatting, and so on. To ensure your content reads correctly to screen reader users, don't use HTML heading styling to represent emphasis, and don't use bold to make text appear like a heading.

Lists (bullets/numbering)

Always use the provided bullet or numbered list formatting to convey a list. A screen reader will announce that the following information is a list.

Links

How a link is formed significantly impacts usability. Consider the following sentence:

"To find out more about this topic, complete our Intro to Code Injection lab here."

Links are interactive elements, which means you can navigate to them using the tab key. A user who relies on screen magnification to consume content may choose to tab through content to see what's available. The example above would be communicated as just "here", which provides no context. They'd need to manually scroll back to understand the link's purpose.

Always use descriptive link text that clearly indicates its destination. Avoid ambiguous phrases like "here". If that's not possible, ensure the surrounding text provides clear context.

"To find out more about this topic, complete our Intro to Code Injection lab."

Bold

Only use bold for emphasis! Avoid italics, capital letters, or underlining (reserved for hyperlinks) to prevent confusion. Consistency in formatting reduces cognitive load, making your text more accessible. Bold stands out, provides better contrast, and helps readers quickly identify key information.

Avoid italics

With 15–20% of the population having dyslexia, italics are worth avoiding – research shows italic text is harder for this user group to read. Italics can sometimes bunch up into the next non-italic word, which can be difficult to comprehend or distracting to read.
Media

If you're adding media to your labs, such as videos and images, it's especially important to consider those who use assistive technologies. These users need to have the same chance of understanding the content as everyone else. They shouldn't miss out on crucial learning.

What is alternative text?

Alt text describes the appearance and function of an image. It's the written copy that appears if the image fails to load, but it also helps screen reading tools describe images to visually impaired people.

Imagine you're reading aloud over the phone to someone who needs to understand the content. Think about the purpose of the image. Does it inform users about something specific, or is it just decoration? This should help you decide what (if any) information or function the image has, and what to write as your alternative text.

Videos

Any videos you add to your lab should have a transcript or subtitles for those who can't hear the audio.

Being consistent

Consistency is a major thinking point for accessibility. We recommend adhering to a style guide so all of your labs look and feel consistent.

We also recommend thinking about the structure of your labs and keeping it consistent for easy navigation. In our labs, users expect an introduction, main content, and a concluding "In This Lab" section outlining the task. This helps users recognize certain elements of the product. It reduces distraction and allows easier navigation on the page. For example, some users prefer diving into practical tasks and referring back to the content if they need it. By using the same structure across your lab collections, your users will know exactly where to find the instructions as soon as they start.

TL;DR

It's crucial to focus on accessibility when writing your custom labs. Utilise the built-in rich text formatting options in the Custom Lab Builder (and stay consistent with how you use them!) to ensure your labs are easy to navigate for every single user.

By being conscious and consistent with your formatting, every user will engage with your content better, remember the topic, and be able to put it into practice more easily – improving their cybersecurity knowledge and driving their cyber resilience, no matter how they consume content.

Keep your eyes peeled for the next blog post in this series, which will look at inclusive language.

Share your thoughts!

There's so much information out there on creating accessible content. This blog post focused on the language, structure, and current formatting options available in the Custom Lab Builder. Have you tried to make your labs or upskilling more accessible, and how did this go down with your users? Do you have any other suggestions for the community on how to write content with accessibility in mind? Share them in the comments below!

From Feng Shui to Surveys: How User Feedback Shapes Immersive Labs
We've all been asked to give product feedback in one way or another – a pop-up message after completing a purchase, an email asking how your visit went, or a poll appearing on your social media feed. They all have one thing in common: a real person behind them, looking for valuable insights.

I'm one of those people! My role as Senior UX Researcher involves speaking to Immersive users and gathering their feedback to help the company make tangible improvements. UX, or user experience, is at the heart of what I do. And it's been around for longer than you might think.

What is UX?

It's believed that the origins of UX began in 4000 BC with the ancient Chinese philosophy of Feng Shui, the spatial arrangement of objects in relation to the flow of energy. In essence, designing the most user-friendly spaces possible.

A short skip to 500 BC, and you can see UX at play with the Ancient Greeks' use of ergonomic principles (also known as human factors), defined as "an applied science concerned with designing and arranging things people use so that the people and things interact most efficiently and safely."

In short, people have been concerned about creating great user experiences for thousands of years.

How does Immersive get feedback?

Bringing you back to the present day, let me walk you through a recent research study undertaken with Immersive Labs users and what their experiences and feedback led to.

In May this year, we sent out a survey to our users asking them about their needs for customised content. The feedback was given directly to the team working on the feature, helping to inform their design choices and confirm or question any assumptions they had about user needs.

In July, we invited users, including Training Manager and community member mworkman, to take part in a pilot study for the Custom Lab Builder, giving them exclusive access to the first iteration of the feature. They could use the builder in their own time, creating real examples of custom labs using their own content and resources. This gave them a realistic experience and highlighted issues along the way.

What does Immersive do with that feedback?

In August, those users joined a call with us to provide their feedback and suggestions. From these calls, we gained insights and statistics that were presented to the entire Product Team, voicing our customers' needs. We then used this to shape the direction of the lab builder feature before its release.

Customers told us that they wanted to create labs based on their own internal policies and procedures, which would require more flexible question-and-answer formats for tasks. They also wanted more formatting options and the ability to add media to labs.

In response to this feedback, we increased the number of task format types from three to five, and we'll continue to add to this. We also added the ability to include multiple task formats in the same lab. Users also now have the option to upload images and include rich text within their custom labs, enhancing the layout and customisation experience.

The Custom Lab Builder was released in October 2024, with an update pushed in December, and we're still working on improving it! Throughout this first quarter of 2025, we've released more new features, including drag and drop, free text questions, and instructional tasks in the Lab Builder.

How can you get involved?
Once again, we'll be calling on our users to give feedback on their experiences with these features, continuing to involve you in our design process to ensure that our products and experiences reflect what users are looking for.

Throughout 2025, Immersive Labs will be providing opportunities for our users to come along to feedback sessions, have their opinions heard through surveys, and take advantage of many more exciting chances to talk to the people behind the product. Follow our Community Forum for hot-off-the-press opportunities!

For more guidance on Lab Builder, visit our Help Center.

From Concept to Content: A Deep Dive into Building and Critically Analyzing Labs
Putting it all together

The main bulk of the development work is building the labs. This usually comprises two parts that require different skill sets: one is putting together the written portion of the lab (such as the briefing, tasks, and outcomes), and the other is implementing any technical needs for the practical side of the lab. While some labs may focus more on one component than the other, this general overview of lab development will demonstrate each step of the process.

Developing written content

Regardless of the lab, the written content forms the backbone of the educational material. Even with prior knowledge and planning, additional research is essential to ensure clear explanations. Once research is complete, an outline is drafted to focus on the flow, ensuring the information is presented logically and coherently. This step helps enhance the final product.

The final step is turning the outline into the final written content. Everyone approaches this differently, but personally, I like to note all the points I want to cover in a bullet list before expanding on each one. This method ensures all necessary information is covered, remains concise and clear, and aligns with the learning outcomes and objectives.

Technical implementation

For practical labs, technical setup is key. Practical tasks should reinforce the theoretical concepts covered in the written portion, helping users understand the practical application of what they've learned.

Before implementing anything, the author decides what to include in the practical section. For a CTI lab on a vulnerability, the vulnerable software must be included, which involves finding and configuring it. For general topics, a custom script or program may be needed, especially for niche subjects. The key is ensuring the technical exercise is highly relevant to the subject matter.

Balancing the difficulty of practical exercises is crucial. Too easy, and users won't engage. Too hard, and they'll get frustrated. Tasks should challenge users to think critically and apply their knowledge without discouraging them. This requires iterative testing and feedback to fine-tune the complexity. The goal is to bridge the gap between theoretical knowledge and real-world application, making learning effective and enjoyable.

Quality assurance and finishing touches

The development process is complete, but there's still work to do before releasing the content. We take pride in polishing our content, so the final steps are crucial.

Checking against expectations

Before the official QA process, we review the original plan to spot any discrepancies, such as unmet learning objectives or missing topics. While deviations don't always require changes, they must be justified.

Assuring quality

A thorough QA process is vital for catching grammatical errors, technical bugs, and general improvements before release. Each lab undergoes three rounds of QA, each performed by a different person – two rounds of technical QA, and one for presentation.

Some of the steps taken during technical QA include:

Verifying written content accuracy, flow, and completeness.
Ensuring all learning objectives are covered.
Identifying any critical bugs or vulnerabilities that would allow users to bypass the intended solution.
Providing small tweaks or changes to tasks for clarity.
Assigning relevant mappings (NICE K-numbers, MITRE tags, CWEs).

After technical QA, the lab is reviewed by our quality team to ensure it meets our presentation standards.
Once all labs in a collection pass rigorous QA, they're released to users. The final step occurs post-release on the platform.

Gathering and implementing user feedback

Users are at the heart of everything we do, and we strive to ensure our content provides real value. While our cyber experts share valuable knowledge, user feedback prevents echo chambers and highlights areas for improvement. After new releases, we conduct an evaluation stage to analyze what went well and where we can improve.

User feedback

We gather quantitative and qualitative feedback to help us identify root issues and solutions.

Quantitative feedback involves analyzing metrics like completion rates and time taken. We also examine specific patterns, such as frequently missed questions or labs where users drop out. These are important things to note, but we avoid drawing conclusions solely from this data. This is where qualitative feedback comes in.

Qualitative feedback includes user opinions and experiences gathered from feedback text boxes, customer support queries, and direct conversations. These responses are stored and read by the team, and they provide context beyond raw numbers. Channels such as customer support queries and follow-ups with customers also help us improve our content.

Post-release reviews

We conduct post-release reviews at set intervals after content release to analyze quantitative and qualitative data. This review helps us assess the entire process and identify areas for improvement. These reviews also allow us to update content with new features, like adding auto-completable tasks for CyberPro, ensuring our content remains current and enhances the user experience.

Wrapping up

Hopefully, this blog post has provided insight into all the care we put into building and tailoring our content for users. This process has come a long way since we started making labs in 2017!

Don't forget – with our new Lab Builder feature, you can now have a go at creating your own custom labs. If there's a topic that interests you and you want to share that knowledge with your team, making your own lab is a great way to do it!

If there's any part of the process you'd like to know more about, ask in the comments. Are there any collections that made you think, "Wow, I wonder how this was made"? Let us know!

New CTI Labs: Zero-day Behaviour: PDF Samples & UAC-0063 Intrusion: SIEM Analysis
The report released by the NCSC's CTO covered a number of important cyber security developments from the past week. We have created two labs on what we thought were the most interesting parts of the report, aligning with what the NCSC is seeing out in the wild.

Zero-day Behaviour: PDF Samples

PDFs are used by everyone, and a researcher has found that you can embed commands in them that communicate out to attacker-controlled servers – depending on which PDF reader a company uses, this can exfiltrate NTLM data to aid in further attacks. PDFs can be used for initial access in an attack, for example by sending a malicious one via email. We have therefore created a lab for defensive teams to analyze what these PDFs look like under the hood and how to identify this newly found behavior.

UAC-0063 Intrusion: SIEM Analysis

The threat group UAC-0063 has been observed sending malicious documents around the world, targeting Asia and Eastern Europe in their latest operation. Their aim is cyber espionage: gathering information about governments, NGOs, defense, and academia. Using their malware, dubbed HATVIBE, they have been seen weaponizing legitimate diplomatic documents with malicious code embedded inside them. The lab provides an analysis of the attack chain, where our customers will understand what happens when one of the malicious documents is clicked and what detections can be put in place to catch the attack.

Why should our customers care?

These two labs are based on information the NCSC believes the industry needs to know. Understanding the updated attack techniques of threat groups, and new ways to execute commands from PDFs, is incredibly important because social engineering is still one of the most common methods of initial access. Our customers will be able to analyze both of these threats to develop detections early, or to gain familiarity with how the threats work.

Who is it for?

Incident responders
SOC analysts
Malware reverse engineers
CTI analysts
Threat hunters

Here is the link to the PDF lab: https://immersivelabs.online/labs/zero-day-behaviour-pdf-samples

Here is the link to the UAC-0063 lab: https://immersivelabs.online/labs/uac-0063-siem-analysis

From Concept to Content: Laying the Foundations of a Lab Collection
Technical planning

At this stage, we address niche technical details not covered in the initial planning but crucial for polished content. Below is an example of the question-and-answer process used for the "Web Scraping with Python" collection.

Should the practical sections of the content be created using Docker, for optimal speed and modularity, or does the subject matter require the use of a full EC2 instance?

As there are no unusual requirements for the technical portion of the collection (such as needing kernel-level access, network modifications, or third-party software that doesn't run in containers), the labs can run on Docker. This benefits not only the overall user experience, but also allows for image inheritance during development, which will be demonstrated a bit later on.

Are there any tools, custom scripts, or system modifications that should be present across the whole piece of content?

The collection is based around writing Python scripts, so ensuring that Python is installed on the containers, along with any required web scraping libraries, is a must. In addition, some considerations for user experience can be made, such as installing an IDE like Visual Studio Code on the containers.

How can task verification be implemented to make sure it's both robust and non-intrusive?

In the case of this collection, implementing auto-completable tasks may be difficult due to the variety of ways users can create solutions, as well as the lack of obvious traces left by web scraping. Instead, it may be more appropriate to insert task solutions into a mock website that needs to be scraped, which the user can retrieve by completing the task and providing the solution in an answer box.

Understanding the technical requirements for a piece of content helps to bridge the gap between planning and development, making it a crucial step. With some of the key questions answered, it's time to move on to implementation.

Creating base images

It's finally time to put fingertips to keyboards and start programming! The first part of implementation creates what all labs in a collection will be built on – a base image. This is a skeleton that provides all the necessary tools and configuration needed for the whole collection, using a concept called image inheritance.

If you're new to containerization software like Docker, don't worry – image inheritance is straightforward. Docker containers use images as blueprints to create consistent mini-computers (containers). This is useful for labs because it allows you to quickly create a pre-configured container without the overhead of setting up a virtual machine, saving time and system resources.

This is where image inheritance comes in. Docker images can inherit traits from parent images, similar to how you inherit eye color from your parents. Instead of one central image for all purposes, you create a parent image with shared requirements and then customize descendant images for specific needs.

Let's use the "Python for web scraping" collection as an example again. Think about what kind of things would need to be present in each lab:

An installation of Python so the user can run scripts.
A code editor to write the scripts in.
A mock website for the user to test their scripts on.

The first two of these requirements are essentially the same in every lab; there's no real need to change the installation of Python or the code editor, and in fact, it would be better to have them all be identical, which would result in a smoother user experience.
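To make the inheritance idea concrete, here's a minimal, hypothetical sketch of what a shared parent image and an inheriting lab image might look like. None of this is taken from the real collection – the image tag, package choices, and paths are assumptions purely for illustration:

```dockerfile
# base.Dockerfile – hypothetical shared parent image for the collection
FROM python:3.12-slim

# Shared requirement 1: Python comes from the parent image;
# add the web scraping libraries every lab will use
RUN pip install --no-cache-dir requests beautifulsoup4

# Shared requirement 2: a code editor for writing scripts
# (the real collection might install an IDE like VS Code; a simple editor stands in here)
RUN apt-get update && apt-get install -y --no-install-recommends nano \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /home/user
```

A descendant image for an individual lab could then be as small as this, assuming the parent was built with something like docker build -t webscraping-base -f base.Dockerfile .:

```dockerfile
# lab1.Dockerfile – hypothetical descendant image for a single lab
# Inherits Python, the libraries, and the editor from the parent image
FROM webscraping-base:latest

# Only the lab-specific content changes between descendant images
COPY website/ /var/www/mock-site/
```

Each lab then only maintains its own website files, while Python and the tooling stay identical everywhere.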
The third, however, does need to change – the specific task requirements are going to be different from lab to lab, and the website files will need to change to accommodate this. Taking the requirements into account, an inheritance structure like this can be used:

Base image – Python installation and code editor present
Lab 1 – Custom website files
Lab 2 – Custom website files
Lab 3 – Custom website files
…

Structuring images this way saves time, disk space, and development work by reusing shared configurations.

Next time…

In part three of this mini-series, you'll learn about the final stages of content development: creating labs, quality assurance, and release. To be notified when part three is released, follow The Human Connection Blog using the bell icon. Meanwhile, feel free to ask questions about the content creation process or specific collections in the replies. Have you used the Lab Builder feature to make any custom labs yet?

From Concept to Content: A Deep Dive into Theorizing and Planning a Lab Collection
The decision process

When creating new content, the first step is deciding what to commit to. We consider:

User demand: Are users frequently requesting a specific topic?
Evolving landscapes: Are there new technologies or industry trends we should cover?
Internal analysis: Do our cyber experts have unique insights not found elsewhere?
Overarching goals: Is the content part of a larger initiative like AI security?
Regulations and standards: Can we teach important regulations or standards?
Cyber competency frameworks: Are we missing content from frameworks like NICE or MITRE?

After considering these points, we prioritize one idea for creation and refinement. Lower-priority ideas are added to a backlog for future use.

Feasibility and outcomes

Having a concrete idea is just the beginning. Over the years, we've learned that understanding the desired outcomes is crucial in planning.

Our core mission is education. We ensure that each lab provides a valuable learning experience by setting clear learning objectives and outcomes. We ask ourselves, "What should users learn from this content?" This ranges from specific outcomes, like "A user should be able to identify an SQL injection vulnerability", to broader skills, like "A user should be able to critically analyze a full web application". Listing these outcomes ensures accountability and fulfillment in the final product.

Setting clear learning objectives involves defining what users will learn and aligning these goals with educational frameworks like Bloom's Taxonomy. This taxonomy categorizes learning into cognitive levels, from basic knowledge and comprehension to advanced analysis and creation. This ensures our content meets users at their level and helps them advance.

Turning big topics into bite-sized chunks

Once a topic is selected, we must figure out how to break down huge subject areas into digestible chunks. This is a fine balance; trying to cram too much information into one lab can be overwhelming, while breaking the subject down too much can make it feel disjointed.

One good approach is to examine the learning objectives and outcomes set out in the first step, map them to specific subtopics, and finally map those to labs or tasks. For example, consider this theoretical set of learning outcomes for a Web scraping with Python lab collection:

A user should understand what web scraping is and when it's useful.
A user should be able to make web requests using Python.
A user should be able to parse HTML using Python.
A user should understand what headless browsers are and when to use them.
A user should be able to use a headless browser to parse dynamic content on a webpage.

These outcomes can be mapped into two categories: theory outcomes ("A user should understand") and practical outcomes ("A user should be able to"). Understanding the difference between the two is useful, as a few things can be derived from it – for example, whether to teach a concept in a theory lab (heavy on theoretical knowledge without a practical task) or a practical lab (teaching a concept and exercising it in a practical environment).

Using this, the outline for a lab collection can start to take shape, as seen in the outline below.

Learning outcome: A user should understand what web scraping is and when it is useful.
Knowledge type: Theory
Suggested lab title: Web scraping with Python – Introduction
Suggested lab content: A theory lab showing the basics of web scraping, how it works, and when it is useful.
Learning outcome: A user should be able to make web requests using Python.
Knowledge type: Practical
Suggested lab title: Web scraping with Python – Making web requests
Suggested lab content: A practical lab where the user will write a Python script that makes a web request using the "requests" library.

Learning outcome: A user should be able to parse HTML using Python.
Knowledge type: Practical
Suggested lab title: Web scraping with Python – Parsing HTML
Suggested lab content: A practical lab where the user will write a Python script that parses HTML using the "beautifulsoup" library.

Learning outcome: A user should understand what headless browsers are and when they should be used.
Knowledge type: Theory
Suggested lab title: Web scraping with Python – Understanding headless browsers
Suggested lab content: A theory lab explaining why dynamic content can't be scraped using the previous methods, and how headless browsers can solve the issue.

Learning outcome: A user should be able to use a headless browser to parse dynamic content on a webpage.
Knowledge type: Practical
Suggested lab title: Web scraping with Python – Using headless browsers
Suggested lab content: A practical lab where the user will write a Python script that scrapes dynamic content from a website using the "puppeteer" library.

Learning outcome: All
Knowledge type: Demonstrate
Suggested lab title: Web scraping with Python – Demonstrate your skills
Suggested lab content: A demonstrate lab where the user will complete a challenge that requires knowledge from the rest of the collection.

Each learning objective is assigned to a lab to ensure thorough and user-friendly coverage. Often, multiple objectives are combined into one lab based on subtopic similarity and the total number of labs in a collection. The above example illustrates the process, but extensive fine-tuning and discussion are needed before finalizing content for development.

Next time…

In part two of this mini-series, you'll read about the next stage of the content development process, which involves laying the technical foundations for a lab collection.

Don't miss the series…

You can opt to receive an alert when part two of this series is released by "following" activity in The Human Connection Blog using the bell at the top of this page. In the meantime, feel free to drop any questions about the content creation process in the replies. Are there any parts of the planning process you want to know more about?