immersive labs application security
24 TopicsHuman Connection Challenge: Season 1 – Scanning Walkthrough Guide (Official Version)
Time’s Up! Congratulations to everyone who completed Lab 2: Scanning from the Human Connection Challenge: Season 1. In this walkthrough, I'll share some strategies for efficiently completing the lab, based on my perspective as the author. Remember, there are often multiple ways to approach a challenge, so if you used a different method and succeeded, that's perfectly fine! The goal is to learn, and I hope these notes help clarify any steps and reinforce key concepts for the next challenge. This challenge has now ended, but the lab remains available for practice. While prizes are no longer up for grabs, you can still complete the lab and use this walkthrough guide for support if needed. I’ve also used placeholders in some of the commands that would give away an answer directly, so if you see anything enclosed in angle brackets, such as <name server>, please make sure you replace it with the actual value, such as nameserver. With all that considered, let's get started. Overview Task: Identify the name server records of tinytown.bitnet. 1. What is the IP of the first name server for tinytown.bitnet? You’ll first need to open a Terminal on the Kali desktop. Next, you’ll need to query the DNS Server IP (found in the Machines panel) about the tinytown.bitnet domain using the nslookup (Name Server Lookup) tool. You’re specifically looking for NS (Name Server) records, so you can use the -type=ns parameter with nslookup to specify this: nslookup -type=ns tinytown.bitnet [DNS Server IP] The output of this command will return two name servers for the domain labelled with 1 and 2. Your next step is to identify what IP address is associated with the first name server (1). To do this, you can use nslookup along with the name server, domain, and DNS Server IP: nslookup <name server>1.tinytown.bitnet [DNS Server IP] This command will then return an IP address for the name server. 2. What is the IP of the second name server for tinytown.bitnet? As you’ve already identified both name servers, you’ll just need to run the previous command, except with the second (2) name server: nslookup <name server>2.tinytown.bitnet [DNS Server IP] You’ll then find the IP address associated with it. Task: Identify port service information for Target 1. 3. What service version is running on port 53? A network scanning tool like Nmap can help you identify the service version running on a specific port. To do this with Nmap, you can use the -sV option for service detection: nmap -sV [Target 1 IP Address] The output will show what service version is running on port 53. 4. What is the full service banner of port 22? There are a couple of ways to find the full service banner of port 22 – such as with Nmap or Netcat. If you’re using Nmap, you can modify the previous command to include the “banner” script along with the port number: nmap -sV -script=banner [Target 1 IP Address] -p22 The command line will then display the service banner from port 22. You can alternatively use netcat to manually connect to the SSH server. When a client connects, Netcat may present a banner that contains version information. To use Netcat, you’ll need the nc command along with the Target 1 IP address and specify you want to connect to port 22: nc [Target 1 IP Address] 22 When you run this command, the banner appears before the terminal hangs. Task: Identify a token on one of the ports. 5. What is the token? With the previous Nmap command, you initially found that three ports were open on Target 1. However, you’ll need to do a more thorough network scan to find another open port, one not initially found with the previous scans. To do this, you can expand your port scan to cover a much wider range by using Netcat to scan for open ports from 1 through 9000: nc -zvn <Target 1 IP Address> 1-9000 Here, -z will scan for listening services but won’t send any data, -v is verbose mode, which provides more detailed information, and -n tells Netcat not to resolve hostnames via DNS. This command will reveal a fourth open port. Now, you can use Netcat to connect to this port: nc <Target 1 IP Address> <open port> The token will then be displayed in the terminal. Task: Scan the TLS configuration on Target 2. 6. How many protocols are enabled? To scan for SSL/TLS configurations, you can use the sslscan tool. By default, sslscan scans port 443 and will return supported server ciphers, certificate details, and more. You can use sslscan like this: sslscan <Target 2 IP Address> The returned output will be verbose, but you can find and count the number of enabled protocols under the SSL/TLS Protocols subheading. 7. Name an enabled protocol. Using the previous output, name one of the enabled protocols. 8. What exploit are the protocols NOT vulnerable to? Using the same output, scroll down through the results until you find a subheading that’s named after a vulnerability and contains a similar string to: <Protocol> not vulnerable to <vulnerability name> The vulnerability has the same name as the subheading. Task: Identify and extract information from an SMB share on Target 3. 9. What Disk shared directory can you access? To extract information from an SMB (Server Message Block) share, you can use the smbclient tool. First, you’ll need to list the SMB shares on the target using the -L flag (the list/lookup option) with: smbclient -L //<Target 3 IP> You’ll then be prompted for a password, but you can press Enter to skip this. A list of SMB shares will then be displayed, three of which are shown to be a Disk type, so you know the answer will be one of these. You can now begin to go through the list and try to connect to the shares with: smbclient //<Target 3 IP>/<Sharename> However, this time when you’re prompted for a password and you press Enter, you might encounter a message when you try and connect to a share: NT_STATUS_ACCESS_DENIED If you attempt to connect to all shares, you’ll find you can connect to one share without a password. You’ll then be greeted with the following prompt to show the successful connection: smb: \> 10. What is the token stored in the directory? Now that you’re connected, you can execute commands to interact with the SMB share. If you run ls, you’ll find a token.txt file in the current directory. You can then download the file from the share onto your local machine with: get token.txt On the Kali desktop, open the Home folder and the token.txt will be inside. Open this file and find the token. 11. What is the username stored in the directory? After you’ve run ls in the SMB share, you’ll find not only token.txt, but also a file named creds.txt. Use the same command as you just did previously to download the file onto your machine: get creds.txt This file will also be downloaded to the Home folder, where you can find a username and password. Task: Identify open services on Target 3. Task: Connect to Target 3 with the previously found credentials. 12. What is the token stored in the user's /Documents directory? For this final task, you first need to scan the target using Nmap. You’ll find that if you attempt to scan the target without using the -Pn flag, you’ll get a response saying that the host seems down. However, if you run Nmap with -Pn, you’ll find some ports are open: nmap -Pn <Target 3 IP Address> However, the ports returned from this command don’t offer a way to connect to the target. You’ll also need to scan the 6000 most popular ports: nmap -Pn --top-ports 6000 <Target 3 IP Address> These results will now show two additional ports are open regarding the Web Services Management (Wsman) protocol, which is used to communicate with remote machines and execute commands. One of the tools that implement this protocol is Windows Remote Management (WinRM) which is Microsoft’s implementation of Wsman. Knowing this, you can now use Metasploit to interact with the target. In your terminal, run: msfconsole Once loaded, you can use the the following auxiliary module to connect to a system with WinRm enabled and execute a command with: set cmd ls You’ll then need to set the following options, using the credentials you found in the creds.txt file: set username <username> set password <password> set rhosts <Target 3 IP Address> Next, you need to set the cmd option with the command you want to run. If you use the ls command, you’ll be able to find what out files are in the directory you connect to: set cmd ls With all the options set, you can now run the module: run The results of the executed command will be printed on the screen and also saved to a directory, but both show the existence of a token.txt file in the current directory. You can now set the cmd option to type token.txt in Metasploit: set cmd type token.txt Once set, use the run command to send the updated command: run The contents of token.txt will then be displayed on the screen and outputted to a file. Tools For this challenge, you’ll use a range of tools including: Nslookup Nmap Netcat Sslscan Smbclient Metasploit Tips You can use different tools and parameters within those tools to scan for and find information, so don’t be afraid to try out a few different things! If you want to learn more about some of the tools within this lab, take a look at the following collections: Reconnaissance Nmap Infrastructure Hacking Introduction to Metasploit Post Exploitation with Metasploit Conclusion The steps I’ve laid out here aren’t the only way to find the answers to the questions, as long as you find the answer, you did it – well done! If you found another way to find some of these answers and think there’s a better way to do it, please post them in the comments below! I hope you enjoyed the challenge and I’ll see you for the next one.1.8KViews4likes4CommentsAdvanced CTF Challenge: Serial Maze
Need hint on Serial Maze. Have gone through html & javascript, couldn't find the token. Using dirb found one endpoint "http://10.102.17.87/2257", its response "What a pickle... You need the secret to continue." No sure how to proceed form here. Thanks, SabilSolved302Views0likes3CommentsSnort Rules: Ep.9 – Exploit Kits
I am pulling my hair with question number 8 Create a Snort rule to detect the third GET request in the second PCAP file, then submit the token. This one should do it but it is not working. alert tcp any any -> any any (msg:"detect the third GET request"; content:"e31e6edb08bf0ae9fbb32210b24540b6fl"; sid:1000001) I tried so many rules base on the first GET header and still unable to get the token. Any tips?Solved265Views0likes3CommentsIntroduction to Elastic: Ep.9 - ES|QL
I’m stuck on question 18 i need this to complete the lab. The question says ‘Perform a final query using all of the techniques used in the previous questions. What is the average speed per hour for ALL trips that start in the borough of “Brooklyn” and end in the borough of “Manhattan”? Provide your answer to at least three decimal places. any ideas?Solved212Views1like1CommentIt’s Not Magic, It’s Mechanics: Demystifying the OWASP Top 10 for AI
Welcome back to our series, “Behind the Scenes of Immersive One”! The following is a conversation with Sabrina Kayaci, Cybersecurity Engineer for Immersive One, and Rebecca Schimmoeller, Lead Product Marketing Manager. Today, we’re continuing the discussion on our Secure AI capability. “When developers hear ‘AI Security,’ they either start to sweat or eye-roll. It either feels like a black box where the old rules don’t apply, or it feels like inflated marketing hype. The truth is, AI vulnerabilities aren't magic; they are mostly just new manifestations of the classic flaws we’ve been fighting for decades. Once you map the new threats to the old patterns, the mystique fades. You realize it’s not magic to fear or hype to ignore—it’s just an engineering problem to solve.” Rebecca: Awesome frame, Sabrina. No matter where you sit on the spectrum—whether you’re anxious about the risks or skeptical of the buzz—AI security doesn't mean starting from zero. Developers should already have the muscle memory for this. Sabrina: Exactly. We aren't asking them to learn a new language; we're asking them to apply their existing fluency to a new dialect. That’s the core philosophy behind our new OWASP Top 10 for LLMs and GenAI collection. We tackle the problem that AI is often treated as a "new and daunting" field. By framing threats like Supply Chain Vulnerabilities or Excessive Agency as variations of known issues, we accelerate the learning curve. We strip away the "AI mysticism" to reveal the underlying mechanical flaw. Rebecca: I love "stripping away the mysticism." Let’s talk about how that works, starting with the big one everyone is concerned about—Prompt Injection. How do you take that from "scary AI jailbreak" to something a grounded engineer can fix? Sabrina: In the media, Prompt Injection is portrayed as this sentient ghost in the machine. In our lab, we treat it as an Input Validation failure. We show that the system is simply confusing "user input" with "system instructions." When a developer sees it through that lens, the eye-roll stops. It’s no longer hype; it’s just mixed context. And they know how to fix mixed context. We show them how to apply that architectural fix to an LLM. Rebecca: That maps perfectly. But looking at the curriculum, I see we go much deeper than just a standard "Top 10" checklist. Why was it important to go beyond the simple definitions? Sabrina: Because a definition tells you what something is, but it doesn't tell you how it impacts you. In the new OWASP LLM collection, we focus on Core Mechanics and Attack Vectors. We deconstruct threats like Data and Model Poisoning or Supply Chain vulnerabilities to show you exactly how they infiltrate a system. It’s the difference between knowing what an engine looks like and knowing how to take it apart. You need to understand the mechanics of the vulnerability to understand the potential impact—otherwise, you're just guessing at the fix. Rebecca: It sounds like we're upgrading their threat modeling software, not just their syntax. Sabrina: Yes, 100%. Look at Excessive Agency. That sounds like a sci-fi plot about a robot takeover. But when you do the lab, you realize it’s just "Broken Access Control" on steroids. It’s about what happens when you give an automated component too much permission to act on your behalf. Once a developer maps "Excessive Agency" to "Least Privilege," they stop worrying about the robot and start locking down the permissions. Rebecca: Is the goal to get them through all ten modules to earn a Badge? Sabrina: The OWASP Top 10 for LLMs Badge is the end state. It proves you have moved past the "sweat or eye-roll" reactive phase. To your manager, it signals you have a proactive, structured understanding of the AI risk landscape and can speak the language of secure AI. There’s no hype in that. Only value-add to you and your team. Final Thought Our OWASP Top 10 for LLMs collection is the antidote to AI security angst. For the developer, it demystifies the threat landscape, proving that their existing security instincts are the key to solving new problems. For the organization, it ensures that your AI strategy is built on a bedrock of engineering reality, rather than a shaky foundation of fear. [Access Collection]170Views1like0CommentsVibe coding your way to a ZAP MCP server
If you haven’t heard already, Model Context Protocol (MCP) is a new, standardised way that LLMs and AI agents can communicate with external tools. Lots of people in the AI community have been calling it a “USB-C for AI” because you only need one connector for many tools. Today, however, I’m not going to go deep into the weeds about MCP particulars or what a typed JSON-RPC envelope is. Instead, I’m taking you on an adventure I embarked on a few weeks ago. It involved me talking to ZAP in plain English, going to get a cup of coffee, and coming back to a report on a vulnerable web application. But before I embarked on my quest, I needed some equipment: An LLM: I went with Claude Sonnet 4, from Anthropic’s family of foundational models and the creator of MCP. A way to interact with an LLM that can also use MCP servers: Conveniently, Claude has a wonderful desktop app that lets you do this easily. Some MCP servers: Most importantly, the Filesystem MCP server for the ultimate budget-friendly Claude Code imitation. An SDK to write my own ZAP MCP server, to avoid re-inventing the MCP wheel. The Python MCP SDK did nicely. With these tools in hand, it was time to see how far I could take this idea, based purely on vibe coding. Andrej Kaparthy, co-founder of OpenAI (the creators of ChatGPT) and coiner of the term, explains: “There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists”. Vibing responsibly While it’s tempting to be allured by the simplicity of telling an LLM to “code me a ZAP MCP server” and have it produce lines of code, it’s generally in your long-term interests to have some understanding of what you’re trying to achieve. Then, you can ask the AI to either expand on that understanding or formulate a plan before it attempts to execute. Don’t just take it from me; this is Anthropic’s first recommended workflow for using Agentic AI. With responsible vibing in mind, I crafted a fairly simple exploratory prompt. I want to create an MCP server for the open-source proxy tool ZAP. I know it already has a comprehensive Python SDK for interacting with it in an automated fashion. How could I leverage the existing SDK to create a ZAP MCP server easily? Putting the code into vibe coding I enabled Claude to read and write files using the Filesystem MCP to create the ZAP MCP server project. In the long run, this saved a lot of time copying and pasting code back and forth between VSCode and Claude Desktop. To do this, I just had to write a small bit of JSON to configure the Filesystem MCP so it knew what directories it could access. Once I’d set that, I restarted Claude Desktop for it to take effect. The next prompt was as simple as telling Claude where it should create the Python project, which version to use, and what configuration file to use for dependencies: Use the fileserver MCP to create the necessary project and code files for this ZAP MCP project in the directory ~/Development/mcp/zap-mcp which you have access to. The project should use Python 3.12 and dependencies set up with a pyproject.toml. And just like that, the AI is using one MCP to write another. Vibe check I love a YOLO code execution as much as anyone, but I decided to take a look at the project code it created, since I was running this directly on my personal computer. The main file is 1200 lines of code! But why should I care if I don’t have to write or maintain any of it myself? # Initialize MCP server self.server = Server(“zap-mcp-server”) I’m more interested in where the MCP stuff is actually happening. Here, the code initializes an MCP server, which makes sense, since I was trying to build one. On line 4, you can see a decorator @self.server.call_tool() is used to handle when the function is called by an MCP client, such as Claude Desktop. Interestingly, it chose to skip the higher-level @server.tool() function that handles all the boilerplate. But again, why should I care if I’m not writing it? The only other particularly interesting part is that the actual code to perform an action in ZAP is about three lines long. That’s because it’s leveraging the wonderful ZAP SDK, just like I planned in the first prompt. Then there’s another decorator on line 4 of this code snippet. Again, the code used the lower-level decorator function list_tools() because it used call_tool() to explicitly list what tools were available from the ZAP MCP server to the MCP client. Ultimately, this gave me more control over the input data – which was useful when interfacing with another SDK that was expecting certain types and structures like the ZAP one I was working with. I quickly checked the code, and then I was almost ready to take this MCP server for a spin! There was just one last bit of JSON configuration to do. I had to specify how to run the server, and from what directory. After one last restart of Claude Desktop, I was ready to make ZAP get to work, through the magic of plain English and MCP servers. Right…? WRONG! Turns out the vibes weren’t checked well enough, and they went from being immaculate to immensely painful. A fair few errors popped up before I could successfully get this to even run one ZAP command. So I manually fixed this code like a responsible developer kept feeding the errors into Claude until it fixed them. Which eventually gave me the ability to get the MCP tool working! Vibe code victory! Key takeaways I’ll leave you with a few lessons I took away from this project: Vibe coding is more than just a meme. It has genuine uses, especially for prototyping. That doesn’t mean it’s capable of autonomously creating projects from start to finish that are ready for enterprise deployment (yet). “Context engineering” is just as important as a good prompt. Use tools such as Claude’s Project Knowledge to improve an LLM's domain knowledge. MCP can be an incredibly powerful tool, but it doesn’t come without risks. You’re potentially giving information to parties that don’t have the same data privacy policies as the LLM you’re using. You’re authorizing actions on your behalf that you may not be aware of. MCPs aren’t particularly different or difficult to build than to a normal client server architecture. If you have a tool with an API, you’ll be surprised at how quickly you can turn it into an MCP server. That’s it! Thanks for coming on this adventure with me. I hope you get inspired to start writing and using your own MCP servers.144Views1like0CommentsFrom XSS and SQLi to AI-generated code and supply-chain compromise: How application security is evolving
Keeping up with vulnerabilities is like playing a never-ending game of whack-a-mole. One day, we were knee-deep in XSS payloads and buffer overflows; the next, developers everywhere were plugging SQL injection holes with duct tape and regex. A lot of our earlier tech wasn’t built with security in mind – or at least not at the forefront of our minds. But over the past two decades, the culture has changed, and developers are shifting left. Programming languages, as well as the frameworks and ecosystems around them, are evolving and adapting to evade security threats. XSS: From everyday headache to “mostly handled” Remember when cross-site scripting (XSS) was every web developer’s nightmare? In the 2000s, it seemed like every other website was vulnerable. If you were lucky, your users’ only punishment was an annoying pop-up. If not, credentials and session cookies were up for grabs. Modern languages and frameworks took proactive steps in their designs. Today, many of them have built-in protections against common vulnerabilities like XSS. Some examples are: React, Angular, Vue: Automatically escape output by default. You have to go out of your way to render raw HTML (and they make you feel guilty about it). Django, Ruby on Rails, ASP.NET Core: Templates escape user input by default. Browsers: Even they pitch in, with features like Content Security Policy (CSP). It’s still possible to write a vulnerable application, but the bar is higher. As a result, attackers have a bigger hurdle to jump over. This is thanks to the evolution of how languages and frameworks approach security – not as a feature, but as the default. SQL injection: OR 1=1 is mostly history SQL injection once powered major data breaches. Now, parameterized queries have become the norm, and we’re on our way to leaving those days behind. Object-Relational Mappers (ORMs) like SQLAlchemy, Entity Framework, and Hibernate generate safe SQL. Most modern languages make string concatenation in queries unnecessary (and uncool). Even PHP, once infamous for its “raw SQL everywhere” approach, now encourages prepared statements and offers safe database APIs. Not a perfect world, but much improved. It’s crucial to keep in mind, however, that technology shouldn’t be relied upon uncritically to keep applications secure. Developers must do their due diligence. Memory mischief: The rise of Rust (and memory-managed languages) C and C++ are legendary for performance. And legendary for buffer overflows, use-after-free, and all manner of memory mischief. Enter memory-safe languages: Java, C#, Python: Garbage collection and managed memory eliminate entire classes of bugs. Rust: Takes it up a notch with ownership semantics, preventing data races and dangling pointers at compile time. A lot of system-level work has migrated to these “safer” languages, and the impact is apparent in everything from embedded devices to operating systems (hello, Rust in the Linux kernel). The new frontiers: AI and supply chain attacks Just as one threat starts to become old news, new threats emerge. Two of these are: Supply chain compromise The SolarWinds breach and NPM “event-stream” incident have put supply chain attacks in the spotlight over the last decade. It goes to highlight that you can write perfect code and still get breached because a dependency many levels deep was compromised. Some of the changes we’re seeing as a result: Package registries are adopting MFA and signing requirements. Software Bill of Materials (SBOM) is becoming a must-have, especially in regulated industries. But the battle is ongoing. If anything, our code is more interconnected than ever, and the rise of “vibe coding” is complicating matters further. AI-generated code AI tools like GitHub Copilot and ChatGPT are generating millions of code snippets. This is a double-edged sword. You get increased productivity, but the code is often vulnerable. This is partly a reflection of the insecurities in the code that the models were trained on. I asked ChatGPT 4.5 to identify the vulnerability in the following code and it couldn't. Can you? Leave a reply with your thoughts! @limits(calls=6, period=60) @app.route("/change-password", methods=["POST"]) def change_password(): user = session.get("user") if not user: return jsonify({"message": "Unauthorised"}), 401 data = request.get_json() password = data.get("new_password") if not password: return jsonify({"message": "Password is required"}), 400 if not db.change_password(user, password): return jsonify({"message": "Failed to change password"}), 500 return jsonify({"message": "Password changed successfully"}), 200 @limits(calls=6, period=60) @app.route("/login", methods=["GET", "POST"]) def login(): data = request.get_json() user = data.get("user") password = data.get("password") if not user or not password: return jsonify({"message": "Username and password are required"}), 400 if not db.authenticate(user, password): return jsonify({"message": "Invalid username or password"}), 401 session["user"] = user return jsonify({"message": "Logged in successfully"}), 200 The changes I expect to see going forward are: Guidelines and governance for AI-generated code: Programming languages or coding standards may soon explicitly include guidelines, validation rules, or security frameworks tailored to AI-assisted coding, ensuring generated code adheres to secure patterns. Work is already being done to create rules files for improved security. Integrated security checks at the IDE level: IDEs may embed deeper vulnerability scanning and real-time feedback directly within coding processes. Development environments or compilers may also come integrated with validation tools specifically attuned to potential weaknesses inherent in AI-generated code. Increased reliance on static/dynamic security analysis tools: Enhanced automated scanning integrated into CI/CD pipelines, detecting flaws pre-deployment. Keeping people in the loop: Security awareness should be a top priority, so everyone is ready for the worst-case scenario. Proving and improving skills: Developers’ training should increasingly emphasize secure coding, particularly when assisted by AI tools. At the current stage, thorough PR reviews are more crucial than ever. As with any code, always review and test AI-generated code, especially for security-sensitive logic. AI is a tool, not an auditor. Security: Not a feature, but a default Application security has come a long way from the Wild West days of the early internet. The biggest shift in the software development landscape is that security is no longer a bolt-on. Modern programming languages and ecosystems try to make the secure path the easy path. Defaults are safe (escaped output, parameterized queries), dangerous operations are noisy (compiler warnings, explicit function names), and security updates are automated (thanks to package managers and CI/CD integrations). Of course, attackers are creative, and the landscape is always shifting. But the evolution of programming languages, as well as the surrounding tools and communities, means developers have a fighting chance. While we face newer threats like AI-generated code vulnerabilities and supply chain compromise, the foundations are getting stronger. The key is to keep learning, stay skeptical, and use the tools at your disposal properly. The biggest shift I would like to see is the human element no longer being viewed as the “weakest link”. The moles never stop popping up, but now, at least, we have better mallets – and the resources to help us use them.139Views2likes0CommentsTrick or Treat on Specter Street: Morphy's Mansion Challenge
I understand that the move_logger is the vulnerable program, and tried a few methods to exploit it. However, where is the token.txt? Anyone managed to find it? "Whatever means necessary" is quite broad. Any hints from anyone?Solved127Views0likes1CommentDecoding Coding: Picking a Language
These days, more and more jobs can benefit from being able to write simple scripts and programs, especially in cybersecurity. For example, pulling data from an API, scraping web pages, or processing large data files to extract information – the list of uses is virtually endless! Tempting as it is to dive right in, there are several things worth thinking about before you begin. This article will discuss one of the most important choices – selecting a language. What to consider when choosing a language A basic understanding of programming languages can make your life easier, increasing your adaptability and finesse in different environments. But with tons of languages like Python, Java, JavaScript, Go, Rust, and more, which one should you choose? Here are the crucial factors to consider: What's available Can you install whatever language you like to run your code, or are there limitations? If you have an enterprise-managed computer, you might not be able to install new software or languages, and you may need to use the default options. For Windows, this is PowerShell. Bash Script is the equivalent for Mac and Linux devices, and Python is often available too. Your personal experience and interest This one might sound obvious, but it does matter. We learn better and faster when we're invested in the subject. Look at your previous experiences. Have you worked with any programming languages before? Did you enjoy them? For example, if you had a good experience working with Python, let that guide your decision! That said, don't shy away from learning something new if there's a good reason or you’re curious to do so. What's trending in your organization Does your organization or team predominantly use a specific language? Not only would learning that one help you communicate better with your colleagues, but it could also give you an edge while working with systems developed in that language. Plus, there’ll be plenty of people to talk to if you get stuck! The language's capabilities and nature Like people, different languages have different strengths. Some are fantastic for web development (like JavaScript), while others are better suited for system-level programming (like C). Python is often an excellent choice. It's considered easy to learn, incredibly flexible, and powerful due to the huge catalog of packages available. While it isn't as fast as many other languages, for most purposes, it's usually more than fast enough. Java is a very widely used object-oriented programming language and can be extremely fast. The learning curve is steeper than Python, but there are loads of learning resources available. JavaScript (not to be confused with Java!) isn’t as useful for quick standalone scripts or applications, but it's the dominant language for websites and browsers, so understanding it is practically a superpower for testing and manipulating websites and applications. C and C++ allow low-level access to memory and offer a lot of flexibility – incredibly helpful when evaluating systems with these languages at their core. Available tools and training Great tools can make tough jobs easier. Certain programming languages have robust toolsets that can help automate your tasks. For instance, Python has a wide array of libraries and frameworks that make handling big projects a cinch while saving you time and effort – why reinvent the wheel when you can just import it? Take a look at what training is available for the language you’re interested in. Older and more popular languages are likely to have more to choose from, but there’s loads out there and a lot of it is free! Also, consider what tools you might already have access to within your organization. Community and support If a programming language has a large active community, it means help is readily available when you get stuck. Languages like Python, JavaScript, and Java have strong communities and plenty of online resources available. Scope for growth If you're planning to learn a language, why not pick one that's in demand? Check job boards, look at industry trends, and see if learning a particular language can give your professional growth a boost! Summary Remember, no language is “the best". The best is the one that suits your needs and circumstances. You might even find mastering multiple programming languages useful over time. Just like speaking multiple languages, the more you know, the better you can communicate in different environments! Once you understand some of the basic programming concepts, like variables and loops, it’s easier to learn a second or third language. Learning a programming language may initially seem like climbing a steep mountain. But once you get the hang of it, you'll realize that the view from the top was well worth the hike! Want to take the next step? Here are some lab collections that may help you learn a bit more about PowerShell and Python: PowerShell Basics Offensive PowerShell Introduction to Python Scripting Share your thoughts If you’re new to coding, tell us what language you’re trying out! Why did you pick it, and would you make the same choice again? Are there any specific challenges you found or any relevant experiences you’d like to share?121Views1like2CommentsArchitecting at Speed: Mastering Secure Development with OpenAI Codex
Welcome back to our series, “Behind the Scenes of Immersive One”! The following is a conversation with BenMcCarthy, Lead Cybersecurity Engineer for Immersive One, and RebeccaSchimmoeller, Lead Product Marketing Manager. Today, we’re continuing the discussion on our Secure AI capability. There is a misconception that security is the enemy of development speed. But with AI, the opposite is true. If you don't have security engineered into your AI workflow, you can't actually go fast—because you’re constantly stopping to fix 'trash code' or patch vulnerabilities. The developers who win in this era aren't just the ones coding faster; they are the ones architecting systems that are secure by design, even at AI speeds.” Rebecca: That’s a crucial distinction, Ben. We often hear that AI is a "firehose" of productivity, but without control, that firehose just creates a mess. It seems like the role of the developer is shifting from "writing lines" to managing this high-velocity output. How does the new Building with AI: Codex CLI collection help them make that shift? Ben: By giving them the controls they need to harness that speed safely. If you let OpenAI’s Codex run without guardrails or understanding, you get velocity, sure—but you also get risk. We designed this collection to empower developers to become their own Security Architects for their workflows. We are leveraging the Azure AI Foundry capability to give learners real, secure access to these models. The goal isn't to teach you how to hit "Tab" to autocomplete; it's to teach you how to rigorously evaluate, guide, and constrain what the AI produces using the command line tool like Codex so you can ship code that is both fast and bulletproof. Rebecca: So it’s about elevating the human’s role to "Architect." Let’s talk specifics given what the collection covers—how did you instill that mindset? Ben: We start by ensuring developers know the power of what you can do with Codex. How to get the best out of your models in this CLI tool. We go over effective prompt engineering, tool usage, and how AI can help with "Greenfield" projects (net-new builds) and "Brownfield" projects (legacy codebases). This is a critical skill for a lead engineer. AI is great at generating new code (greenfield), but it can be dangerous when it doesn't understand the hidden dependencies of a ten-year-old application (brownfield). We teach engineers how to spot those context gaps, key stuff that the AI might miss. Rebecca: I saw "specification-driven development" was a big part of your roadmap, too. How does that fit into the "speed" theme? Ben: This is the ultimate accelerator. Instead of writing the code line-by-line, you write the "spec"—the blueprint—and let Codex handle the implementation details. It’s not about doing less work; it’s about doing higher-leverage work. You define the logic and security constraints, and the AI handles the boilerplate. It shifts the developer’s brain from "how do I type this function?" to "what should this system actually do?" Rebecca: That sounds like a powerful approach, Ben. But what about the security risks? If developers are offloading implementation to Codex, how do they avoid leaking data or introducing bugs? Ben: That’s non-negotiable. In the Guardrails lab, we show learners how to build a safety net. We teach practical methods for stripping PII (Personally Identifiable Information) and using hooks to sanitize inputs before they reach the model. It gives developers the confidence to use these tools freely, knowing they have already engineered the safety mechanisms to protect their org. Rebecca: I saw a lab in the collection called "Tools and MCP" (Model Context Protocol). Is that where you get into advanced workflows? Ben: Exactly. This is where we give developers the keys to become a force multiplier. We show users how to connect Codex to other tools. This is the ideal definition of ROI for developers. You’re automating the tedious "check your work" phase, allowing you to ship secure code faster without burning out on manual review. Rebecca: It feels like that approach accepts today’s AI era realities for what they are and finds the strategic advantages… pushing developers towards productivity and security gains with real mastery. And just like the Claude collection, users have access to a Demonstrate Lab, to prove that mastery, am I right? Ben: Absolutely. The Demonstrate Lab challenges users to build a solution that’s efficient, functional, and secure. It proves that you aren't just an "AI user"—you are an AI Engineer who understands the capabilities the collection covers. Final Thought Our Building with AI: Codex collection is about upgrading the developer’s toolkit. For the organization, it ensures AI adoption is secure and scalable. For the engineer, it removes the drudgery of boilerplate, freeing you to focus on the creative, architectural challenges that drive real value. Ready to upgrade your workflow? [Access Collection]100Views1like0Comments