cloud security
Systems Manager: Automation
Hello,

On exercise 4 (Create playbook), I'm getting an error when I configure Step One according to the instructions, and I can't proceed with the playbook creation:

"AccessDeniedException: User: {{user}} is not authorized to perform: ssm:CreateDocument on resource: {{resource}}/NewRunbook because no permissions boundary allows the ssm:CreateDocument action"

This is how I structured the code:

schemaVersion: '0.3'
assumeRole: {{according to the instructions}}
description: EC2-Stop-Prod-EU-WEST-1
mainSteps:
  - name: Pause
    action: aws:pause
    nextStep: Approve
    isEnd: false
    inputs: {}
  - name: Approve
    action: aws:approve
    nextStep: get_instance_ids
    isEnd: false
    inputs:
      Approvers:
        - {{according to the instructions}}
  - name: get_instance_ids
    action: aws:executeAwsApi
    nextStep: turn_off_prod_instances
    isEnd: false
    inputs:
      Api: DescribeInstances
      Service: ec2
      Filters:
        - Name: tag-key
          Values:
            - prod
        - Name: instance-state-name
          Values:
            - running
    outputs:
      - Name: InstanceIds
        Selector: $.Reservations..Instances..InstanceId
        Type: StringList
  - name: turn_off_prod_instances
    action: aws:executeScript
    isEnd: true
    inputs:
      Runtime: python3.8
      Handler: script_handler
      Script: |-
        def script_handler(events, context):
            import boto3
            # Initialize the EC2 client
            ec2 = boto3.client('ec2')
            instanceList = events['InstanceIds']
            for instance in instanceList:
                ec2.stop_instances(InstanceIds=[instance])
      InputPayload:
        InstanceIds: '{{get_instance_ids.InstanceIds}}'

Has anyone had the same error while doing this lab?

Regards,
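For context on what the error itself says: the document creation is being denied by a permissions boundary attached to the lab user, not by anything inside the runbook content. For the action to succeed, a boundary (or identity policy) would need a statement along these lines. This is only a minimal sketch for illustration; the account ID, region, and document name are placeholders, and the lab's boundary may deliberately restrict which runbook names are allowed, so check the exact name the instructions ask for.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRunbookCreation",
      "Effect": "Allow",
      "Action": "ssm:CreateDocument",
      "Resource": "arn:aws:ssm:eu-west-1:111122223333:document/*"
    }
  ]
}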
From Concept to Content: A Deep Dive into Building and Critically Analyzing Labs

Putting it all together

The main bulk of the development work is building the labs. This usually comprises two parts that require different skill sets: one is putting together the written portion of the lab (such as the briefing, tasks, and outcomes), and the other is implementing any technical needs for the practical side of the lab. While some labs may focus more on one component than the other, this general overview of lab development will demonstrate each step of the process.

Developing written content

Regardless of the lab, the written content forms the backbone of the educational material. Even with prior knowledge and planning, additional research is essential to ensure clear explanations. Once research is complete, an outline is drafted to focus on the flow, ensuring the information is presented logically and coherently. This step helps enhance the final product.

The final step is turning the outline into the final written content. Everyone approaches this differently, but personally, I like to note all the points I want to cover in a bullet list before expanding on each one. This method ensures all necessary information is covered, remains concise and clear, and aligns with learning outcomes and objectives.

Technical implementation

For practical labs, technical setup is key. Practical tasks should reinforce the theoretical concepts covered in the written portion, helping users understand the practical application of what they've learned.

Before implementing anything, the author decides what to include in the practical section. For a CTI lab on a vulnerability, the vulnerable software must be included, which involves finding and configuring it. For general topics, a custom script or program may be needed, especially for niche subjects. The key is ensuring the technical exercise is highly relevant to the subject matter.

Balancing the difficulty of practical exercises is crucial. Too easy, and users won't engage. Too hard, and they'll get frustrated. Tasks should challenge users to think critically and apply their knowledge without discouraging them. This requires iterative testing and feedback to fine-tune the complexity. The goal is to bridge the gap between theoretical knowledge and real-world application, making learning effective and enjoyable.

Quality assurance and finishing touches

The development process is complete, but there's still work to do before releasing the content. We take pride in polishing our content, so the final steps are crucial.

Checking against expectations

Before the official QA process, we review the original plan to spot any discrepancies, such as unmet learning objectives or missing topics. While deviations don't always require changes, they must be justified.

Assuring quality

A thorough QA process is vital for catching grammatical errors, technical bugs, and general improvements before release. Each lab undergoes three rounds of QA, each performed by a different person – two rounds of technical QA, and one for presentation. Some of the steps taken during technical QA include:

- Verifying written content accuracy, flow, and completeness.
- Ensuring all learning objectives are covered.
- Identifying any critical bugs or vulnerabilities that would allow users to bypass the intended solution.
- Providing small tweaks or changes to tasks for clarity.
- Assigning relevant mappings (NICE K-numbers, MITRE tags, CWEs).

After technical QA, the lab is reviewed by our quality team to ensure it meets our presentation standards.
Once all labs in a collection pass rigorous QA, they are released for users. The final step occurs post-release on the platform.

Gathering and implementing user feedback

Users are at the heart of everything we do, and we strive to ensure our content provides real value. While our cyber experts share valuable knowledge, user feedback prevents echo chambers and highlights areas for improvement. After new releases, we conduct an evaluation stage to analyze what went well and where we can improve.

User feedback

We gather quantitative and qualitative feedback to help us identify root issues and solutions.

Quantitative feedback involves analyzing metrics like completion rates and time taken. We also examine specific patterns, such as frequently missed questions or labs where users drop out. These are important signals, but we avoid drawing conclusions solely from this data. This is where qualitative feedback comes in.

Qualitative feedback includes user opinions and experiences gathered from feedback text boxes, customer support queries, and direct conversations. These responses are stored and read by the team and provide context beyond raw numbers. Channels such as customer support queries and follow-ups with customers also help us improve our content.

Post-release reviews

We conduct post-release reviews at set intervals after content release to analyze quantitative and qualitative data. This review helps us assess the entire process and identify areas for improvement. These reviews also allow us to update content with new features, like adding auto-completable tasks for CyberPro, ensuring our content remains current and enhances the user experience.

Wrapping up

Hopefully, this blog post has provided insight into all the care we put into building and tailoring our content for users. This process has come a long way since we started making labs in 2017!

Don't forget — with our new Lab Builder feature, you can now have a go at creating your own custom labs. If there's a topic that interests you and you want to share that knowledge with your team, making your own lab is a great way to do it!

If there's any part of the process you'd like to know more about, ask in the comments. Are there any collections that made you think, "Wow, I wonder how this was made"? Let us know!
Feature Focus: Introducing Drag and Drop, Free Text Questions, and Instructional Tasks in the Lab Builder

I'm excited to announce the latest updates to the Lab Builder. Today, we've introduced three new task types:

- Drag and drop
- Free-text questions
- Informational/instructional

These new task types enhance the flexibility and interactivity of your labs, offering even more engaging learning experiences. They can be added to your lab as usual via the Tasks library. They're live now, so you can start adding them to your labs right away.

Drag and drop

Drag and drop is a dynamic, interactive task. Designed to challenge the user's recognition and matching abilities, it's perfect for testing their knowledge in various subjects. This task type consists of text-based items and targets; users need to drag the items to the correct corresponding targets.

Items and targets are quick to add and edit in the Lab Builder. You can have a minimum of two items and a maximum of 12. You could use the drag-and-drop task type for questions and answers, completing sentence fragments, or matching terms with definitions.

Free-text questions

This task type requires the user to manually enter text to answer a question. You need to write a question and provide at least one possible answer – but there can be multiple correct answers. You can configure this easily in the Lab Builder.

Fuzzy matching automatically detects answers that are close enough to the correct answer. For example, if the user submits the right answer with a minor spelling error, it'll still be accepted. This is designed to reduce user frustration and is enabled by default. You can disable fuzzy matching by turning off the toggle at the bottom. (There's a short sketch of how this style of matching typically works at the end of this post.)

Finally, you can also provide feedback to users if they get an answer wrong, sort of like a hint. This is useful if you want to help point your user in the right direction and prevent them from getting stuck.

Instructional tasks

This task type is designed to provide users with vital information, guidelines, or instructions. In the Lab Builder, instructional tasks have the same configuration options as the Briefing panel.

Instructional tasks are particularly useful for explaining what the user is expected to do in a following task, presenting story details, or providing a learning journey for users as they go through the lab. For example, you might remind users about specific information they need to answer some tasks, tell them to log into an application, or point them to a specific part of the briefing panel before they answer the next questions.

Why are these new features useful?

- Increased engagement: These new question types introduce a gamified element to your custom labs, making learning more interactive and enjoyable.
- Versatile content creation: These features expand the possibilities for creating diverse and engaging labs, allowing you to tailor your content to your organization's unique needs.
- Enhanced learning: Drag and drop encourages active recall and association, while free-text questions promote critical thinking and deeper understanding.

Go and build some engaging labs!

Explore the possibilities and build labs that truly engage your users! For more guidance, visit our Help Center, where there's ample documentation on using the Lab Builder in more detail.
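A quick aside on the fuzzy matching mentioned under free-text questions: the exact matching algorithm isn't covered here, but a common way to implement this kind of tolerance is to compare the submission against each accepted answer using a similarity ratio. The sketch below is illustrative only; the 0.85 threshold is an arbitrary assumption, not the Lab Builder's actual setting.

from difflib import SequenceMatcher

def is_close_enough(submitted, accepted_answers, threshold=0.85):
    """Return True if the submission exactly or nearly matches any accepted answer."""
    submitted = submitted.strip().lower()
    for answer in accepted_answers:
        # Ratio of matching characters between the two strings (1.0 means identical)
        if SequenceMatcher(None, submitted, answer.strip().lower()).ratio() >= threshold:
            return True
    return False

# A minor spelling error is still accepted
print(is_close_enough("cross-site scriptng", ["cross-site scripting"]))  # True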
From Concept to Content: A Deep Dive into Theorizing and Planning a Lab Collection

The decision process

When creating new content, the first step is deciding what to commit to. We consider:

- User demand: Are users frequently requesting a specific topic?
- Evolving landscapes: Are there new technologies or industry trends we should cover?
- Internal analysis: Do our cyber experts have unique insights not found elsewhere?
- Overarching goals: Is the content part of a larger initiative like AI security?
- Regulations and standards: Can we teach important regulations or standards?
- Cyber competency frameworks: Are we missing content from frameworks like NICE or MITRE?

After considering these points, we prioritize one idea for creation and refinement. Lower-priority ideas are added to a backlog for future use.

Feasibility and outcomes

Having a concrete idea is just the beginning. Over the years, we've learned that understanding the desired outcomes is crucial in planning.

Our core mission is education. We ensure that each lab provides a valuable learning experience by setting clear learning objectives and outcomes. We ask ourselves, "What should users learn from this content?" This ranges from specific outcomes, like "A user should be able to identify an SQL injection vulnerability", to broader skills, like "A user should be able to critically analyze a full web application". Listing these outcomes ensures accountability and fulfillment in the final product.

Setting clear learning objectives involves defining what users will learn and aligning these goals with educational frameworks like Bloom's Taxonomy. This taxonomy categorizes learning into cognitive levels, from basic knowledge and comprehension to advanced analysis and creation. This ensures our content meets users at their level and helps them advance.

Turning big topics into bite-sized chunks

Once a topic is selected, we must figure out how to break down huge subject areas into digestible chunks. This is a fine balance; trying to cram too much information into one lab can be overwhelming, while breaking the subject down too much can make it feel disjointed.

One good approach is to examine the learning objectives and outcomes set out in the first step, map them out to specific subtopics, and finally map those to labs or tasks. For example, consider this theoretical set of learning outcomes for a Web scraping with Python lab collection:

- A user should understand what web scraping is and when it's useful.
- A user should be able to make web requests using Python.
- A user should be able to parse HTML using Python.
- A user should understand what headless browsers are and when to use them.
- A user should be able to use a headless browser to parse dynamic content on a webpage.

These outcomes can be mapped into two categories: theory outcomes ("A user should understand") and practical outcomes ("A user should be able to"). Understanding the difference between the two is useful, as a few things can be derived from it – for example, whether to teach a concept in a theory lab (heavy on theoretical knowledge without providing a practical task) or a practical lab (teaching a concept and exercising it in a practical environment). Using this, the outline for a lab collection can start to take shape, as seen in the outline below.

Learning outcome: A user should understand what web scraping is and when it is useful.
Knowledge type: Theory
Suggested lab title: Web scraping with Python – Introduction
Suggested lab content: A theory lab showing the basics of web scraping, how it works, and when it is useful.

Learning outcome: A user should be able to make web requests using Python.
Knowledge type: Practical
Suggested lab title: Web scraping with Python – Making web requests
Suggested lab content: A practical lab where the user will write a Python script that makes a web request using the "requests" library.

Learning outcome: A user should be able to parse HTML using Python.
Knowledge type: Practical
Suggested lab title: Web scraping with Python – Parsing HTML
Suggested lab content: A practical lab where the user will write a Python script that parses HTML using the "beautifulsoup" library.

Learning outcome: A user should understand what headless browsers are and when they should be used.
Knowledge type: Theory
Suggested lab title: Web scraping with Python – Understanding headless browsers
Suggested lab content: A theory lab explaining why dynamic content can't be scraped using the previous methods, and how headless browsers solve the issue.

Learning outcome: A user should be able to use a headless browser to parse dynamic content on a webpage.
Knowledge type: Practical
Suggested lab title: Web scraping with Python – Using headless browsers
Suggested lab content: A practical lab where the user will write a Python script that scrapes dynamic content from a website using the "puppeteer" library.

Learning outcome: All of the above
Knowledge type: Demonstrate
Suggested lab title: Web scraping with Python – Demonstrate your skills
Suggested lab content: A demonstrate lab where the user will complete a challenge that requires knowledge from the rest of the collection.

Each learning objective is assigned to a lab to ensure thorough and user-friendly coverage. Often, multiple objectives are combined into one lab based on subtopic similarity and the total number of labs in a collection. The above example illustrates the process, but extensive fine-tuning and discussion are needed before finalizing content for development.
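To make the practical entries above more concrete, the sort of exercise the "Making web requests" and "Parsing HTML" labs might build towards could look something like the sketch below. It uses the "requests" and BeautifulSoup libraries named in the outline; the URL and the h2 selector are placeholders rather than anything from a real lab environment.

import requests
from bs4 import BeautifulSoup

# Placeholder target; a real lab would point at a site provisioned inside the lab environment
URL = "https://example.com/articles"

# Make the web request (the "Making web requests" outcome)
response = requests.get(URL, timeout=10)
response.raise_for_status()

# Parse the returned HTML (the "Parsing HTML" outcome)
soup = BeautifulSoup(response.text, "html.parser")
for heading in soup.find_all("h2"):
    print(heading.get_text(strip=True))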
Next time…

In part two of this mini-series, you'll read about the next stage of the content development process, which involves laying the technical foundations for a lab collection.

Don't miss the series…

You can opt to receive an alert when part two of this series is released by "following" activity in The Human Connection Blog using the bell at the top of this page. In the meantime, feel free to drop any questions about the content creation process in the replies. Are there any parts of the planning process you want to know more about?
Systems Manager: Run Command (AWS)

Hi,

I am attempting to complete the Systems Manager: Run Command lab and to successfully run the commands (so that both turn green). The lab mentions there should be a token output from the second command, but the commands fail each time. Is there anywhere else I should be looking to get the token and/or successfully run the commands?
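For anyone troubleshooting something similar: when a Run Command invocation doesn't show the output you expect in the console, you can usually pull the full output programmatically. A rough sketch with boto3, where the command ID, instance ID, and region are placeholders you'd take from your own lab session:

import boto3

ssm = boto3.client('ssm', region_name='eu-west-1')  # placeholder region

# List recent Run Command executions to find the one you just ran
for cmd in ssm.list_commands()['Commands']:
    print(cmd['CommandId'], cmd['DocumentName'], cmd['Status'])

# Fetch the full output of that command on a specific instance
result = ssm.get_command_invocation(
    CommandId='<command-id>',   # placeholder
    InstanceId='<instance-id>'  # placeholder
)
print(result['Status'])
print(result['StandardOutputContent'])
print(result['StandardErrorContent'])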
Configuring Secure Web Hosting with AWS CloudFront

Hello,

Q4 on this lab (Browse to the CloudFront console and click on Create a CloudFront distribution) doesn't complete, even when following all the instructions. When the deployment completes, standard logging appears to be off, and when I click on Edit, it shows an IAM error. Is there anything I can do from here to complete this task?

Regards,
Microsoft Defender for Cloud: Setup, CSPM, and Compliance

In the above lab, the last question (Q11) asks for the MITRE technique associated with the previous assessment. The MITRE technique I found (both the name and the number) is not accepted as the answer. Has anyone else had the same issue?
[AWS] IAM: Tagging

Hello everyone. I'm stuck on Q3 of this lab. I've set the ec2-custom-read policy to:

{
  "Statement": [
    {
      "Action": [
        "ec2:GetTransitGateway*"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "ForAllValues:StringEquals": {
          "aws:TagKeys": [
            "automation"
          ]
        }
      },
      "Sid": "ReadEC2TransitGateways"
    }
  ],
  "Version": "2012-10-17"
}

But when I try to save the policy, it gives me an error:

Access denied to iam:CreatePolicyVersion
You don't have permission to iam:CreatePolicyVersion

Any hints on what I'm missing here? I don't think I understood what exactly the exercise is asking for.

Regards,