With artificial intelligence (AI) and large language models (LLMs) rapidly gaining traction across every industry, it's no surprise that LLM-enabled malware now exists. These samples can dynamically generate code, query data, and offload malicious functionality to LLMs, lowering the barrier to entry for threat actors deploying malware. This lab introduces one of the first known malware samples to use an LLM to perform malicious functionality.
Why should our customers care?
Most, if not all, companies are exploring AI to some degree, whether to make their workforce more efficient and productive or to build full models that underpin technical processes. With this in mind, and with the advent of basic malware that can use API keys to query LLMs and AI services, we will likely see this class of malware evolve over time.
By completing this lab, you'll see how these samples act as little more than a stub and querier for an LLM, and how that capability can be abused; the pattern is illustrated by the sketch below. The lab showcases the threat in its current state. We'll continue to monitor how this threat evolves, so stay tuned for more labs.
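To make the "stub and querier" pattern concrete, here is a minimal, defanged Python sketch of how such a sample might offload code generation to an LLM API. The endpoint and payload follow the public OpenAI chat completions API; the prompt, model name, and environment-variable key handling are illustrative assumptions rather than details from any specific sample, and the sketch only prints the response instead of executing it.

```python
import os
import requests

# Minimal sketch of the "stub and querier" pattern described above.
# The stub contains no malicious logic of its own; it only holds an
# API key and a prompt, and relies on the LLM to return the actual code.
# NOTE: the prompt, model name, and key handling below are illustrative
# assumptions, not details taken from any real sample.

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["LLM_API_KEY"]  # real samples may embed a hard-coded key


def query_llm(prompt: str) -> str:
    """Send a prompt to the LLM API and return the generated text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-4",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # A malicious stub would pass the returned code straight to exec()
    # or a dropped script; here we only print it for inspection.
    generated = query_llm("Write a Python function that lists running processes.")
    print(generated)
```

Because the actual logic lives in the prompt and the LLM's response rather than in the binary itself, static analysis of the stub alone reveals little about what the malware will do at runtime.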
Who is this defensive lab for?
- SOC Analysts
- Incident Responders
- Threat Hunters
Here is a link to the lab: https://immersivelabs.online/v2/labs/malterminal-analysis