Thursday April 24, 2025 4:00pm - 5:00pm PDT
The once theoretical AI bogeyman has arrived—and it brought friends. Over the past 12 months, adversaries have shifted from exploratory probing to weaponized exploitation across the entire AI stack, requiring a fundamental reassessment of defense postures. This presentation dissects the evolution of AI-specific TTPs, including advancements in model poisoning, LLM jailbreaking techniques, and the abuse of vulnerabilities in ML tooling and infrastructure.

One of the most concerning recent developments relates to architectural backdoors, such as ShadowLogic. Last year, we discussed the use of serialization vulnerabilities to execute traditional payloads via hijacked models, but ShadowLogic is quite a different beast. Instead of injecting easily detectable Python code to compromise the underlying system, ShadowLogic uses the subtleties of machine learning to manipulate the model's behavior, offering persistent model control with a minimal detection surface.

Attacks against generative AI have also stepped up, with threat actors devising novel techniques to bypass AI guardrails. Multimodal systems add to the attack surface, as indirect prompt injection is now possible through images, audio, and embedded content—because nothing says "secure by design" like accepting arbitrary input from untrusted users!

The cybercrime ecosystem is naturally adapting to these shifting priorities, and AI-focused hacking-as-a-service portals are popping up on the Dark Web, where adversaries turn carefully guardrailed proprietary LLMs into unrestricted content generators. Even Big Tech is playing whack-a-mole with its own creations, with Microsoft's Digital Crimes Unit (DCU) recently taking legal action against hackers selling illicit access to its Azure AI services.

Finally, AI infrastructure itself is proving increasingly prone to attack. The number of ML tooling vulnerabilities has surged, fueled by arbitrary code execution flaws that could allow attackers to take over AI pipelines. GPU-based attacks are also on the rise, allowing sensitive AI computations to be extracted from shared hardware.

As these threats continue to evolve, defenders must shift from reactive fixes to proactive security-by-design approaches to mitigate the growing risks AI systems face today.
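To make the contrast between the two backdoor styles concrete, here is a minimal sketch of the two techniques the abstract compares. The class names and the benign print payload are illustrative inventions, and the second half only approximates the idea behind ShadowLogic (which operates on serialized computational graphs such as ONNX) with a PyTorch stand-in: the point is that the backdoor lives in the model's own graph logic rather than in injected Python code.

```python
import pickle

import torch
import torch.nn as nn

# --- Last year's technique: a serialization payload. Deserializing the
# "model" file executes attacker-controlled code via __reduce__. The
# payload here is a harmless print; real attacks run arbitrary commands.
class MaliciousStub:
    def __reduce__(self):
        return (print, ("payload executed at load time",))

blob = pickle.dumps(MaliciousStub())
pickle.loads(blob)  # prints "payload executed at load time"

# --- A ShadowLogic-style idea: no foreign code at all. The backdoor is a
# trigger condition woven into the model's forward computation, so the
# model behaves normally until a specific input pattern appears.
class BackdooredClassifier(nn.Module):
    def __init__(self, in_features: int = 8, num_classes: int = 3):
        super().__init__()
        self.linear = nn.Linear(in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.linear(x)
        # Trigger: first feature above a magic threshold forces class 0.
        trigger = (x[:, 0] > 0.999).unsqueeze(1)
        forced = torch.full_like(logits, -10.0)
        forced[:, 0] = 10.0
        return torch.where(trigger, forced, logits)

model = BackdooredClassifier()
clean = torch.rand(1, 8) * 0.5   # no trigger present
poisoned = clean.clone()
poisoned[0, 0] = 1.0             # embed the trigger pattern
print(model(clean).argmax(dim=1).item(), model(poisoned).argmax(dim=1).item())
```

Note that scanning the second model for embedded code or suspicious serialized objects turns up nothing, since the malicious behavior is expressed entirely through ordinary graph operations. This is what the abstract means by a minimal detection surface.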
Speakers

Marta Janus

Principal Researcher, HiddenLayer
Marta is a Principal Researcher at HiddenLayer, focused on investigating adversarial machine learning attacks and the overall security of AI-based solutions. Prior to HiddenLayer, Marta spent over a decade working as a researcher for leading anti-virus vendors. She has extensive experience...