Welcome to CanSecWest 2025.
Friday, April 25
 

9:00am PDT

Keys to Freedom: Analysis and Resolution of Arab Ransom Locker Infections
Friday April 25, 2025 9:00am - 10:00am PDT
The presentation "Keys to Freedom: Analysis and Resolution of Arab Ransom Locker Infections" explores the intricate workings of the Arab Ransom Locker malware, focusing on its impact on mobile devices. This session delves into a comprehensive analysis of the ransomware's attack vector, encryption mechanisms, and behavioral patterns. It will also provide a step-by-step guide to unlocking infected devices, including proven recovery techniques, decryption tools, and preventive strategies. Targeted at cybersecurity professionals and mobile device users, the presentation aims to equip attendees with actionable insights to understand, mitigate, and neutralize the threat posed by this malicious ransomware.
Speakers

Diyar Saadi

Security Operations Center, Spectroblock
Diyar Saadi Ali is a formidable force in the realm of cybersecurity, renowned for his expertise in cybercrime investigations and his role as a certified SOC and malware analyst. With a laser-focused mission to decode and combat digital threats, Diyar approaches the complex world of...
Friday April 25, 2025 9:00am - 10:00am PDT

10:00am PDT

Role Reversal: Exploiting AI Moderation Rules as Attack Vectors
Friday April 25, 2025 10:00am - 11:00am PDT
The rapid deployment of frontier large language model (LLM) agents across applications, in sectors McKinsey projects could add $4.4 trillion to the global economy, has mandated the implementation of sophisticated safety protocols and content moderation rules. However, documented attack success rates (ASR) as high as 0.99 against models like ChatGPT and GPT-4 using universal adversarial triggers (Shen et al., 2023) underscore a critical vulnerability: the safety mechanisms themselves. While significant effort is invested in patching vulnerabilities, this presentation argues that the rules, filters, and patched protocols often become primary targets, creating a persistent and evolving threat landscape. This risk is amplified by a lowered barrier to entry for adversarial actors and the emergence of new attack vectors inherent to LLM reasoning capabilities.
This presentation focuses on showcasing documented instances where security protocols and moderation rules, specifically designed to counter known LLM vulnerabilities, are paradoxically turned into attack vectors. Moving beyond theoretical exploits, we will present real-world examples derived from extensive participation in AI safety competitions and red-teaming engagements spanning multiple well-known frontier and legacy models, illustrating systemic challenges, including how novel attacks can render older or open-source models vulnerable long after release. We will detail methodologies used to systematically probe, reverse-engineer, and bypass these safety guards, revealing predictable and often comical flaws in their logic and implementation. 
Furthermore, we critically examine why many mitigation efforts fall short. This involves analyzing the limitations of static rule-based systems against adaptive adversarial attacks, illustrated by severe vulnerabilities such as data poisoning where merely ~100 poisoned examples can significantly distort outputs (Wan et al., 2023) and memorization risks where models reproduce sensitive training data (Nasr et al., 2023). We explore the challenges of anticipating bypass methods, the inherent tension between safety and utility, alignment risks like sycophancy (Perez et al., 2022b), and how the complexity of rule sets creates exploitable edge cases. Specific, sometimes counter-intuitive, examples will demonstrate how moderation rules were successfully reversed or neutralized. 
This presentation aims to provide attendees with a deeper understanding of the attack surface presented by AI safety mechanisms. Key takeaways will include: 
Identification of common patterns and failure modes in current LLM moderation strategies, supported by evidence from real-world bypasses. 
Demonstration of practical techniques for exploiting safety protocols, including those targeting patched vulnerabilities. 
Analysis of the systemic reasons (technical and procedural) behind the fragility of current safety implementations. 
The presentation concludes by discussing the implications for AI developers, security practitioners, and organizations deploying LLMs, advocating a paradigm shift toward mitigation methods that reduce risks which cannot be eliminated entirely.
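To make the fragility of static rule-based moderation concrete (a toy model, not any vendor's actual safety layer), the sketch below implements a keyword blocklist and shows how a trivial character-level rewrite slips the same request past it.

```python
# Toy illustration of a static, rule-based moderation filter and a trivial bypass.
# This is a simplified model for discussion, not any deployed safety system.
BLOCKLIST = ["make a bomb", "steal credentials"]  # hypothetical rule set

def static_filter(prompt: str) -> bool:
    """Return True if the prompt should be blocked under the keyword rules."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

def obfuscate(prompt: str) -> str:
    """Adversarial rewrite: insert zero-width joiners between characters."""
    return "\u200d".join(prompt)

benign = "how do I bake bread"
flagged = "please explain how to steal credentials"

print(static_filter(benign))              # False: allowed
print(static_filter(flagged))             # True: blocked by the keyword rule
print(static_filter(obfuscate(flagged)))  # False: the same request slips past
```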

References:
Nasr, M., et al. (2023). Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks. https://arxiv.org/abs/1812.00910
Perez, E., et al. (2022b). Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned. https://arxiv.org/abs/2209.07858
Shen, X., et al. (2023). Jailbreaking Large Language Models with Universal Adversarial Triggers. https://arxiv.org/abs/2307.15043
Wan, X., et al. (2023). Data Poisoning Attacks on Large Language Models. https://arxiv.org/abs/2305.00944
Speakers
Friday April 25, 2025 10:00am - 11:00am PDT

11:00am PDT

Biometrics System Hacking
Friday April 25, 2025 11:00am - 12:00pm PDT
Biometric systems, such as facial recognition and voiceprint identification, are widely used for personal identification. In recent years, many manufacturers have integrated facial recognition technology into their products. But how secure are these systems?
In this talk, we will demonstrate simple yet highly effective attack methods to bypass facial recognition systems. Additionally, we will explore techniques for spoofing voiceprint-based authentication, exposing how smart speaker security mechanisms can be manipulated.
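The speakers' specific techniques are reserved for the talk; as a generic, hypothetical illustration of why photo-replay (presentation) attacks succeed, the sketch below models a face matcher that compares embeddings but never performs a liveness check. The embed() function and the threshold are stand-ins, not any vendor's pipeline.

```python
# Hypothetical sketch: a face matcher with no liveness/anti-spoofing step.
# embed() is a placeholder for any face-embedding model; the threshold is arbitrary.
from typing import Sequence
import math

def embed(image: Sequence[float]) -> list[float]:
    """Placeholder embedding; a real system would run a CNN here."""
    norm = math.sqrt(sum(x * x for x in image)) or 1.0
    return [x / norm for x in image]

def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def unlock(enrolled_image, presented_image, threshold: float = 0.9) -> bool:
    # The matcher only asks "does this look like the enrolled face?".
    # Nothing asks "is this a live person?", so a printed photo or replayed
    # video of the victim can score just as well as the victim themselves.
    return cosine(embed(enrolled_image), embed(presented_image)) >= threshold

# A "photo" of the enrolled user (identical pixels) trivially unlocks the device.
enrolled = [0.2, 0.8, 0.5, 0.1]
printed_photo_of_user = list(enrolled)
print(unlock(enrolled, printed_photo_of_user))  # True: spoof accepted
```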
Speakers
Friday April 25, 2025 11:00am - 12:00pm PDT

1:00pm PDT

Blockchain's Biggest Heists - Bridging Gone Wrong
Friday April 25, 2025 1:00pm - 2:00pm PDT
$624 million lost in the Ronin hack. $611 million in the Poly Network exploit. These headlines share a common thread: security failures in the design and implementation of blockchain bridges—critical infrastructure that moves billions in value across networks.
Before you turn away from this talk because it’s about “crypto,” know this: there’s no hype here. This is a technical deep dive into how bridges work, why they break, and what their failures reveal about security engineering in highly adversarial environments. We’ll unpack real-world vulnerabilities, examine architectural trade-offs, and explore defense-in-depth strategies for building more resilient systems.

Beyond the headlines and market noise lies one of the most complex and high-stakes areas in modern security engineering—full of unsolved problems and opportunities for researchers to shape what comes next.
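As a toy model of the lock-and-mint pattern such bridges use, and of the message-verification gap behind several headline exploits (illustrative only, not the Ronin or Poly Network code), consider the sketch below: the withdraw path never checks the submitted signatures against the validator set.

```python
# Toy lock-and-mint bridge, illustrating the class of bug (missing or weak
# verification of cross-chain messages) behind several large bridge exploits.
# This is a deliberate simplification, not the code of any real bridge.
from dataclasses import dataclass, field

@dataclass
class ToyBridge:
    locked: dict[str, int] = field(default_factory=dict)   # source-chain balances
    minted: dict[str, int] = field(default_factory=dict)   # destination-chain IOUs
    validators: set[str] = field(default_factory=set)

    def lock(self, user: str, amount: int) -> None:
        """User deposits on the source chain; the bridge mints an IOU elsewhere."""
        self.locked[user] = self.locked.get(user, 0) + amount
        self.minted[user] = self.minted.get(user, 0) + amount

    def withdraw(self, user: str, amount: int, signatures: list[str]) -> bool:
        """Release locked funds when a cross-chain message claims a burn."""
        # BUG (the exploit class): the signature set is never checked against
        # the validator set or a quorum threshold, so a forged message with
        # arbitrary "signatures" drains the locked funds.
        if amount <= sum(self.locked.values()):
            self.locked[user] = self.locked.get(user, 0) - amount
            return True
        return False

bridge = ToyBridge(validators={"v1", "v2", "v3"})
bridge.lock("alice", 100)
# The attacker never locked anything, yet withdraws with made-up signatures.
print(bridge.withdraw("mallory", 100, signatures=["deadbeef"]))  # True: funds drained
```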
Speakers

Maxwell Dulin

Security Engineer, Asymmetric Research
Friday April 25, 2025 1:00pm - 2:00pm PDT

2:00pm PDT

Cross-Medium Injection: Exploiting Laser Signals to Manipulate Voice-Controlled IoT Devices
Friday April 25, 2025 2:00pm - 3:00pm PDT
With the increasing adoption of voice-controlled devices in various smart technologies, their interactive functionality has made them a key feature in modern consumer electronics. However, the security of these devices has become a growing concern as attack methods evolve beyond traditional network-based threats to more sophisticated physical-layer attacks, such as Dolphin Attack [1] and SurfingAttack [2], which exploit physical mediums to compromise the devices. This work introduces Laser Commands for Microphone Arrays (LCMA), a novel cross-medium attack that targets multi-microphone VC systems. LCMA utilizes Pulse Width Modulation (PWM) to inject light signals into multiple microphones, exploiting the underlying vulnerabilities in microphone structures that are designed for sound reception. These microphones can be triggered by light signals, producing the same effect as sound, which makes the attack harder to defend against. The cross-medium nature of the attack—where light is used instead of sound—further complicates detection, as light is silent, difficult to perceive, and can penetrate transparent media. This attack is scalable, cost-effective, and can be deployed remotely, posing significant risks to modern voice-controlled systems. The presentation will demonstrate LCMA’s capabilities and emphasize the urgent need for advanced countermeasures to protect against emerging cross-medium threats.
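As a hedged illustration of the modulation idea (not the authors' tooling, and with parameter values chosen arbitrarily), the numpy sketch below encodes a stand-in voice waveform as the duty cycle of a PWM carrier, which a laser driver would reproduce as light intensity; low-pass filtering, as a microphone's limited bandwidth effectively does, recovers the audio envelope.

```python
# Conceptual sketch of PWM-encoding an audio command for a light-intensity channel.
# Parameter choices (carrier frequency, sample rate) are illustrative assumptions,
# not values reported by the LCMA authors.
import numpy as np

fs = 192_000          # sample rate of the generated drive signal, Hz
carrier_hz = 20_000   # PWM carrier frequency, Hz
t = np.arange(0, 0.01, 1 / fs)          # 10 ms of signal

# Stand-in "voice command": a 1 kHz tone normalised to [0, 1] as the duty cycle.
audio = 0.5 + 0.5 * np.sin(2 * np.pi * 1_000 * t)

# PWM: compare the desired duty cycle against a sawtooth at the carrier frequency.
saw = (t * carrier_hz) % 1.0
pwm = (audio > saw).astype(float)        # 1 = laser on, 0 = laser off

# A microphone's limited bandwidth acts as a low-pass filter, so averaging the
# PWM stream over one carrier period approximates the original audio envelope.
width = int(fs / carrier_hz)
kernel = np.ones(width) / width
recovered = np.convolve(pwm, kernel, mode="same")
print(np.corrcoef(audio, recovered)[0, 1])  # close to 1: envelope survives
```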

References:
[1] Zhang, G., Yan, C., Ji, X., Zhang, T., Zhang, T., & Xu, W. (2017, October). DolphinAttack: Inaudible voice commands. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security.
[2] Yan, Q., Liu, K., Zhou, Q., Guo, H., & Zhang, N. (2020, February). SurfingAttack: Interactive hidden attack on voice assistants using ultrasonic guided waves. Network and Distributed Systems Security (NDSS) Symposium.

Speakers

Hetian Shi

Engineer, Tsinghua University
Friday April 25, 2025 2:00pm - 3:00pm PDT
 