Welcome to CanSecWest 2025.
Thursday, April 24
 

10:00am PDT

Harnessing Language Models for Detection of Evasive Malicious Email Attachments
Thursday April 24, 2025 10:00am - 11:00am PDT
The HP Q3 2023 Threat Report [2] highlights that 80% of malware is delivered via email, with 12% bypassing detection technologies to reach endpoints. The 2023 Verizon Data Breach Report also indicates that 35% of ransomware infections originated from email. Two primary factors contribute to evasion: the volume and cost challenges of sandbox scanning, which lead to selective scanning and inadvertent bypasses, and the limitations of detection technologies such as signature-based methods, sandboxes [1], and machine learning, which rely on the final malicious payload for decision-making. However, evasive multi-stage malware and phishing URLs often lack a malicious payload when analyzed by these technologies. Additionally, generative AI tools like FraudGPT and WormGPT facilitate the creation of new malicious payloads and phishing pages, further enabling malware to evade defenses and reach endpoints.
 
To address the challenge of detecting evasive malware and malicious URLs without requiring the final malicious payload, we will share the detailed design of a Neural Analysis and Correlation Engine (NACE), built to detect malicious attachments by understanding the semantics of the email and using them as features, rather than relying on the final malicious payload for its decision making. NACE takes a layered approach, employing supervised and unsupervised AI models built on transformer architectures to derive the deeper meaning embedded within the email's subject, body, and attachment text.
 
We will first dive into the details of the semantics commonly used by threat actors to deliver malicious attachments, which lay the foundation of our approach. These details were derived from the analysis of a dataset of malicious emails: the text of each email body was extracted and converted into embeddings, UMAP aided in dimensionality reduction, and clusters were generated based on density in the embedding space. These clusters represent the different types of semantics threat actors employ to deliver malicious attachments.
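As a rough illustration of this kind of clustering pipeline, the sketch below assumes the sentence-transformers, umap-learn, and hdbscan libraries and uses toy data; the abstract does not disclose the speakers' actual models or parameters.

```python
# Illustrative sketch only: embed email bodies, reduce dimensionality with
# UMAP, then form density-based clusters. Models, parameters, and data are
# assumptions, not NACE's actual configuration.
from sentence_transformers import SentenceTransformer
import umap
import hdbscan

bodies = [  # toy stand-ins for a large corpus of malicious email bodies
    "Please review the attached invoice and remit payment today.",
    "Your invoice is overdue; see the attached statement.",
    "Your parcel could not be delivered; open the attached form.",
    "A package is waiting for you; print the attached label.",
    "Your mailbox is full; open the attachment to verify your account.",
    "Verify your account using the attached form to avoid suspension.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")          # assumed embedding model
embeddings = model.encode(bodies, normalize_embeddings=True)

# Dimensionality reduction before density-based clustering
reduced = umap.UMAP(n_components=2, n_neighbors=2, metric="cosine",
                    random_state=42).fit_transform(embeddings)

clusterer = hdbscan.HDBSCAN(min_cluster_size=2)
labels = clusterer.fit_predict(reduced)
print(labels)  # -1 marks noise; other labels are candidate semantic clusters
```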
In the presentation we will share the details of our approach, in which every incoming email undergoes zero-shot semantic analysis and LLM-based similarity analysis to determine whether it contains semantics typically used by threat actors to deliver malicious attachments. The email's body is further analyzed for secondary semantics, including tone, sentiment, and other nuanced elements. Once the semantics are identified, hierarchical phrase and topic modeling is applied to uncover the relationships between the various topics.
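A minimal sketch of what the zero-shot step could look like, assuming a Hugging Face zero-shot-classification pipeline and a hypothetical label set; the abstract does not name the models or labels used in NACE.

```python
# Illustrative zero-shot lure detection; model choice and labels are assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

body = "Your invoice is overdue. Open the attached HTML file to settle payment."
candidate_labels = [  # hypothetical lure categories
    "payment or invoice lure",
    "shipping or delivery lure",
    "account verification lure",
    "ordinary business correspondence",
]

result = classifier(body, candidate_labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```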
 
Primary and secondary semantics from the email, along with the results of hierarchical phrase and topic modeling, deep file parsing of attachments, and email headers, are sent to an expert system. The contextual relationships between these features are used to reach a verdict of malicious or benign. This comprehensive approach identifies malicious content without depending on the final payload, which is crucial for any detection technology.
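As a toy illustration of contextual correlation in an expert system, the rules and feature names below are entirely hypothetical and are not the speakers' production logic.

```python
# Hypothetical rule-based correlation of extracted features into a verdict.
features = {
    "primary_semantic": "payment or invoice lure",   # from zero-shot analysis
    "secondary_tone": "urgent",                      # from secondary semantics
    "attachment_type": "html",                       # from deep file parsing
    "attachment_has_javascript": True,
    "sender_domain_age_days": 3,                     # from email headers
}

def verdict(f: dict) -> str:
    lure = f["primary_semantic"] != "ordinary business correspondence"
    urgency = f["secondary_tone"] in {"urgent", "threatening"}
    active_content = f["attachment_type"] == "html" and f["attachment_has_javascript"]
    young_sender = f["sender_domain_age_days"] < 30
    # A lure with urgent tone, an active-content attachment, and a young sender
    # domain is flagged even though no final payload was ever observed.
    return "malicious" if lure and urgency and active_content and young_sender else "benign"

print(verdict(features))  # -> "malicious"
```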

Our presentation will show how LLM-based models can effectively detect evasive malicious attachments without depending on analysis of the malicious payload, which typically occurs only in the later stages of attachment analysis. Our approach is exemplified by our success in defending against real-world threats in actual production traffic, including HTML smuggling campaigns, obfuscated SVGs, phishing links behind CDNs and CAPTCHAs, downloaders, and redirectors.
The presentation will conclude with results observed from the production traffic.

References:
[1] Abhishek Singh, Zheng Bu, "Hot Knives Through Butter: Evading File-based Sandboxes," Black Hat USA 2013. https://media.blackhat.com/us-13/US-13-Singh-Hot-Knives-Through-Butter-Evading-File-based-Sandboxes-WP.pdf
[2] HP Wolf Security Threat Insights Report, Q3 2023.
Speakers

Abhishek Singh

CTO, InceptionCyber.ai

11:00am PDT

BadResolve: Bypassing Android's Intent Checks to Resurrect LaunchAnyWhere Privilege Escalations
Thursday April 24, 2025 11:00am - 12:00pm PDT
The LaunchAnywhere vulnerability in Android has been a significant security concern, enabling unprivileged applications to escalate privileges and invoke arbitrary protected or privileged activities. Despite extensive mitigation efforts by Google, such as introducing destination component checks via the resolveActivity API, these defenses have proven insufficient. In this talk, we introduce BadResolve, a novel exploitation technique that bypasses these checks using TOCTOU (Time of Check to Time of Use) race conditions. By controlling previously unforeseen parameters, BadResolve allows attackers to exploit Android's Intent resolution process, reintroducing LaunchAnywhere vulnerabilities.
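At its core this is a classic check-then-use gap. The sketch below models that pattern in plain Python with hypothetical names; it is illustrative only, not Android code and not the speakers' exploit.

```python
# Check-then-use (TOCTOU) race in miniature: a caller validates a destination,
# then acts on it later, while another thread swaps it in between.
import threading
import time

class Intent:
    def __init__(self, component: str):
        self.component = component

def privileged_caller(intent: Intent) -> str:
    if intent.component != "SafeActivity":      # time of check
        return "rejected"
    time.sleep(0.01)                            # window between check and use
    return f"launched {intent.component}"       # time of use

def attacker(intent: Intent) -> None:
    time.sleep(0.005)
    intent.component = "PrivilegedActivity"     # mutate after the check passes

intent = Intent("SafeActivity")
t = threading.Thread(target=attacker, args=(intent,))
t.start()
print(privileged_caller(intent))                # often prints "launched PrivilegedActivity"
t.join()
```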

We demonstrate how BadResolve works in practice, providing instructions for exploiting race conditions with 100% reliability, allowing unprivileged apps to invoke privileged activities. Our research also uncovers new CVEs that affect all Android versions, highlighting ongoing risks such as silent app installations, unauthorized phone calls, and modifications to critical system settings.

Additionally, we present a novel approach that combines Large Language Models (LLMs) with traditional static analysis techniques to efficiently identify such vulnerabilities in both open-source and closed-source Android and OEM codebases.

Speakers

Qidan He

Dawn Security Lab
Qidan He (a.k.a. Edward Flanker, CISSP) is the winner of multiple Pwn2Own championships and a Pwnie Award. He is the Director & Chief Security Researcher at Dawn Security Lab. He has spoken at conferences including Black Hat, DEF CON, RECON, CanSecWest, MOSEC, HITB, and PoC. He is also the...

1:00pm PDT

Threat Modeling AI Systems – Understanding the Risks
Thursday April 24, 2025 1:00pm - 2:00pm PDT
AI is everywhere. From help bots to logistics systems to your car, it seems like every software company wants every new feature they release to include AI. But how do we keep it secure? In this session, we will discuss the threat landscape for AI systems.

2:00pm PDT

SOAR Implementation Pain Points and How to Avoid Them
Thursday April 24, 2025 2:00pm - 3:00pm PDT
As cybersecurity threats continue to escalate in complexity and frequency, organizations increasingly rely on automation to enhance their defenses. Security Orchestration, Automation, and Response (SOAR) platforms have emerged as powerful tools for streamlining operations and reducing the burden of repetitive tasks on security teams. However, implementing SOAR is not without its challenges. This presentation will explore the common challenges organizations encounter when deploying SOAR and provide actionable strategies to overcome them.

By examining real-world scenarios and best practices, attendees will gain insights into managing expectations, developing effective playbooks, addressing training and adoption barriers, and ensuring seamless integration with existing tools such as Security Information and Event Management (SIEM) systems. The session will cover practical approaches to conducting readiness assessments, planning phased rollouts, and measuring success to ensure that SOAR implementations deliver tangible results. Additionally, lessons learned from successful deployments will be shared to help participants avoid common pitfalls and realize the full potential of SOAR in their security operations.

Common SOAR pain points to discuss:
Integration challenges with existing tools and technologies, such as SIEMs and threat intelligence platforms.
Misaligned expectations between stakeholders and technical teams.
Automation pitfalls, including over-automation and inadequate planning.
Training and adoption barriers within security teams.
Maintaining playbook relevance in evolving threat landscapes.

Intended audience: This session is designed for cybersecurity managers, SOC analysts, engineers, and other professionals who are considering or actively planning to implement SOAR solutions in their organizations. It will provide valuable insights into overcoming implementation challenges and maximizing the benefits of SOAR to streamline operations and enhance incident response capabilities.

3:00pm PDT

Deepfake Deception: Weaponizing AI-Generated Voice Clones in Social Engineering Attacks
Thursday April 24, 2025 3:00pm - 4:00pm PDT
As deepfake technology rapidly evolves, its application in social engineering has reached a new level of sophistication. This talk will explore a real-world red team engagement where AI-driven deepfake voice cloning was leveraged to test an organization’s security controls. Through extensive research, we examined multiple deepfake methods, from video-based impersonation for video calls to voice cloning for phishing scenarios. Our findings revealed that audio deepfakes were the most effective and hardest to detect by human targets.

In this engagement, we successfully cloned a CTO's voice using audio samples extracted from a publicly available podcast interview. An AI model was trained to convincingly replicate the executive's voice and then deployed in a social engineering campaign targeting the organization's helpdesk, whose staff believed they were speaking with their Chief Technology Officer for about 11 minutes.

This talk will provide attendees with an in-depth look at how threat actors exploit deepfake technology, the technical process of voice cloning, and the implications for enterprise security. We will also discuss countermeasures and detection techniques that organizations can implement to mitigate these emerging threats.

4:00pm PDT

AI Security Landscape: Tales and Techniques from the Frontlines
Thursday April 24, 2025 4:00pm - 5:00pm PDT
The once theoretical AI bogeyman has arrived—and it brought friends. Over the past 12 months, adversaries have shifted from exploratory probing to weaponized exploitation across the entire AI stack, requiring a fundamental reassessment of defense postures. This presentation dissects the evolution of AI-specific TTPs, including advancements in model poisoning, LLM jailbreaking techniques, and the abuse of vulnerabilities in ML tooling and infrastructure.

One of the most concerning recent developments relates to architectural backdoors, such as ShadowLogic. Last year, we discussed the use of serialization vulnerabilities to execute traditional payloads via hijacked models, but ShadowLogic is quite a different beast. Instead of injecting easily detectable Python code to compromise the underlying system, ShadowLogic uses the subtleties of machine learning to manipulate the model’s behavior, offering persistent model control with a minimal detection surface.

Attacks against generative AI have also stepped up, with threat actors devising novel techniques to bypass AI guardrails. Multimodal systems add to the attack surface, as indirect prompt injection is now possible through images, audio, and embedded content—because nothing says "secure by design" like accepting arbitrary input from untrusted users!

The cybercrime ecosystem is naturally adapting to the shifting priorities, and AI-focused hacking-as-a-service portals are popping up on the Dark Web, where adversaries turn carefully guardrailed proprietary LLMs into unrestricted content generators. Even Big Tech is playing whack-a-mole with its own creations, with Microsoft's DCU recently taking legal action against hackers selling illicit access to its Azure AI.

Finally, AI infrastructure itself is proving increasingly prone to attack. The number of ML tooling vulnerabilities has surged, fueled by arbitrary code execution flaws that could allow attackers to take over AI pipelines. GPU-based attacks are also on the rise, allowing for sensitive AI computations to be extracted from shared hardware resources. As these threats continue to evolve, defenders must shift from reactive fixes to proactive security-by-design approaches to mitigate the growing risks AI systems face today.
Speakers

Marta Janus

Principal Researcher, HiddenLayer
Marta is a Principal Researcher at HiddenLayer, focused on investigating adversarial machine learning attacks and the overall security of AI-based solutions. Prior to HiddenLayer, Marta spent over a decade working as a researcher for leading anti-virus vendors. She has extensive experience...
 
Friday, April 25
 

9:00am PDT

Keys to Freedom: Analysis and Resolution of Arab Ransom Locker Infections
Friday April 25, 2025 9:00am - 10:00am PDT
The presentation "Keys to Freedom: Analysis and Resolution of Arab Ransom Locker Infections" explores the intricate workings of the Arab Ransom Locker malware, focusing on its impact on mobile devices. This session delves into a comprehensive analysis of the ransomware's attack vector, encryption mechanisms, and behavioral patterns. It will also provide a step-by-step guide to unlocking infected devices, including proven recovery techniques, decryption tools, and preventive strategies. Targeted at cybersecurity professionals and mobile device users, the presentation aims to equip attendees with actionable insights to understand, mitigate, and neutralize the threat posed by this malicious ransomware.
Speakers

Dyaar Saadi

Security Operations Center, Spectroblock
Diyar Saadi Ali is a formidable force in the realm of cybersecurity, renowned for his expertise in cybercrime investigations and his role as a certified SOC and malware analyst. With a laser-focused mission to decode and combat digital threats, Diyar approaches the complex world of...

10:00am PDT

Role Reversal: Exploiting AI Moderation Rules as Attack Vectors
Friday April 25, 2025 10:00am - 11:00am PDT
The rapid deployment of frontier large language model (LLM) agents across applications, in sectors that McKinsey projects could add $4.4 trillion to the global economy, has mandated the implementation of sophisticated safety protocols and content moderation rules. However, documented attack success rates (ASR) as high as 0.99 against models like ChatGPT and GPT-4 using universal adversarial triggers (Shen et al., 2023) underscore a critical vulnerability: the safety mechanisms themselves. While significant effort is invested in patching vulnerabilities, this presentation argues that the rules, filters, and patched protocols often become primary targets, creating a persistent and evolving threat landscape. This risk is amplified by a lowered barrier to entry for adversarial actors and the emergence of new attack vectors inherent to LLM reasoning capabilities.
This presentation focuses on showcasing documented instances where security protocols and moderation rules, specifically designed to counter known LLM vulnerabilities, are paradoxically turned into attack vectors. Moving beyond theoretical exploits, we will present real-world examples derived from extensive participation in AI safety competitions and red-teaming engagements spanning multiple well-known frontier and legacy models, illustrating systemic challenges, including how novel attacks can render older or open-source models vulnerable long after release. We will detail methodologies used to systematically probe, reverse-engineer, and bypass these safety guards, revealing predictable and often comical flaws in their logic and implementation. 
Furthermore, we critically examine why many mitigation efforts fall short. This involves analyzing the limitations of static rule-based systems against adaptive adversarial attacks, illustrated by severe vulnerabilities such as data poisoning where merely ~100 poisoned examples can significantly distort outputs (Wan et al., 2023) and memorization risks where models reproduce sensitive training data (Nasr et al., 2023). We explore the challenges of anticipating bypass methods, the inherent tension between safety and utility, alignment risks like sycophancy (Perez et al., 2022b), and how the complexity of rule sets creates exploitable edge cases. Specific, sometimes counter-intuitive, examples will demonstrate how moderation rules were successfully reversed or neutralized. 
This presentation aims to provide attendees with a deeper understanding of the attack surface presented by AI safety mechanisms. Key takeaways will include: 
Identification of common patterns and failure modes in current LLM moderation strategies, supported by evidence from real-world bypasses. 
Demonstration of practical techniques for exploiting safety protocols, including those targeting patched vulnerabilities. 
Analysis of the systemic reasons (technical and procedural) behind the fragility of current safety implementations. 
The presentation concludes by discussing the implications for AI developers, security practitioners, and organizations deploying LLMs, advocating for a paradigm shift toward mitigation methods that reduce risk that cannot be eliminated entirely.

References:
Nasr, M., et al. (2023). Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks. https://arxiv.org/abs/1812.00910
Perez, E., et al. (2022b). Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned. https://arxiv.org/abs/2209.07858
Shen, X., et al. (2023). Jailbreaking Large Language Models with Universal Adversarial Triggers. https://arxiv.org/abs/2307.15043
Wan, X., et al. (2023). Data Poisoning Attacks on Large Language Models. https://arxiv.org/abs/2305.00944

11:00am PDT

Biometrics System Hacking
Friday April 25, 2025 11:00am - 12:00pm PDT
Biometric systems, such as facial recognition and voiceprint identification, are widely used for personal identification. In recent years, many manufacturers have integrated facial recognition technology into their products. But how secure are these systems?
In this talk, we will demonstrate simple yet highly effective attack methods to bypass facial recognition systems. Additionally, we will explore techniques for spoofing voiceprint-based authentication, exposing how smart speaker security mechanisms can be manipulated.

1:00pm PDT

Blockchain's Biggest Heists - Bridging Gone Wrong
Friday April 25, 2025 1:00pm - 2:00pm PDT
$624 million lost in the Ronin hack. $611 million in the Poly Network exploit. These headlines share a common thread: security failures in the design and implementation of blockchain bridges—critical infrastructure that moves billions in value across networks.
Before you turn away from this talk because it’s about “crypto,” know this: there’s no hype here. This is a technical deep dive into how bridges work, why they break, and what their failures reveal about security engineering in highly adversarial environments. We’ll unpack real-world vulnerabilities, examine architectural trade-offs, and explore defense-in-depth strategies for building more resilient systems.

Beyond the headlines and market noise lies one of the most complex and high-stakes areas in modern security engineering—full of unsolved problems and opportunities for researchers to shape what comes next.
Speakers

Maxwell Dulin

Security Engineer, Asymmetric Research

2:00pm PDT

Cross-Medium Injection: Exploiting Laser Signals to Manipulate Voice-Controlled IoT Devices
Friday April 25, 2025 2:00pm - 3:00pm PDT
With the increasing adoption of voice-controlled devices in various smart technologies, their interactive functionality has made them a key feature in modern consumer electronics. However, the security of these devices has become a growing concern as attack methods evolve beyond traditional network-based threats to more sophisticated physical-layer attacks, such as Dolphin Attack [1] and SurfingAttack [2], which exploit physical mediums to compromise the devices. This work introduces Laser Commands for Microphone Arrays (LCMA), a novel cross-medium attack that targets multi-microphone VC systems. LCMA utilizes Pulse Width Modulation (PWM) to inject light signals into multiple microphones, exploiting the underlying vulnerabilities in microphone structures that are designed for sound reception. These microphones can be triggered by light signals, producing the same effect as sound, which makes the attack harder to defend against. The cross-medium nature of the attack—where light is used instead of sound—further complicates detection, as light is silent, difficult to perceive, and can penetrate transparent media. This attack is scalable, cost-effective, and can be deployed remotely, posing significant risks to modern voice-controlled systems. The presentation will demonstrate LCMA’s capabilities and emphasize the urgent need for advanced countermeasures to protect against emerging cross-medium threats.
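To make the signal-encoding idea concrete, the sketch below shows how an audio waveform can be mapped onto a PWM duty cycle and recovered by simple averaging; all parameters are illustrative and are not taken from the authors' setup.

```python
# Illustrative PWM encoding of an "audio command" and crude demodulation by
# averaging; carrier, sample rate, and tone are assumptions, not LCMA's values.
import numpy as np

fs = 1_000_000                      # sample rate of the drive signal (Hz)
carrier_hz = 50_000                 # assumed PWM carrier, well above the audio band
tone_hz = 440                       # stand-in for a voice-command waveform
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal

audio = 0.5 * np.sin(2 * np.pi * tone_hz * t)   # normalized audio in [-0.5, 0.5]
duty = 0.5 + audio                              # map audio onto duty cycle (0..1)

# Naturally sampled PWM: compare the desired duty cycle against a sawtooth carrier
saw = (t * carrier_hz) % 1.0
pwm = (saw < duty).astype(float)    # 1 = light source on, 0 = off

# Averaging the on/off signal (as a microphone or low-pass stage effectively
# does) recovers the audio envelope.
kernel = np.ones(200) / 200
recovered = np.convolve(pwm, kernel, mode="same") - 0.5
print(np.corrcoef(recovered[500:-500], audio[500:-500])[0, 1])  # close to 1.0
```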

References:
[1] Zhang, G., Yan, C., Ji, X., Zhang, T., Zhang, T., & Xu, W. (2017, October). DolphinAttack: Inaudible Voice Commands. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security.
[2] Yan, Q., Liu, K., Zhou, Q., Guo, H., & Zhang, N. (2020, February). SurfingAttack: Interactive Hidden Attack on Voice Assistants Using Ultrasonic Guided Waves. Network and Distributed System Security (NDSS) Symposium.

Speakers

Hetian Shi

Engineer, Tsinghua University

4:00pm PDT

Lightning Talks
Friday April 25, 2025 4:00pm - 5:00pm PDT
Five-minute lightning talks by students and early-career professionals.
 