Welcome to CanSecWest 2025.
Thursday, April 24
 

10:00am PDT

Harnessing Language Models for Detection of Evasive Malicious Email Attachments
Thursday April 24, 2025 10:00am - 11:00am PDT
The HP Q3 2023 Threat Report [2] highlights that 80% of malware is delivered via email, with 12% bypassing detection technologies to reach endpoints. The 2023 Verizon Data Breach Investigations Report likewise indicates that 35% of ransomware infections originated from email. Two primary factors contribute to evasion: the volume and cost challenges of sandbox scanning, which lead to selective scanning and inadvertent bypasses, and the limitations of detection technologies such as signature-based methods, sandboxes [1], and machine learning, which rely on the final malicious payload for decision-making. However, evasive multi-stage malware and phishing URLs often lack a malicious payload at the time these technologies analyze them. Additionally, generative AI tools like FraudGPT and WormGPT facilitate the creation of new malicious payloads and phishing pages, further enabling malware to evade defenses and reach endpoints.
 
To address the challenge of detecting evasive malware and malicious URLs without requiring the final malicious payload, we will share the detailed design of a Neural Analysis and Correlation Engine (NACE), built to detect malicious attachments by understanding the semantics of the email and leveraging them as features, rather than relying on the final malicious payload for its decision-making. NACE takes a layered approach, employing supervised and unsupervised transformer-based models to derive the deeper meaning embedded within the email's body, the text in the attachment, and the subject.
 
We will first dive into the details of the semantics commonly used by threat actors to deliver malicious attachments, which lay the foundation of our approach. These details were derived from the analysis of a dataset of malicious emails: the text of each email body was extracted to create embeddings, UMAP aided in dimensionality reduction, and clusters were generated based on their density in the embedding space. These clusters represent the different types of semantics threat actors employ to deliver malicious attachments.
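A minimal sketch of this kind of embed-reduce-cluster pipeline, assuming sentence-transformers, umap-learn, and hdbscan; the model, parameters, and toy corpus are illustrative, not the configuration used to build the dataset described above:

```python
# Illustrative embed -> reduce -> cluster pipeline. Library choices and
# all parameters are assumptions, not the speakers' production setup.
from sentence_transformers import SentenceTransformer
import umap
import hdbscan

email_bodies = [
    "Please see the attached invoice and remit payment today.",
    "Invoice 4471 attached; settle the balance by Friday.",
    "Your parcel could not be delivered; open the attached label.",
    "Delivery failed. Print the attached shipping form to reschedule.",
    "Quarterly all-hands slides attached for tomorrow's meeting.",
    "Attached are the meeting notes from Tuesday's sync.",
]

# 1. Embed each email body into a semantic vector space.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(email_bodies)

# 2. Reduce dimensionality while preserving neighborhood structure.
reduced = umap.UMAP(n_neighbors=2, n_components=2,
                    metric="cosine").fit_transform(embeddings)

# 3. Density-based clustering; each dense cluster approximates one lure
#    "semantic" (invoice lure, failed-delivery lure, benign business mail).
labels = hdbscan.HDBSCAN(min_cluster_size=2).fit_predict(reduced)
print(labels)  # -1 marks noise points outside any dense cluster
```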
In the presentation, we will share the details of our approach, in which every incoming email undergoes zero-shot semantic analysis and LLM-based similarity analysis to determine whether it contains semantics typically used by threat actors to deliver malicious attachments. The email's body is further analyzed for secondary semantics, including tone, sentiment, and other nuanced elements. Once semantics are identified, hierarchical phrase-based topic modeling is applied to uncover the relationships between the various topics.
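As a hedged illustration of the zero-shot step, the sketch below uses an off-the-shelf NLI model for zero-shot classification; the candidate label taxonomy is invented for illustration and is not NACE's actual label set:

```python
# Zero-shot primary/secondary semantic analysis on an incoming email.
# The model choice and candidate labels are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

body = "Your mailbox will be suspended. Open the attached form to verify."

# Primary semantics: does the body match a known delivery lure?
primary = classifier(body, candidate_labels=[
    "invoice or payment lure",
    "shipping or delivery lure",
    "account suspension lure",
    "ordinary business correspondence",
])

# Secondary semantics: tone of the same text.
tone = classifier(body, candidate_labels=["urgent", "neutral", "friendly"])

print(primary["labels"][0], round(primary["scores"][0], 2))
print(tone["labels"][0], round(tone["scores"][0], 2))
```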
 
Primary and secondary semantics from the email, along with the results of phrase-based hierarchical topic modeling, deep file parsing of attachments, and email headers, are sent to an expert system. Contextual relationships between these features are used to derive a verdict of malicious or benign for the attachment. This comprehensive approach identifies malicious content without depending on the final payload, which is crucial for any detection technology.
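A toy illustration of how such an expert system might correlate features into a verdict; the feature set, rules, and thresholds here are invented for illustration, not the production NACE ruleset:

```python
# Toy expert-system step: correlate semantic, attachment, and header
# features to reach a verdict without ever seeing a malicious payload.
from dataclasses import dataclass

@dataclass
class EmailFeatures:
    primary_semantic: str        # e.g. "account suspension lure"
    tone: str                    # e.g. "urgent"
    attachment_type: str         # from deep file parsing, e.g. "html"
    has_obfuscated_script: bool
    sender_domain_age_days: int  # from header enrichment

def verdict(f: EmailFeatures) -> str:
    lure = f.primary_semantic != "ordinary business correspondence"
    # Contextual correlation: a lure semantic plus an active-content
    # attachment is suspicious even if no payload was ever fetched.
    if lure and f.attachment_type == "html" and f.has_obfuscated_script:
        return "malicious"
    if lure and f.tone == "urgent" and f.sender_domain_age_days < 30:
        return "suspicious"
    return "benign"

print(verdict(EmailFeatures("account suspension lure", "urgent",
                            "html", True, 12)))  # -> malicious
```

The point of the correlation step is that no single feature is damning on its own; the verdict comes from the combination of lure semantics, attachment structure, and header context.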

Our presentation will show how LLMs can effectively detect evasive malicious attachments without depending on analysis of the malicious payload, which typically occurs in the later stages of attachment processing. Our approach is exemplified by its success against real-world threats in actual production traffic, including HTML smuggling campaigns, obfuscated SVGs, phishing links behind CDNs, CAPTCHAs, downloaders, and redirectors.
The presentation will conclude with results observed from the production traffic.

References:
[1] Abhishek Singh, Zheng Bu, "Hot Knives Through Butter: Evading File-based Sandboxes," Black Hat USA 2013. https://media.blackhat.com/us-13/US-13-Singh-Hot-Knives-Through-Butter-Evading-File-based-Sandboxes-WP.pdf
[2] HP Wolf Security, "Threat Insights Report Q3 2023."
Speakers

Abhishek Singh

CTO, InceptionCyber.ai

11:00am PDT

BadResolve: Bypassing Android's Intent Checks to Resurrect LaunchAnyWhere Privilege Escalations
Thursday April 24, 2025 11:00am - 12:00pm PDT
The LaunchAnywhere vulnerability in Android has been a significant security concern, enabling unprivileged applications to escalate privileges and invoke arbitrary protected or privileged activities. Despite extensive mitigation efforts by Google, such as introducing destination component checks via the resolveActivity API, these defenses have proven insufficient. In this talk, we introduce BadResolve, a novel exploitation technique that bypasses these checks using TOCTOU (Time of Check to Time of Use) race conditions. By controlling previously unforeseen parameters, BadResolve allows attackers to exploit Android's Intent resolution process, reintroducing LaunchAnywhere vulnerabilities.
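To make the bug class concrete, here is a language-agnostic toy of a TOCTOU race, sketched in Python rather than the actual Android Intent-resolution path: a security check on mutable state, a timing window, then a use of the now attacker-modified state.

```python
# Generic TOCTOU illustration, not the Android code path itself.
import threading
import time

class MutableIntent:
    """Stand-in for a mutable Intent whose fields an attacker controls."""
    def __init__(self, target: str):
        self.target = target

def resolve_and_launch(intent: MutableIntent) -> None:
    # Time of check: the resolved destination looks safe.
    if intent.target == "SafeActivity":
        time.sleep(0.01)  # window between the check and the use
        # Time of use: the destination may have been swapped meanwhile.
        print(f"launching {intent.target}")

def flip(intent: MutableIntent) -> None:
    time.sleep(0.005)  # land inside the check-to-use window
    intent.target = "PrivilegedActivity"

intent = MutableIntent("SafeActivity")
racer = threading.Thread(target=flip, args=(intent,))
racer.start()
resolve_and_launch(intent)  # prints "launching PrivilegedActivity"
racer.join()
```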

We demonstrate how BadResolve works in practice, providing instructions for exploiting race conditions with 100% reliability, allowing unprivileged apps to invoke privileged activities. Our research also uncovers new CVEs that affect all Android versions, highlighting ongoing risks such as silent app installations, unauthorized phone calls, and modifications to critical system settings.

Additionally, we present a novel approach combining Large Language Models (LLMs) with traditional static analysis techniques to efficiently identify this class of vulnerability in Android and OEM open-source and closed-source codebases.
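A hedged sketch of what such a hybrid pipeline could look like: cheap pattern matching shortlists check-then-use call sites, and an LLM triages each candidate. The regex, prompt, and `ask_llm` helper are hypothetical placeholders, not the authors' tooling.

```python
# Hypothetical static-analysis + LLM triage pipeline for Intent TOCTOU.
import re

CANDIDATE_PATTERN = re.compile(
    r"resolveActivity\s*\(.*?\).*?start(?:Activity|ActivityForResult)\s*\(",
    re.DOTALL,
)

PROMPT = """Does the checked Intent below remain mutable between the
resolveActivity() check and the startActivity() use? Give a risk rating
and a one-line justification.

{snippet}"""

def ask_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call."""
    raise NotImplementedError

def triage(java_source: str):
    # Yield (snippet, LLM assessment) for each check-then-use candidate.
    for match in CANDIDATE_PATTERN.finditer(java_source):
        snippet = match.group(0)
        yield snippet, ask_llm(PROMPT.format(snippet=snippet))
```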

Speakers

Qidan He

Dawn Security Lab
Qidan He (a.k.a. Edward Flanker, CISSP) is the winner of multiple Pwn2Own championships and a Pwnie Award. He is the Director & Chief Security Researcher at Dawn Security Lab. He has spoken at conferences including Black Hat, DEF CON, RECON, CanSecWest, MOSEC, HITB, and PoC. He is also the...

1:00pm PDT

Threat Modeling AI Systems – Understanding the Risks
Thursday April 24, 2025 1:00pm - 2:00pm PDT
AI is everywhere. From help bots to logistics systems to your car, it seems like every software company wants every new feature they release to include AI. But how do we keep it secure? In this session, we will discuss the threat landscape for AI systems.

2:00pm PDT

SOAR Implementation Pain Points and How to Avoid Them
Thursday April 24, 2025 2:00pm - 3:00pm PDT
As cybersecurity threats continue to escalate in complexity and frequency, organizations increasingly rely on automation to enhance their defenses. Security Orchestration, Automation, and Response (SOAR) platforms have emerged as powerful tools for streamlining operations and reducing the burden of repetitive tasks on security teams. However, implementing SOAR is not without its challenges. This presentation will explore the common challenges organizations encounter when deploying SOAR and provide actionable strategies to overcome them.

By examining real-world scenarios and best practices, attendees will gain insights into managing expectations, developing effective playbooks, addressing training and adoption barriers, and ensuring seamless integration with existing tools such as Security Information and Event Management (SIEM) systems. The session will cover practical approaches to conducting readiness assessments, planning phased rollouts, and measuring success to ensure that SOAR implementations deliver tangible results. Additionally, lessons learned from successful deployments will be shared to help participants avoid common pitfalls and realize the full potential of SOAR in their security operations.

Common SOAR pain points to discuss:
- Integration challenges with existing tools and technologies, such as SIEMs and threat intelligence platforms.
- Misaligned expectations between stakeholders and technical teams.
- Automation pitfalls, including over-automation and inadequate planning.
- Training and adoption barriers within security teams.
- Maintaining playbook relevance in evolving threat landscapes.

Intended audience: This session is designed for cybersecurity managers, SOC analysts, engineers, and other professionals who are considering or actively planning to implement SOAR solutions in their organizations. It will provide valuable insights into overcoming implementation challenges and maximizing the benefits of SOAR to streamline operations and enhance incident response capabilities.
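As a vendor-neutral sketch of the playbook and over-automation points above, the following toy playbook automates safe enrichment steps but gates destructive actions behind human approval; all helper functions are hypothetical placeholders, not any particular SOAR product's API:

```python
# Minimal playbook sketch: automate the safe steps, gate the risky ones.
def siem_query(alert_id: str) -> dict:
    """Placeholder for a SIEM enrichment call."""
    return {"alert_id": alert_id, "severity": "high", "host": "ws-042"}

def isolate_host(host: str) -> None:
    """Placeholder for an EDR containment action."""
    print(f"isolating {host}")

def require_approval(step: str) -> bool:
    """Human gate: phased rollouts keep destructive steps manual at first."""
    return input(f"approve '{step}'? [y/N] ").lower() == "y"

def phishing_playbook(alert_id: str) -> None:
    context = siem_query(alert_id)          # enrichment: safe to automate
    if context["severity"] != "high":
        return                              # auto-close low-severity alerts
    if require_approval(f"isolate {context['host']}"):
        isolate_host(context["host"])       # destructive: gated by a human

phishing_playbook("ALERT-1234")
```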

3:00pm PDT

Deepfake Deception: Weaponizing AI-Generated Voice Clones in Social Engineering Attacks
Thursday April 24, 2025 3:00pm - 4:00pm PDT
As deepfake technology rapidly evolves, its application in social engineering has reached a new level of sophistication. This talk will explore a real-world red team engagement where AI-driven deepfake voice cloning was leveraged to test an organization’s security controls. Through extensive research, we examined multiple deepfake methods, from video-based impersonation for video calls to voice cloning for phishing scenarios. Our findings revealed that audio deepfakes were the most effective and hardest to detect by human targets.

In this engagement, we successfully cloned a CTO's voice using audio samples extracted from a publicly available podcast interview. A trained AI model was then developed to convincingly replicate the executive's voice. This model was deployed in a social engineering campaign targeting the organization's helpdesk, which believed it was speaking with the Chief Technology Officer for about 11 minutes.
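For context on how low the tooling barrier has become, the snippet below follows the documented quickstart pattern of one open-source voice-cloning TTS library (Coqui XTTS); it is an assumption about this general class of tooling, not the specific model or workflow used in the engagement:

```python
# Publicly documented voice-cloning quickstart (Coqui XTTS); the file
# names and prompt text are illustrative assumptions.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Hi, it's the CTO. I'm locked out; can you reset my MFA?",
    speaker_wav="podcast_sample.wav",  # a short clip of the target voice
    language="en",
    file_path="cloned_line.wav",
)
```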

This talk will provide attendees with an in-depth look at how threat actors exploit deepfake technology, the technical process of voice cloning, and the implications for enterprise security. We will also discuss countermeasures and detection techniques that organizations can implement to mitigate these emerging threats.

4:00pm PDT

AI Security Landscape: Tales and Techniques from the Frontlines
Thursday April 24, 2025 4:00pm - 5:00pm PDT
The once theoretical AI bogeyman has arrived—and it brought friends. Over the past 12 months, adversaries have shifted from exploratory probing to weaponized exploitation across the entire AI stack, requiring a fundamental reassessment of defense postures. This presentation dissects the evolution of AI-specific TTPs, including advancements in model poisoning, LLM jailbreaking techniques, and the abuse of vulnerabilities in ML tooling and infrastructure.

One of the most concerning recent developments relates to architectural backdoors, such as ShadowLogic. Last year, we discussed the use of serialization vulnerabilities to execute traditional payloads via hijacked models, but ShadowLogic is quite a different beast. Instead of injecting easily detectable Python code to compromise the underlying system, ShadowLogic uses the subtleties of machine learning to manipulate the model's behavior, offering persistent model control with a minimal detection surface.

Attacks against generative AI have also stepped up, with threat actors devising novel techniques to bypass AI guardrails. Multimodal systems add to the attack surface, as indirect prompt injection is now possible through images, audio, and embedded content—because nothing says "secure by design" like accepting arbitrary input from untrusted users! The cybercrime ecosystem is naturally adapting to the shifting priorities, and AI-focused hacking-as-a-service portals are popping up on the Dark Web, where adversaries turn carefully guardrailed proprietary LLMs into unrestricted content generators. Even Big Tech is playing whack-a-mole with its own creations, with Microsoft's DCU recently taking legal action against hackers selling illicit access to its Azure AI.

Finally, AI infrastructure itself is proving increasingly prone to attack. The number of ML tooling vulnerabilities has surged, fueled by arbitrary code execution flaws that could allow attackers to take over AI pipelines. GPU-based attacks are also on the rise, allowing sensitive AI computations to be extracted from shared hardware resources. As these threats continue to evolve, defenders must shift from reactive fixes to proactive security-by-design approaches to mitigate the growing risks AI systems face today.
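To make the architectural-backdoor idea concrete, here is a toy PyTorch module in the spirit of ShadowLogic, where the trigger lives in the model's computation graph rather than in injected code; this is an illustration of the concept, not HiddenLayer's actual technique:

```python
# Toy graph-level backdoor: a hidden trigger branch inside forward().
import torch
import torch.nn as nn

class BackdooredClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.fc(x)
        # Trigger: if the first feature exceeds a magic value, force
        # class 0 regardless of the learned weights.
        trigger = (x[:, 0] > 0.99).float().unsqueeze(1)
        forced = torch.tensor([[10.0, -10.0]]).expand_as(logits)
        return trigger * forced + (1 - trigger) * logits

model = BackdooredClassifier()
clean = torch.rand(1, 4) * 0.5
poisoned = clean.clone()
poisoned[0, 0] = 1.0  # trigger value
print(model(clean).argmax(dim=1), model(poisoned).argmax(dim=1))
```

Exported to a serialized graph format such as ONNX, the trigger branch becomes ordinary graph operations, which is why payload-oriented scanners have so little to flag.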
Speakers

Marta Janus

Principal Researcher, HiddenLayer
Marta is a Principal Researcher at HiddenLayer, focused on investigating adversarial machine learning attacks and the overall security of AI-based solutions. Prior to HiddenLayer, Marta spent over a decade working as a researcher for leading anti-virus vendors. She has extensive experience...
 