Black Hat USA 2025 | Clue-Driven Reverse Engineering by LLM in Real-World Malware Analysis

A summary of the Black Hat USA 2025 presentation "Pay Attention to the Clue: Clue-driven Reverse Engineering by LLM in Real-world Malware Analysis" by Tien-Chih Lin and Wei-Chieh Chao from CyCraft Technology. Summary: The presentation explores how to use Large Language Models (LLMs) effectively for malware reverse engineering while overcoming their biggest flaw: hallucinations. The speakers introduce Celebi, an automated, context-aware system that uses the internal mechanics of LLMs (attention heads and token probabilities) to verify whether the model is telling the truth, ultimately resulting in faster, more accurate…
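The probability half of that idea is easy to sketch. Below is a minimal toy illustration of the general intuition (my own code, not Celebi's actual algorithm): treat a generated claim as suspect when its mean token log-probability falls below a threshold, and route only those claims to further verification.

    #include <stdio.h>

    /* Toy hallucination filter: flag a generated span as suspect when
       its mean token log-probability falls below a chosen threshold. */
    static int is_suspect(const double *logprobs, int n, double threshold) {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += logprobs[i];
        return (sum / n) < threshold;   /* low average confidence */
    }

    int main(void) {
        /* made-up per-token log-probabilities for one claim */
        double lp[] = { -0.1, -2.9, -3.4, -0.2 };
        printf("%s\n", is_suspect(lp, 4, -1.0) ? "verify further" : "likely ok");
        return 0;
    }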

Black Hat USA 2025 | Hack to the Future: Owning AI-Powered Tools with Old School Vulns

"Hack To The Future: Owning AI-Powered Tools With Old School Vulns" by Nils Amiet and Nathan Hamiel at Black Hat USA 2025: Core Thesis The integration of generative AI into developer productivity tools (like AI code reviewers and data analytics assistants) is creating massive new attack surfaces. While the underlying Large Language Models (LLMs) are not being "hacked," the applications wrapping them are poorly designed, overly permissive, and riddled with classic, "old-school" vulnerabilities like Remote Code Execution (RCE), Prompt Injection, and Insecure Direct Object Reference (IDOR). Because these AI…

Black Hat USA 2025 | Reinventing Agentic AI Security With Architectural Controls

"When Guardrails Aren't Enough: Reinventing Agentic AI Security With Architectural Controls," delivered by David Brauchler III from NCC Group. The Core Thesis The central argument of the presentation is that guardrails are not security boundaries. Much like Web Application Firewalls (WAFs) in the early days of the internet, AI guardrails are merely statistical heuristics. They reduce risk but do not provide "hard" security guarantees and can always be bypassed by a determined attacker. As AI evolves into "agentic" systems—where models can execute tool calls, read databases, and take actions—relying solely…

Black Hat USA 2025 | Invoking Gemini for Workspace Agents with a Simple Google Calendar Invite

"Invitation is All You Need! TARA for Targeted Promptware Attack against Gemini-Powered Assistants," presented by Ben Nassi, Or Yair, and Stav Cohen. Core Premise The presentation highlights a new, highly practical class of cyberattack called "Promptware," specifically targeting Large Language Model (LLM) powered personal assistants like Google's Gemini for Workspace and Android. The researchers demonstrate how an attacker can completely compromise a user's AI assistant simply by sending them a Google Calendar invitation containing hidden, malicious instructions. The Attack Mechanism: Indirect Prompt Injection Unlike traditional hacking that targets memory corruption or…

Black Hat USA 2025 | Training Specialist Models: Automating Malware Development

"Training Specialist Models: Automating Malware Development" explores how small, specialized Large Language Models (LLMs) can be trained to outperform massive generalist models in specific, highly technical tasks—specifically, the creation of evasive malware. Here is a summary of the key points: The Problem with Current ModelsAvery identifies a gap in the current AI landscape for offensive security professionals: Large Generalists (OpenAI, Anthropic): These models are highly capable but come with privacy concerns, high costs, and strict safety filters (refusals) that make them difficult to automate for red teaming. Small Local Models…

Black Hat USA 2025 | Watching the Watchers: Exploring and Testing Defenses of Anti-Cheat Systems

Introduction to the Anti-Cheat Ecosystem. The World of Game Cheats: The speakers explore the fast-paced, high-stakes battleground between cheat developers (attackers) and anti-cheat systems (defenders) in modern competitive shooter games. The Cheat Economy: Cheating is a massive industry. Cheats are often sold via subscription models by well-run, sometimes legally registered companies, with some cheats costing upwards of $200 a month. Because it is so lucrative, the attack-defense cycle is incredibly rapid. The Shift to the Kernel: Historically, cheats operated in user mode. As anti-cheats adapted, the…

How to Think About TPUs

Part 2 of the series. This section is all about how TPUs work, how they're networked together to enable multi-chip training and inference, and how this affects the performance of our favorite algorithms. There's even some good stuff for GPU users too! What Is a TPU? A TPU is basically a compute core that specializes in matrix multiplication (called a TensorCore) attached to a stack of fast memory (called high-bandwidth memory or HBM) [1]. Here's a diagram: Figure: the basic components of a TPU chip. The TensorCore is the gray left-hand box,…
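To see why the TensorCore/HBM pairing matters for performance, here is a small roofline calculation (the hardware numbers are illustrative assumptions, not figures from the article): a matmul is compute-bound when its arithmetic intensity, FLOPs per byte moved from HBM, exceeds the chip's FLOPs-per-byte ratio.

    #include <stdio.h>

    int main(void) {
        double M = 4096, K = 4096, N = 4096;
        double flops = 2.0 * M * K * N;                /* one multiply-add = 2 FLOPs */
        double bytes = 2.0 * (M * K + K * N + M * N);  /* bf16 = 2 bytes/element */
        double intensity = flops / bytes;

        double peak_flops = 2.0e14;        /* assume ~200 TFLOP/s bf16 */
        double hbm_bw     = 8.0e11;        /* assume ~800 GB/s HBM bandwidth */
        double hw_ratio   = peak_flops / hbm_bw;

        printf("intensity = %.0f FLOPs/byte, hardware ratio = %.0f\n",
               intensity, hw_ratio);
        printf("%s-bound\n", intensity > hw_ratio ? "compute" : "bandwidth");
        return 0;
    }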

Executable Exports Symbols

There are actually several critical scenarios where an executable must export symbols. The confusion usually lies in the direction of the linking. You are right that Executable A rarely links dynamically to Executable B to call functions inside B. However, the reverse happens frequently: dynamic libraries (plugins) loaded by Executable A often need to call functions inside Executable A. Here are the specific reasons why an executable needs to keep exported symbols: 1. The "Host-Plugin" Architecture (Most Common). This is the primary reason. If your executable supports plugins…
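A minimal host-plugin sketch (hypothetical names) showing the direction of the call: the plugin is built against nothing, and the dynamic linker resolves host_log from the executable's own exported symbols at load time.

    /* host.c
       Build: gcc -rdynamic host.c -ldl -o host
              gcc -shared -fPIC plugin.c -o plugin.so
       -rdynamic keeps the executable's symbols in the dynamic symbol
       table so plugins can resolve them. */
    #include <dlfcn.h>
    #include <stdio.h>

    void host_log(const char *msg) {       /* exported to plugins */
        printf("[host] %s\n", msg);
    }

    int main(void) {
        void *h = dlopen("./plugin.so", RTLD_NOW);
        if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }
        void (*init)(void) = (void (*)(void))dlsym(h, "plugin_init");
        if (init) init();
        dlclose(h);
        return 0;
    }

    /* plugin.c -- calls back into the host without linking to it */
    extern void host_log(const char *msg);
    void plugin_init(void) { host_log("plugin loaded"); }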

Tailcall in AArch64

In AArch64 (ARM64), for a tail call to work, the current function must tear down its own stack frame before branching to the next function. If it didn't, the stack would grow infinitely with every tail call, causing a stack overflow. Here is exactly how the "reuse" works at the assembly level, step by step. 1. The Standard Mechanism. In a normal return, a function ends with an epilogue that restores registers and the stack pointer, followed by a ret instruction. In a tail call, the compiler generates a special…
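A concrete sketch (hypothetical functions; with optimization enabled, e.g. gcc -O2 on AArch64, the compiler typically emits a sibling call here):

    long g(long x);

    /* f ends in a plain branch rather than bl + ret. Expected assembly:
           f:
               add x0, x0, #1
               b   g
       No new frame is created and no bl overwrites the link register,
       so g returns directly to f's caller through the original x30. */
    long f(long x) {
        return g(x + 1);   /* tail call: result passed through unchanged */
    }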

AFL_SKIP_BIN_CHECK

export AFL_SKIP_BIN_CHECK=1 is an environment variable setting that tells AFL++ to stop complaining that your target program doesn't look like it was compiled with AFL. By default, AFL++ checks your target binary for specific "instrumentation" markers before it starts. If it doesn't find them, it assumes you made a mistake (like compiling with gcc instead of afl-cc) and refuses to run to save you from wasting time. When should you use this? You generally should not use this unless you know exactly why. However, here are the valid…
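For example (assuming a seeded in/ directory and a target whose instrumentation afl-fuzz cannot detect on its own):

    export AFL_SKIP_BIN_CHECK=1
    afl-fuzz -i in -o out -- ./target @@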