Agentic Binary Analysis: When AI Becomes the Analyst

by Heng Yin, Founder & CEO @Deepbits


By Heng Yin, July 2025


🧠 Rethinking Binary Analysis

Binary analysis is one of the cornerstones of cybersecurity — powering everything from malware detection and vulnerability discovery to exploit generation and firmware auditing.

But despite decades of progress, it remains a highly specialized craft. Tools are fragmented, difficult to integrate, and demand deep domain expertise. Analysts spend days stitching together fuzzers, disassemblers, and symbolic engines just to answer a single question.

That’s changing.

We’re entering a new era — one where AI doesn’t just assist analysts but acts as one.
We call this paradigm Agentic Binary Analysis.


🚀 What Is Agentic Binary Analysis?

At its core, Agentic Binary Analysis is about giving large language models (LLMs) agency — the ability to reason, plan, and act autonomously across complex binary-analysis workflows.

Think of it as turning an LLM into a self-directed analyst. It plans tasks, queries tools, interprets results, and refines its strategy — all in a continuous reasoning loop.

This is made possible through structured orchestration protocols such as the Model Context Protocol (MCP), which allow the model to interact directly with disassemblers, diffing tools, or symbolic engines.
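To make this concrete, here is a minimal sketch of how an MCP-style server might describe an analysis tool to the model. The tool name, fields, and schema below are illustrative assumptions, not the actual Dr.Binary or MCP interface:

```python
import json

# Hypothetical tool descriptor: what an MCP-style server might advertise
# so the model knows a disassembly capability exists and how to invoke it.
DISASSEMBLE_TOOL = {
    "name": "disassemble_function",
    "description": "Return the disassembly of one function in a binary.",
    "input_schema": {
        "type": "object",
        "properties": {
            "binary_path": {"type": "string"},
            "function_name": {"type": "string"},
        },
        "required": ["binary_path", "function_name"],
    },
}

def tool_manifest(tools):
    """Serialize tool descriptors so they can be sent to the model."""
    return json.dumps({"tools": tools}, indent=2)

print(tool_manifest([DISASSEMBLE_TOOL]))
```

The key design point is that the model never touches the disassembler directly; it only sees a typed contract it can fill in.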

The result is a system that doesn’t just answer questions — it investigates binaries end-to-end.
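The plan–act–observe loop described above can be sketched as follows. The tools and the fixed "plan" are stand-ins of my own invention; a real system would delegate planning to an LLM and dispatch to real analysis backends:

```python
# Illustrative agent loop: plan a sequence of tool calls, execute them,
# and collect observations that would feed the next round of reasoning.

def list_functions(binary):
    # Stand-in for a disassembler query.
    return ["main", "parse_input", "check_license"]

def decompile(binary, func):
    # Stand-in for a decompiler query.
    return f"// decompiled body of {func} in {binary}"

TOOLS = {"list_functions": list_functions, "decompile": decompile}

def agent_loop(binary, goal, max_steps=4):
    """Run a fixed toy plan; a real agent would replan after each observation."""
    transcript = []
    plan = [("list_functions", ()), ("decompile", ("check_license",))]
    for name, args in plan[:max_steps]:
        result = TOOLS[name](binary, *args)
        transcript.append((name, result))  # observation fed back into planning
    return transcript

for step, obs in agent_loop("target.bin", "find the license check"):
    print(step, "->", obs)
```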


🔬 A Decade of Research That Made It Possible

Agentic Binary Analysis builds on over a decade of binary-analysis research — much of it led by Heng Yin and collaborators at UC Riverside.

Traditional Static and Dynamic Analysis

Over the years, their research produced specialized tools that solved key sub-problems. Each advanced one piece of the puzzle — but they were never fully integrated into a unified workflow.


🤖 From Static Code to Learned Representations

The next wave of research brought machine learning into binary understanding. Instead of handcrafted heuristics, models learned vector representations of code semantics.

Projects such as
Genius (CCS 2016),
Gemini (CCS 2017),
Asm2Vec (Oakland 2019),
PalmTree (CCS 2021),
StateFormer (FSE 2021),
jTrans (ISSTA 2022), and
CLAP (ISSTA 2024)
used neural embeddings to capture the semantics of instructions and functions — enabling large-scale binary diffing and similarity search.
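The core idea behind these systems can be illustrated in a few lines: each function is mapped to a vector, and semantically similar code lands close together. The 4-dimensional vectors below are made up for illustration; real embeddings from models like Gemini or PalmTree have hundreds of dimensions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# Toy embeddings: the same function compiled at -O0 and -O2,
# plus an unrelated function.
embeddings = {
    "memcpy_O0": [0.9, 0.1, 0.4, 0.0],
    "memcpy_O2": [0.8, 0.2, 0.5, 0.1],
    "sha256_O0": [0.1, 0.9, 0.0, 0.7],
}

query = embeddings["memcpy_O0"]
best = max((f for f in embeddings if f != "memcpy_O0"),
           key=lambda f: cosine(query, embeddings[f]))
print(best)  # the optimized variant of memcpy ranks closest
```

Similarity search over such vectors is what makes large-scale "find all binaries containing this function" queries tractable.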


🧩 AI-Assisted Binary Diffing

Yin’s later work —
DeepBinDiff (NDSS 2020) and
SigmaDiff (NDSS 2024) —
combined neural code embeddings with symbolic reasoning to compare binaries at scale.

These tools could detect semantic differences between binaries compiled under different optimizations — an essential capability for malware tracking and patch analysis.
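A toy sketch of why optimization-tolerant diffing needs more than byte comparison: if we normalize away registers and immediates before comparing instruction sequences, two builds that differ only in register allocation match exactly. The listings below are invented, and DeepBinDiff/SigmaDiff use learned embeddings rather than this naive normalization:

```python
import difflib

def normalize(instrs):
    """Replace concrete registers/immediates with placeholders."""
    out = []
    for ins in instrs:
        op, _, args = ins.partition(" ")
        args = ",".join("REG" if a.strip().startswith("r") else "IMM"
                        for a in args.split(",") if a)
        out.append(f"{op} {args}".strip())
    return out

# Same function, two builds: different registers and constants.
func_v1 = ["push rbp", "mov rbp,rsp", "mov rax,0x10", "ret"]
func_v2 = ["push rbx", "mov rbx,rsp", "mov rcx,0x20", "ret"]

sim = difflib.SequenceMatcher(None, normalize(func_v1), normalize(func_v2)).ratio()
print(round(sim, 2))  # 1.0: identical after normalization
```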


💡 Why We Need a New Paradigm

Even with these innovations, binary analysis is still hard to use. Every tool has its own input format, runtime environment, and analysis focus. Human analysts must coordinate them manually, interpret outputs, and connect dots across tools.

Meanwhile, modern LLMs like GPT-4 and GPT-5 have become adept at:

  • Reading and reasoning over disassembly or decompiled code.
  • Writing analysis scripts in Python (e.g., angr, pwntools).
  • Calling command-line tools or APIs.
  • Explaining intermediate findings conversationally.
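The "calling command-line tools" capability is the simplest to sketch: the agent shells out to an analysis command and captures the output for its next reasoning step. Here a harmless Python one-liner stands in for a real tool such as a disassembler or strings extractor:

```python
import subprocess
import sys

def run_tool(argv):
    """Run a CLI tool; return (exit_code, stdout) for the agent to read."""
    proc = subprocess.run(argv, capture_output=True, text=True)
    return proc.returncode, proc.stdout.strip()

# Stand-in invocation; a real agent might run e.g. `objdump -d target.bin`.
code, out = run_tool([sys.executable, "-c", "print('4 exported functions')"])
print(code, out)
```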

Agentic Binary Analysis brings these pieces together — allowing the AI to plan, execute, and explain the full process autonomously.


🧰 Inside Dr.Binary

To prove the concept, we built Dr.Binary — an interactive system that unifies AI reasoning with traditional binary-analysis tools.

What We Observed

LLMs can already:

  • Understand assembly and decompiled output at near-expert levels.
  • Generate complex multi-tool pipelines.
  • Adjust their plans dynamically based on tool results.
  • Provide explanations clear enough for both engineers and students.

🔧 Designing the Right AI–Tool Interface

A major open question is how these AI agents should interact with analysis tools.

This is an interface-design problem similar to those between hardware and software (RISC vs CISC) or between kernel and userspace (syscalls).

  • Low-Level Approach: Feed raw disassembly to the LLM — flexible but costly.
  • High-Level Approach: Let tools preprocess data (e.g., BinDSA pointer analysis) — efficient but limited by tool scope.
  • Middle Ground: Provide structured outputs (control-flow graphs, symbol tables) for the LLM to reason over.
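The middle ground can be sketched concretely: the tool hands the model a structured control-flow graph, compact enough to fit in context yet rich enough to support queries such as reachability. The block names and edges below are illustrative:

```python
import json

# A small CFG in the structured form a tool might expose to the model.
cfg = {
    "function": "check_license",
    "blocks": {
        "entry":   {"succ": ["cmp_key"]},
        "cmp_key": {"succ": ["ok", "fail"]},
        "ok":      {"succ": []},
        "fail":    {"succ": []},
    },
}

def reachable(cfg, start):
    """Blocks reachable from `start` — the kind of query an agent might ask."""
    seen, stack = set(), [start]
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(cfg["blocks"][b]["succ"])
    return seen

payload = json.dumps(cfg, indent=2)  # what the tool would hand the model
print("reachable:", sorted(reachable(cfg, "entry")))
```

The trade-off is exactly the one named above: the structure is cheaper than raw disassembly, but only as expressive as the schema the tool author chose.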

This emerging field — AI Tool Interface Design — may become as important as HCI once was.


📊 Early Results and Challenges

Early experiments with Dr.Binary show:

  • Faster malware triage and binary diffing.
  • High-quality code explanations and auto-generated analysis scripts.
  • Reduced need for manual tool orchestration.

But challenges remain:

  • Dynamic Behavior: LLMs still struggle with runtime reasoning — emulators like Unicorn or Qiling may help.
  • Context Size: Large binaries exceed current model limits; context optimization is essential.
  • Scalability & Cost: Balancing reasoning depth and inference cost is key for production use.
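One common mitigation for the context-size problem can be sketched simply: split a long disassembly listing into overlapping windows that fit the model's budget. The window size of 6 lines and overlap of 2 are illustrative; real limits are measured in tokens, not lines:

```python
def chunk(lines, window=6, overlap=2):
    """Split `lines` into overlapping windows so no cross-boundary
    context is lost entirely."""
    step = window - overlap
    return [lines[i:i + window]
            for i in range(0, max(len(lines) - overlap, 1), step)]

# A fake 14-line listing standing in for a large disassembly.
listing = [f"0x{0x1000 + 4*i:x}: insn_{i}" for i in range(14)]
chunks = chunk(listing)
print(len(chunks), "chunks; last starts at", chunks[-1][0])
```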

🔭 What’s Next

Looking ahead, we’re exploring:

  • Tighter integration with emulation frameworks for dynamic behavior analysis.
  • Benchmarks for evaluating agentic analysis systems.
  • Methods to quantify trust and explainability in AI-generated findings.
  • Standard schemas for tool–agent communication.
  • Techniques to test and validate AI agents themselves.

🏁 Conclusion

Agentic Binary Analysis marks a turning point for cybersecurity research.
It unites years of binary-analysis innovation with the reasoning power of modern LLMs.

Through Dr.Binary, analysts can now converse with an AI that plans, executes, and explains binary-analysis tasks autonomously — transforming reverse engineering from a manual craft into an interactive, intelligent process.

The future of binary analysis isn’t just automated.
It’s agentic — adaptive, explainable, and continuously learning.