• Hi HN,

    I built this because reverse engineering software across multiple versions is painful. You spend hours annotating functions in version 1.07, then version 1.08 drops and every address has shifted — all your work invisible.

    The core idea is a normalized function hashing system. It hashes functions by their logical structure — mnemonics, operand categories, control flow — not raw bytes or absolute addresses. When a binary is recompiled or rebased, the same function produces the same hash. All your documentation (names, types, comments) transfers automatically.
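
    To make this concrete, here's a toy sketch of the idea in Python (illustrative only; the real normalization lives in the Java plugin and also folds in control flow):

      # Toy sketch: hash mnemonics plus operand *categories*, so absolute
      # addresses and shifted displacements don't change the digest.
      import hashlib

      def operand_category(op: str) -> str:
          if op.startswith("0x"):   # immediate / address -> one bucket
              return "IMM"
          if op.startswith("["):    # memory operand
              return "MEM"
          return "REG"              # register (simplified)

      def normalized_hash(instructions) -> str:
          """instructions: list of (mnemonic, [operands]) tuples."""
          h = hashlib.sha256()
          for mnemonic, operands in instructions:
              h.update(mnemonic.encode())
              for op in operands:
                  h.update(operand_category(op).encode())
          return h.hexdigest()

      # The same function at two different base addresses hashes identically:
      v107 = [("push", ["ebp"]), ("mov", ["ebp", "esp"]), ("call", ["0x6f8a1230"])]
      v108 = [("push", ["ebp"]), ("mov", ["ebp", "esp"]), ("call", ["0x6f9b4410"])]
      assert normalized_hash(v107) == normalized_hash(v108)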

    Beyond that, it's a full MCP bridge with 110 tools for Ghidra: decompilation, disassembly, cross-referencing, annotation, batch analysis, and headless/Docker deployment. It integrates with Claude, Claude Code, or any MCP-compliant client.

    For context, the most popular Ghidra MCP server (LaurieWired's, 7K+ stars) has about 15 tools. This started as a fork of that project but grew into 28,600 lines of substantially different code.

    Architecture:

      Java Ghidra Plugin (22K LOC) → embeds HTTP server inside Ghidra
      Python MCP Bridge (6.5K LOC) → 110 tools with batch optimization
      Any MCP client → Claude, scripts, CI pipelines
    
    I validated the hashing against Diablo II — dozens of patch versions, each rebuilding DLLs at different base addresses. The hash registry holds 154K+ entries, and I can propagate 1,300+ function annotations from one version to the next automatically.
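
    Propagation is then just a registry lookup; schematically (toy sketch, not the actual storage format):

      # Toy sketch of annotation propagation via the hash registry
      # (the real registry format and fields are richer than this).
      def propagate(new_binary_functions, registry):
          """new_binary_functions: iterable of (address, normalized_hash);
          registry: dict mapping hash -> {name, type, comments, ...}."""
          matched = {}
          for address, fn_hash in new_binary_functions:
              annotations = registry.get(fn_hash)
              if annotations:
                  matched[address] = annotations  # re-apply at the new address
          return matched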

    The headless mode runs in Docker (docker compose up) for batch processing and CI integration — no GUI required.

    v2.0.0 adds localhost-only binding (security), configurable timeouts, label deletion tools, and .env-based configuration.

    Happy to discuss the hashing approach, MCP protocol design decisions, or how this fits into modern RE workflows.

    • What does your function-hashing system offer over Ghidra's built-in FunctionID, or the BinDiff plugin[0]?

      [0] https://github.com/google/bindiff

      • Or better yet, the built-in Version Tracker, which is designed for porting markup to newer versions of binaries. It has several different heuristic tools for correlating functions that are the same due to, e.g., the same data or function xrefs, rather than relying purely on identical function hashes...

        Going off of only FunctionID will either give you a lot of false positives or false negatives, depending on whether you mask out operands when computing the hashes. If you mask out operands, it says that "*param_1 = 4" and "*param_1 = 123" are the same hash. If you don't, it says that nearly all functions are different, because your call displacements have shifted due to different code layout. That's why the built-in Version Tracker uses hashes for only one of its heuristics and applies other correlation heuristics as well.
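
        To make the tradeoff concrete, a toy sketch (instruction encodings made up for illustration):

          # Toy sketch of the operand-masking tradeoff.
          import hashlib

          def fid_hash(instrs, mask_operands: bool) -> str:
              h = hashlib.sha256()
              for mnemonic, operand in instrs:
                  h.update(mnemonic.encode())
                  if not mask_operands:
                      h.update(operand.encode())
              return h.hexdigest()

          a = [("mov", "dword ptr [rdi], 4")]    # *param_1 = 4
          b = [("mov", "dword ptr [rdi], 123")]  # *param_1 = 123
          # Masked: false positive, semantically different stores collide.
          assert fid_hash(a, True) == fid_hash(b, True)

          c = [("call", "0x401000")]  # same callee before code relayout...
          d = [("call", "0x402000")]  # ...same callee after relayout
          # Unmasked: false negative, identical logic hashes differently.
          assert fid_hash(c, False) != fid_hash(d, False)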

    • Was hoping to kick the tires but seem to be spinning my wheels trying to get Ghidra to see the plugin. Is GH Discussions your preferred means of communication?
    • How does it compare to other Ghidra MCP servers?

      - pyghidra-mcp
      - ReVa
      - GhidrAssistMCP
      - GhydraMCP
      - etc...

    • How does this compare to ReVa? https://github.com/cyberkaida/reverse-engineering-assistant

      I think your installation instructions are incomplete. I followed them and installed via File -> Install in the project view. Restarted. But GhidraMCP is not visible in Tools after opening a binary.

      • I've been using ReVa for a long time (even upstreamed some changes to it) and it works great.
    • Thank you for sharing; I'll try it out soon. Does it support decompilation of Android binaries?
  • I used a different Ghidra MCP server (LaurieWired's) to, umm, liberate some software recently. I can’t express how fun and straightforward it was to analyze the binary and generate a keygen.

    I learnt a ton in the process. I highly recommend others do the same; it’s a really fun way of spending an evening.

    I will certainly be giving this MCP server a go.

    • I have some old software I wrote that, in order to install, calls home to a now-defunct server to do a cert check that would never pass. I tried writing my own Ghidra tool, skill, agent, MCP and still can’t seem to figure it out. I’m positive it’s a “human skill” issue but man… ironic that this pops up the week after I gave up trying.
    • This branch is 110 commits ahead of LaurieWired/GhidraMCP:main.
  • Reverse engineering with LLMs is very underrated for some reason.

    I'm working on a hobby project - reverse-engineering a 30-year-old game. Passing a single function's disassembly + Ghidra decompiler output + external symbol definitions RAG-style to an agent with a good system prompt does wonders, even with inexpensive models such as Gemini 3 Flash.

    Then I chain the decompilation agent's outputs to a coding agent, and the produced code can be semi-automatically integrated into the codebase. Rinse and repeat.

    Decompiled code is sometimes wrong, but given cleaned-up disassembly with external symbols annotated and correct function signatures, the decompiled output looks more or less like it was written by a human rather than mechanically decompiled.
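
    The per-function prompt is nothing fancy; roughly this shape (all names here are illustrative, not from any particular tool):

      # Rough shape of the per-function prompt assembly (names illustrative).
      def build_prompt(disassembly: str, decompiled: str, symbols: dict) -> str:
          """Assemble a prompt from Ghidra artifacts plus known symbols."""
          symbol_block = "\n".join(f"{name}: {sig}" for name, sig in symbols.items())
          return "\n".join([
              "You are reverse engineering a 30-year-old game.",
              "External symbols referenced by this function:",
              symbol_block,
              "",
              "Ghidra decompiler output:",
              decompiled,
              "",
              "Raw disassembly:",
              disassembly,
              "",
              "Rewrite this as clean, idiomatic C with a meaningful name and signature.",
          ])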

    • I've found that Gemini models often produce pseudocode that seems good at first glance but is typically wrong or incomplete, especially for larger or more complex functions. It might produce pseudocode for 70% of the function, then silently drop the last 30%. Or it might elide the inside of switch blocks or if statements, only including a comment explaining what should happen.

      By contrast, Claude Opus generally outputs actual code that includes more of the original functionality. Even Qwen3-30B-A3B performs better than Gemini, in my experience.

      It's honestly really frustrating. The huge context window available with Gemini makes the model family seem like a boon for this task, since P-code is very verbose and eats into the headroom needed for the model's response.

      • In my case I'm decompiling into C, and it does a pretty good job at translation. There were situations where it missed an important implementation detail. For example, there is an RLE decompressor, and Gemini generated plausible but slightly incorrect code. Gemini 3 Pro was not able to find the bug and produced code similar to Gemini 3 Flash's.

        The bug was one-shotted by GPT 5.2.

  • I haven't looked at the MCP server, but generally, reverse engineering with AI is quite underrated. I’ve had success extracting encryption keys from an Android app that uses encryption to vendor-lock users by forcing them to use that specific app to open files that should otherwise be in an open format.

    By the way, this app had embedded the key into the shader, and it was required to actually run this shader on an Android device to obtain the key.

    • My friend and I were able to give Claude a (no longer updated) Unity arcade game. It decompiled it and created a one-to-one TypeScript port so it can run in the browser, and now we're adding multiplayer support (for personal use, don't worry HN - we won't be distributing it). I'm very excited about what AI can do for legacy software.
    • I agree. I tried RE using multiple tools connected to MCP and an agent; it was tasked to recreate what the source code might have looked like from a binary, and what possible vulnerabilities there could be. It did an incredible job when I compared it to the actual source.
    • > By the way, this app had embedded the key into the shader, and it was required to actually run this shader on an Android device to obtain the key.

      Oh that's clever. I don't suppose you can share more about how this was done?

  • I've been using it (the original 15-tool version) for months now. It’s amazing. Any app's inner workings are suddenly transparent. I can track down bugs, get a deeper understanding of any tool, and even write plug-ins or preload shims that mod any app. It’s like I finally actually _own_ the software I bought years ago.

    For Objective-C-heavy code, I also use Hopper Disassembler (which now has a built-in MCP server).

    Some related academic work (full recompilation with LLMs and Ghidra): https://dl.acm.org/doi/10.1145/3728958

    • Talking about RE'ing applications and equating that to OSS is not a good look when you work at GitHub...
      • I have no idea about any of that, but I wasn't thinking of GitHub until you mentioned it. I upvoted this comment because it was informative and relevant to the discussion. I don't know much about RE, but I'm curious to try it, and this kind of activity just seems like the sort of thing people interested in software, learning, and security do: finding bugs or malware and so on. FOSS or not (actually, especially if not FOSS), you'd kinda like people at <big tech corp> to scan their binaries and have that knowledge in-house while thinking about code security, wouldn't you?

        Is this a bad look for Derrida.org?

        Anyway, "not my business"

      • That's why I put it in quotes. In no way am I equating anything. Making the inner workings visible is what I was referring to.
  • I thought MCP interfaces with large numbers of tools perform much worse than MCP interfaces with fewer tools; this doesn't seem like a great design.

    This also seems to just be vibecoded garbage.

    • Haven't looked at the app itself, but the MCP tool problem is mostly solved now by lazy loading. It's far from perfect, but the immediate context-window overload problem is gone (in clients that support it, anyway).

      Now we're just left with the fact that most MCP tools are thin transformations of API calls, and their functionality and return data structures suck for LLMs...

    • True. Though vibecoded skill-based tools would perform much more efficiently than this.
  • I am not a reverse engineer. In fact, I only consider myself an intermediate coder (more of a scripter, tbh), but I have decades of (fairly deep) technical experience as a generalist. With Claude Code and another Ghidra MCP I was able to reverse engineer a ransomware encryptor and decryptor (had both) to create a much more reliable version of the decryptor. Saved terabytes of data. Felt like a superpower!
  • Interesting to see Ghidra here!

    A friend from work just used it (with Claude) to hack the River Raid game (https://quesma.com/blog/ghidra-mcp-unlimited-lives/).

    Inspired by that, I gave it a try as well. While I have no prior experience with reverse engineering, I ported an old game from PowerPC to Apple Silicon.

    First, I tried a few MCPs with Claude Code (including LaurieWired/GhidraMCP, which you forked from, and https://github.com/jtang613/GhidrAssistMCP). Yet the agent fabricated a lot of code instead of translating it from the source.

    I ended up using headless mode directly in Cursor + GPT 5.2 Codex. That gave the best results.

    Once I get some time, I'll share a write-up.

    • I’ve also been playing around with reverse engineering, and I’m very impressed. It turns out that Codex with GPT-5.2 is better at reverse engineering than Claude.

      For example, Codex can completely reverse-engineer this 1,300-line example [0] of a so-called C64-SID file within 30 minutes, without any human interaction.

      I am working on a multi-agent system that can completely reverse-engineer C64 games. Old MS-DOS games are still too massive to analyze within my budget.

      [0] https://gist.github.com/s-macke/595982d46d6699b69e1f0e051e7b...

      • Oh, interesting. I started using the ReVa/Ghidra MCP server together with Claude since day 1 (well, since Claude Sonnet 4.0 was released), and I saw Claude get better at it with every update. I've gotten pretty far in reverse engineering a game from the early 2000s (I still have to do a lot of things manually, but that also taught me A TON about Ghidra).

        I'm very interested in trying out Codex now.

  • The cross-binary documentation transfer via normalized function hashing is really compelling for anyone tracking software that updates frequently. I've dealt with similar pain points analyzing game clients that push patches weekly — manually re-annotating shifted addresses is brutal.

    Curious about the hash collision rate in practice. The README mentions 154K+ entries from Diablo II patches. With that sample size, have you encountered meaningful false positives where structurally similar but semantically different functions matched? The Version Tracker comparison in the comments is fair — seems like combining this hash approach with additional heuristics (xref patterns, call graph structure) could reduce both false positives and negatives.

    The headless Docker mode is a nice touch for CI integration. Being able to batch-analyze binaries and auto-propagate annotations without spinning up a GUI opens up some interesting automated diffing workflows.

  • Simple question: why not a CLI instead? It seems that lately LLMs and agentic tools are better at using CLIs than bloated MCPs.
    • I think they're only better for CLI tools that are in the training data. If it's a new tool, you'd need to dump the full documentation in the context either way.
      • This can be solved well enough by having the model invoke `--help`
      • [dead]
    • Tools like Claude Code have improved here. They won’t load all tools but instead rely on tool search. Context bloat from MCP servers was a thing with badly written clients, but it’s certainly getting better.
    • Because it was started before the revelation that MCP was a context hog. https://github.com/LaurieWired/GhidraMCP
    • This is what I was thinking, or fewer, more versatile tools. Having the descriptions of 110 tools in your context window at all times is just noise.
  • Funny coincidence, I'm working on a benchmark showcasing AI capabilities in binary analysis.

    Actually, AI has huge potential for superhuman capabilities in reverse engineering. It's an extremely tedious job with low productivity, currently reserved for when there is no other option (e.g., malware analysis). AI can make binary analysis go mainstream for proactive audits to secure against supply-chain attacks.

    • Great point! Not just binary analysis, plus even self-analysis! (See skill-snitch analyze and snitch on itself below!)

      MOOLLM's Anthropic skill scanning and monitoring "skill-snitch" skill has superhuman capabilities in reviewing and reverse engineering and monitoring the behavior of untrusted Anthropic and MOOLLM skills, and is also great for debugging and optimizing skills.

      It composes with the "cursor-mirror" skill, which gives you full reflective access to all of Cursor's internal chat state, behavior, tool calls, parameters, prompts, thinking, file reads and writes, etc.

      That's but one example of how skills can compose, call each other, delegate from one to another, even recurse, iterate, and apply many (HUNDREDS) of skills in one LLM completion call.

      https://news.ycombinator.com/item?id=46878126

      Leela MOOLLM Demo Transcript: https://github.com/SimHacker/moollm/blob/main/designs/LEELA-...

      I call this "speed of light" as opposed to "carrier pigeon". In my experiments I ran 33 game turns with 10 characters playing Fluxx — dialogue, game mechanics, emotional reactions — in a single context window and completion call. Try that with MCP and you're making hundreds of round-trips, each suffering from token quantization, noise, and cost. Skills can compose and iterate at the speed of light without any detokenization/tokenization cost and distortion, while MCP forces serialization and waiting for carrier pigeons.

      speed-of-light skill: https://github.com/SimHacker/moollm/tree/main/skills/speed-o...

      Skills also compose. MOOLLM's cursor-mirror skill introspects Cursor's internals via a sister Python script that reads cursor's chat history and sqlite databases — tool calls, context assembly, thinking blocks, chat history. Everything, for all time, even after Cursor's chat has summarized and forgotten: it's still all there and searchable!

      cursor-mirror skill: https://github.com/SimHacker/moollm/tree/main/skills/cursor-...

      MOOLLM's skill-snitch skill composes with cursor-mirror for security monitoring of untrusted skills, also performance testing and optimization of trusted ones. Like Little Snitch watches your network, skill-snitch watches skill behavior — comparing declared tools and documentation against observed runtime behavior.

      skill-snitch skill: https://github.com/SimHacker/moollm/tree/main/skills/skill-s...

      You can even use skill-snitch like a virus scanner to review and monitor untrusted skills. I have more than 100 skills and had skill-snitch review each one including itself -- you can find them in the skill-snitch-report.md file of each skill in MOOLLM. Here is skill-snitch analyzing and reporting on itself, for example:

      skill-snitch's skill-snitch-report.md: https://github.com/SimHacker/moollm/blob/main/skills/skill-s...

      MOOLLM's thoughtful-commitment skill also composes with cursor-mirror to trace the reasoning behind git commits.

      thoughtful-commit skill: https://github.com/SimHacker/moollm/tree/main/skills/thought...

      MCP is still valuable for connecting to external systems. But for reasoning, simulation, and skills calling skills? In-context beats tool-call round-trips by orders of magnitude.

      More: Speed of Light -vs- Carrier Pigeon (an allegory for Skills -vs- MCP):

      https://github.com/SimHacker/moollm/blob/main/designs/SPEED-...

      • Haven't dived deep into it yet, but dabbled in similar areas last year (trying to get various bits to reliably "run" in-context).

        My immediate thought was to apply it to the problem I've been having lately: could it be adapted to soothe the nightmare of bloated LLM code environments, where the model functionally forgets how to code or follow project guidelines and just wants to complete everything with insecure, tutorial-style pattern matching?

  • Have you had any issues with models "refusing" to do reverse engineering work?
    • From my experience, OpenAI Codex loves reverse engineering work. In one case it did a very thorough job of disassembling an 8051 MCU's firmware and working out how it spoke to its attached LCD controller.

      In another (semi-related) project, given that MCU manufacturer's proprietary flashing SDK, it found the programmer's firmware, extracted the decryption key from the updater utility, decrypted the firmware and accompanying flashing software, and is currently tracing the necessary signals to use an Arduino as a programmer.

      So not only is it willing, it's actually quite good at it. My thinking is that reverse engineering is a lot of pattern recognition and not a lot of "original thinking". I.e. the agent doesn't need to come up with anything new, just recognise what already exists.

    • I've had no issues with Claude refusing the few times I've done it. But I also remember I phrased things in a sort of way to make sure it didn't sound shady.

      I suspect if I asked it to crack DRM or help me make a cheat for an online game, it would probably have refused. Or maybe it wouldn't have cared, I was just not interested in testing that and risking ending up banned from using Claude.

  • I was just looking for an active fork of LaurieWired/GhidraMCP. I am currently using GhidrAssistMCP.

    First impressions of the fork: everything has deviated too much from the original, and things look a bit sloppy in places. Everything seems overly complicated in areas where it could have been simpler.

    There is an error in the release notes: Ghidra → File → Configure → Miscellaneous → Enable GhidraMCP. It's under Developer, not Miscellaneous.

    I can't test it in Antigravity; there's a tool limit per MCP: Error: adding this instance with 110 enabled tools would exceed max limit of 100.

  • Reverse engineering is illegal in many cases. Aren't you afraid you might be automating the process by which your users get into (legal) trouble? Will your tool warn the user if they are about to violate laws?
    • Claude is already known for its attempts to send emails to the FBI ;)
  • 110 tools. That’s probably a reason why Anthropic is switching to sandboxed code execution over MCPs.

    It’s just easier to write code that does something specific for a task than to load so much tool metadata.

    I never went past IDA, but I remember IDC and IDAPython. I wonder if a better approach is to expose a single tool that executes scripts to query whatever the agent needs.
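
    Roughly what I have in mind, sketched with pyghidra and the official MCP Python SDK (all names and paths here are illustrative, not from this project):

      # Hypothetical single-tool server: let the agent write the query scripts.
      import io, contextlib
      import pyghidra
      from mcp.server.fastmcp import FastMCP

      mcp = FastMCP("ghidra-script-runner")
      BINARY_PATH = "/path/to/target.bin"  # placeholder

      @mcp.tool()
      def run_ghidra_script(source: str) -> str:
          """Run a Python snippet against the program; `flat_api` is a Ghidra
          FlatProgramAPI. Returns whatever the snippet prints."""
          buf = io.StringIO()
          # Opening per call is slow; a real server would keep the project open.
          with pyghidra.open_program(BINARY_PATH) as flat_api:
              with contextlib.redirect_stdout(buf):
                  exec(source, {"flat_api": flat_api,
                                "program": flat_api.getCurrentProgram()})
          return buf.getvalue()

      if __name__ == "__main__":
          mcp.run()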

  • 110 is a bit... much. Not complaining about the achievement, just pointing out that most models will be swamped with that much tooling available, so I hope they can be toggled on/off as groups (I can do that individually in VS Code, but sometimes you need to do that on the server side as well)
  • I have never tried to decompile using an LLM, but I have heard that it can recognize binary patterns and do it. Has anyone tried to decompile a major piece of software and been successful?
  • Super interesting.

    Last weekend I was exploring the current possibilities of automated Ghidra analysis with Codex. My first attempt derailed quickly, but after giving it the pyghidra documentation, it reliably wrote Python scripts that would alter data types etc. exactly how I wanted, but based on fixed rules.

    My next goal would be to incorporate LLM decisions into the process, e.g. let the LLM come up with a guess at a meaningful function name to make the code easier to read, stuff like that. I made a skill for this functionality and let Codex plough through in agentic mode. I stopped it after a while, as I was not sure what it was doing, and I haven't had more time to work on it since. I would need to do some sanity checks on the functions it has already renamed.

    I'd be curious what workflows others have already devised. Is MCP the way to go?

    Is there a place where people discuss these things?

  • Very cool project! The MCP surface area here (110 tools) is a great example of why tool-output validation is becoming critical.

    When an AI agent interacts with binary analysis tools, there are two injection vectors worth considering:

    1. *Tool output injection* — Malicious binaries could embed prompt injection in strings/comments that get passed back to the LLM via MCP responses

    2. *Indirect prompt injection via analyzed code* — Attackers could craft binaries where the decompiled output contains payloads designed to manipulate the agent

    For anyone building MCP servers that process untrusted content (like binaries, web pages, or user-generated data), filtering the tool output before it reaches the model is a real gap in most setups.
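
    A minimal version of such a filter (purely illustrative, not a complete defense and not our product) might look like:

      # Illustrative output filter for MCP responses built from binary content.
      import re

      SUSPICIOUS = [
          re.compile(r"ignore (all )?previous instructions", re.I),
          re.compile(r"you are now", re.I),
          re.compile(r"<\|?(system|assistant)\|?>", re.I),
      ]

      def sanitize_tool_output(text: str) -> str:
          """Redact likely injection payloads in strings pulled from a binary."""
          for pattern in SUSPICIOUS:
              text = pattern.sub("[REDACTED: possible prompt injection]", text)
          return text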

    (Working on this problem at Aeris PromptShield — happy to share attack patterns we've seen if useful)

  • I don't see hardware requirements anywhere. Does this run on a simple CPU, or is a decent GPU required?
  • LLMs are very good at understanding decompiled code. I don't think people have updated on the fact that almost everything is effectively open source now!
    • Being able to read some iteration of potential source code doesn’t make it open source. Licensing, copyright, build chains, rights to modify and redistribute, etc are all factors.
  • I have this weird thing with Ghidra where I can’t get it to disassemble .s37 or .hex flash files for PPC (e200z4). The bytes show OK and I’m pretty sure I’m selecting the right language. Any insight on things to try would be appreciated.

    IDA work(ed) fine but I misplaced my license somewhere.

  • Tool stuffing degrades LLM tool-use quality. 100+ tools is crazy. We probably need a tool that does relevant tool retrieval and reranking lol
  • Interesting project. In one of our reverse engineering projects we used Gemini to interpret the decompiled C code. Worked really well. Hope to publish it next month.
  • How do you handle intent orchestration? I see you have workflows, but imagine this is used in combination with other MCP servers: how do you make sure the prompt is sent to the right MCP server, and that the right tool or chain of tools gets executed?
  • Thank you for sharing this. It's a huge amount of work, and I now know how I'll be spending this weekend!
  • I saw this earlier, but opted for LaurieWired's MCP because it had a nice README and seemed to be the most common. How does this one compare? Are there any benchmark or functionality comparisons?

    https://github.com/LaurieWired/GhidraMCP

  • How could this be more efficiently and elegantly refactored as an Anthropic or MOOLLM skill set that was composable and repeatable (skills calling other skills, and iterating over MANY fast skill calls in ONE LLM completion call, as opposed to many slow MCP calls ping-ponging back and forth, waiting for network delay + tokenization/detokenization cost, quantization and distortion each round)?

    What parts of Ghidra (like cross referencing, translating, interpreting text and code) can be "uplifted" and inlined into skills that run inside the LLM completion call on a large context window without doing token IO and glacially slow and frequently repeated remote procedure calls to external MCP servers?

    https://news.ycombinator.com/item?id=46878126

    >There's a fundamental architectural difference being missed here: MCP operates BETWEEN LLM complete calls, while skills operate DURING them. Every MCP tool call requires a full round-trip — generation stops, wait for external tool, start a new complete call with the result. N tool calls = N round-trips. Skills work differently. Once loaded into context, the LLM can iterate, recurse, compose, and run multiple agents all within a single generation. No stopping. No serialization.

    >Skills can be MASSIVELY more efficient and powerful than MCP, if designed and used right. [...]

    Leela MOOLLM Demo Transcript: https://github.com/SimHacker/moollm/blob/main/designs/LEELA-...

    >I call this "speed of light" as opposed to "carrier pigeon". In my experiments I ran 33 game turns with 10 characters playing Fluxx — dialogue, game mechanics, emotional reactions — in a single context window and completion call. Try that with MCP and you're making hundreds of round-trips, each suffering from token quantization, noise, and cost. Skills can compose and iterate at the speed of light without any detokenization/tokenization cost and distortion, while MCP forces serialization and waiting for carrier pigeons.

    speed-of-light skill: https://github.com/SimHacker/moollm/tree/main/skills/speed-o...

    More: Speed of Light -vs- Carrier Pigeon (an allegory for Skills -vs- MCP):

    https://github.com/SimHacker/moollm/blob/main/designs/SPEED-...

  • I wonder how this compares to the work I've been doing @ 2389 with the binary-re skill: https://github.com/2389-research/claude-plugins/tree/main/bi...

    Specifically, the dynamic analysis skills could get a really big boost from this MCP server. I also wonder if this MCP server could be rephrased as a pure skill and not come with all the context baggage.

  • Now we just need to choose a game and run Claude Code with Ghidra MCP in a loop until the game is completely decompiled.