A Turnkey Replacement for ngrok and RabbitMQ in Your Webhook Setup
The classic DIY webhook stack has two halves. In development, ngrok exposes localhost to the public internet so a provider's webhook can reach your machine. In production, RabbitMQ (plus a worker pool) absorbs spikes and retries failures. Both work — and both are operational liabilities, not solutions to a webhook problem.
The dev tool and the prod queue have nothing to do with each other. Your webhook handler is the integration point, and that handler ends up doing signature verification, deduplication, and retry-policy logic in two slightly different shapes — once locally with whatever helper code you wrote, and once in production with whatever lives in front of RabbitMQ. Two systems, two failure modes, two cost lines, and a perpetual gap between your dev experience and your prod runtime.
In this article, we'll look at why ngrok-plus-RabbitMQ is so common as a webhook stack, where it breaks, and how it compares to a unified replacement: Hookdeck CLI and Event Gateway, plus alternatives like Cloudflare Tunnel + AWS SQS, localtunnel + Postgres queues, and Convoy.
Why this combination is so common (and where it breaks)
The pattern starts simple. A developer needs to test a Stripe webhook on their laptop. ngrok solves "expose localhost to the public internet." Once the integration is real, the team moves to production and discovers that webhook spikes overwhelm the application servers and that delivery to downstream services sometimes fails. RabbitMQ solves "absorb spikes and retry." Two reasonable choices, taken at different times, for different problems.
Neither was designed with the other in mind, and neither knows what a webhook is. ngrok is a generic HTTPS tunnel — it'll forward whatever the provider sends, but it can't tell you whether the signature is valid, can't dedupe a duplicate event, can't filter to events you actually care about. RabbitMQ is a generic message broker — it'll smooth spikes and retry on failure, but the receiver service in front of it owns every webhook-specific concern: signature verification, idempotency keys, source-aware retry policy, replay tooling. The webhook-specific code ends up split across "what the dev server does" and "what the receiver service does," and the two implementations drift over time.
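To make that split concrete, here's a minimal sketch of the kind of webhook-specific code the DIY receiver service has to own before anything reaches the broker: HMAC signature verification. The header format (`t=<timestamp>,v1=<hex digest>`) is an illustrative assumption modeled on common provider schemes, not any specific provider's spec — and this is exactly the code that ends up duplicated between the dev helper and the production receiver.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of provider-style HMAC verification. The header format
// ("t=<timestamp>,v1=<hex digest>") is an illustrative assumption,
// not a specific provider's scheme.
function verifySignature(
  rawBody: string,
  signatureHeader: string,
  secret: string,
): boolean {
  const parts = Object.fromEntries(
    signatureHeader.split(",").map((kv) => kv.split("=") as [string, string]),
  );
  const timestamp = parts["t"];
  const received = parts["v1"];
  if (!timestamp || !received) return false;

  // Recompute the digest over "<timestamp>.<raw body>" and compare
  // in constant time to avoid timing side channels.
  const expected = createHmac("sha256", secret)
    .update(`${timestamp}.${rawBody}`)
    .digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(received);
  return a.length === b.length && timingSafeEqual(a, b);
}

// Demo: sign a payload the way a provider would, then verify it.
const secret = "whsec_demo";
const body = JSON.stringify({ type: "charge.succeeded" });
const t = "1700000000";
const digest = createHmac("sha256", secret)
  .update(`${t}.${body}`)
  .digest("hex");
console.log(verifySignature(body, `t=${t},v1=${digest}`, secret)); // true
console.log(verifySignature(body, `t=${t},v1=${"0".repeat(64)}`, secret)); // false
```

And this is just one of the concerns — the same receiver also needs dedupe, ack-then-enqueue, and retry bookkeeping before RabbitMQ ever sees the event.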
The cost shows up in the small frictions: ngrok URLs change between sessions on the free tier, so the Stripe dashboard is reconfigured every time a developer restarts. RabbitMQ clusters need mirroring, version upgrades, and capacity planning. Signature verification gets re-implemented in three places. Per-event replay means writing your own consumer that reads from the dead-letter exchange. None of this is glamorous; all of it is engineering hours that aren't building product.
How to evaluate a unified replacement
If you're considering replacing both halves at once, these are the evaluation criteria that matter:
- Single tool covers dev → prod. Same inspector, same source library, same retry policies in development and in production.
- Webhook-aware semantics throughout. Signature verification, source typing, idempotency built into the platform — not coded per provider.
- Persistent local URL. Survives restarts. Configured once in the provider dashboard, works for the lifetime of the source.
- Production queue with backpressure and retry. Per-source policy, configurable without code changes.
- Observability that ties dev and prod. The same event you saw during testing appears in your production observability surface — same shape, same fields.
- Team workflows. Multiple developers share a source without colliding on URLs or filters.
- Cost. Replaces two paid line items (ngrok, RabbitMQ infra or CloudAMQP) with one.
Hookdeck CLI + Event Gateway — replacing both layers
Hookdeck is webhook infrastructure built as one product across both halves of the stack. The CLI is the local-development tool. Event Gateway is the production runtime. Same source configuration applies in dev and prod — provider URL changes once, when you go live.
Hookdeck CLI replaces ngrok
hookdeck listen 3000 gives you a stable webhook URL that forwards to localhost. No account required to start; the free tier includes 10,000 events per month. Inspection happens in the terminal and in the Hookdeck Console web UI. Replay any event to your local server without re-triggering it from the provider. Filter by header, body, path, or query with CLI flags (--filter-body, --filter-headers, --filter-path, --filter-query).
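In practice that looks like the following. The filter flags are the ones named above; the exact filter-value syntax shown here is an illustrative assumption — check the CLI docs for the full filter grammar:

```shell
# Forward incoming webhooks to a local server on port 3000
hookdeck listen 3000

# Only receive events whose body matches a filter
# (filter-value syntax is illustrative; see the CLI docs)
hookdeck listen 3000 --filter-body '{"type":"charge.succeeded"}'
```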
Multi-developer support is the part ngrok's architecture can't match: each engineer connects to the same shared webhook source with their own filters, so two developers working on different feature branches both receive Stripe webhooks at their own localhost — no more "whose ngrok URL is configured at Stripe today?" The same URL also works in CI, so integration tests get the same source treatment.
For a deeper comparison of tunneling tools, see ngrok alternatives for local tunnel webhook development.
Hookdeck Event Gateway replaces RabbitMQ
Event Gateway is a managed durable queue with backpressure tuning — no broker for you to operate. Per-source retry policies (linear, exponential, until-success, max-attempts) are configurable per connection, without code changes. Connection-based routing handles direct, fan-out, transformed (via JavaScript), or filtered delivery. It's the equivalent of RabbitMQ exchanges and bindings, but webhook-aware. Issues and alerting tie delivery failures to the original payload, integrated with Slack, PagerDuty, and OpsGenie.
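To illustrate what a connection-level transformation does, here's a sketch of a transformation body that reshapes an inbound payload before delivery. The function shape and field names are illustrative assumptions for this sketch, not Hookdeck's exact transformation API — consult the transformation docs for the real handler signature:

```typescript
// Illustrative transformation: flatten a provider-style envelope into the
// shape a downstream service expects. The request/response shapes are
// assumptions for the sketch, not Hookdeck's exact transformation API.
interface WebhookRequest {
  headers: Record<string, string>;
  body: Record<string, unknown>;
}

function transform(request: WebhookRequest): WebhookRequest {
  const event = request.body;
  return {
    headers: { ...request.headers, "x-routed-by": "gateway" },
    body: {
      id: event["id"],
      type: event["type"],
      // Hoist the nested object so the consumer doesn't unwrap envelopes
      payload: (event["data"] as { object?: unknown })?.object ?? event["data"],
    },
  };
}

const out = transform({
  headers: { "content-type": "application/json" },
  body: { id: "evt_1", type: "charge.succeeded", data: { object: { amount: 42 } } },
});
console.log(out.body);
```

With RabbitMQ, this reshaping logic would live in a consumer you write and deploy; here it's configuration attached to the connection.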
The 120+ pre-configured sources handle signature verification before the webhook ever reaches your application, so the receiver service that used to sit between ngrok and RabbitMQ (the one doing HMAC verification, dedupe, and ack-then-enqueue) collapses into a thin endpoint that processes already-verified events.
One tool, two halves of the stack
The same inspector works in dev and prod. The same source configuration works in dev and prod. The same retry policies, the same provider sample library, the same observability surface. The architectural payoff is operational: one product to learn, one bill, one observability surface, one set of credentials to rotate.
| Layer | DIY (ngrok + RabbitMQ) | Hookdeck (CLI + Event Gateway) |
|---|---|---|
| Local tunnel | ngrok ($8/mo for stable URL) | Hookdeck CLI (free, stable, with inspection) |
| Production queue | RabbitMQ + worker pool (self-operate) | Event Gateway (managed) |
| Signature verification | Hand-rolled per source × 2 | 120+ pre-configured |
| Per-source retry policy | Code-defined globally | Per-connection config |
| Per-event replay | ❌ Custom DLX consumer | ✅ One-click + bulk |
| Same inspector dev / prod | ❌ | ✅ Console |
| Multi-developer workflows | ❌ | ✅ |
| Cost lines | 2 (ngrok paid, RabbitMQ infra) | 1 (Hookdeck plan) |
Replace ngrok and RabbitMQ in your webhook stack
Hookdeck CLI for local, Event Gateway for production. Try it free.
Other ways to replace ngrok and RabbitMQ separately
If you'd rather replace each half independently, the alternatives split into managed and self-hosted paths.
Cloudflare Tunnel + AWS SQS / Cloud Tasks
Cloudflare Tunnel gives you a free persistent local tunnel through Cloudflare's edge network. AWS SQS or Google Cloud Tasks handle the production queueing.
Both managed, both cheap to free at small volumes. The trade-off: two systems with no shared inspection, more glue code in your application, and no webhook-aware semantics in either. Signature verification, idempotency, and per-source retry policy are still yours to write. Right answer if you want pure cloud-native and don't mind the integration work. For the local-tunneling side, see Cloudflare Tunnel alternatives for local webhook development.
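For reference, the tunnel half of this path looks roughly like the following. The quick-tunnel command is enough for a throwaway session; a persistent hostname needs a named tunnel (the tunnel and hostname names below are illustrative):

```shell
# Quick, ephemeral tunnel — prints a random trycloudflare.com URL
cloudflared tunnel --url http://localhost:3000

# Persistent named tunnel (requires a Cloudflare account and a zone);
# names are illustrative
cloudflared tunnel create webhook-dev
cloudflared tunnel route dns webhook-dev webhooks.example.com
cloudflared tunnel run --url http://localhost:3000 webhook-dev
```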
localtunnel + Postgres-backed queues
The cheapest possible setup: localtunnel for the dev tunnel and pg-boss (or LISTEN/NOTIFY) for the production queue. Both free, both minimal.
Fragile in production. localtunnel doesn't promise URL stability and sometimes simply stops working under sustained traffic. Postgres-backed queues are fine at low volume but accumulate vacuum pressure and locking issues at scale. Right answer at very low volume or for prototypes; not a destination for a real webhook setup.
Convoy
Convoy is an open-source, self-hosted webhook gateway. It includes both a CLI for local development and a server for production webhook routing — the right shape for replacing ngrok-plus-RabbitMQ specifically.
The trade-off is operational: a smaller pre-configured source library than Hookdeck, no integrated full-text search across long event history, and you operate it yourself (Postgres, Redis, the Convoy service, plus your own metrics and alerting). Best fit when self-hosting is non-negotiable. See the Hookdeck Event Gateway vs Convoy comparison for a detailed look.
When to keep ngrok or RabbitMQ
Keep ngrok if:
- Extensive non-webhook usage — raw TCP tunnels, IoT gateways, demo links, and other uses that no webhook-shaped tool replaces.
- You're already paying for it for non-webhook reasons.
- Your team has standardised on ngrok for general remote-access workflows.
Keep RabbitMQ if:
- It's hosting a non-webhook internal event bus across many services.
- AMQP-specific features are actively used — priority queues, fanout exchanges, RabbitMQ Streams, specific protocol bridges (STOMP, MQTT).
- Self-hosting is mandatory and the team operates the cluster anyway for non-webhook reasons.
The argument for replacement gets stronger when ngrok and RabbitMQ are only there because they were the default at the start, when the webhook-specific code in front of RabbitMQ has accumulated bugs across providers, or when the dev experience is materially different from the prod experience and that gap is causing incidents.
Migration from ngrok + RabbitMQ to Hookdeck
The shape of a typical migration:
Day 1 — replace ngrok. Install the Hookdeck CLI (npm install -g hookdeck-cli or via Homebrew). Run hookdeck listen 3000 and configure your provider to send to the resulting URL. Each developer gets a stable URL per workspace; for shared sources, the team configures filters per developer.
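The Day 1 commands, pulled together (the npm package name is the one given above; the Homebrew tap path may differ — check the install docs):

```shell
# Install the Hookdeck CLI via npm (Homebrew also works; see the install docs)
npm install -g hookdeck-cli

# Forward incoming webhooks to your local server on port 3000;
# this prints the stable URL to configure in the provider dashboard
hookdeck listen 3000
```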
Day 2 — stand up Event Gateway in shadow. Create an Event Gateway source for the same provider and a connection to your existing receiver endpoint. Configure the retry policy on the connection. Run in shadow mode — both pipelines (existing RabbitMQ-backed receiver and new Hookdeck path) receive events for a few days while you verify behavior matches.
Day 3 — cut over. Update the provider URL to point to Hookdeck Event Gateway as primary. Pull the receiver service out of the RabbitMQ pipeline. Remove signature verification and retry code from the application — Event Gateway handles both.
Outcome. Dev and prod use the same inspector, same source library, same retry policies. RabbitMQ becomes optional rather than load-bearing on the webhook path. ngrok stops appearing in the developer-tools budget line.
For more background, see managed webhook gateway vs DIY queue-backed infrastructure and the conceptual event gateway vs message broker guide.
Hookdeck CLI and Event Gateway as the turnkey answer
ngrok and RabbitMQ are both well-engineered tools. They were just engineered for different problems than third-party webhook handling, and the gap between what they do and what webhooks need is filled by your engineering team, twice — once in development, once in production. The webhook-specific code drifts between the two environments, and the operational footprint accumulates because each tool is a separate primitive that someone has to keep running.
Hookdeck CLI and Event Gateway are the same product across both halves. One source configuration. One inspector. One observability surface. One bill. The dev experience and the prod runtime are continuous — events you tested locally show up in production with the same shape, the same retry trail, the same payload visibility.
Developers who start on the CLI tend to stick, because the same inspector follows the events into production. If you're testing this from a dev server, run hookdeck listen 3000 to forward webhooks to localhost. If you're running RabbitMQ in production today and wondering whether the receiver service in front of it is doing more work than it should be, Event Gateway is the managed answer.
One tool. Dev to prod. Free to start.
Hookdeck CLI replaces ngrok. Event Gateway replaces RabbitMQ. Same product, same source configuration.
FAQs
Can a single tool replace both ngrok and RabbitMQ in my webhook stack?
Yes — Hookdeck CLI replaces ngrok during development by giving you a stable webhook URL that forwards to localhost, with inspection, replay, and filtering built in. Hookdeck Event Gateway replaces RabbitMQ in production by providing durable queueing, per-source retry policies, signature verification for 120+ providers, and connection-based fan-out routing — all as a managed service. Same source configuration, same observability, dev to prod.
What about non-webhook uses of ngrok and RabbitMQ?
Hookdeck only replaces them for the webhook half of the stack. If you use ngrok for raw TCP tunnels, IoT gateways, or sharing demo links, keep it. If RabbitMQ hosts a non-webhook internal event bus across many services, keep it, and use Hookdeck Outpost's RabbitMQ destination if you still want webhook events to reach internal AMQP consumers.
How is Hookdeck CLI different from ngrok?
ngrok is a generic tunneling tool with inspection bolted on. Hookdeck CLI is a webhook development tool that includes tunneling. The CLI gives you stable URLs on the free tier (ngrok charges for stable URLs), filtering by header / body / path / query, multi-developer workflows on a shared source, and integration with the Hookdeck Console as a web inspector, none of which ngrok provides.
How is Hookdeck Event Gateway different from RabbitMQ?
RabbitMQ is a generic message broker. The receiver service in front of it owns webhook concerns: signature verification, idempotency, source-aware retries, replay tooling. Event Gateway replaces both the receiver and the broker for the webhook half of the workload — signature verification for 120+ providers, per-source retry policies, connection-based fan-out, JavaScript transformations, and observability all as a managed service.