Event Gateway vs. Message Broker: When to Use Which

Event gateways and message brokers both deal in asynchronous communication. Both accept events or messages, both decouple producers from consumers, and both provide delivery guarantees. At a glance, they look like they solve the same problem.

They don't. A message broker is infrastructure you operate (or pay to have operated) for internal service-to-service communication. An event gateway is a managed platform that handles the full lifecycle of events — ingestion, verification, filtering, transformation, routing, delivery, and recovery — with a focus on bridging the gap between external systems and your internal architecture.

Understanding the boundary between them matters because teams regularly reach for a broker like Kafka or RabbitMQ when what they actually need is event lifecycle management, or they try to bolt webhook handling onto their broker infrastructure and end up building a fragile, custom integration layer on top.

This guide covers what each technology does, how they differ architecturally, and how to decide which one your system needs — or whether you need both.

What is a message broker?

A message broker is middleware that receives messages from producers, stores them temporarily, and routes them to consumers. It decouples the services that generate data from the services that process it — the producer doesn't need to know who the consumers are, and consumers don't need to be available at the moment a message is published.

The core abstraction varies by broker. In queue-based systems like RabbitMQ, messages are placed in a queue and consumed by a single worker (or a group of competing workers). Once a message is acknowledged, it's removed from the queue. In log-based systems like Apache Kafka, messages are appended to an ordered, partitioned log. Consumers read from the log at their own pace using offsets, and the data persists according to a retention policy — enabling multiple consumers to independently read the same stream.
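The two abstractions can be sketched in a few lines of Python. This is an illustrative toy, not a client for either broker: a queue loses a message once it's acknowledged, while a log keeps it and lets each consumer group track its own offset.

```python
from collections import deque

# Queue semantics (RabbitMQ-style): a message is delivered to one
# worker and removed from the queue once acknowledged.
queue = deque(["m1", "m2", "m3"])
first = queue.popleft()  # after ack, no other worker will ever see "m1"

# Log semantics (Kafka-style): an append-only log shared by all
# consumer groups, each tracking its own read position (offset).
log = ["m1", "m2", "m3"]
offsets = {"analytics": 0, "billing": 0}

def poll(group):
    """Return the next message for a group and advance its offset."""
    pos = offsets[group]
    if pos >= len(log):
        return None
    offsets[group] = pos + 1
    return log[pos]
```

Because the log is shared, "replay" is just resetting a group's offset to an earlier position — the retained messages are still there.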

Message brokers were designed for internal, service-to-service communication within architectures that you control. Both the producer and consumer speak the broker's native protocol (AMQP for RabbitMQ, the Kafka wire protocol for Kafka, OpenWire for ActiveMQ), and both are typically services you've written and deployed.

Core capabilities of a message broker

Message persistence and durability. Brokers store messages on disk so they survive restarts, crashes, or consumer downtime. In Kafka, messages are retained for a configurable period regardless of consumption. In RabbitMQ, durable queues persist messages until they're acknowledged by a consumer.

Pub/sub and point-to-point delivery. Brokers support multiple messaging patterns. Point-to-point (queue) semantics deliver each message to exactly one consumer. Publish/subscribe (topic) semantics deliver messages to all subscribed consumers. Kafka's consumer group model blends both: multiple consumers within a group share the load, while different groups each receive a full copy of the stream.

Ordering guarantees. Kafka provides strict ordering within a partition — messages with the same partition key are always processed in sequence. RabbitMQ provides ordering within a single queue. These guarantees are important for use cases where processing order affects correctness, like financial transactions or state machine transitions.
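Kafka's per-key ordering falls out of deterministic partitioning: hashing the key picks the partition, so all events for one key land on one partition and inherit its ordering guarantee. A toy sketch, using crc32 as a stand-in for Kafka's murmur2 partitioner:

```python
import zlib

NUM_PARTITIONS = 6

def partition_for(key):
    # Kafka's default partitioner uses murmur2; crc32 stands in here
    # as a stable hash so the key-to-partition mapping is deterministic.
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

# All events for order-42 land on the same partition, so the broker's
# per-partition ordering guarantee applies to that order's events.
events = [("order-42", "order.created"),
          ("order-7", "order.created"),
          ("order-42", "order.updated")]
partitions = {}
for key, event in events:
    partitions.setdefault(partition_for(key), []).append(event)
```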

Consumer-driven flow control. Consumers pull messages at their own rate, which prevents them from being overwhelmed during traffic spikes. Kafka consumers advance their offset as they process. RabbitMQ supports prefetch limits to control how many unacknowledged messages a consumer can hold at once.

High throughput for internal workloads. Kafka is built for volume — commonly handling hundreds of thousands to millions of messages per second across partitioned topics. RabbitMQ handles tens of thousands of messages per second and is optimized for lower-latency task distribution. Both scale horizontally through clustering and partitioning.

Dead letter queues. When a consumer rejects a message or processing fails repeatedly, the message can be routed to a dead letter queue for inspection, alerting, or manual reprocessing. This is the broker's primary failure recovery mechanism.
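Reduced to its essentials, the pattern is "retry a bounded number of times, then set the message aside with its error context." Real brokers wire this up through configuration (for example RabbitMQ's x-dead-letter-exchange queue argument); this broker-agnostic sketch just shows the control flow:

```python
MAX_ATTEMPTS = 3
dead_letter_queue = []

def handle(message, process):
    """Try to process a message; route it to the DLQ after repeated failure."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            process(message)
            return "acked"
        except Exception as exc:
            last_error = str(exc)
    # Retries exhausted: hand the message off for inspection
    # instead of losing it.
    dead_letter_queue.append({"message": message, "error": last_error})
    return "dead-lettered"
```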

Common message broker use cases

Message brokers excel at internal, high-volume, service-to-service communication:

  • Task distribution: Spreading work across a pool of workers — image processing, email sending, report generation — where each job should be handled by exactly one consumer.
  • Event streaming and analytics pipelines: Feeding real-time data from application services into analytics, search indexing, or machine learning systems. Kafka's log retention makes it especially suited to this — multiple teams can independently consume the same event stream.
  • Service decoupling: Allowing internal services to communicate without direct dependencies. The order service publishes an event, and the inventory, billing, and notification services each consume it independently.
  • CQRS and event sourcing: Using the broker's durable log as the source of truth for state changes, with read-optimized views built from the event stream.

Popular message broker platforms include Apache Kafka, RabbitMQ, Amazon SQS/SNS, Amazon EventBridge, Google Cloud Pub/Sub, Azure Service Bus, Apache Pulsar, and NATS.

What is an event gateway?

An event gateway occupies a different layer in the stack than a message broker. Rather than providing raw messaging primitives that your services build on top of, it offers a complete, managed pipeline for event ingestion, processing, and delivery — with HTTP as the default transport.

Think of the relationship this way: a message broker gives you topics, queues, and consumer group mechanics. You bring the integration code, the protocol translation, the verification logic, the payload adapters, the retry policies, and the monitoring stack. An event gateway bundles all of that into a single platform. You declare sources, destinations, filters, and transformations; the gateway handles everything in between.

This makes event gateways particularly well-suited to the boundaries of your architecture — the places where events cross between systems you control and systems you don't. External webhooks, partner integrations, customer-facing notifications, and third-party service connections are all scenarios where the operational overhead of a broker isn't justified and the integration complexity is the real problem. For a deeper look at the product category and its origins, see our introduction to the event gateway.

Core capabilities of an event gateway

HTTP as the default transport. Where brokers require producers and consumers to adopt a native protocol (AMQP, the Kafka wire protocol), an event gateway operates over HTTP natively. Webhooks, REST callbacks, and async API requests arrive without protocol translation. This eliminates the API-gateway-to-broker bridge that teams otherwise have to build and maintain when external events need to reach broker-backed consumers.

Provider-specific verification. A broker assumes internal trust — messages come from services you deployed. An event gateway assumes external distrust. It validates signatures using the scheme each provider mandates — Stripe's timestamped HMAC-SHA256 signature, GitHub's HMAC-based X-Hub-Signature-256 header, Shopify's base64-encoded HMAC digest — at the platform level, so verification logic doesn't leak into your consumer code.
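As a concrete instance, GitHub's documented scheme is an HMAC-SHA256 over the raw request body, hex-encoded and prefixed with "sha256=" in the X-Hub-Signature-256 header. A minimal verifier — the kind of per-provider logic a gateway absorbs so consumers never see it:

```python
import hashlib
import hmac

def verify_github_signature(secret, body, header):
    """Check a GitHub-style X-Hub-Signature-256 header against the raw body.

    secret: the shared webhook secret (bytes)
    body:   the raw, unparsed request body (bytes)
    header: the signature header value, e.g. "sha256=ab12..."
    """
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest is constant-time: it avoids leaking the position
    # of the first mismatching character to a timing attacker.
    return hmac.compare_digest(expected, header)
```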

Content-aware filtering. Brokers route by topic or queue — structural channels you define ahead of time. An event gateway goes further by evaluating the content of each event against filter rules defined on headers, metadata, and payload fields. Events that don't match a destination's criteria never leave the gateway, which means downstream services don't burn compute discarding irrelevant messages.
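A sketch of content-based filtering, using a hypothetical dotted-path rule syntax (not any particular platform's DSL): a rule matches only when every named field exists in the payload and equals the expected value.

```python
def matches(event, rules):
    """Evaluate dotted-path filter rules against an event payload.

    A rule like {"data.object.currency": "usd"} matches only when the
    nested field exists and equals the expected value.
    """
    for path, expected in rules.items():
        node = event
        for part in path.split("."):
            if not isinstance(node, dict) or part not in node:
                return False  # missing field: no match, event never delivered
            node = node[part]
        if node != expected:
            return False
    return True

# A destination subscribed only to successful USD charges:
destination_filter = {"type": "charge.succeeded",
                      "data.object.currency": "usd"}
```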

Schema-bridging transformations. External systems don't publish to your schema. An event gateway restructures payloads between receipt and delivery — renaming fields, flattening nested structures, mapping between formats — so that consumers see a consistent shape regardless of which provider produced the event. In a broker architecture, this work falls to custom adapter services deployed alongside the broker.
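A sketch of that normalization step, with hypothetical field mappings for two providers (real payloads differ): whatever shape arrives, the consumer sees one internal schema.

```python
from decimal import Decimal

def normalize(provider, payload):
    """Map heterogeneous provider payloads onto one internal shape.

    The field names below are hypothetical stand-ins for each
    provider's actual schema.
    """
    if provider == "stripe":
        return {"order_id": payload["metadata"]["order_id"],
                "amount_cents": payload["amount"],  # already in cents
                "currency": payload["currency"].upper()}
    if provider == "shopify":
        # Decimal avoids float rounding when converting "19.99" to cents.
        return {"order_id": str(payload["order_number"]),
                "amount_cents": int(Decimal(payload["total_price"]) * 100),
                "currency": payload["currency"]}
    raise ValueError(f"unknown provider: {provider}")
```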

Declarative routing and fan-out. You configure which events go where based on source, type, or content. Sending an event to a new destination means adding a routing rule, not deploying a new consumer or reconfiguring topic subscriptions. Fan-out to multiple destinations is a platform-level operation rather than a consumer-group coordination exercise.

Push-based delivery with managed retries. Unlike a broker where consumers pull, an event gateway pushes events to HTTP destinations and owns the delivery outcome. Failures trigger automatic retries with configurable backoff. Persistently failing deliveries surface as structured, searchable issues — a more actionable pattern than routing messages to a dead letter queue that requires custom scripts to inspect and reprocess.
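The retry cadence is typically exponential backoff with a cap. A sketch of how such a schedule might be computed — the parameters here are illustrative, not any platform's defaults:

```python
def retry_schedule(base_seconds=5, factor=2, max_attempts=6, cap_seconds=3600):
    """Delays (in seconds) between successive delivery attempts.

    Each delay doubles (factor=2) until it hits the cap, so a flapping
    destination isn't hammered but recovery is still prompt.
    """
    return [min(base_seconds * factor ** n, cap_seconds)
            for n in range(max_attempts)]
```

Production systems usually add jitter to these delays so that many failed deliveries don't all retry at the same instant.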

First-class replay. After an outage, you select a time range and replay all affected events to the destination that was down. This is an operational action — a button click or API call — not a manual offset reset or DLQ reprocessing pipeline that you build and maintain.

Integrated lifecycle visibility. Broker observability means stitching together consumer lag metrics, application logs, and distributed traces from separate tools. An event gateway tracks every event from arrival through delivery in a single, searchable timeline: what was received, how it was filtered and transformed, where it was routed, and the outcome of each delivery attempt.

Destination-aware rate limiting. The gateway controls delivery pace per destination, shielding consumers from traffic bursts without dropping events at ingestion. This is comparable to a consumer's prefetch limit in RabbitMQ, but enforced at the platform layer and applied per-destination rather than per-consumer.
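Per-destination pacing is commonly implemented as a token bucket: deliveries spend tokens, and tokens refill at the destination's configured rate. A minimal sketch (illustrative, not any platform's implementation):

```python
class DestinationLimiter:
    """Token-bucket pacing for one destination."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # steady-state deliveries per second
        self.capacity = burst         # max burst size
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now):
        """Return True if a delivery may proceed at time `now`."""
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # hold the event; it delivers once tokens refill
```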

Common event gateway use cases

Event gateways are the right choice when the problem is managing the lifecycle of events across system boundaries:

  • Inbound webhook management: Ingesting webhooks from SaaS providers at scale — with queueing, signature verification, deduplication, and retry logic handled at the platform level rather than in custom application code.
  • Outbound webhook delivery: Providing your customers with webhook subscriptions, with the event gateway managing authentication, retry logic, rate limiting, and delivery visibility on your behalf.
  • Cross-system integration: Connecting third-party services to your internal architecture (or to each other) by receiving events from one system, transforming the payload, and delivering it to another — without writing and maintaining glue services.
  • Asynchronous API endpoints: Accepting data from external clients, SDKs, or devices via HTTP and reliably funneling it into your processing pipeline.
  • Serverless event routing: Serving as the central hub for event-driven communication between serverless functions and services that don't have persistent connections to a message broker.

For a side-by-side comparison of event gateway platforms, see the event gateway comparison guide.

Key differences between event gateways and message brokers

Both handle asynchronous communication, but they come at it from opposite directions: message brokers are internal plumbing you build into your architecture, while event gateways are a managed layer that handles event lifecycle across system boundaries.

Protocol and integration model

Message brokers speak their own protocols. Kafka uses a binary wire protocol. RabbitMQ uses AMQP (or STOMP, or MQTT). Producers and consumers must use client libraries that implement these protocols. If you need to accept events over HTTP — which is how webhooks, external APIs, and most third-party integrations work — you have to build a translation layer: an API endpoint that receives the HTTP request, converts it to the broker's format, and publishes it. That translation layer then needs its own scaling, monitoring, and failure handling.

An event gateway is HTTP-native. Events arrive over HTTP and are delivered over HTTP. No protocol bridging, no client libraries for external integrations to install, no custom translation services to maintain. This makes it immediately compatible with any system that can send or receive HTTP requests — which is effectively every system.

Operational overhead

Running a message broker in production is a significant operational commitment. A Kafka deployment involves managing brokers, a ZooKeeper ensemble or KRaft controllers, topic partitions, replication factors, consumer group rebalancing, disk capacity planning, and monitoring for under-replicated partitions. RabbitMQ is lighter but still requires cluster management, queue replication (mirrored or quorum queues), memory and disk alarm thresholds, and federation configuration for multi-region setups.

Even managed broker services (Confluent Cloud, Amazon MSK, CloudAMQP) only remove the infrastructure layer — you still own topic design, consumer group coordination, schema management, dead letter queue processing, and the monitoring stack that ties it all together.

An event gateway is fully managed. You configure sources, destinations, filters, and transformations. The platform handles scaling, durability, retry logic, and observability. There are no clusters to size, no partitions to rebalance, no consumer groups to coordinate. The operational surface area is smaller by an order of magnitude.

Filtering and transformation

Message brokers route based on structure: which topic or queue a message was published to, which routing key it carries (in RabbitMQ's exchange model), or which partition key determines its placement. Filtering based on the content of the message — its payload fields, header values, or metadata — is left to consumer code. Each consumer receives everything published to its topic or queue and decides what to ignore.

An event gateway filters and transforms at the platform level, between ingestion and delivery. Rules evaluate message content, not just routing metadata. Events that don't match a destination's filter criteria are never delivered, saving compute on the consumer side. Transformations restructure payloads before delivery, so the consumer receives data in the shape it expects. In a broker-based architecture, this logic lives in custom services — often called "enrichment" or "adapter" services — that you write, deploy, and maintain alongside the broker.

Delivery model and failure recovery

Both brokers and event gateways provide delivery guarantees, but the mechanisms are different and optimized for different failure modes.

In a broker, the consumer pulls messages and is responsible for acknowledging them. If a consumer crashes, the broker redelivers unacknowledged messages (in RabbitMQ) or the consumer resumes from its last committed offset (in Kafka). Dead letter queues capture messages that fail processing repeatedly, but reprocessing them requires custom tooling — scripts to inspect the DLQ, determine what went wrong, and republish messages. There's no built-in concept of "replay the last 6 hours of events for this consumer."

An event gateway pushes events to destinations and owns the delivery outcome. If delivery fails, the gateway retries automatically on a configurable schedule. Failed deliveries surface as trackable issues with full context — the event payload, the destination, the error response, and the delivery history. Bulk replay is a first-class operation: select a time range and replay all events to a specific destination. Recovery from an extended outage is a configuration action, not a development project.

Observability model

With a message broker, observability is assembled from pieces. Broker-level metrics (consumer lag, partition throughput, queue depth) come from the broker's monitoring interface or exporters like Prometheus. Application-level tracing (which events were processed, what they contained, what went wrong) comes from your application logs and tracing tools. Connecting broker metrics to application outcomes — "this specific Stripe payment event was consumed 3 times and ultimately failed because of a schema mismatch" — requires correlation across multiple systems.

An event gateway provides lifecycle observability as a single, integrated view. Every event is tracked from ingestion through delivery: what arrived, which filters and transformations were applied, where it was routed, how many delivery attempts were made, and the status of each. When something fails, you search for the event and see its full history in one place rather than correlating broker consumer lag graphs with application error logs.

Who owns the producer?

This is a practical distinction that often determines which technology is the right fit.

With a message broker, you typically own both sides: you write the producer, you write the consumer, and you control the message schema. This works well for internal service-to-service communication where you can mandate a protocol, enforce schemas, and version your topics.

With an event gateway, the producer is often a system you don't control. Stripe decides when and how it sends payment events. Shopify decides its webhook payload format. GitHub decides its signature scheme. Your internal services don't get a say in these decisions. An event gateway is purpose-built for this asymmetry: it handles the heterogeneous protocols, verification methods, payload formats, and retry behaviors of external producers so your internal consumers don't have to.

Feature comparison

Capability | Message Broker (Kafka / RabbitMQ) | Event Gateway
Primary use case | Internal service-to-service messaging | Event lifecycle management across system boundaries
Protocol | Native binary protocols (Kafka wire protocol, AMQP) | HTTP-native ingestion and delivery
Operational model | Self-managed or managed infrastructure | Fully managed platform
Producer relationship | You own and control the producer | Producer is often an external system you don't control
Source verification | Not applicable (internal trust model) | Provider-specific signature verification (HMAC, handshakes)
Filtering | Topic/queue-based routing; content filtering in consumer code | Content-based filtering on payload and metadata at platform level
Transformation | Custom services you build and deploy | Configurable transformations between ingestion and delivery
Delivery model | Consumer pulls messages | Gateway pushes to destinations
Failure recovery | Dead letter queues + custom reprocessing scripts | Automatic retries + first-class bulk replay
Ordering | Strict within partition (Kafka) or queue (RabbitMQ) | Delivery order not guaranteed; consumers handle idempotency
Throughput ceiling | Very high (millions/sec with Kafka) | Optimized for HTTP event volumes; not designed for raw log streaming
Observability | Broker metrics + application logs + external tracing | Integrated end-to-end event lifecycle tracking
Consumer coordination | Consumer groups, offset management, rebalancing | Not applicable; gateway manages delivery

Real-world examples

Message broker in practice: an e-commerce order pipeline

Consider an e-commerce platform processing thousands of orders per minute. When a customer places an order, the order service publishes an order.created message to a Kafka topic. Five internal services consume from this topic independently, each in its own consumer group: inventory reservation, payment processing, warehouse fulfillment, email notifications, and an analytics pipeline that feeds a real-time dashboard.

Kafka's partitioned log makes this work well. Each consumer group reads the full stream at its own pace. The payment service can fall behind temporarily without affecting inventory reservation. The analytics pipeline can reprocess the last week of orders by resetting its consumer offset. Strict ordering within each partition ensures that order.updated events for the same order are always processed after the corresponding order.created.

The team manages a 6-broker Kafka cluster with 3x replication, monitors consumer group lag through Grafana dashboards connected to a Prometheus exporter, and maintains a dead letter topic with a custom reconciliation service that inspects failed messages and republishes them. Schema evolution is handled through a Confluent Schema Registry that validates Avro schemas on produce. This infrastructure requires a dedicated platform team to operate, but it handles the internal event volume reliably.

Event gateway in practice: the same platform's external integrations

Now consider the same platform's external integration layer. The platform receives payment webhooks from Stripe, shipping updates from a logistics partner's API, inventory sync events from a supplier's system, and return notifications from a third-party returns portal. It also sends order-status webhooks to merchants who've subscribed to receive them.

Each external provider has its own payload format, authentication scheme, retry behavior, and reliability characteristics. The logistics partner's API sometimes goes down for 20 minutes during maintenance. The returns portal sends duplicate events when its network is unstable. Stripe's webhook signatures use a time-sensitive HMAC scheme that requires server clock accuracy.

Without an event gateway, the platform team builds a separate HTTP endpoint for each provider, writes custom signature verification code for each one, publishes verified events into their Kafka cluster via a translation service, and builds monitoring to detect when external integrations go silent. The translation service becomes a critical path component — if it goes down, all external events stop flowing. Duplicate detection, payload normalization, and provider-specific retry handling all live in application code that accumulates complexity over time.
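The duplicate-detection piece alone shows the kind of code this pushes into the application layer. A minimal sliding-window deduplicator, assuming each event carries a provider-assigned unique id (a sketch; a production version would use shared storage such as Redis rather than in-process memory):

```python
class Deduplicator:
    """Suppress redelivered events by id within a sliding time window."""

    def __init__(self, window_seconds=86400.0):
        self.window = window_seconds
        self.seen = {}  # event id -> time first seen

    def is_duplicate(self, event_id, now):
        # Drop entries older than the window so memory stays bounded.
        self.seen = {eid: t for eid, t in self.seen.items()
                     if now - t < self.window}
        if event_id in self.seen:
            return True  # suppress: already processed recently
        self.seen[event_id] = now
        return False
```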

With an event gateway, each external provider sends events to a gateway-managed endpoint. The gateway verifies Stripe's HMAC signatures, validates the logistics partner's API key, and handles the returns portal's authentication handshake — all configured, not coded. Duplicate events from the returns portal are detected and suppressed. Payloads from different providers are transformed into a normalized format before delivery. When the logistics partner goes down and stops sending events, the gateway's alerting flags the silence. When the platform's own processing service is unavailable during a deployment, inbound events queue up and deliver automatically once it recovers. Outbound merchant webhooks are delivered with configurable retries and rate limiting, with full delivery visibility so the support team can answer merchant questions about missed events.

The Kafka cluster still handles internal service-to-service communication. The event gateway handles everything that crosses the boundary between the platform and the outside world.

When do you need each?

You need a message broker when:

Your architecture requires high-throughput, internal service-to-service messaging with strict ordering, consumer group coordination, or log-based event replay. If you're building a streaming analytics pipeline, distributing background jobs across worker pools, or implementing event sourcing, a broker like Kafka or RabbitMQ is the right tool. You control both sides of the conversation, you can mandate the protocol, and you're willing to invest in the operational overhead of running or managing the broker infrastructure.

You need an event gateway when:

Your architecture ingests, processes, or delivers events across system boundaries — particularly over HTTP. If you're receiving webhooks from external providers, sending webhooks to your customers, connecting third-party services, or routing events between systems that weren't designed to talk to each other, an event gateway handles the verification, transformation, delivery guarantees, and observability that would otherwise require layers of custom code on top of a broker. The producer is often a system you don't control, the protocol is HTTP, and the operational overhead of running broker infrastructure isn't justified for the use case.

You need both when:

Many production architectures use both. The message broker handles the high-volume internal backbone — inter-service events, streaming pipelines, task distribution. The event gateway handles the integration layer — inbound webhooks, outbound notifications, cross-system event routing. Events might flow from the gateway into the broker (external webhook arrives, gets verified and normalized, then published to an internal Kafka topic) or from the broker out through the gateway (internal event triggers an outbound webhook to a customer's endpoint).

The mistake is trying to make one do the other's job. Bolting HTTP ingestion, signature verification, payload transformation, and provider-specific retry logic onto a Kafka cluster creates a fragile, hand-built integration layer that you'll maintain indefinitely. Running a self-hosted broker to handle a few dozen webhook integrations when a managed event gateway would eliminate the operational overhead entirely is over-engineering in the other direction.

Conclusion

Message brokers and event gateways are both foundational to event-driven architectures, but they operate at different layers. A message broker is internal plumbing — high-throughput, protocol-native infrastructure for moving messages between services you own. An event gateway is an integration platform — a managed layer that handles the full event lifecycle across the boundary between your systems and the outside world.

The practical dividing line is usually the question of who owns the producer. When you control both sides and need raw throughput, ordering guarantees, and consumer group semantics, a broker is the right tool. When the producer is an external system with its own authentication scheme, payload format, and reliability characteristics — or when you need filtering, transformation, and lifecycle observability without the operational burden of running messaging infrastructure — an event gateway is the right tool.

If your architecture includes both internal event flows and external integrations, you likely need both. For the event gateway layer, Hookdeck Event Gateway provides managed ingestion, verification, filtering, transformation, and delivery with end-to-end event visibility. For a full introduction to the concept, see What is an Event Gateway?. For implementation considerations when pairing a gateway with a message queue, see our guide on message queues for webhook processing.