Phil Leggetter

Event Gateway Comparison: Amazon EventBridge, Azure Event Grid, Google Eventarc, Confluent Kafka, and Hookdeck


The world is event-driven. Someone buys a product. A payment is processed. A sensor detects that a machine has stopped working. An SMS arrives. And, in response, we build systems that trigger yet more events both in our own code, via changing data (change data capture), and in external services.

Done well, event-driven architectures (EDA) make our software scalable, resilient, and easier to maintain. Choose the wrong tools, though, and it can become a mess that's both hard to reason about and costly to run. That's why, in this article, we're going to look at the role Amazon EventBridge, Azure Event Grid, Google Eventarc, managed Kafka services, and Hookdeck play in helping you to build event-driven application architectures. We'll consider how they fit together, what they do, and which other services you might need to use to plug any gaps they leave.

Before we cover all the solutions, let's start by looking at the different types of tools and capabilities required in an event-driven architecture.

Choosing tools for event-driven applications

Up until recently, there hasn't been a platform or product category that has provided all the capabilities we need to build an event-driven architecture. Instead, we assemble multiple tools and systems that together ingest, process, route, and deliver data. Finding the right combination of tools means balancing your needs as they are today with a good level of future-proofing.

The challenge, though, is that comparing these technologies is rarely apples to apples. Hookdeck is the exception: most event gateway solutions do not include all of the required EDA capabilities, which means you'll need to assemble a suite of supporting tools and services around them.

What, then, are the capabilities that we need an event gateway to offer? Here are the categories we'll use to compare event gateway tools:

Capability | Description | Features

Integrations: How we get data into and out of the system.
  • Data sources: Systems that generate the events we want to process and route, also known as producers or publishers.
  • Data destinations: The tools that will make use of the data, also known as consumers or data sinks.

Processing and storage: In-flight processing capabilities, how the tool works with other processing tools, and what storage options it offers.
  • Messaging and routing: Determining the destination of the event based on a logical definition.
  • Transformation: Ensuring interoperability between different systems by altering event structures or formats.
  • Filtering: Selecting events for further processing and routing based on their contents and other data, to conserve computational resources and promote data relevance.
  • Scheduling: Specifying when events should be delivered.
  • Data persistence: Storing events for resending in case of an error, or longer term to enable clients to replay events they've missed.

Scalability and reliability: Not whether the tool can scale and handle failure, but how much of that work you need to do manually.
  • Scaling strategy: How the tool scales to meet demand and what level of manual intervention it requires. For example, manual scaling, automatic scaling, and serverless offer different approaches.
  • Observability: Obtaining insights into traffic trends and receiving proactive alerts about anomalies.
  • Error recovery: Replaying events, whether you're the sender or the recipient, to ensure no events are lost.
  • Rate limiting: Preventing system overloads from sudden surges in event traffic and conditionally controlling processing speed.

Security and compliance: Does the tool comply with the standards you need? How does it handle authentication?
  • Authentication: How the tool handles policies and permissions for event sources and destinations.
  • Handshake negotiation: Ensuring secure communication channels with services using established standards or via custom connectors with third-party providers.
  • Verification: Processing only trustworthy events to maintain data integrity.
  • Standards certification: Complying with industry-specific regulations, such as PCI for payments and HIPAA for health, as well as more general best-practice standards.

Bringing all these together, we'd expect a comprehensive event gateway solution to provide:

  • Integrations: Ingest data from and deliver data to a rich variety of external systems.
  • Filtering and routing: Using payload data and metadata to determine where events should go.
  • Complex event processing: Transform and enrich data as it passes through the system.
  • Storage: Store events for replay and auditing.
  • Scalability and resilience: Scale to meet demand and provide ways to handle failure.
  • Security and compliance: Uphold high standards of security, both when interacting with external systems and while moving events through the system. Simplify connecting securely to external systems by using standard handshake and verification protocols.
  • Observability: Provide end-to-end monitoring and visibility that helps you track failures, latency, success rates, and other metrics.

Now that we know what makes up an event gateway, let's take a look at how Amazon EventBridge, Azure Event Grid, Google Eventarc, managed Kafka services, and Hookdeck match up. And where a product doesn't add up to an event gateway by itself, we'll look at which additional capabilities you'd need from its ecosystem.

If you'd prefer a quick overview, jump straight to the comparison table.

Amazon EventBridge

Amazon describes EventBridge as a serverless event bus that lets you receive, filter, transform, and deliver events. Like many AWS services, it comes in a few different flavors:

  • Event Bus: Receives, filters, transforms, and routes events from multiple sources and to multiple destinations.
  • EventBridge Pipes: Point-to-point delivery of events, with optional in-flight filtering and transformations, plus advanced processing using Lambda functions and other AWS products.
  • EventBridge Scheduler: Effectively, a serverless cron that lets you schedule outbound events destined for other AWS services.

Here, we'll focus on Event Bus, as that's the closest match for the other multi-purpose event gateway tools we're comparing.

Amazon EventBridge integrations

Amazon EventBridge evolved out of the AWS monitoring tool CloudWatch. That's important for our comparison because it skews EventBridge towards integrations with other AWS services.

To ingest an event, EventBridge requires that it be in a specific JSON format:

{
  "version": "0",
  "id": "UUID",
  "detail-type": "event name",
  "source": "event source",
  "account": "ARN",
  "time": "timestamp",
  "region": "region",
  "resources": [
    "ARN"
  ],
  "detail": {
    JSON object
  }
}

For external event sources, unless there is a native integration, you have two options. The first is to use EventBridge's PutEvents API, whether directly, via an AWS SDK, or with the CLI. The second is to put another AWS service, such as Amazon API Gateway, between the event source and EventBridge.
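To make that first option concrete, here's a minimal sketch of publishing a custom event with the AWS SDK for Python (boto3); the bus name, source, and detail payload are hypothetical:

import json
import boto3

# Create an EventBridge client; credentials come from your AWS environment
events = boto3.client("events")

# Publish one custom event to a (hypothetical) custom event bus
response = events.put_events(
    Entries=[
        {
            "EventBusName": "orders-bus",      # hypothetical bus name
            "Source": "com.example.shop",      # identifies the producer
            "DetailType": "order.created",     # the event name
            "Detail": json.dumps({"orderId": "1234", "total": 99.95}),
        }
    ]
)

# FailedEntryCount reports how many entries EventBridge rejected
print(response["FailedEntryCount"])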

When it comes to services that will consume your events, EventBridge can trigger webhooks or deliver to AWS services such as a Lambda function, Kinesis stream, or Amazon SQS queue. For external data destinations, EventBridge allows you to call third-party APIs.

Processing and storage with Amazon EventBridge

To filter and route events in EventBridge, you define rules, as sketched in the example after this list. Each rule has two parts:

  • Event pattern: Using the same JSON format as events, patterns let you define the criteria incoming events should match to trigger a particular rule.
  • Target: This could be another event bus within EventBridge, an AWS service, or an API destination.
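As a hedged sketch of how those two parts fit together, this boto3 example creates a rule whose event pattern matches the hypothetical order events above and attaches a single target; the names and ARN are made up:

import json
import boto3

events = boto3.client("events")

# Define a rule with an event pattern, using the same JSON format as events
events.put_rule(
    Name="order-created-rule",
    EventBusName="orders-bus",  # hypothetical bus name
    EventPattern=json.dumps({
        "source": ["com.example.shop"],
        "detail-type": ["order.created"],
    }),
)

# Attach a target to the rule; here, a hypothetical Lambda function ARN
events.put_targets(
    Rule="order-created-rule",
    EventBusName="orders-bus",
    Targets=[{
        "Id": "fulfillment-function",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:fulfill-order",
    }],
)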

Other than simple text transformations, Event Bus by itself doesn't offer in-flight processing. Instead, you'll need to use another tool for processing, such as EventBridge Pipes, your own Lambda function, or Amazon's hosted Apache Spark service.

It's worth looking at how Event Bus and EventBridge Pipes differ. With its focus on ETL workloads, Pipes offers more advanced processing capabilities, such as enriching events with data from other sources and filtering events based on their content. However, Pipes is more limited in terms of the number of sources and destinations it can handle.

And EventBridge Scheduler lets you trigger events according to a schedule you define. That's useful if you want to delay the processing of an inbound event, as you can route it out of Event Bus and into EventBridge Scheduler, for example via a Lambda function. The tradeoff is that your event routing becomes more complex.

When it comes to persistence, you can optionally archive events for as long as you choose and later replay them.

Amazon EventBridge scaling and resilience

Unlike some of Amazon's managed services, where you might need to add capacity manually, EventBridge is fully serverless and will scale automatically to meet demand. However, there are default quota limits.

If you need to rate-limit the delivery of events, either to avoid hitting your EventBridge quota limits or to prevent overwhelming your target destinations, you can set a rate limit for external HTTP destinations. However, there is a risk that you could lose events if the backlog exceeds EventBridge's 24-hour persistence limit.

It's worth noting that Amazon makes a “best effort” deliverability promise for most AWS services, meaning that “in some rare cases an event might not be delivered”.

When it comes to handling failure and errors, EventBridge requires some manual intervention. If a destination target is unreachable, EventBridge will keep trying for up to 24 hours. After that, it will either drop the event or place it in a dedicated Simple Queue Service queue called a dead-letter queue (DLQ). Each time that happens, you'll get an alert in the AWS CloudWatch tool, but it's up to you to handle what happens with the contents of that dead-letter queue.
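What that manual handling looks like is up to you, but as a rough sketch, you might poll the dead-letter queue with boto3, inspect each failed event, and decide whether to resend or discard it; the queue URL here is hypothetical:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/eventbridge-dlq"  # hypothetical

# Pull a batch of failed events from the dead-letter queue
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10).get("Messages", [])

for message in messages:
    # Inspect, log, or resend the failed event, then remove it from the queue
    print(message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])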

Amazon EventBridge security and compliance

As an AWS product, access to Amazon EventBridge is handled by assigning EventBridge IAM policies to users and roles. Authentication also happens within the IAM system, either through user sign-in or by signing requests using access keys. As an administrator, you can grant granular permissions both for publishing events to EventBridge and for routing to destinations.

The Amazon EventBridge APIs also use the IAM system. Authentication for external API destinations is handled through basic auth, API keys, and OAuth.

EventBridge uses AES-256 encryption at rest and TLS in transit.

When it comes to standards and regulations, Amazon EventBridge complies with SOC 1 through 3 and PCI requirements.

Amazon EventBridge observability

EventBridge is tightly integrated with the rest of the AWS ecosystem, and that's no different when it comes to observability. That makes CloudWatch your primary option for monitoring what happens in EventBridge.

EventBridge sends metrics to CloudWatch once every minute, including:

  • events passing through EventBridge
  • events directed to the dead letter queue
  • failed and throttled events
  • the latency of various actions, measured in milliseconds.

To get an end-to-end view of an event's journey through your system, you'll need to combine metrics from the various AWS tools that you use alongside EventBridge, which adds complexity.

Amazon EventBridge developer experience

What's it like to work with EventBridge as a developer? Here are some of the key points:

  • Documentation: AWS offers comprehensive documentation for EventBridge, including user guides and reference guides.
  • Community: As an AWS product, EventBridge is used widely by developers across the world. That means there are blog posts, videos, and third-party resources created by people who have used EventBridge in production.
  • Local development: The third-party tool LocalStack offers a way to develop against a local equivalent of EventBridge, and EventBridge is also supported by the AWS CLI.

Building an event gateway with Amazon EventBridge

Amazon EventBridge provides just one part of what we need to build an event gateway. Let's look at what it provides and how you can plug the gaps.

Capability | How EventBridge fares | How to plug the gaps

Data sources
  • Good for AWS data sources
  • Offers a handful of third-party integrations
  • No built-in way to accept ad-hoc data from external sources
  • Plug the gaps: use Amazon API Gateway to accept inbound data via HTTP, or use the PutEvents API or an SDK

Routing
  • Rules filter events to destinations based on event content and metadata
  • Each rule routes to a single destination
  • Events are routed asynchronously, meaning you can't guarantee order of delivery
  • Can trigger outbound events on a schedule but can't schedule inbound events
  • Plug the gaps: use Amazon SNS to route to multiple targets with one rule (albeit with additional complexity), use Step Functions to guarantee ordering of event delivery, or route to a Lambda function that creates a scheduled outbound event in EventBridge Scheduler

Complex processing
  • Simple text transformations only
  • Plug the gaps: route to Lambda, Amazon's hosted Spark, or other tools to process events

Storage
  • Event archival and replay available
  • Plug the gaps: for more storage options, route events to S3 or Amazon database products

Scalability
  • Serverless with some fixed quotas

Resilience
  • Promises best-effort deliverability for most AWS services
  • Retries failed deliveries for up to 24 hours, then sends events to a dead letter queue for manual processing
  • Plug the gaps: create a custom mechanism for handling the dead letter queue

Security
  • Authentication handled by the AWS IAM system
  • Complies with SOC 1 through 3 and PCI
  • AES-256 encryption at rest and TLS in transit
  • Plug the gaps: webhook handshaking and validation may require custom functionality

Observability
  • No built-in observability
  • Plug the gaps: integrates with Amazon CloudWatch

Delivery
  • Direct integrations with most AWS services
  • Call third-party APIs

Azure Event Grid

Azure Event Grid is a hosted service for building data distribution pipelines using the publish-subscribe model. Clients can push to and read from topics that you define, as well as read from system topics that carry event data relating to Azure itself.

There are two interfaces for working with Event Grid:

  • MQTT: Connect any MQTT client to Event Grid, such as a fleet of IoT devices, with the option of running MQTT over WebSocket for better interoperability.
  • HTTP: Push events into Event Grid using inbound webhooks and receive events via outbound webhooks.

By offering both MQTT and webhook integrations, Event Grid broadens the range of use cases and integration types it can support.

Azure Event Grid integrations

Event Grid offers three types of event source:

  • System: Automatically generated events that report changes in other Azure services, such as a new device registering in the Azure IoT Hub. Each supported Azure service has its own topic.
  • Partner: A handful of authorized partners also get their own topics, into which they push events relating to your usage of their products. For example, a successful multi-factor auth verification in Auth0.
  • Custom: This could be anything that can call Event Grid's HTTP endpoints or deliver an MQTT message, whether it's an event emitted by your own code running in an Azure Function or an external service.

Event Grid has its own proprietary event format but Microsoft's recommendation is to use the CNCF-maintained CloudEvents standard.
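As a sketch of publishing a custom event with the azure-eventgrid Python SDK, using Event Grid's own event schema; the topic endpoint, key, and payload are hypothetical:

from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

# Endpoint and access key for a (hypothetical) custom topic
client = EventGridPublisherClient(
    "https://my-topic.westeurope-1.eventgrid.azure.net/api/events",
    AzureKeyCredential("<topic-access-key>"),
)

# Publish one event using Event Grid's proprietary schema;
# the SDK also supports the CloudEvents format
client.send([
    EventGridEvent(
        subject="orders/1234",
        event_type="Example.Shop.OrderCreated",
        data={"orderId": "1234", "total": 99.95},
        data_version="1.0",
    )
])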

Event targets in Event Grid are called event handlers and there are several to choose from, including:

  • Webhooks
  • Cloud functions running in Azure Functions
  • Azure Event Hubs, an Apache Kafka-compatible data streaming tool
  • Azure Queue Storage
  • Back into Event Grid itself.

Processing and storage with Azure Event Grid

Event Grid gives you three ways to make sure events end up where you need them:

  • Namespaces: Group together related resources, such as topics, clients, and permissions. You might have one namespace per app or environment (test, staging, production), for example.
  • Topics: Each event belongs to a topic, with event handlers subscribing to particular topics. That lets you categorize events based on attributes such as their source and their type. For example, temperature updates from IoT devices might all belong to one topic and user authentication events to another.
  • Filters: You can define event filters in Event Grid to filter events in or out depending on the contents of the event, such as the subject, the event type, and fields within the event JSON payload. That gives you finer-grained control than topics alone.

Event Grid doesn't offer complex in-flight processing, though. Instead, you'd need to route events out to an external processor and back into Event Grid. Nor does Event Grid offer event storage, beyond what it needs to enable retries. All event data is automatically deleted after no more than 24 hours.

Azure Event Grid scaling and resilience

You can tune when Event Grid sends events to particular event handlers, choosing between sending each event individually or in batches. If an endpoint is unavailable or does not acknowledge receipt, Event Grid will try to redeliver for up to 24 hours. After that time, failed events are sent to a dead-letter blob in Azure Storage.

Event Grid scales with demand, but there are some quotas you need to bear in mind, such as a maximum of 100 custom topics and 5,000 inbound events per second. Some limits can be increased on request.

Azure Event Grid security and compliance

Azure Event Grid uses Azure role-based access control, offering a number of built-in roles specifically for Event Grid, as well as the option to create custom roles for granular access. You can authenticate either through the Microsoft Entra ID system (formerly Active Directory) or by using access keys or shared access tokens.

Data is encrypted in transit using TLS and at rest using Microsoft-managed keys. Endpoint validation is available when using the CloudEvents 1.0 standard.

Event Grid complies with multiple regulations, including HIPAA and FedRAMP High.

Azure Event Grid observability

Alerting and metrics are available directly in the Event Grid dashboard. Azure's observability tool, Azure Monitor, also ingests metrics generated by Event Grid, such as the number of:

  • events in the dead letter queue
  • successful and failed event deliveries
  • active connections
  • milliseconds taken to publish an event.

For a full view of the lifecycle of an event, you'll need to pull together metrics from each constituent part of the pipeline.

You can also use external observability tools such as Datadog and New Relic.

Azure Event Grid developer experience

What is the developer experience with Event Grid like? Here are some of the key points to consider:

  • Documentation: The Event Grid documentation provides quickstarts, concept explanations, goal-based tutorials, and code samples for the product's main concepts. Microsoft Learn also has a number of training modules that cover Event Grid.
  • Community: Blog posts, videos, and Stack Overflow answers cover many aspects of Event Grid, making it easy to find resources outside Azure's official channels.
  • Local development: The Azure CLI provides access to work with topics and events from your development machine. Although there are one or two unmaintained GitHub repos that offer local Event Grid emulation, there doesn't appear to be a widely used tool for local development.

Building an event gateway with Azure Event Grid

As a pub-sub queue, Event Grid can act as the transport mechanism of your event gateway, but you'll need to combine it with other tools to store events, achieve more sophisticated routing, and enrich or transform events.

Capability | How Event Grid fares | How to plug the gaps

Data sources
  • HTTP and MQTT data from external sources
  • Integrations with some Azure services
  • Offers a handful of third-party integrations

Routing
  • Consumers subscribe to topics
  • Namespaces group resources together
  • Filters use event data and metadata to offer more control over which events go to which endpoints

Complex processing
  • No in-flight processing; route to other tools
  • Plug the gaps: route to Azure Functions, Azure's hosted Spark, or other processing tools

Storage
  • Stores events for up to 24 hours to enable retries
  • Plug the gaps: for more storage options, route events to Azure Storage or Azure database products

Scalability
  • Serverless with some fixed quotas

Resilience
  • Retries failed deliveries for up to 24 hours, then sends events to a dead letter blob for manual processing
  • Plug the gaps: create a custom process for handling dead-lettered events

Security
  • Authentication handled by Microsoft Entra ID
  • Complies with HIPAA and FedRAMP High
  • AES-256 encryption at rest and TLS in transit

Observability
  • Alerting and monitoring in the Event Grid dashboard
  • Integrates with Azure Monitor and external tools, including New Relic and Datadog

Delivery
  • Triggers external webhooks
  • Sends events to Azure Functions, Azure Event Hubs, Azure Queue Storage, and back into Event Grid

Google Eventarc

Google Eventarc is a part of the Google Cloud Platform that ingests events and routes them to destinations you choose. It uses a point-to-point model, routing messages according to rules you define rather than via topics.

Google Eventarc integrations

Google Eventarc consumes events from:

  • Google products: Eventarc consumes events from other Google products in one of two ways. Some, such as Google Pub/Sub, integrate directly with Eventarc, while for others Eventarc consumes events by reading their audit logs.
  • Partner products: Third-party products, such as Datadog, have their own connectors for Eventarc, although they typically use Pub/Sub as an intermediary.
  • Public API: The Eventarc REST API enables you to send events into Eventarc from external systems and from your own code running within the Google Cloud Platform.

Like Azure Event Grid, Google Eventarc uses the CloudEvents standard to format event data.

When it comes to event destinations, Eventarc favors the Google ecosystem, with targets such as:

  • Cloud Functions: Serverless functions, which allow you to forward events on to external APIs or back into Eventarc, as well as running your own application logic in response to events.
  • Cloud Run: An auto-scaling, managed container service for deploying your own code and services.
  • Google Cloud Workflows: Google's serverless automation platform that lets you create workflows visually and using code. This is one way of triggering external APIs using Eventarc.

Also available, but in a pre-GA condition with limited support, are:

  • Google Kubernetes Engine (GKE): Trigger public and private endpoints running in your Kubernetes cluster.
  • Internal HTTP endpoints in a Virtual Private Cloud (VPC) network: For example, if you're running a load balancer on Google's platform.

Processing and storage with Google Eventarc

Eventarc's focus is on delivering events rather than processing their contents. As such, Eventarc does not offer in-flight enrichment or transformations. Instead, you can define triggers that filter inbound events according to their source, type, and custom attributes, and then route them according to rules that you define.

For example, you could specify a trigger that listens for new image files in Google Cloud Storage and routes them to a Cloud Function that adds a watermark, then sends them back into Eventarc for storage in a different Cloud Storage bucket.
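On the receiving end, Eventarc delivers events to HTTP targets such as Cloud Run in CloudEvents format. Here's a hedged sketch of a minimal Flask service that reads the CloudEvents attributes from the ce-* headers; the handler logic is illustrative only:

from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle_event():
    # In CloudEvents binary mode, event metadata arrives as ce-* HTTP headers
    event_type = request.headers.get("ce-type")    # e.g. google.cloud.storage.object.v1.finalized
    source = request.headers.get("ce-source")
    payload = request.get_json(silent=True)        # the event data itself

    print(f"Received {event_type} from {source}: {payload}")
    # Respond with a 2xx status so Eventarc treats the event as delivered
    return ("", 204)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)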

Eventarc stores events for a default period of 24 hours; events are deleted once delivered successfully, and undeliverable events can be redirected to a dead letter topic.

Google Eventarc scaling and resilience

Eventarc inherits its scaling and fault tolerance from the underlying Pub/Sub system. By default, it retains messages for 24 hours and will retry unacknowledged sends at increasingly long intervals. You also have the option of sending undelivered messages to a dead letter topic in Pub/Sub. Also, like Pub/Sub, Eventarc guarantees at-least-once delivery.

As a serverless tool, Eventarc should scale with demand, but it does have some limits. The maximum event size is 512KB, for example, and the maximum number of triggers is 500.

Google Eventarc security and compliance

Eventarc complies with HIPAA, PCI DSS, SOC 1 through 3, and various ISO standards.

Authenticating events from Google sources makes use of Google Cloud's IAM system. External sources will authenticate with Pub/Sub before their events are delivered into Eventarc. Eventarc uses a Google-provided service account to access destination services, such as Cloud Run and Cloud Functions. You then grant the service account appropriate roles in the destination service's IAM settings to allow Eventarc to invoke them.

Eventarc encrypts your data at rest using Google-managed keys and you can also encrypt using your own keys. Eventarc data falls under Google's standard in-transit encryption.

Google Eventarc observability

Google Cloud Logging ingests log events from Eventarc. In turn, you can use Google Cloud Monitoring to observe metrics and alert on specific events in those logs.

Google Eventarc developer experience

As a developer working with Google Eventarc, what should you expect?

  • Documentation: Google provides guides, reference documentation, training, and other resources including code samples on GitHub.
  • Community: Google Trends indicates that Eventarc generates significantly less search interest than EventBridge. And although there are community-built resources for Eventarc, they are considerably fewer than those available for Amazon's or Azure's alternatives.
  • Local development: The gcloud CLI supports Eventarc, but at the time of writing there is no officially supported way to run or emulate Eventarc locally.

Building an event gateway with Google Eventarc

Eventarc offers basic point-to-point routing for events but no complex processing or storage. However, you can relatively easily ingest events from external sources thanks to its public API and integrations with other Google products, such as Google Pub/Sub.

Capability | How Eventarc fares | How to plug the gaps

Data sources
  • HTTP data from external sources
  • Integrations with some Google services
  • Offers a number of partner integrations

Routing
  • Filter and route events based on contents and metadata
  • Plug the gaps: supplement with Google Pub/Sub for more sophisticated routing

Complex processing
  • Not offered
  • Plug the gaps: route to Google Cloud Functions for processing

Storage
  • Stores events for up to 24 hours to enable retries
  • Plug the gaps: for more storage options, route events to Google Cloud Storage or Google database products

Scalability
  • Serverless with some fixed quotas

Resilience
  • Retries failed deliveries for up to 24 hours, then sends events to a dead letter topic for manual processing
  • Plug the gaps: create a custom process for handling the dead letter topic

Security
  • Authentication handled by Google Cloud's IAM system
  • Complies with HIPAA, PCI DSS, SOC 1 through 3, and various ISO standards
  • Google-managed or customer-managed key encryption at rest and standard GCP in-transit encryption

Observability
  • Integrates with Google Cloud Logging and Google Cloud Monitoring

Delivery
  • Sends events to Google Cloud Functions, Google Cloud Run, and Google Cloud Workflows
  • Limited, pre-GA support for Google Kubernetes Engine and internal HTTP endpoints within a Virtual Private Cloud network
  • Plug the gaps: route events to Google Cloud Functions that can then call external HTTP endpoints

Managed Kafka

Kafka is not a fully featured event gateway solution. However, since it's often the first component engineers think of when building an EDA, we've included it in this comparison.

Apache Kafka is a publish-subscribe data streaming platform that ingests events by writing them to an immutable log. That log is divided into topics, and consumers subscribe to a topic by reading its log. Perhaps the most important characteristic of Kafka is that it operates in real time at scale.

As an open source project, Apache Kafka is freely available for you to host and manage. The cost of running it yourself, however, is that Kafka is hard to configure and manage. Hosted Kafka services, such as those offered by Confluent and Amazon, promise to take care of Kafka's DevOps burden, and some add functionality that's not available in the open source version.

For this comparison, we'll look at Confluent's managed Kafka service. It is, arguably, the best-known dedicated Kafka cloud offering and it originates from the team that created the first iteration of Kafka at LinkedIn.

Kafka integrations

Kafka has a rich ecosystem, with connectors available for other open source tools such as Apache Spark, proprietary software including Snowflake, and cloud services offered by the major public cloud providers. Confluent adds to open source Kafka's connectors, offering its own integrations with services such as Salesforce.

If there isn't already a connector for the data sources or destinations that you want to work with, you can create your own using Kafka Connect, which is a framework for creating pipelines between external systems and Kafka.

If you prefer, Confluent offers a REST proxy for ingesting events via HTTP requests.
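Whichever ingress route you choose, the underlying operation is publishing a message to a topic. Here's a minimal sketch using the confluent-kafka Python client; the broker address, topic, and payload are hypothetical:

import json
from confluent_kafka import Producer

# Connect to a (hypothetical) Kafka cluster
producer = Producer({"bootstrap.servers": "broker-1.example.com:9092"})

def on_delivery(err, msg):
    # Called once the broker acknowledges, or rejects, the message
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [partition {msg.partition()}]")

# Publish an event, keyed so that related events land on the same partition
producer.produce(
    "orders",
    key="order-1234",
    value=json.dumps({"orderId": "1234", "total": 99.95}),
    callback=on_delivery,
)
producer.flush()  # block until outstanding messages are delivered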

Kafka processing and storage

Kafka's role is to get data from one place to another. However, there are a couple of ways to process data in-flight:

  • kSQL: By offering a SQL interface to Kafka, kSQL lets you query, transform, filter, and join streaming data.
  • Kafka Streams: You can use Java or Scala to create applications that process data from Kafka in real time.

If neither of those options is suitable, many teams use Kafka solely for moving data and tools such as Apache Spark and Apache Flink for processing that data.

Thanks to the immutable log at Kafka's heart, you can store events for as long as you need, provided you have the storage capacity.
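Because the log is durable, replaying events is simply a matter of where a consumer starts reading. A hedged sketch with the same hypothetical cluster, where a new consumer group reads the topic from the beginning:

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker-1.example.com:9092",  # hypothetical cluster
    "group.id": "order-processor",
    "auto.offset.reset": "earliest",  # new groups start from the oldest retained event
})
consumer.subscribe(["orders"])

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None:
        continue  # no message within the timeout
    if msg.error():
        print(f"Consumer error: {msg.error()}")
        continue
    print(f"Offset {msg.offset()}: {msg.value().decode()}")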

Kafka scaling and resilience

Kafka was built to handle enormous volumes of data. It has a shared-nothing architecture, meaning that each server in a Kafka cluster operates independently without sharing storage or other resources. Kafka divides the immutable log into partitions, replicates those partitions, and then distributes them across the cluster.

The result is that Kafka can scale easily by adding more servers. Similarly, if one server goes offline, the cluster can redistribute the workload amongst those that remain. Confluent's hosted service uses that to scale your Kafka cluster automatically to meet demand.

To prevent noisy producers or consumers from overloading the Kafka cluster, you can set rate limits according to network and CPU usage.

Kafka security and compliance

Confluent's managed Kafka service meets SOC 1 through 3, HIPAA, PCI, and other regulatory standards.

You can encrypt data at rest using your own key, and data in motion is also encrypted. For added security, you can also use secure private networking to separate your data in-flight from other cloud customers.

When it comes to securing data sources and destinations (producers and consumers in Kafka's terminology), open-source Kafka lacks built-in authentication mechanisms. Confluent, however, offers SASL and mTLS.

Kafka observability

Confluent Kafka offers a monitoring dashboard, as well as additional alerting in a product called Health+.

Open source and Confluent Kafka both support the open source OpenTelemetry standard, which allows you to trace events as they pass through Kafka and to monitor your Kafka infrastructure. That enables integrations with observability tools including Honeycomb, Datadog, New Relic, and AppDynamics.

Kafka developer experience

Kafka's developer experience differs substantially from the other options we're comparing. Partly that's because Kafka's history is as an open source project that you run on your own infrastructure. However, it is also due to the complexity of Kafka itself.

  • Documentation: Confluent offers extensive documentation for its managed Kafka service, as well as for the open source Kafka project. The Confluent Community Hub offers a range of resources, including blog posts, webinars, and a forum.
  • Community: Kafka has a large and active community, with many resources available on GitHub, Stack Overflow, and other platforms. Confluent also offers a community Slack channel for users of its managed service.
  • Local development: You can run Kafka on your local machine using the Confluent CLI, which is a wrapper around the open source Kafka CLI. However, running Kafka locally can be resource-intensive and complex.

Building an event gateway with Kafka

Kafka, in both its open source and commercially managed guises, provides an ecosystem that can deliver some of the core features of a highly flexible and powerful event gateway solution.

However, Kafka's scope and sophistication mean that it can add significantly to your DevOps burden, as well as your application's complexity, and the ecosystem is mostly focused around data connectivity and not application interoperability.

Capability | How Confluent Kafka fares | How to plug the gaps

Data sources
  • Rich ecosystem of connectors
  • HTTP REST proxy

Routing
  • Pub-sub topics
  • Filtering in kSQL and Kafka Streams

Complex processing
  • SQL-based processing using kSQL
  • Java or Scala application code using Kafka Streams
  • Plug the gaps: optionally route to Apache Spark or other tools if you don't want to write Java code or use kSQL

Storage
  • Tunable retention

Scalability
  • Potentially infinitely scalable
  • Confluent's managed service offers autoscaling

Resilience
  • Replicates data across multiple nodes and redistributes the workload in the event of a failure
  • Rate limiting by network or CPU usage, with add-ons allowing other types of throttling
  • Kafka uses a pull delivery model, so consumption rates are down to the consumer
  • Plug the gaps: no built-in support for retries, and dead letter queues require a custom process

Security
  • Encrypt data at rest using your own key
  • Data is encrypted in transit
  • Meets SOC 1 through 3, HIPAA, PCI DSS, and other standards
  • Plug the gaps: data source and delivery authentication relies on support within the ecosystem, and custom connectors may need to be built

Observability
  • Monitoring dashboard and alerts
  • OpenTelemetry support enables integration with a large number of monitoring tools

Delivery
  • Ecosystem of connectors, though HTTP delivery that matches event gateway requirements may require a custom implementation

Hookdeck

Hookdeck is a point-to-point, all-in-one event gateway built to receive, process, and deliver messages across your event-driven architecture. The biggest difference between Hookdeck and the other tools we've reviewed is that Hookdeck provides all of the capabilities you need in an event gateway.

That makes Hookdeck extremely versatile, supporting use cases including:

  • Inbound webhook infrastructure: Reliably receive and ingest webhooks at scale, without having to worry about traffic spikes or writing retry logic.
  • Outbound webhook infrastructure: Trigger once and Hookdeck manages the queuing and routing of payloads to their final destinations at the rate you choose, with automatic retries.
  • Asynchronous API gateway: Ingest events at scale through REST API endpoints.
  • Route events between third parties: Take messages from external sources, transform the payloads as needed, and route them to external destinations according to the rules you define.
  • Message brokering for serverless applications: Route events between your edge functions and third-party API platforms, with greater flexibility than the native cloud provider solutions we reviewed above and reduced complexity compared to Kafka.

Hookdeck integrations

Hookdeck streamlines event management by giving you an HTTP endpoint for each event source you need to ingest. That way, anything that can make an HTTP request can act as a data source in Hookdeck.

Similarly, data destinations are also HTTP endpoints.
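Because a Hookdeck source is just an HTTP endpoint, ingesting an event is a plain HTTP request. Here's a minimal sketch using the requests library; the source URL and payload are hypothetical:

import requests

# Each Hookdeck source exposes its own HTTP endpoint (this URL is hypothetical)
SOURCE_URL = "https://hkdk.events/xxxxxxxxxxxx"

response = requests.post(
    SOURCE_URL,
    json={"event": "order.created", "orderId": "1234", "total": 99.95},
    timeout=5,
)

# Hookdeck acknowledges ingestion; delivery to destinations happens asynchronously
print(response.status_code)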

Processing and storage with Hookdeck

Hookdeck provides two ways to process events in-flight:

  • Transformations: Use JavaScript to perform arbitrary transformations on event data.
  • Filters: Route events based on the contents of their headers, body, query, and path.

Hookdeck's data retention depends on your plan. See Hookdeck pricing.

Hookdeck scaling and resilience

Hookdeck is a fully managed platform, meaning that it will scale to meet demand without asking you to add or configure capacity. Similarly, the platform manages uptime and failure recovery, promising 99.99% uptime.

When setting up a connection between an event source and a destination, Hookdeck lets you specify how often to retry failed deliveries and which interval to use between them. However, resilience isn't just about the event gateway itself. Hookdeck also allows you to specify rate limits for specific destinations, or pause events altogether on a destination-by-destination basis, so as not to overwhelm the systems you connect to.

Hookdeck security and compliance

There are three mechanisms of authentication available when using Hookdeck to connect to external data sources and destinations:

  • HTTP webhook handshaking and signature verification: Supports several third-party providers, such as GitHub, Shopify, Twilio, and Xero, as well as generic signing methods.
  • Basic auth: With a username and password.
  • API key: Provided in a configurable HTTP header or query parameter.

Hookdeck handles handshake negotiation using WebSub, REST Hooks, and several vendor-specific methods. To verify payloads, Hookdeck supports HMAC signature verification.
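To illustrate what HMAC signature verification involves in general terms, here's a hedged sketch of the receiving side; the header name and shared secret are hypothetical, and each provider defines its own exact scheme:

import hashlib
import hmac

def verify_signature(payload: bytes, received_signature: str, secret: str) -> bool:
    # Compute the expected HMAC-SHA256 digest and compare it to the signature header
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information to an attacker
    return hmac.compare_digest(expected, received_signature)

# Example: verifying a (hypothetical) webhook request body
body = b'{"event": "order.created", "orderId": "1234"}'
signature = "..."  # value of a header such as X-Example-Signature
print(verify_signature(body, signature, "my-shared-secret"))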

The platform encrypts data both at rest and in-transit and meets GDPR, CCPA, CPPA, and SOC2 standards.

Hookdeck observability

As an all-in-one event gateway, Hookdeck makes it more straightforward to observe each event from source to destination than if you were constructing a solution from separate components. Through the dashboard, you can monitor metrics such as the total number of requests, error rate, and response latency. Hookdeck also enables you to export observability metrics to Datadog, with other integrations planned.

Hookdeck developer experience

Hookdeck's focus on HTTP ingress (e.g. webhooks) and HTTP egress significantly simplifies the developer experience compared to the other tools in this comparison. But what is it actually like to work with Hookdeck?

  • Documentation: Hookdeck provides extensive documentation offering quickstarts, guides, and reference materials.
  • Community: There is a growing community of developers using Hookdeck and sharing their experiences. The platform also has a community Slack channel where you can ask questions and get help.
  • Local development: Hookdeck simplifies local development by providing a CLI that you can use to interact with the platform from your development machine. The CLI allows you to develop, test, and troubleshoot your integration with Hookdeck.

Hookdeck is a complete event gateway

Each of the other tools we reviewed above provides just a part of what you need to create an event gateway. Amazon EventBridge relies on Amazon API Gateway to ingest HTTP requests, while there's no way to enrich or transform events in-flight with Azure Event Grid.

Hookdeck, on the other hand, ingests events from any HTTP source and then securely processes, routes, and delivers them at scale.

Capability | What Hookdeck offers
Data sources
  • Ingest events from any source that can make an HTTP request
Routing
  • Many-to-one, one-to-one, and one-to-many routing
Complex processing
  • Arbitrary processing using JavaScript
Storage
  • Depending on the plan you choose, stores events for up to 30 days
Scalability
  • Serverless, with throughput quotas based on the plan you choose
Resilience
  • Configurable retries of failed deliveries for up to one week
  • At-least-once delivery
Security
  • HTTP webhook signature verification
  • Basic auth
  • API key
  • Direct auth support for third parties
  • Complies with GDPR, CCPA, CPPA, and SOC2 standards
Observability
  • Built-in metrics dashboard and metrics export to Datadog
Delivery
  • Delivery by calling external HTTP endpoints

Amazon EventBridge vs. Azure Event Grid vs. Google Eventarc vs. Confluent Kafka vs. Hookdeck

The tooling you choose to build your event-driven architecture must give you access to a rich variety of data sources, allow you to make logic-based routing decisions, offer ways to process and transform data in-flight, securely and reliably deliver your events, and provide you with mechanisms for handling failure and recovery.

But, as we've seen, fulfilling each of those requirements might mean combining multiple tools, along with the operational complexity that brings. Of the five solutions we've considered in the comparison, it's clear there are three broad routes you can take:

  • Combine tools from your public cloud vendor: Amazon EventBridge, Azure Event Grid, and Google Eventarc each solves only a portion of the problem. While it can be convenient to work in the same ecosystem as other parts of your cloud infrastructure, those benefits could be outweighed by the extra work required to find and fill the gaps. For example, Eventarc offers no in-flight event transformations, so you need to write and manage custom Cloud Functions instead.
  • Use a managed streaming data platform: Managed Kafka services, along with Amazon's Kinesis, offer a hugely scalable, almost infinitely configurable way to move and store vast amounts of data. But that power comes at the cost of complexity and the high likelihood of custom connectors. Configuring, operating, and writing custom Kafka Streams code can be a significant drain on your team's resources.
  • Use a dedicated, all-in-one event gateway tool: Hookdeck gives you an end-to-end tool to ingest, process, and deliver the events your application's architecture needs. As a serverless solution, it leaves you to focus on delivering value to end users rather than configuring and troubleshooting complex systems.

Let's summarize how Amazon EventBridge, Azure Event Grid, Google Eventarc, Confluent Kafka, and Hookdeck match up to what we need from an event gateway.

Data sources

  • Amazon EventBridge: Direct AWS integrations; some direct 3rd-party integrations; requires API Gateway or another intermediary for most external sources
  • Azure Event Grid: Direct Azure integrations; some direct 3rd-party integrations; MQTT and HTTP interfaces to ingest data
  • Google Eventarc: Public API; direct Google integrations; some direct 3rd-party integrations
  • Confluent Kafka: Wide variety of data-focused direct connectors; Kafka Connect for creating custom connectors
  • Hookdeck: Ingest from any HTTP source, including direct integration with generic and platform-specific webhooks

Data destinations

  • Amazon EventBridge: Deliver to external HTTP endpoints; direct AWS integrations
  • Azure Event Grid: Deliver to external HTTP endpoints; Azure Functions; Azure Event Hubs; Azure Queue Storage; back into Event Grid
  • Google Eventarc: Google Cloud Functions; Google Cloud Run; Google Cloud Workflows
  • Confluent Kafka: Wide variety of data-focused direct connectors; Kafka Connect for creating custom connectors
  • Hookdeck: Outbound webhooks; deliver to external HTTP endpoints; API calls; serverless function invocation

Rate limiting

  • Amazon EventBridge: Configurable when delivering to external HTTP endpoints, but risks losing events over 24 hours old
  • Azure Event Grid: Must use another service to throttle delivery
  • Google Eventarc: Must use another service to throttle delivery
  • Confluent Kafka: Network bandwidth and CPU utilization thresholds; the pull delivery model means consumption rates are down to the consumer
  • Hookdeck: User-configurable rate limits per destination

Messaging and routing

  • Amazon EventBridge: Many to many; one to many; source and content-based filtering and routing
  • Azure Event Grid: Pub-sub; topic-based routing; source, subject, and content filtering
  • Google Eventarc: Pub-sub and point to point; source, type, and attribute routing
  • Confluent Kafka: Pub-sub built on an immutable event log; producers publish to topics and consumers subscribe to them
  • Hookdeck: One to one; one to many; many to one; many to many

Transformations

  • Amazon EventBridge: Route to Lambda functions or other AWS products
  • Azure Event Grid: Route to Azure Functions or other Azure products
  • Google Eventarc: Route to Cloud Functions or other Google products
  • Confluent Kafka: kSQL provides a SQL interface to query and transform data in-flight; Kafka Streams provides a Java/Scala SDK for in-flight processing
  • Hookdeck: In-flight transformations with your own JavaScript code

Filtering

  • Amazon EventBridge: Filter on event data and metadata
  • Azure Event Grid: Filter on event data and metadata
  • Google Eventarc: Filter on event data and metadata
  • Confluent Kafka: Write Java code using Kafka Streams to filter events
  • Hookdeck: Filter on headers, body, query, or path

Scheduling

  • Amazon EventBridge: The related EventBridge Scheduler tool allows for scheduling outbound events
  • Azure Event Grid: Route to an external service to delay or schedule delivery
  • Google Eventarc: Route to an external service to delay or schedule delivery
  • Confluent Kafka: Write Java code using Kafka Streams to schedule events
  • Hookdeck: Delay event delivery by a configurable time

Data persistence

  • Amazon EventBridge: Up to 24 hours to enable retries
  • Azure Event Grid: Up to 24 hours to enable retries
  • Google Eventarc: Up to 24 hours to enable retries
  • Confluent Kafka: Configurable, depending on storage space
  • Hookdeck: Up to one week to enable retries

Scaling strategy

  • Amazon EventBridge: Serverless with negotiable quotas
  • Azure Event Grid: Serverless, but with some limits, such as a maximum of 5,000 inbound events per second
  • Google Eventarc: Serverless, but with some limits, such as a maximum event size of 512KB
  • Confluent Kafka: Massively scalable; Confluent's managed service includes auto-scaling
  • Hookdeck: Serverless with some quotas

Observability

  • Amazon EventBridge: Works with Amazon CloudWatch and can output to some external observability tools
  • Azure Event Grid: Basic monitoring in the dashboard; also works with Azure Monitor and can output to some external observability tools
  • Google Eventarc: Works with Google Cloud Logging and Google Cloud Monitoring
  • Confluent Kafka: Built-in metrics dashboard; supports OpenTelemetry to push data to multiple observability tools
  • Hookdeck: Built-in metrics dashboard and output to Datadog

Error recovery

  • Amazon EventBridge: Retries failed deliveries for up to 24 hours, then sends events to a dead letter queue for manual processing
  • Azure Event Grid: Retries failed deliveries for up to 24 hours, then sends events to a dead letter queue for manual processing
  • Google Eventarc: Retries failed deliveries for up to 24 hours, then sends events to a dead letter topic for manual processing
  • Confluent Kafka: Persists data to disk, allowing you to choose how to handle failed deliveries; dead letter queues require custom configuration; retries require custom functionality
  • Hookdeck: Retries failed deliveries on a configurable schedule up to 50 times automatically, with manual retries available for up to one week thereafter

Authentication

  • Amazon EventBridge: Relies on AWS's IAM system
  • Azure Event Grid: Relies on Microsoft's Entra ID system
  • Google Eventarc: Relies on Google's IAM system
  • Confluent Kafka: Various forms of client authentication, focused on data source and destination connectivity
  • Hookdeck: Multiple forms of client authentication

Handshake negotiation

  • Amazon EventBridge: Doesn't support receipt of data from webhook endpoints, though Lambdas can be set up with some specific third-party vendor support
  • Azure Event Grid: Endpoint validation with CloudEvents v1.0
  • Google Eventarc: Doesn't support receipt of data from webhook endpoints
  • Confluent Kafka: Doesn't support receipt of data from webhook endpoints
  • Hookdeck: WebSub, REST Hooks, and various vendor-specific methods

Verification

  • Amazon EventBridge: Doesn't support receipt of data from webhook endpoints; outbound webhooks can be verified with basic auth, OAuth, or API key credentials
  • Azure Event Grid: Available through CloudEvents v1.0
  • Google Eventarc: Doesn't support receipt of data from webhook endpoints
  • Confluent Kafka: Doesn't support receipt of data from webhook endpoints
  • Hookdeck: HMAC signature verification and direct support for specific third-party providers

Standards certification

  • Amazon EventBridge: SOC 1 through 3, PCI
  • Azure Event Grid: HIPAA, FedRAMP, and others depending on configuration choices
  • Google Eventarc: SOC 1 through 3, HIPAA, PCI, various ISO standards
  • Confluent Kafka: SOC 1 through 3, HIPAA, PCI
  • Hookdeck: GDPR, CCPA, CPPA, and SOC 2

With the above information, you can make an informed decision about which event gateway is best for your event-driven architecture: Amazon EventBridge, Azure Event Grid, Google Eventarc, Confluent Kafka, or Hookdeck.

Hookdeck is an all-in-one event gateway. It's designed to handle the complexities of event-driven architectures, so you can focus on building your applications and the rest of your architecture.

Try Hookdeck for free or share your EDA requirements and book a demo

A reliable backbone for your Event-Driven Applications