How to Implement Webhook Idempotency

Most webhook providers operate on an "at least once" delivery guarantee. The key phrase here is "at least" — you will eventually get the same webhook multiple times. Your application needs to be built to handle those scenarios to maintain data integrity.

What is idempotency?

In computing, when repeating the same action results in the same outcome, we call it idempotent. One common example you have probably encountered is the HTTP PUT vs the HTTP POST methods.

The distinction between the two is that PUT denotes an idempotent action. Updating an inventory count, setting a profile's first name, or assigning an order to a customer can be repeated multiple times in a row without creating new or extra resources.

A POST, however, implies side effects. If an endpoint creates a new order entry, every call to it creates another entry, even if the request contains the same properties.
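The contrast can be sketched in a few lines. This is an illustration only, with a plain object standing in for a datastore and no real HTTP involved; the function names are hypothetical:

```javascript
// In-memory stand-in for a datastore
const store = { profiles: {}, orders: [] };

// PUT-style update: repeating it leaves the store in the same state
const putProfileName = (id, firstName) => {
  store.profiles[id] = { firstName };
};

// POST-style create: every call appends a new entry
const postOrder = (properties) => {
  store.orders.push({ ...properties });
};

putProfileName("u1", "Ada");
putProfileName("u1", "Ada"); // repeating the PUT: still one profile, same state
postOrder({ sku: "shirt" });
postOrder({ sku: "shirt" }); // repeating the POST: a second, duplicate order
```

Run the PUT a hundred times and the store looks the same; run the POST twice and you have a duplicate.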

Because webhooks are standardized around HTTP POST calls, it's up to you to figure out what is idempotent by nature, versus what needs to be built to be idempotent. In most cases, the burden will fall on you.


When to build for idempotency

Generally, events that either create a new resource or cause side-effects in other systems are the trickiest to handle. You wouldn't want to create the same order multiple times because you got the same webhook from Shopify twice. You could also be causing side effects, like sending an email when a product runs out of stock, which no one wants to do multiple times.

Those are cases where you would need to carefully audit your code to look for any areas where idempotency issues could arise, and then build strategies to make those webhook events idempotent.

Idempotency strategies

Enforcing a unique constraint inherited from the event data

In many cases, you'll have some unique ID that you can leverage to know if you've already performed the action for any given webhook request. For example, if you are indexing orders from a Shopify store on the orders/created webhook topic, you can use the order_id from Shopify as a unique property in your database.

In SQL, you might do something like:

CREATE TABLE orders (
    id text PRIMARY KEY,
    shopify_order_id text UNIQUE NOT NULL,
    [...]
);

That is the simplest solution if your database supports unique constraints. If, say, you also want to send an email to the customer, perform the INSERT before you send the email: since the unique constraint is your idempotency check, any side effects must happen after it. Lastly, make sure to catch the unique constraint violation and still return a 2XX status code, so the provider stops retrying a webhook you have already handled.
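The ordering described above can be sketched as follows. A Set stands in for the database's unique constraint so the example is self-contained, and `sentEmails` is a hypothetical placeholder for your email side effect:

```javascript
// Stand-in for the unique constraint on shopify_order_id
const seenOrderIds = new Set();
// Stand-in for the email side effect
const sentEmails = [];

const handleOrderCreated = (order) => {
  // 1. "INSERT" first; a duplicate plays the role of a unique violation
  if (seenOrderIds.has(order.order_id)) {
    // Already handled: acknowledge with a 2XX instead of re-running side effects
    return 200;
  }
  seenOrderIds.add(order.order_id);
  // 2. Only after the insert succeeds, perform side effects
  sentEmails.push(`Order ${order.order_id} confirmed`);
  return 200;
};

const first = handleOrderCreated({ order_id: "1001" });
const second = handleOrderCreated({ order_id: "1001" }); // duplicate delivery
```

Both calls return 200, but the email is only sent once.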

Tracking webhook history and handling status

In some cases the first strategy won't be available to you, for instance because you aren't storing any of the event data. Nonetheless, every provider includes some identifier for the webhook itself. In Shopify's case, the request contains an X-Shopify-Webhook-Id header. You can leverage that ID to track the status of the webhooks you receive.

Repeated requests for the same webhook will have the same webhook identifier.

To handle those scenarios, you will want to create a processed_webhooks table with a unique constraint on the ID.

CREATE TABLE processed_webhooks (
    id text PRIMARY KEY,
    [...]
);

The first thing to do when you receive the request is to store it in the table using the webhook's unique ID. Once your handler has completed successfully, update the row to a status of COMPLETED. If handling fails, remove the row so that subsequent delivery attempts can be processed.

You can wrap your webhook calls with a generic method that checks for idempotency. Here's an example using PostgreSQL, Express, and Node.js:

const express = require("express");
const { Pool } = require("pg");

const app = express();
app.use(express.json());
const client = new Pool();

const processWebhook = async (req, handler) => {
  // Extract the unique ID, using Shopify for this example.
  // Note: Express lowercases incoming header names.
  const unique_id = req.headers["x-shopify-webhook-id"];
  // Create a new entry for that webhook ID
  try {
    await client.query("INSERT INTO processed_webhooks (id) VALUES ($1)", [
      unique_id,
    ]);
  } catch (e) {
    // PostgreSQL code for a unique violation
    if (e.code === "23505") {
      // We already processed (or are processing) this webhook, return silently
      return true;
    }
    throw e;
  }
  try {
    // Call your method
    await handler(req.body);
    return true;
  } catch (e) {
    // Delete the entry on error to make sure the next attempt isn't ignored
    await client.query("DELETE FROM processed_webhooks WHERE id = $1", [
      unique_id,
    ]);
    throw e;
  }
};

app.post("/webhooks/order-created", (req, res) => {
  // Wrap your doSomething method to handle your webhook
  return processWebhook(req, doSomething)
    .then(() => res.sendStatus(200))
    .catch(() => res.sendStatus(500));
});

Retries and idempotency

Automatic retries are the primary source of duplicate webhook deliveries. When a delivery attempt fails — due to a timeout, server error, or network issue — the provider (or your webhook infrastructure) retries the same event. Without idempotent handlers, every retry triggers your business logic again.

This is why at-least-once delivery and idempotency are inseparable concepts. At-least-once delivery guarantees that every event reaches you, but the cost is potential duplicates. Idempotency makes those duplicates safe.

Key considerations for retry scenarios:

  • Timeouts are the most dangerous failure mode. If your handler processes an event successfully but takes too long to respond, the provider sees a timeout and retries. You've now processed the event, and a retry is incoming. Your idempotency check is the only thing preventing duplicate side effects.
  • Mark events as processed before executing side effects. If you send an email first and mark as processed second, a crash between those two steps means the retry will send the email again. Inserting the idempotency record first (within a transaction where possible) prevents this.
  • Set your idempotency TTL to exceed the retry window. If your provider retries for 48 hours, your deduplication cache must persist at least that long. Otherwise, late retries will slip through.
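The TTL point can be sketched with a small deduplication cache. Timestamps are passed in explicitly so the expiry logic is easy to follow; a real implementation would use Date.now() (or a store with native TTL such as Redis), and the 48-hour window is an assumed provider setting:

```javascript
const RETRY_WINDOW_MS = 48 * 60 * 60 * 1000; // assumed provider retry window
const TTL_MS = RETRY_WINDOW_MS + 60 * 60 * 1000; // keep keys slightly longer

const seen = new Map(); // webhook id -> expiry timestamp

const isDuplicate = (id, now) => {
  const expiry = seen.get(id);
  // Key exists and hasn't expired: this is a duplicate delivery
  if (expiry !== undefined && expiry > now) return true;
  // First sighting (or expired key): record it and let processing proceed
  seen.set(id, now + TTL_MS);
  return false;
};

const firstDelivery = isDuplicate("wh_1", 0); // processed
const retryInWindow = isDuplicate("wh_1", RETRY_WINDOW_MS); // skipped
const lateRetry = isDuplicate("wh_1", TTL_MS + 1); // key expired: processed again
```

Because TTL_MS exceeds the retry window, any retry the provider sends is caught; only a delivery arriving after the key expires would be processed again.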

Replay and idempotency

Event replay — deliberately re-delivering events for debugging or recovery — is one of the most powerful tools in webhook infrastructure. After an outage, a bug fix, or a configuration change, you can replay all failed events to recover without data loss.

Replay is only safe when your handlers are idempotent. Without idempotency, replaying events that were partially processed could cause duplicate side effects. With idempotency, replay is always safe — your handlers detect already-processed events and skip them.

Practical implications:

  • Replay after a bug fix: You fix a handler bug and replay the events that failed. Some of those events may have been partially processed before the bug caused a failure. Idempotent handlers ensure only the unprocessed parts are completed.
  • Replay after an outage: Your service was down for an hour. Events queued during the outage are replayed. Some may have been delivered but not acknowledged. Idempotency prevents double-processing.
  • Replay for debugging: You replay a specific event to reproduce an issue in staging. Idempotency ensures no production side effects if you accidentally replay against the wrong environment.
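The replay scenarios above reduce to one property: running the full event stream through an idempotent handler a second time leaves state unchanged. A minimal sketch, with a hypothetical inventory-decrement handler and illustrative event shapes:

```javascript
const processed = new Set();
const inventory = { shirt: 10 };

const applyEvent = (event) => {
  if (processed.has(event.id)) return; // already handled: skip
  processed.add(event.id);
  inventory[event.sku] -= event.quantity; // the side effect runs at most once
};

const events = [
  { id: "evt_1", sku: "shirt", quantity: 2 },
  { id: "evt_2", sku: "shirt", quantity: 1 },
];

events.forEach(applyEvent); // initial delivery
events.forEach(applyEvent); // full replay after an outage: state unchanged
```

After the replay the inventory still reads 7, not 4, because each event was applied exactly once.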

Storage strategies for idempotency keys

The storage mechanism for your idempotency keys involves tradeoffs between speed, durability, and operational complexity.

| Strategy | Speed | Durability | TTL support | Best for |
| --- | --- | --- | --- | --- |
| Database (PostgreSQL, MySQL) | Moderate | High (survives restarts) | Manual (cleanup job) | Transactional workloads, financial operations |
| Redis / Memcached | Fast | Semi-durable (configurable persistence) | Native TTL | High-throughput systems, most webhook use cases |
| In-memory (Map, Set) | Fastest | None (lost on restart) | Manual | Single-instance apps, non-critical operations |

Database storage is the right choice when idempotency must be transactional — for example, when you need to atomically insert an idempotency record and update a balance in the same database transaction. The trade-off is latency: a database lookup on every webhook adds a few milliseconds.
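The atomic case might look like the following sketch, assuming PostgreSQL; the `accounts` table, its columns, and the literal IDs are hypothetical placeholders:

```sql
BEGIN;
-- Fails with a unique violation (23505) if this event was already applied,
-- aborting the transaction before the balance is touched
INSERT INTO processed_webhooks (id) VALUES ('wh_123');
-- Business update commits atomically with the idempotency record
UPDATE accounts SET balance = balance - 25.00 WHERE id = 'acct_1';
COMMIT;
```

Either both rows change or neither does, so a crash between the two statements cannot leave the idempotency record without its corresponding balance update.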

Redis/cache storage is the most common choice for webhook idempotency. It's fast enough for high-throughput scenarios, supports native TTL for automatic cleanup, and provides durability through persistence configuration (AOF or RDB snapshots). Use Redis when your idempotency check doesn't need to be in the same transaction as your business logic.

In-memory storage should only be used for non-critical operations where occasional duplicate processing is acceptable. A server restart clears the cache, so any events retried after a restart will be processed again. This is fine for cache invalidation or logging, but not for financial transactions.

For more on how delivery guarantees interact with idempotency, see Webhook Delivery Guarantees. For a comprehensive approach to webhook reliability including retries, replay, and observability, see Taking Control of Your Webhook Reliability.

FAQs

What is webhook idempotency?

Webhook idempotency means that processing the same webhook event multiple times produces the same result as processing it once. This is essential because webhook providers use at-least-once delivery, meaning duplicate events are expected. Idempotent handlers prevent duplicates from causing unintended side effects like double charges or duplicate notifications.

Why is idempotency important for webhooks?

Without idempotency, duplicate webhook deliveries — caused by retries, network issues, or provider behavior — can lead to double-charging customers, sending duplicate emails, or corrupting data. Idempotency ensures that no matter how many times an event is delivered, your application processes it safely.

How do I implement idempotent webhook processing?

Use a unique identifier from the event (such as a payment ID, order ID, or webhook event ID) to check whether the event has already been processed. Store processed event IDs in a database with a unique constraint or in a cache like Redis, and skip processing if the ID already exists.

What happens if I don't implement idempotency?

Without idempotency, every duplicate webhook delivery triggers your business logic again. This can result in double charges, duplicate inventory deductions, multiple notification emails, inflated analytics, and corrupted data — any of which can damage customer trust and revenue.

How do retries affect idempotency?

Retries are the primary source of duplicate webhook deliveries. When a delivery fails and is retried, your handler receives the same event again. Without idempotency, the retry causes duplicate processing. With idempotency, the retry is safely detected and skipped.

What is the best storage strategy for idempotency keys?

The best strategy depends on your requirements. Database storage (PostgreSQL, MySQL) provides durability and transactional safety. Redis provides fast lookups with configurable TTL for automatic cleanup. In-memory storage is fastest but volatile — suitable only for single-instance applications where occasional duplicate processing is acceptable.