Anatomy of a Good Webhook Payload
Webhooks are deceptively simple. An event fires, an HTTP request goes out, a consumer does something with the data. Three steps. What could go wrong?
Quite a lot, as it turns out — and most of it traces back to what you put in the payload.
The payload is the contract between your system and every integration built on top of it. Get the structure right and developers will build against your webhooks confidently, with minimal support overhead. Get it wrong and you'll spend months fielding tickets from frustrated engineers who can't figure out why their integrations keep breaking.
This post walks through the decisions that matter when designing webhook payloads: what data to include, which headers and metadata to send alongside it, how to structure the schema for long-term sanity, and the mistakes that trip up even experienced teams.
Start with the envelope
Before thinking about the event-specific data, establish a consistent wrapper that every webhook from your system will use. Think of it as the envelope around the letter — it tells the consumer how to handle the message before they even open it.
A solid envelope typically includes four things: a unique event identifier, the event type, a timestamp recording when the event actually occurred, and a nested object containing the domain-specific data. For example:
{
  "id": "evt_8f3a2b1c9d4e",
  "type": "invoice.payment_failed",
  "occurred_at": "2026-02-16T14:23:07Z",
  "data": {
    "invoice_id": "inv_4k9x7m",
    "customer_id": "cust_r2d2c3",
    "amount_due": 4999,
    "currency": "usd",
    "failure_reason": "card_declined"
  }
}
The separation matters. By keeping infrastructure-level metadata (the event ID, the type, the timestamp) at the top level and business data nested inside data, you let consumers route and deduplicate messages without needing to understand the shape of every event type. It also means you can add new metadata fields down the road without touching the data schema itself.
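As a concrete illustration, here's a minimal consumer-side dispatcher in Python (the handler name is hypothetical) that routes entirely on the envelope, never inspecting the shape of data itself:

import json

# Stub handler for illustration; a real handler would do actual work.
def handle_payment_failure(data: dict) -> None:
    print(f"invoice {data['invoice_id']} failed: {data['failure_reason']}")

# Routing keys are top-level envelope fields only; each handler is the
# single place that needs to understand its event's data shape.
HANDLERS = {
    "invoice.payment_failed": handle_payment_failure,
}

def process_webhook(raw_body: bytes) -> None:
    event = json.loads(raw_body)
    handler = HANDLERS.get(event["type"])
    if handler is not None:
        handler(event["data"])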
Headers that consumers actually need
The payload body gets most of the attention, but headers carry critical context that determines whether a consumer can trust, verify, and correctly process a webhook. Skimp on them and you force developers to guess.
Content-Type should always be application/json (or whatever format you use). This sounds obvious, but some providers omit it or send incorrect values, which breaks automatic parsing in many frameworks.
A signature header (commonly something like X-Webhook-Signature or Webhook-Signature) is non-negotiable for production systems. The standard approach is HMAC-SHA256: compute a hash over the raw payload body using a shared secret, and send the result in this header. Consumers recompute the hash on their end and compare. If it doesn't match, they reject the request. Without this, anyone who discovers the endpoint URL can send fabricated events.
A timestamp header (e.g., X-Webhook-Timestamp) serves two purposes. First, it tells consumers when the event was dispatched, which may differ from when it occurred. Second (and more importantly), it's a defense against replay attacks. Consumers can reject requests with timestamps older than a few minutes, and you should include the timestamp in the signature computation so it can't be tampered with.
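Here's a minimal verification sketch in Python covering both the signature check and the freshness check. It assumes the signature covers timestamp.body and that the timestamp header is Unix epoch seconds; header formats vary by provider, so treat the specifics as illustrative:

import hashlib
import hmac
import time

TOLERANCE_SECONDS = 300  # reject anything older than five minutes

def verify_webhook(raw_body: bytes, timestamp: str, signature: str, secret: bytes) -> bool:
    # Freshness check: a stale timestamp suggests a replayed request.
    if abs(time.time() - int(timestamp)) > TOLERANCE_SECONDS:
        return False
    # The timestamp is part of the signed content, so an attacker
    # can't swap in a fresh one without invalidating the signature.
    signed_content = timestamp.encode() + b"." + raw_body
    expected = hmac.new(secret, signed_content, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)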
A delivery ID or idempotency key (e.g., X-Webhook-Delivery-Id) gives each delivery attempt a unique identifier. This is distinct from the event ID in the body, because the same event might be delivered multiple times due to retries. Consumers use this to detect and discard duplicate deliveries.
A schema version indicator — either as a header like X-Webhook-Version or embedded in the payload — tells consumers which version of the payload format they're receiving. This becomes essential when your payloads evolve over time.
A well-constructed webhook request, then, looks something like this from the consumer's perspective: a handful of clearly named headers that let them verify authenticity, check freshness, deduplicate, and understand format — all before they parse a single byte of the JSON body.
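For example (values illustrative):

POST /webhooks/payments HTTP/1.1
Content-Type: application/json
X-Webhook-Signature: a91c3f...
X-Webhook-Timestamp: 1771251787
X-Webhook-Delivery-Id: del_7h2k9p
X-Webhook-Version: 2026-02-01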
Deciding how much data to include
This is the most consequential design decision you'll face: should the payload contain the full resource (a "fat" payload), just an identifier pointing to the resource (a "thin" payload), or something in between?
Fat payloads send the complete object. When a customer's address changes, you send the entire customer record. The upside is self-sufficiency — consumers have everything they need to react without calling your API again. This is valuable when latency matters, when consumers process events asynchronously and your API might be temporarily unreachable, or when you want to reduce the load of follow-up GET requests on your infrastructure. Payment processors tend toward this approach because the downstream systems handling transactions often can't afford an extra network round-trip.
Thin payloads send only identifiers and the event type. For that same address change, you'd send the customer ID and customer.updated, and consumers fetch the current state via your API. This is cleaner when resources are large, when data changes rapidly between event and processing time, or when you want to guarantee that consumers always see the latest state rather than a potentially stale snapshot.
Most teams end up somewhere in the middle. A pragmatic approach is to include the fields that consumers are most likely to need for immediate routing and decision-making, while leaving the full resource available via API. If a consumer gets an order.shipped event, they probably need the order ID, the tracking number, and the shipping carrier right away — but they can fetch the full line-item breakdown later if they need it.
One hybrid pattern worth considering: include a snapshot URL in the payload — a signed, time-limited link that returns the resource state at the exact moment the event occurred. Consumers can choose whether to use the snapshot or hit your live API for the current state.
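Concretely, a hybrid order.shipped payload (field names and URLs are illustrative) might carry just enough to act on, plus pointers to the rest:

{
  "id": "evt_2c7f9e1a4b6d",
  "type": "order.shipped",
  "occurred_at": "2026-02-16T15:02:44Z",
  "data": {
    "order_id": "ord_9q4w1z",
    "tracking_number": "1Z999AA10123456784",
    "carrier": "ups",
    "order_url": "https://api.example.com/v1/orders/ord_9q4w1z",
    "snapshot_url": "https://api.example.com/v1/snapshots/snap_x8t3?expires=1771254000&sig=b3f1..."
  }
}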
If you need more help deciding, we weigh up the pros and cons of fat vs thin events in more detail in our guide.
Fields that earn their place in every payload
Regardless of whether you lean fat or thin, certain fields should appear in every webhook you send.
Event ID (id): A globally unique identifier for the event. Consumers store these to implement idempotent processing — if they see the same event ID twice, they skip it. Use a format that's reasonably collision-resistant (UUIDs, KSUIDs, or prefixed IDs like evt_ followed by a random string).
Event type (type): A dot-notated string like invoice.paid or user.subscription.canceled that tells consumers exactly what happened. This is how consumers decide which handler to invoke. Use a consistent naming convention — resource.action in past tense is a common pattern that reads well.
Timestamp (occurred_at or created_at): When the event actually happened in your system, in ISO 8601 format with a timezone (ideally UTC). This is different from the delivery timestamp in the headers. Consumers use this for ordering, auditing, and display. Use a clear field name that distinguishes event time from delivery time.
The data object (data): The event-specific payload. For resource lifecycle events, this is typically the resource itself (or a subset). For status changes, include both the new state and enough context for the consumer to react.
Previous state (for update events): Consider including a previous or changes object that shows what the values were before the update. If a consumer only cares about email changes and the update was to a display name, they can skip processing entirely. Without this, every update event triggers the same heavy processing regardless of what actually changed.
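For instance, a customer.updated payload with a previous object (field names illustrative) lets a consumer that only cares about email changes bail out early on everything else:

{
  "id": "evt_6b1d8c3f2a7e",
  "type": "customer.updated",
  "occurred_at": "2026-02-16T16:11:30Z",
  "data": {
    "customer_id": "cust_r2d2c3",
    "email": "new@example.com",
    "display_name": "Ada L.",
    "previous": {
      "email": "old@example.com"
    }
  }
}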
Versioning: the decision you'll wish you'd made earlier
Your payloads will change. Fields will be added, types will shift, structures will be reorganized. If you don't have a versioning strategy from day one, you'll break consumer integrations the first time you ship a schema change — and those consumers won't be gentle about it.
There are a few workable approaches.
Global API versioning is the simplest. Consumers subscribe at a specific version (e.g., 2026-02-01), and all webhooks they receive use that version's schema until they explicitly upgrade. This is the approach Stripe uses, and it works well because it gives consumers full control over when they adopt breaking changes. The cost is that you need to maintain multiple payload formats simultaneously.
Additive-only evolution means you commit to never removing or renaming fields — only adding new ones. Consumers ignore fields they don't recognize. This avoids the overhead of maintaining multiple versions, but it accumulates dead weight over time and breaks down the moment you genuinely need to rename or restructure something.
Explicit schema versions per event type let you version individual event schemas independently. An order.created payload might be at v3 while user.updated is still at v1. This is more granular but also more complex to manage.
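In that model, the version typically rides along in each payload (or in a header), so a consumer can pick the right parser per event. Something like:

{
  "id": "evt_1a2b3c4d5e6f",
  "type": "order.created",
  "version": 3,
  "occurred_at": "2026-02-16T17:45:12Z",
  "data": { "order_id": "ord_5j8n2v" }
}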
Whichever approach you choose, the important thing is to choose one and document it clearly before your first consumer goes live. Include your compatibility guarantees (what constitutes a breaking change?), a deprecation policy (how long do old versions stick around?), and migration guides when you do introduce new versions.
Common mistakes that frustrate developers
Across countless developer forum threads, support tickets, and post-mortems, the same payload design mistakes come up again and again. Here are the ones that cause the most grief.
Inconsistent naming and structure across event types
When order.created nests the order under data.order but order.updated puts it directly under data, consumers can't write generic handling logic. They end up with brittle, event-specific parsing code that breaks every time you add a new event type. Pick a structure and stick with it.
Similarly, if your API uses camelCase but your webhooks use snake_case (or worse, a mix), developers waste time on needless translation. Match whatever convention your API already uses.
No event ID, or non-unique event IDs
Without a reliable event ID, consumers have no way to implement idempotent processing. Since webhooks can and will be delivered more than once (retries are a fact of life), consumers who can't deduplicate will create duplicate records, send duplicate emails, or charge customers twice. This is the single most common source of webhook-related bugs in production systems.
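The fix is cheap. A sketch of idempotent processing in Python, assuming a Redis instance is available as the seen-ID store (the key format and retention window are illustrative):

import redis

r = redis.Redis()  # assumes a reachable Redis instance

def handle_event(event: dict) -> None:
    ...  # actual business logic goes here

def process_once(event: dict) -> None:
    # SET with nx=True succeeds only for the first writer, so even
    # concurrent duplicate deliveries can't both claim the event.
    key = f"webhook:seen:{event['id']}"
    if not r.set(key, 1, nx=True, ex=60 * 60 * 24 * 7):  # remember for 7 days
        return  # duplicate delivery; skip
    handle_event(event)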
Sending everything to everyone
If you fire every event type to every registered endpoint, consumers drown in irrelevant traffic. A consumer that only cares about payment.succeeded shouldn't have to receive and discard hundreds of inventory.adjusted events per hour. Let consumers subscribe to specific event types. Fewer irrelevant deliveries means less wasted compute, less noise, and better security posture (you're not streaming data to endpoints that don't need it).
Breaking changes without warning
Renaming a field from user_id to customer_id, changing an integer to a string, or restructuring nested objects — any of these will silently break consumer integrations. The worst part is that webhook consumers often don't have monitoring sophisticated enough to catch deserialization failures quickly. The breakage can go unnoticed for hours or days, causing data loss that's painful to recover from.
Missing or unreliable timestamps
Some providers include a timestamp that records when the webhook was sent rather than when the event occurred. After a retry backlog clears, consumers see a burst of events that all appear to have happened in the last minute, when they actually span hours. Always include the true event time, and make the field name unambiguous.
Oversized payloads
Stuffing every related object into the payload creates multi-megabyte messages that are slow to transmit, expensive to store, and prone to timeout failures at the consumer end. If a single event includes the order, all line items, all product details for those line items, the customer record, and the customer's full address history — you've gone too far. Include what's needed for immediate decision-making and let consumers fetch the rest.
No signature or a weak signing scheme
Using a single shared signing key across all consumers is a subtle but serious vulnerability. Consumer A could receive a legitimately signed webhook and forward it to Consumer B's endpoint, and it would pass verification. Use per-consumer signing keys. And stick with HMAC-SHA256 — it's widely supported and secure enough for this use case.
No plan for secret rotation
Even teams that get signing right often neglect rotation. Secrets leak, employees leave, compliance policies mandate periodic changes — and when the day comes to rotate a signing key, you discover there's no way to do it without breaking every active integration. The fix is straightforward: support a rotation window where both the old and new secrets are valid simultaneously. When a consumer triggers a rotation, generate a new secret, move the current one to a "previous" slot, and include signatures for both keys in the header for a configurable grace period (24 hours is a sensible default). Once the window expires, drop the old key. This turns a scary coordination problem into a routine operation.
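On the consumer side, rotation-aware verification is a small extension of the single-key check: try each candidate signature against each currently valid secret. A sketch, assuming multiple signatures arrive comma-separated in one header:

import hashlib
import hmac

def verify_any(raw_body: bytes, timestamp: str, signature_header: str,
               secrets: list[bytes]) -> bool:
    signed_content = timestamp.encode() + b"." + raw_body
    candidates = [s.strip() for s in signature_header.split(",")]
    # `secrets` holds the current key plus any previous key still
    # inside its rotation grace period.
    for secret in secrets:
        expected = hmac.new(secret, signed_content, hashlib.sha256).hexdigest()
        if any(hmac.compare_digest(expected, c) for c in candidates):
            return True
    return False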
How Outpost puts this into practice
It's one thing to list best practices in the abstract. It's another to see how a real system implements them. Outpost, Hookdeck's open-source webhook delivery system, bakes many of these patterns directly into its infrastructure — so you get sensible defaults without having to build the plumbing yourself.
The event structure follows the envelope pattern. When you publish an event to Outpost, you provide a topic (the event type), a data object (the business payload), an optional id, a time timestamp, and a metadata map for arbitrary key-value pairs. The structure maps to the envelope we described earlier — infrastructure metadata at the top level, business data nested inside data.
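Putting those fields together, a published event might look something like this (an illustrative sketch based on the fields described above, not the exact API shape; see the Outpost docs for that):

{
  "id": "evt_8f3a2b1c9d4e",
  "topic": "invoice.payment_failed",
  "time": "2026-02-16T14:23:07Z",
  "metadata": { "source": "billing-service" },
  "data": {
    "invoice_id": "inv_4k9x7m",
    "failure_reason": "card_declined"
  }
}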
Headers are handled automatically. When Outpost delivers a webhook, it sends x-outpost-id, x-outpost-timestamp, x-outpost-topic, and x-outpost-signature as headers — covering the event ID, timestamp, event type, and signature that we recommend. Any key-value pairs in the event's metadata field are also translated into additional headers, giving publishers a clean way to attach per-event context (like a source service name) without polluting the payload body.
Signing is per-destination with HMAC-SHA256. Each destination gets its own signing secret (auto-generated if you don't provide one). The signature is computed over ${timestamp}.${body}, which means the timestamp is included in the signed content, protecting against replay attacks. This also avoids the shared-key vulnerability we flagged, since every destination's secret is unique.
Secret rotation is built in. When you rotate a signing secret, Outpost keeps both the old and new secrets valid during a configurable window (24 hours by default). Both signatures are included in the header during rotation, so consumers have time to switch over without any dropped deliveries. Having the dual-key grace period handled at the infrastructure level means you don't need to build the rotation machinery yourself.
Topic-based subscriptions filter irrelevant events. Destinations subscribe to specific topics, so consumers only receive what they've opted into.
Standard Webhooks mode is a toggle away. If your consumers expect the Standard Webhooks specification, Outpost can switch to it with a single environment variable. Headers become webhook-id, webhook-timestamp, and webhook-signature in the standard format, and consumers can verify with the official Standard Webhooks SDKs.
Event fanout handles multi-consumer delivery. A single published event is automatically replicated and sent to every eligible destination, with independent retry tracking per destination. This means one consumer's endpoint being down doesn't affect delivery to others.
Outpost handles the hardest parts of webhook delivery infrastructure — signing, header management, per-destination secrets, retry logic, topic filtering, and secret rotation — and does so with sensible defaults.
Putting it all together
None of this advice is exotic. It's the boring, careful infrastructure work that separates webhook systems developers trust from the ones they dread integrating with. And the earlier you make these decisions, the less painful they are to enforce — because once consumers have built against your payload format, changing it becomes a negotiation rather than a refactor.