Best Practices for Working With Thin Events
Thin events are webhook payloads that contain minimal information—typically just an event type and a resource identifier—rather than the complete resource data. This guide covers practical techniques for implementing thin events effectively in production systems.
For a comprehensive introduction to thin events, see What Are Thin Events?.
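To make the distinction concrete, here is a hypothetical thin event next to the full ("fat") payload for the same change. The field names are illustrative only; every provider shapes its payloads differently:

```javascript
// A hypothetical thin event: just enough to identify what changed
const thinEvent = {
  id: 'evt_123',
  event_type: 'resource.updated',
  resource_type: 'orders',
  resource_id: 'ord_456',
  created_at: '2024-01-15T10:30:00Z'
};

// The equivalent "fat" event embeds a full resource snapshot
const fatEvent = {
  id: 'evt_123',
  event_type: 'resource.updated',
  created_at: '2024-01-15T10:30:00Z',
  data: {
    id: 'ord_456',
    status: 'shipped',
    customer: { id: 'cus_789', email: 'jane@example.com' },
    line_items: [{ sku: 'SKU-1', quantity: 2, price: 1999 }]
  }
};

// The thin payload stays small and stable regardless of how the
// resource schema grows
const sizeRatio =
  JSON.stringify(fatEvent).length / JSON.stringify(thinEvent).length;
```

The thin payload's size is fixed by its identifiers, while the fat payload grows with every field added to the resource.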
Why and When to Use Thin Events
Benefits of Thin Events
Payload Size Reduction: Thin events significantly reduce webhook payload sizes, which can improve network performance and reduce bandwidth costs, especially when dealing with high-volume webhook traffic.
Schema Stability: By including only essential identifiers, thin events minimize the impact of API changes. Your webhook handlers remain stable even when the underlying resource structure evolves.
High Volume Handling: Smaller payloads mean faster transmission and processing, making thin events ideal for high-throughput scenarios where you're receiving thousands of webhooks per minute.
Data Freshness: Since you fetch the resource when you process it, you always get the current state rather than potentially stale data that was captured when the event was generated.
Security and Privacy: Thin events reduce the exposure of sensitive data in transit and in logs, as detailed information is only fetched when needed and over authenticated API calls.
When to Use Thin Events
Thin events are particularly well-suited for:
- High-volume event streams where payload size and processing speed matter
- Resource-heavy events where the full payload would be several kilobytes or more
- Rapidly evolving APIs where schema changes are frequent
- Security-sensitive scenarios where you want to minimize data exposure
- Eventually consistent systems where fetching fresh data is preferred
When NOT to Use Thin Events
Consider alternatives to thin events when:
- Critical events require immediate processing and an additional API call would introduce unacceptable latency
- The webhook provider's API is unreliable, or its rate limits would prevent you from fetching resources consistently
- Your application needs to process events offline or without external API access
- The event represents a deleted resource that can no longer be fetched
- The additional API calls would significantly increase costs or complexity
The Fetch-Before-Process Pattern
The fetch-before-process pattern is the core workflow for handling thin events. It consists of four key steps:
1. Receive the thin event webhook
2. Fetch the full resource using the identifier from the event
3. Validate that the fetched resource matches your processing expectations
4. Process the complete resource data
For an in-depth exploration of this pattern, see Webhooks Fetch Before Process Pattern.
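The four steps above can be sketched end-to-end. The handler below is a minimal illustration; `fetchResource` is passed in as a placeholder for your own API client:

```javascript
// Minimal sketch of fetch-before-process. fetchResource is a placeholder
// for your own provider API client.
async function handleThinEvent(event, fetchResource) {
  // 1. Receive: the thin event carries only identifiers
  const { event_type, resource_type, resource_id } = event;

  // 2. Fetch: retrieve the full resource from the provider's API
  const resource = await fetchResource(resource_type, resource_id);

  // 3. Validate: make sure what we fetched matches expectations
  if (!resource || resource.id !== resource_id) {
    return { processed: false, reason: 'resource missing or mismatched' };
  }

  // 4. Process: act on the complete, current resource state
  return { processed: true, event_type, resource };
}
```

The sections below expand each step with the error handling a production system needs.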
Implementation Steps
Receive and Parse the Event
When your webhook endpoint receives a thin event, extract the essential information:
app.post('/webhooks/provider', async (req, res) => {
const { event_type, resource_id, resource_type } = req.body;
try {
// Queue the event for processing
await queue.enqueue({
event_type,
resource_id,
resource_type,
received_at: new Date().toISOString()
});
// Only acknowledge after successful queueing
res.status(200).send('OK');
} catch (err) {
console.error('Failed to queue event:', err);
res.status(500).send('Failed to process event');
}
});
Fetch the Full Resource
Use the resource identifier to fetch complete data from the provider's API:
async function fetchResource(resourceType, resourceId) {
  const response = await fetch(
    `https://api.provider.com/${resourceType}/${resourceId}`,
    {
      headers: {
        'Authorization': `Bearer ${API_KEY}`,
        'Content-Type': 'application/json'
      }
    }
  );
  if (!response.ok) {
    // Attach the status and headers so retry logic can inspect them
    const error = new Error(`Failed to fetch resource: ${response.status}`);
    error.status = response.status;
    error.headers = response.headers;
    throw error;
  }
  return response.json();
}
Handle Fetch Failures
Resource fetches can fail for various reasons. Implement robust error handling:
async function fetchResourceWithRetry(resourceType, resourceId, maxRetries = 3) {
let lastError;
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
return await fetchResource(resourceType, resourceId);
} catch (error) {
lastError = error;
// Handle 404 - resource may not exist yet or was deleted
if (error.status === 404) {
if (attempt < maxRetries) {
// Wait briefly in case of eventual consistency
await sleep(Math.pow(2, attempt) * 1000);
continue;
}
// After exhausting retries, treat the resource as genuinely missing
console.warn(`Resource ${resourceId} not found after ${maxRetries} attempts`);
return null;
}
// Handle rate limiting
if (error.status === 429) {
const retryAfter = parseInt(error.headers?.get('Retry-After'), 10) || Math.pow(2, attempt);
await sleep(retryAfter * 1000);
continue;
}
// For other errors, use exponential backoff
if (attempt < maxRetries) {
await sleep(Math.pow(2, attempt) * 1000);
}
}
}
throw lastError;
}
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
Validate and Process
Once you've fetched the resource, validate it before processing:
async function processEvent(eventType, resourceId, resourceType) {
// For deletion events, resource may not be fetchable
if (eventType === 'resource.deleted') {
// Handle deletion with just the ID since resource is already deleted
return handleDeletion(resourceId);
}
// Fetch the resource for non-deletion events
const resource = await fetchResourceWithRetry(resourceType, resourceId);
if (!resource) {
// Log and skip if resource doesn't exist
console.warn(`Skipping event: resource ${resourceId} not found`);
return;
}
// Validate resource state matches event type
if (!isValidForEvent(resource, eventType)) {
console.warn(`Resource state mismatch for event ${eventType}`);
return;
}
// Process based on event type
switch (eventType) {
case 'resource.created':
return handleCreation(resource);
case 'resource.updated':
return handleUpdate(resource);
default:
console.warn(`Unknown event type: ${eventType}`);
}
}
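The example above calls an `isValidForEvent` helper without defining it. One possible sketch, assuming the resource exposes a `status` field (an assumption about your provider's schema, not a given):

```javascript
// Hypothetical validation: check that the fetched resource's state is
// plausible for the event we received. The status values are assumptions
// about the provider's schema; adjust them to match your API.
function isValidForEvent(resource, eventType) {
  switch (eventType) {
    case 'resource.created':
      // A freshly created resource should not already be archived
      return resource.status !== 'archived';
    case 'resource.updated':
      // Any live status is acceptable for an update
      return typeof resource.status === 'string';
    default:
      // Unknown event types fail closed
      return false;
  }
}
```

Failing closed on unknown event types means new events a provider ships later are logged and skipped rather than processed incorrectly.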
Handling Multiple Fetch Attempts
In distributed systems, eventual consistency can cause issues where a resource mentioned in a webhook doesn't exist yet in the API:
async function fetchWithEventualConsistency(resourceType, resourceId, options = {}) {
const {
maxRetries = 3,
initialDelay = 1000,
maxDelay = 10000
} = options;
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
const resource = await fetchResource(resourceType, resourceId);
if (resource) {
return resource;
}
} catch (error) {
  // Rethrow anything other than a 404 once retries are exhausted
  if (error.status !== 404 && attempt === maxRetries) {
    throw error;
  }
}
// Skip the final sleep once retries are exhausted
if (attempt === maxRetries) break;
// Calculate delay with exponential backoff and jitter
const delay = Math.min(
  initialDelay * Math.pow(2, attempt - 1) + Math.random() * 1000,
  maxDelay
);
await sleep(delay);
}
return null;
}
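The backoff-with-jitter calculation can be factored into a small pure helper, which makes the formula easy to unit test independently of any network calls:

```javascript
// Pure backoff calculation, factored out of the retry loop above:
// exponential growth from initialDelay, plus up to 1s of random jitter,
// capped at maxDelay. Jitter spreads out retries from many workers so
// they don't hammer the provider API in lockstep.
function backoffDelay(attempt, initialDelay = 1000, maxDelay = 10000) {
  return Math.min(
    initialDelay * Math.pow(2, attempt - 1) + Math.random() * 1000,
    maxDelay
  );
}
```

With the defaults, attempt 1 waits 1–2 seconds, attempt 2 waits 2–3 seconds, and every later attempt is capped at 10 seconds.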
Idempotency Considerations
Since webhooks can be delivered multiple times, ensure your fetch-before-process implementation is idempotent:
async function processEventIdempotently(eventId, eventType, resourceId, resourceType) {
// Check if this event has already been processed
const isProcessed = await checkIfProcessed(eventId);
if (isProcessed) {
console.log(`Event ${eventId} already processed, skipping`);
return;
}
// Mark as processing to prevent concurrent processing
await markAsProcessing(eventId);
try {
// Fetch and process
const resource = await fetchResourceWithRetry(resourceType, resourceId);
if (resource) {
await processResource(eventType, resource);
}
// Mark as successfully processed
await markAsProcessed(eventId);
} catch (error) {
// Mark as failed for retry
await markAsFailed(eventId, error);
throw error;
}
}
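The helpers above (`checkIfProcessed`, `markAsProcessing`, and friends) are left abstract. A minimal in-memory sketch of the state machine they imply follows; swap the Map for a database or Redis in production, since a per-process store gives no protection across workers:

```javascript
// In-memory sketch of the idempotency state store implied above.
// A Map is per-process only; use a shared store in production.
const eventStates = new Map(); // eventId -> 'processing' | 'processed' | 'failed'

async function checkIfProcessed(eventId) {
  return eventStates.get(eventId) === 'processed';
}

async function markAsProcessing(eventId) {
  eventStates.set(eventId, 'processing');
}

async function markAsProcessed(eventId) {
  eventStates.set(eventId, 'processed');
}

async function markAsFailed(eventId, error) {
  // Record failure so the event becomes eligible for retry
  eventStates.set(eventId, 'failed');
}
```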
Operational Best Practices
Deduplication
Implement deduplication to handle duplicate webhook deliveries:
// Using Redis for distributed deduplication
async function isDuplicate(eventId, ttl = 86400) {
  const key = `webhook:processed:${eventId}`;
  // An atomic set-if-absent avoids the race between checking and setting
  // when multiple workers receive the same delivery.
  // (node-redis v4 options shown; ioredis uses `set(key, '1', 'EX', ttl, 'NX')`)
  const result = await redis.set(key, '1', { NX: true, EX: ttl });
  // A null result means the key already existed: a duplicate delivery
  return result === null;
}
app.post('/webhooks/provider', async (req, res) => {
const { id, event_type, resource_id } = req.body;
// Check for duplicate
if (await isDuplicate(id)) {
return res.status(200).send('Already processed');
}
try {
// Queue the event for processing
await queue.enqueue({
id,
event_type,
resource_id,
received_at: new Date().toISOString()
});
// Only acknowledge after successful queueing
res.status(200).send('OK');
} catch (err) {
console.error('Failed to queue event:', err);
res.status(500).send('Failed to process event');
}
});
Idempotent Handlers
Design your processing logic to be idempotent:
async function handleUpdate(resource) {
// Use upsert operations that are naturally idempotent
await db.collection('resources').updateOne(
{ id: resource.id },
{
$set: {
...resource,
updatedAt: new Date()
}
},
{ upsert: true }
);
}
async function handleCreation(resource) {
// Use insertOne with error handling for duplicates
try {
await db.collection('resources').insertOne({
...resource,
createdAt: new Date()
});
} catch (error) {
if (error.code === 11000) { // Duplicate key error
// Already exists, treat as update
return handleUpdate(resource);
}
throw error;
}
}
Handling Downstream API Failures
When the fetch fails, implement appropriate error handling and retry logic:
async function processWithCircuitBreaker(eventType, resourceId, resourceType) {
// Check circuit breaker state
if (circuitBreaker.isOpen()) {
throw new Error('Circuit breaker is open, skipping fetch');
}
try {
const resource = await fetchResourceWithRetry(resourceType, resourceId);
circuitBreaker.recordSuccess();
return resource;
} catch (error) {
circuitBreaker.recordFailure();
// Implement fallback strategies
if (circuitBreaker.isOpen()) {
// Queue for later processing
await queueForRetry(eventType, resourceId, resourceType);
}
throw error;
}
}
// Simple circuit breaker implementation
class CircuitBreaker {
constructor(threshold = 5, timeout = 60000) {
this.failureCount = 0;
this.threshold = threshold;
this.timeout = timeout;
this.openedAt = null;
}
recordSuccess() {
this.failureCount = 0;
this.openedAt = null;
}
recordFailure() {
this.failureCount++;
if (this.failureCount >= this.threshold) {
this.openedAt = Date.now();
}
}
isOpen() {
if (!this.openedAt) return false;
if (Date.now() - this.openedAt > this.timeout) {
// Timeout elapsed: tentatively close; a single failure reopens
this.openedAt = null;
this.failureCount = this.threshold - 1;
return false;
}
}
return true;
}
}
const circuitBreaker = new CircuitBreaker();
Observability
Implement comprehensive logging and monitoring:
async function processEvent(eventType, resourceId, resourceType) {
const startTime = Date.now();
const logger = {
eventType,
resourceId,
resourceType
};
try {
console.log('Processing event', logger);
// Fetch resource
const fetchStart = Date.now();
const resource = await fetchResourceWithRetry(resourceType, resourceId);
const fetchDuration = Date.now() - fetchStart;
// Log fetch metrics
console.log('Resource fetched', {
...logger,
fetchDuration,
resourceFound: !!resource
});
if (!resource) {
console.warn('Resource not found', logger);
return;
}
// Process resource
await processResource(eventType, resource);
// Log success metrics
const totalDuration = Date.now() - startTime;
console.log('Event processed successfully', {
...logger,
totalDuration,
fetchDuration
});
// Send metrics to monitoring system
metrics.increment('webhooks.processed.success', {
eventType,
resourceType
});
metrics.timing('webhooks.processing.duration', totalDuration);
metrics.timing('webhooks.fetch.duration', fetchDuration);
} catch (error) {
const totalDuration = Date.now() - startTime;
console.error('Event processing failed', {
...logger,
error: error.message,
stack: error.stack,
totalDuration
});
// Send error metrics
metrics.increment('webhooks.processed.error', {
eventType,
resourceType,
errorType: error.name
});
throw error;
}
}
Rate Limiting Considerations
Be mindful of rate limits when fetching resources:
// Token bucket rate limiter
class RateLimiter {
constructor(tokensPerSecond, bucketSize) {
this.tokensPerSecond = tokensPerSecond;
this.bucketSize = bucketSize;
this.tokens = bucketSize;
this.lastRefill = Date.now();
}
async acquire() {
this.refill();
if (this.tokens < 1) {
const waitTime = (1 - this.tokens) * (1000 / this.tokensPerSecond);
await sleep(waitTime);
this.refill();
}
this.tokens -= 1;
}
refill() {
const now = Date.now();
const timePassed = (now - this.lastRefill) / 1000;
const newTokens = timePassed * this.tokensPerSecond;
this.tokens = Math.min(this.bucketSize, this.tokens + newTokens);
this.lastRefill = now;
}
}
const rateLimiter = new RateLimiter(10, 50); // 10 requests/sec, burst of 50
async function fetchResource(resourceType, resourceId) {
await rateLimiter.acquire();
const response = await fetch(
`https://api.provider.com/${resourceType}/${resourceId}`,
{
headers: {
'Authorization': `Bearer ${API_KEY}`,
'Content-Type': 'application/json'
}
}
);
if (!response.ok) {
throw new Error(`Failed to fetch resource: ${response.status}`);
}
return response.json();
}
Generic Code Examples
Basic Handler Structure
A complete example of a webhook handler for thin events:
const express = require('express');
const app = express();
app.use(express.json());
// In-memory store for processed events (use Redis in production)
const processedEvents = new Set();
app.post('/webhooks/provider', async (req, res) => {
const { id, event_type, resource_id, resource_type, timestamp } = req.body;
// Validate webhook signature (implementation depends on provider)
if (!validateSignature(req)) {
return res.status(401).send('Invalid signature');
}
// Acknowledge receipt immediately
res.status(200).send('OK');
// Process asynchronously
try {
await processWebhook(id, event_type, resource_id, resource_type);
} catch (error) {
console.error('Webhook processing error:', error);
// Implement dead letter queue for failed events
await sendToDeadLetterQueue({ id, event_type, resource_id, error });
}
});
async function processWebhook(eventId, eventType, resourceId, resourceType) {
// Check for duplicate
if (processedEvents.has(eventId)) {
console.log(`Event ${eventId} already processed`);
return;
}
// Mark as processing
processedEvents.add(eventId);
try {
// Fetch the resource
const resource = await fetchResourceWithRetry(resourceType, resourceId);
if (!resource) {
console.warn(`Resource ${resourceId} not found`);
return;
}
// Process based on event type
await processResource(eventType, resource);
console.log(`Successfully processed event ${eventId}`);
} catch (error) {
// Remove from processed set on failure to allow retry
processedEvents.delete(eventId);
throw error;
}
}
app.listen(3000, () => {
console.log('Webhook server listening on port 3000');
});
Fetch Pattern with Error Handling
A robust fetch implementation:
async function fetchResourceWithRetry(resourceType, resourceId, options = {}) {
const {
maxRetries = 3,
timeout = 5000,
retryableStatuses = [408, 429, 500, 502, 503, 504]
} = options;
let lastError;
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), timeout);
const response = await fetch(
`https://api.provider.com/${resourceType}/${resourceId}`,
{
headers: {
'Authorization': `Bearer ${process.env.API_KEY}`,
'Content-Type': 'application/json'
},
signal: controller.signal
}
);
clearTimeout(timeoutId);
// Success case
if (response.ok) {
return await response.json();
}
// Handle 404 - resource doesn't exist
if (response.status === 404) {
if (attempt < maxRetries) {
// Might be eventual consistency, retry with backoff
await sleep(Math.pow(2, attempt) * 1000);
continue;
}
return null; // Resource truly doesn't exist
}
// Handle rate limiting
if (response.status === 429) {
const retryAfter = response.headers.get('Retry-After');
const delay = retryAfter ? parseInt(retryAfter, 10) * 1000 : Math.pow(2, attempt) * 1000;
console.log(`Rate limited, retrying after ${delay}ms`);
await sleep(delay);
continue;
}
// Handle other retryable errors
if (retryableStatuses.includes(response.status) && attempt < maxRetries) {
await sleep(Math.pow(2, attempt) * 1000);
continue;
}
// Non-retryable error
throw new Error(`HTTP ${response.status}: ${await response.text()}`);
} catch (error) {
lastError = error;
// Surface a clearer error if the final attempt timed out
if (error.name === 'AbortError' && attempt === maxRetries) {
throw new Error(`Request timeout after ${timeout}ms`);
}
}
// Retry on network errors
if (attempt < maxRetries) {
await sleep(Math.pow(2, attempt) * 1000);
continue;
}
}
}
throw lastError;
}
Error and Retry Scenarios
Handling common error scenarios:
async function handleEventWithErrorRecovery(event) {
const { id, event_type, resource_id, resource_type } = event;
try {
// Attempt to process
const resource = await fetchResourceWithRetry(resource_type, resource_id);
if (!resource) {
return handleMissingResource(event);
}
await processResource(event_type, resource);
} catch (error) {
return handleProcessingError(event, error);
}
}
async function handleMissingResource(event) {
const { id, event_type, resource_id } = event;
// Check if this is a deletion event
if (event_type.endsWith('.deleted')) {
console.log(`Resource ${resource_id} deleted as expected`);
return await processResourceDeletion(resource_id);
}
// For other events, this might indicate eventual consistency
// Queue for retry after a delay
console.warn(`Resource ${resource_id} not found, queueing for retry`);
await queueForRetry(event, { delay: 30000, maxRetries: 3 });
}
async function handleProcessingError(event, error) {
console.error(`Failed to process event ${event.id}:`, error);
// Categorize error
if (isTransientError(error)) {
// Queue for retry
await queueForRetry(event, {
delay: 60000,
maxRetries: 5,
backoff: 'exponential'
});
} else {
// Permanent error - send to dead letter queue
await sendToDeadLetterQueue({
event,
error: error.message,
timestamp: new Date().toISOString()
});
}
}
function isTransientError(error) {
const transientErrors = [
'ETIMEDOUT',
'ECONNREFUSED',
'ECONNRESET',
'ENOTFOUND'
];
return transientErrors.some(code => error.code === code) ||
error.message.includes('timeout') ||
error.message.includes('rate limit');
}
// Simple queue implementation (use a proper queue service in production)
const retryQueue = [];
async function queueForRetry(event, options = {}) {
const {
delay = 60000,
maxRetries = 3,
backoff = 'exponential'
} = options;
const retryCount = event.retryCount || 0;
if (retryCount >= maxRetries) {
console.error(`Max retries exceeded for event ${event.id}`);
return await sendToDeadLetterQueue(event);
}
const retryDelay = backoff === 'exponential'
? delay * Math.pow(2, retryCount)
: delay;
setTimeout(async () => {
console.log(`Retrying event ${event.id} (attempt ${retryCount + 1})`);
await handleEventWithErrorRecovery({
...event,
retryCount: retryCount + 1
});
}, retryDelay);
}
async function sendToDeadLetterQueue(data) {
// Implement based on your infrastructure
// Could be: database table, S3, dedicated queue service, etc.
console.error('Sending to dead letter queue:', data);
// await deadLetterQueue.send(data);
}
Simplifying Thin Events with Hookdeck
As demonstrated throughout this guide, implementing thin events requires building and maintaining several complex operational components:
- Rate limiting infrastructure to prevent overwhelming provider APIs with fetch requests
- Queue systems for decoupling webhook ingestion from processing
- Deduplication logic to filter out redundant events before they trigger unnecessary API fetches
- Circuit breakers to handle provider API outages gracefully
- Observability tooling to monitor fetch failures and processing metrics
- Retry mechanisms with exponential backoff and dead letter queues
While these patterns are essential for production-grade thin event handling, they represent significant engineering effort to build, test, and maintain.
Hookdeck provides these capabilities out-of-the-box through its Event Gateway, eliminating the need to implement them yourself:
| Challenge | DIY Implementation | Hookdeck Solution |
|---|---|---|
| Rate Limiting | Build token bucket rate limiter, manage state, handle burst capacity | Configure max delivery rate on Destination - Event Gateway queues and throttles automatically |
| Deduplication | Implement Redis-backed deduplication, manage TTLs, handle field-based comparison | Configure deduplication rules with field inclusion/exclusion - no infrastructure needed |
| Queue Management | Deploy and maintain message queue infrastructure, handle backpressure | Built-in durable queue with automatic backpressure handling |
| Observability | Build logging, metrics collection, dashboards for fetch failures | Complete request timeline, metrics dashboard, and event replay built-in |
| Error Recovery | Implement retry logic, dead letter queues, manual replay tooling | Automatic retries, bulk replay, and issue tracking included |
| Circuit Breaking | Code and maintain circuit breaker pattern for provider APIs | Rate limiting prevents circuit breaker scenarios; automatic retry handles transient failures |
For a complete guide on using Hookdeck to handle thin events at scale, see Handling Thin Events with the Hookdeck Event Gateway.
When to Use Hookdeck vs. DIY Implementation
Choose Hookdeck when:
- You want to implement thin events without building queue and rate limiting infrastructure
- You need production-ready deduplication, observability, and retry capabilities immediately
- You're scaling to thousands or millions of webhooks and want proven infrastructure
- Your team should focus on business logic rather than webhook infrastructure
Consider DIY implementation when:
- You have unique requirements that require custom queueing logic
- You already have robust webhook infrastructure and want to extend it
- You need complete control over every aspect of the processing pipeline
For most teams implementing thin events, Hookdeck significantly reduces time-to-production while providing enterprise-grade reliability and observability.
Summary
Thin events offer significant benefits for webhook-based systems, but they require careful implementation. By following the fetch-before-process pattern and implementing robust error handling, deduplication, and observability, you can build reliable webhook handlers that scale with your application.
Key takeaways:
- Always acknowledge webhooks immediately and process asynchronously
- Implement retry logic with exponential backoff for fetch operations
- Handle eventual consistency with appropriate retry delays
- Make your handlers idempotent to safely handle duplicate deliveries
- Implement comprehensive observability to monitor performance and errors
- Use circuit breakers and rate limiters to protect downstream APIs
- Design error handling strategies for both transient and permanent failures
Gain control over your webhooks
Try Hookdeck to handle your webhook security, observability, queuing, routing, and error recovery.