Gareth Wilson

Guide to SparkPost Webhooks Features and Best Practices

SparkPost (now Bird Email) is one of the most widely used email delivery platforms, trusted by companies sending billions of emails per month. Beyond sending, SparkPost provides a comprehensive event system that lets you track every stage of an email's lifecycle, from injection through delivery, engagement, and beyond. Webhooks are the primary real-time mechanism for receiving this event data.

This guide covers everything you need to know about SparkPost webhooks: their features, how to configure them, best practices for production deployments, and the common pain points developers face along with solutions to address them.

What are SparkPost webhooks?

SparkPost webhooks (officially called Event Webhooks) are HTTP callbacks that deliver email event data to your endpoints in real time. Whenever a message is injected, delivered, bounced, opened, clicked, or triggers any other trackable event, SparkPost batches the event data and POSTs it to the URLs you configure.

SparkPost offers two ways to consume event data:

  • Event Webhooks (push) — SparkPost proactively streams event batches to your HTTP endpoint as events occur. This is the real-time option.
  • Events API (pull) — You query SparkPost's API to retrieve event data on your own schedule, with a 10-day retention window.

This guide focuses on Event Webhooks, as they're the primary integration for real-time email event processing.

SparkPost webhook features

  • Webhook configuration: SparkPost UI, REST API
  • Authentication methods: Basic Auth, OAuth 2.0, custom headers
  • Payload signing: Not available (no HMAC)
  • Timeout: 10-second hard timeout
  • Retry logic: Logarithmic backoff for up to 8 hours
  • Batch size: 1–350+ events per batch
  • Batch frequency: Every 30 seconds to 1 minute
  • Manual retry: Not available
  • Event log: Batch status available for 24 hours; Events API retains data for 10 days
  • Port support: Port 80 (HTTP) and 443 (HTTPS) only
  • Custom headers: Up to 5 custom headers (max 3,000 bytes total)

Supported event types

SparkPost webhooks cover the full email lifecycle across several event categories:

Message events

  • injection: Message received by or injected into SparkPost
  • delivery: Remote MTA acknowledged receipt of the message
  • bounce: Remote MTA permanently rejected the message
  • delay: Remote MTA temporarily rejected the message (4xx response)
  • out_of_band: Remote MTA initially accepted but later reported non-delivery
  • spam_complaint: Message classified as spam by the recipient or their provider
  • policy_rejection: SparkPost rejected the message due to policy

Generation events

  • generation_failure: Message generation failed for an intended recipient
  • generation_rejection: SparkPost rejected message generation due to policy

Engagement events

  • open: Recipient opened the message (tracking pixel rendered)
  • initial_open: First open of the message by the recipient
  • click: Recipient clicked a tracked link
  • amp_open: Recipient opened the AMP version of the message
  • amp_initial_open: First AMP open by the recipient
  • amp_click: Recipient clicked a link in the AMP version

Unsubscribe events

  • list_unsubscribe: Recipient used the List-Unsubscribe header in their mail client
  • link_unsubscribe: Recipient clicked an unsubscribe link in the message body

Relay events

  • relay_injection: Inbound relayed message received by SparkPost
  • relay_rejection: SparkPost rejected the relayed message
  • relay_delivery: Remote HTTP endpoint acknowledged receipt of relayed message
  • relay_tempfail: Remote HTTP endpoint temporarily failed to accept relayed message
  • relay_permfail: Relayed message exceeded maximum retry threshold

A/B testing events

  • ab_test_completed: A/B test campaign completed
  • ab_test_cancelled: A/B test campaign was cancelled

Webhook payload structure

SparkPost delivers events as JSON arrays, with each event wrapped in a type-specific envelope under the msys key. A single batch may contain anywhere from 1 to 350+ events.

[
  {
    "msys": {
      "message_event": {
        "type": "delivery",
        "event_id": "92356927693813856",
        "timestamp": "1727193624",
        "message_id": "000443ee5bf013ef5a44",
        "rcpt_to": "recipient@example.com",
        "friendly_from": "sender@yourdomain.com",
        "msg_from": "msprvs1=abc123@yourdomain.com",
        "subject": "Your order confirmation",
        "campaign_id": "order_confirmations",
        "ip_address": "10.0.0.1",
        "ip_pool": "shared",
        "sending_ip": "10.0.0.1",
        "mailbox_provider": "Gmail",
        "mailbox_provider_region": "US",
        "num_retries": "0",
        "msg_size": "2048",
        "click_tracking": true,
        "open_tracking": true,
        "rcpt_meta": {},
        "rcpt_tags": [],
        "transmission_id": "65832150921904138"
      }
    }
  },
  {
    "msys": {
      "track_event": {
        "type": "open",
        "event_id": "92356927693813900",
        "timestamp": "1727193724",
        "message_id": "000443ee5bf013ef5a44",
        "rcpt_to": "recipient@example.com",
        "user_agent": "Mozilla/5.0",
        "ip_address": "203.0.113.1",
        "geo_ip": {
          "country": "US",
          "region": "CA",
          "city": "San Francisco"
        }
      }
    }
  }
]

Key payload fields

  • event_id: Unique identifier for this specific event
  • type: Event type (e.g., delivery, bounce, open, click)
  • timestamp: Unix timestamp when the event occurred
  • message_id: SparkPost's unique identifier for the message
  • rcpt_to: Lowercase recipient email address
  • friendly_from: Display-friendly sender address
  • campaign_id: Campaign identifier (if set during transmission)
  • rcpt_meta: Custom metadata attached to the recipient
  • rcpt_tags: Tags associated with the recipient
  • transmission_id: Identifier of the transmission that generated this message
  • bounce_class: Bounce classification code (bounce events only)
  • error_code: SMTP error code from the remote MTA (bounce/delay events)
  • user_agent: Mail client user agent string (engagement events)
  • geo_ip: Geographic location derived from IP (engagement events)

Batch headers

Every webhook POST from SparkPost includes an X-MessageSystems-Batch-ID header that uniquely identifies each batch. This is critical for deduplication. If you receive a batch you've already processed, return HTTP 200 to stop SparkPost from retrying it.

Authentication options

SparkPost supports multiple authentication methods for webhook deliveries:

  • Basic Auth: Username and password included in the HTTP Authorization header
  • OAuth 2.0: Client credentials grant; SparkPost requests tokens from your authorization server
  • Custom Headers: Up to 5 custom headers (e.g., X-API-Key) included with each delivery
  • None: Default setting, no authentication (not recommended)

Basic Auth is the simplest option. SparkPost includes your configured username and password in the Authorization header of each webhook POST.

OAuth 2.0 is the more robust option. You provide SparkPost with your authorization server's URL, client ID, and client secret. SparkPost requests a token before each delivery and automatically refreshes it if your endpoint returns a 400 or 401 response.

Custom headers can supplement either method. You can configure up to 5 key-value pairs (with a combined size limit of 3,000 bytes) that SparkPost includes with every delivery. These are useful for passing API keys or tenant identifiers to multi-tenant endpoints.

SparkPost does not support HMAC payload signing. Unlike providers such as Stripe or GitHub that sign the payload body with a shared secret, SparkPost relies on transport-level authentication (Basic Auth, OAuth 2.0) and TLS to secure webhook deliveries.

Setting up SparkPost webhooks

Via the SparkPost UI

  1. Navigate to the Webhooks tab in the SparkPost dashboard.
  2. Click the New Webhook icon in the upper-right corner.
  3. Enter a descriptive name for the webhook.
  4. Set the Target URL — this must use port 80 (HTTP) or 443 (HTTPS).
  5. Choose event types:
    • Send all event types to receive every event
    • Let me choose event types to select specific events
  6. Configure authentication (Basic Auth, OAuth 2.0, or None).
  7. Optionally add custom headers for additional security.
  8. Click Save — SparkPost will send a test POST to validate your endpoint.

Your endpoint must respond with HTTP 200 to the test POST, or the webhook creation will fail. Once created, events begin flowing within approximately 1 minute.

Via the REST API

curl -X POST https://api.sparkpost.com/api/v1/webhooks \
  -H "Authorization: $SPARKPOST_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "My Event Webhook",
    "target": "https://your-endpoint.com/webhooks/sparkpost",
    "events": ["delivery", "bounce", "open", "click", "spam_complaint"],
    "auth_type": "basic",
    "auth_credentials": {
      "username": "webhook_user",
      "password": "your_secure_password"
    }
  }'

SparkPost will validate the target URL during creation. If the endpoint doesn't return HTTP 200, the API responds with HTTP 400 and the webhook is not created.

Best practices when working with SparkPost webhooks

Respond immediately, process asynchronously

SparkPost has a strict 10-second timeout. If your endpoint doesn't respond within that window, the batch is marked as failed and enters the retry queue. Always acknowledge the batch immediately and defer processing to a background worker.
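A minimal Express-style sketch of this pattern: the handler acknowledges before doing any work, and hands the batch to an injected `enqueue` function. The `enqueue` callback is an assumption standing in for whatever queue client you use (SQS, Redis, Kafka, etc.).

```javascript
// Sketch: acknowledge first, process later. `enqueue` is a placeholder
// for your real queue client -- the only rule is that it must not block.
function makeAckFastHandler(enqueue) {
  return (req, res) => {
    // Respond immediately so we stay well inside the 10-second window.
    res.status(200).send('OK');
    // Defer all real work (parsing, DB writes, API calls) to a worker.
    enqueue(req.body);
  };
}
```

Because the queue client is injected, the same handler works in tests with an in-memory stub and in production with a durable queue.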

Implement deduplication using batch and event IDs

SparkPost may redeliver batches during retries, so your handler must be idempotent. Use the X-MessageSystems-Batch-ID header for batch-level deduplication and the event_id field for event-level deduplication.

async function handleSparkPostWebhook(req) {
  const batchId = req.headers['x-messagesystems-batch-id'];

  // Batch-level deduplication
  const batchSeen = await redis.get(`sp:batch:${batchId}`);
  if (batchSeen) {
    console.log(`Duplicate batch ${batchId}, skipping`);
    return; // Already processed, return 200 to stop retries
  }

  for (const event of req.body) {
    const eventData = event.msys?.message_event || event.msys?.track_event;
    if (!eventData) continue;

    // Event-level deduplication
    const eventSeen = await redis.get(`sp:event:${eventData.event_id}`);
    if (eventSeen) continue;

    await processEvent(eventData);
    await redis.setex(`sp:event:${eventData.event_id}`, 86400, '1');
  }

  // Mark batch as processed
  await redis.setex(`sp:batch:${batchId}`, 86400, '1');
}

Verify authentication credentials

Since SparkPost doesn't support HMAC signing, verifying the configured authentication credentials is your primary defense against spoofed webhook deliveries.

function verifySparkPostAuth(req) {
  const authHeader = req.headers['authorization'];
  if (!authHeader || !authHeader.startsWith('Basic ')) return false;

  // Decode and verify Basic Auth credentials
  const encoded = authHeader.slice('Basic '.length);
  const decoded = Buffer.from(encoded, 'base64').toString('utf8');
  const [username, password] = decoded.split(':');

  return (
    username === process.env.SPARKPOST_WEBHOOK_USER &&
    password === process.env.SPARKPOST_WEBHOOK_PASSWORD
  );
}

app.post('/webhooks/sparkpost', (req, res) => {
  if (!verifySparkPostAuth(req)) {
    return res.status(401).send('Unauthorized');
  }

  res.status(200).send('OK');
  processEventsAsync(req.body);
});

Handle variable batch sizes

SparkPost batches range from 1 to 350+ events and can contain mixed event types. Design your handler to process each event individually while extracting the correct payload from the type-specific envelope.

function extractEventData(event) {
  const msys = event.msys || {};
  const envelope = msys.message_event
    || msys.track_event
    || msys.unsubscribe_event
    || msys.relay_event
    || msys.gen_event
    || msys.ab_test_event;

  return envelope || null;
}

async function processBatch(events) {
  const results = { processed: 0, skipped: 0, errors: 0 };

  for (const event of events) {
    const data = extractEventData(event);
    if (!data) {
      results.skipped++;
      continue;
    }

    try {
      switch (data.type) {
        case 'bounce':
        case 'out_of_band':
          await handleBounce(data);
          break;
        case 'delivery':
          await handleDelivery(data);
          break;
        case 'open':
        case 'initial_open':
          await handleOpen(data);
          break;
        case 'click':
          await handleClick(data);
          break;
        case 'spam_complaint':
          await handleComplaint(data);
          break;
        case 'list_unsubscribe':
        case 'link_unsubscribe':
          await handleUnsubscribe(data);
          break;
        default:
          await handleGenericEvent(data);
      }
      results.processed++;
    } catch (err) {
      console.error(`Error processing ${data.type} event ${data.event_id}:`, err);
      results.errors++;
    }
  }

  return results;
}

Use the Events API as a safety net

SparkPost's Events API retains data for 10 days, compared to the webhook retry window of just 8 hours. Use it to backfill any gaps caused by endpoint downtime or missed batches.

async function backfillMissedEvents(fromTimestamp, toTimestamp) {
  const params = new URLSearchParams({
    from: fromTimestamp,
    to: toTimestamp,
    per_page: '1000'
  });

  let cursor = null;
  do {
    if (cursor) params.set('cursor', cursor);

    const response = await fetch(
      `https://api.sparkpost.com/api/v1/events/message?${params}`,
      { headers: { 'Authorization': process.env.SPARKPOST_API_KEY } }
    );

    const data = await response.json();

    for (const event of data.results) {
      const eventSeen = await redis.get(`sp:event:${event.event_id}`);
      if (!eventSeen) {
        await processEvent(event);
        await redis.setex(`sp:event:${event.event_id}`, 86400, '1');
      }
    }

    cursor = data.links?.next ? new URL(data.links.next).searchParams.get('cursor') : null;
  } while (cursor);
}

SparkPost webhook limitations and pain points

Strict 10-second timeout

The Problem: SparkPost webhook deliveries have a hardcoded 10-second timeout that cannot be configured. If your endpoint doesn't respond with HTTP 200 within 10 seconds, the entire batch is marked as failed and enters the retry queue.

Why It Happens: The timeout is set at the SparkPost infrastructure level and is not exposed as a configurable parameter in the UI or API. It's designed to keep SparkPost's delivery pipeline moving efficiently at scale, but it leaves very little room for endpoints that perform any synchronous processing.

Workarounds:

  • Always return HTTP 200 immediately and process events asynchronously in a background worker or message queue.
  • Write raw batch data to disk or object storage (e.g., S3) before processing, so you have a durable copy even if processing fails.
  • Avoid making external API calls, database writes, or any blocking operations before responding.

How Hookdeck Can Help: Hookdeck provides configurable delivery timeouts, giving your endpoint significantly more time to respond. It also queues and buffers incoming webhooks, so even if your endpoint is temporarily slow, events are preserved and delivered reliably.

8-hour retry window with permanent data loss

The Problem: When a webhook batch fails delivery, SparkPost retries using logarithmic backoff for a maximum of 8 hours. After that, the batch is permanently discarded with no way to recover the data. If your endpoint experiences an outage longer than 8 hours, you will lose event data.

Why It Happens: SparkPost processes billions of events and cannot retain failed webhook data indefinitely. The 8-hour window with logarithmic backoff is a compromise between reliability and infrastructure cost. There is no dead-letter queue or manual replay mechanism.

Workarounds:

  • Architect your webhook endpoint for high availability with redundancy and failover.
  • Use the Events API (which retains data for 10 days) to backfill any gaps after endpoint recovery.
  • Implement deduplication using event_id so that backfill operations don't create duplicate records.
  • Set up external monitoring that alerts you quickly when your endpoint goes down.

How Hookdeck Can Help: Hookdeck automatically preserves failed webhooks indefinitely, functioning as a dead-letter queue. When your endpoint recovers, you can inspect, debug, and replay failed deliveries from Hookdeck's dashboard, eliminating the 8-hour data loss window entirely.

No HMAC payload signing

The Problem: SparkPost does not sign webhook payloads with a cryptographic hash. Instead, SparkPost relies solely on transport-level authentication (Basic Auth or OAuth 2.0). This means there's no way to cryptographically verify that a webhook payload hasn't been tampered with in transit.

Why It Happens: SparkPost's webhook system predates the widespread adoption of HMAC signing as a webhook security standard. The platform supports Basic Auth and OAuth 2.0 for authentication, but these verify the identity of the sender, not the integrity of the payload itself.

Workarounds:

  • Always use HTTPS (port 443) to encrypt webhook traffic in transit.
  • Configure Basic Auth or OAuth 2.0 — never leave authentication set to "None".
  • Use custom headers with secret values as an additional verification layer.
  • Whitelist SparkPost's webhook egress hostname (wh.egress.sparkpost.com) at the firewall level.

How Hookdeck Can Help: Hookdeck sits between SparkPost and your endpoint, verifying source authenticity and adding an additional layer of security. Hookdeck can also apply HMAC signatures to the forwarded requests, giving your endpoint the cryptographic verification that SparkPost doesn't natively provide.

Port 80 and 443 only

The Problem: SparkPost webhooks only support standard ports 80 (HTTP) and 443 (HTTPS). Non-standard ports like 3000, 8080, or 9000 are rejected during webhook creation. This makes local development and testing significantly more difficult, as most development servers run on non-standard ports.

Why It Happens: SparkPost restricts ports as a security measure to ensure webhook targets are production-grade HTTP services rather than ad-hoc development endpoints.

Workarounds:

  • Use a tunneling service like Hookdeck CLI to expose your local development server on a standard HTTPS URL.
  • Deploy a reverse proxy (e.g., nginx) that listens on port 443 and forwards to your application's internal port.
  • Use SparkPost's built-in test feature to send sample payloads during initial development.
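For the reverse-proxy route, a minimal nginx configuration might look like the following. The server name, certificate paths, and upstream port 3000 are all placeholders for your own values.

```nginx
# Sketch: terminate TLS on 443 and forward to an app on an internal port.
# server_name, certificate paths, and the upstream port are placeholders.
server {
    listen 443 ssl;
    server_name webhooks.example.com;

    ssl_certificate     /etc/ssl/certs/webhooks.example.com.pem;
    ssl_certificate_key /etc/ssl/private/webhooks.example.com.key;

    location /webhooks/sparkpost {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```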

How Hookdeck Can Help: Hookdeck provides a stable public URL that SparkPost delivers to, and the Hookdeck CLI tunnels those deliveries to your local development server on any port. This eliminates the port restriction entirely during development and testing.

Unpredictable batch sizes and timing

The Problem: SparkPost delivers events in batches ranging from 1 to 350+ events, with delivery intervals of roughly 30 seconds to 1 minute. Both the batch size and timing are unpredictable — they vary based on your sending volume and SparkPost's internal load. Batches may also arrive from multiple SparkPost servers simultaneously, making the delivery pattern irregular.

Why It Happens: SparkPost streams raw event data as it's generated across its distributed infrastructure. Batch size and frequency depend on real-time message volume and internal load balancing. There is no configuration to control batch size or delivery cadence.

Workarounds:

  • Design your endpoint to handle both small (1 event) and large (350+ event) batches efficiently.
  • Implement a queue-based architecture where webhooks write to a buffer and workers consume at a controlled rate.
  • Avoid setting maximum request body size limits too low on your server, as truncated batches cause data loss.
  • Monitor batch sizes over time to understand your typical patterns and plan capacity accordingly.

How Hookdeck Can Help: Hookdeck normalizes webhook delivery by queuing incoming batches and delivering them to your endpoint at a controlled rate. This smooths out traffic spikes and gives you predictable throughput regardless of SparkPost's variable batch patterns.

No guaranteed event ordering

The Problem: SparkPost does not guarantee the order of events within or across batches. Because batches are generated by multiple servers in SparkPost's distributed infrastructure, events may arrive out of chronological order. A bounce event might arrive before the corresponding delivery event, or a click event might appear before the open event.

Why It Happens: SparkPost's event pipeline is distributed across multiple servers for scalability. Each server generates and batches events independently, so the order in which batches arrive at your endpoint doesn't necessarily match the chronological order of the underlying email events.

Workarounds:

  • Use the timestamp field on each event to determine chronological order rather than relying on delivery order.
  • Design your data model to handle out-of-order events gracefully (e.g., allow a bounce to be recorded before its corresponding delivery).
  • Implement event buffering with a short delay window to reorder events before final processing.
  • Use message_id to correlate related events for the same message.
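A small sketch of timestamp-based ordering: sort each batch chronologically before processing, using `event_id` as a stable tiebreaker for events within the same second (the tiebreaker choice is an assumption, since SparkPost doesn't document sub-second ordering).

```javascript
// Sketch: order a mixed batch by the event timestamp rather than by
// arrival order. Timestamps are Unix-epoch strings, so compare as numbers.
function sortEventsChronologically(envelopes) {
  return [...envelopes].sort((a, b) => {
    const dt = Number(a.timestamp) - Number(b.timestamp);
    // Tiebreaker for same-second events: lexicographic event_id.
    return dt !== 0 ? dt : a.event_id.localeCompare(b.event_id);
  });
}
```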

How Hookdeck Can Help: Hookdeck can buffer and reorder incoming webhooks, delivering them to your endpoint in a more predictable sequence. Combined with its queue-based architecture, this reduces the complexity of handling out-of-order events in your application code.

Limited visibility into delivery status

The Problem: SparkPost provides minimal observability into webhook delivery health. Batch status information is available for only 24 hours via the UI or API, and there are no built-in alerts for delivery failures. If your webhook endpoint starts failing silently, you may not discover the issue until critical email events have been lost.

Why It Happens: SparkPost's webhook system was designed primarily as a data delivery mechanism, not a full observability platform. Batch status is retained for a short window, and the UI provides limited drill-down into individual delivery attempts, response codes, or error patterns.

Workarounds:

  • Implement your own monitoring by tracking received batches and alerting when gaps are detected.
  • Set up health checks on your webhook endpoint and alert on failures.
  • Periodically reconcile webhook data against the Events API to identify missed events.
  • Use SparkPost's batch status API to poll for delivery issues within the 24-hour window.
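Gap detection can be sketched as a small monitor: the webhook handler records each batch arrival, and a timer checks whether the gap since the last batch exceeds a threshold. The alert callback and threshold are assumptions; wire them to your own paging or notification system.

```javascript
// Sketch: alert when no batch has arrived within maxGapMs. The injectable
// clock (`now`) exists only to make the monitor testable.
function createGapMonitor(maxGapMs, alert, now = Date.now) {
  let lastBatchAt = now();
  return {
    // Call from your webhook handler on every received batch.
    recordBatch() { lastBatchAt = now(); },
    // Call periodically, e.g. from setInterval.
    checkForGap() {
      const gap = now() - lastBatchAt;
      if (gap > maxGapMs) alert(gap);
      return gap;
    }
  };
}
```

Note that a quiet period is only suspicious relative to your sending volume, so pick a threshold comfortably above your longest expected idle window.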

How Hookdeck Can Help: Hookdeck's dashboard provides complete, real-time visibility into webhook delivery status, including latency, success/failure rates, response codes, and error details. You can configure alerts for delivery anomalies and inspect individual request/response pairs for debugging.

Duplicate events require client-side handling

The Problem: SparkPost uses at-least-once delivery semantics, meaning duplicate batches and individual events can and do occur. During retries, network issues, or edge cases in SparkPost's distributed pipeline, your endpoint may receive the same event multiple times. There is no built-in deduplication mechanism.

Why It Happens: At-least-once delivery is a deliberate design choice that prioritizes data completeness over exactness. Ensuring exactly-once delivery in a distributed system is significantly more complex and would add latency. SparkPost provides the X-MessageSystems-Batch-ID header and event_id field as deduplication primitives, but the implementation is left to you.

Workarounds:

  • Track X-MessageSystems-Batch-ID values to detect and skip duplicate batches.
  • Track event_id values to detect and skip duplicate individual events.
  • Use a persistent store (e.g., Redis with TTL) to maintain your deduplication set.
  • Return HTTP 200 for duplicate batches to prevent further retries.

How Hookdeck Can Help: Hookdeck's automatic deduplication filters duplicate webhooks before they reach your endpoint. You can configure deduplication rules based on headers, payload content, or custom identifiers, eliminating the need to build and maintain your own deduplication infrastructure.

Schema changes without strict versioning

The Problem: SparkPost may add new fields to webhook payloads at any time without versioning the payload schema. While SparkPost commits to providing "reasonable notice" before removing fields or making breaking changes, the definition of "reasonable" is not specified. New fields can appear without warning, and the data types and structure vary between event types.

Why It Happens: SparkPost's additive changes policy is common among webhook providers — it allows the platform to evolve without requiring consumers to upgrade. However, the lack of formal schema versioning means that strict schema validation in your webhook handler can break unexpectedly.

Workarounds:

  • Design your webhook handler to ignore unknown fields rather than rejecting payloads that don't match an expected schema.
  • Use separate processing logic for different event types, since payload structure is consistent within an event type but not between types.
  • Monitor SparkPost's changelog and documentation for payload changes.
  • Write defensive parsing code that uses optional field access and sensible defaults.

How Hookdeck Can Help: Hookdeck's transformation capabilities let you normalize and reshape incoming payloads before they reach your endpoint. If SparkPost adds new fields or changes structure, you can update your Hookdeck transformation without modifying your application code, providing a buffer against schema changes.

Testing SparkPost webhooks

Use the built-in test feature

The SparkPost UI includes a test option for each webhook. From the Webhooks tab, click the dropdown icon on the webhook you want to test and select "Test Webhook." SparkPost will send a sample event batch to your endpoint and report whether it responded with HTTP 200.

Keep in mind that test payloads may differ from real event data in size and content. Some issues, particularly around authentication, batch size handling, and timeout behavior, only surface with real production traffic.

Use a request inspector

Before building your handler, inspect real SparkPost payloads to understand their structure. Services like Hookdeck Console provide a temporary URL you can configure as your webhook target to capture and inspect live payloads without building any infrastructure.

  1. Create a temporary endpoint URL.
  2. Configure it as your SparkPost webhook target.
  3. Send a test email to trigger real events.
  4. Inspect the payload structure, headers, and event types.

Get sample payloads from the API

SparkPost provides an endpoint to retrieve sample payloads for any event type:

curl https://api.sparkpost.com/api/v1/events/message/samples?events=bounce \
  -H "Authorization: $SPARKPOST_API_KEY"

This returns representative payloads with sample values, useful for building and testing your event processing logic without waiting for real events.

Validate in staging

Test your webhook integration with realistic scenarios before deploying to production:

  • Single message delivery through the full lifecycle (injection → delivery → open → click).
  • Bounce handling for both hard bounces and soft delays.
  • High-volume batches to verify your endpoint handles large payloads within the 10-second timeout.
  • Authentication failures to confirm your endpoint correctly rejects unauthorized requests.
  • Extended endpoint downtime to verify your Events API backfill logic works correctly.

Conclusion

SparkPost webhooks provide real-time visibility into the full email lifecycle, from injection through delivery, engagement, and unsubscription. The comprehensive event taxonomy, covering message events, generation events, engagement tracking, and relay events, makes them a powerful foundation for building email analytics, automated response systems, and deliverability monitoring.

However, the 10-second timeout, 8-hour retry window, lack of HMAC signing, and unpredictable batching behavior mean production deployments require careful engineering. Implementing asynchronous processing, robust deduplication, and proper authentication verification will address most common issues. Using the Events API as a backfill mechanism provides a safety net against data loss.

For teams with moderate email volumes and reliable infrastructure, SparkPost's built-in webhook capabilities combined with the best practices above work well. For high-volume senders, mission-critical event pipelines, or teams that need stronger delivery guarantees and observability, webhook infrastructure like Hookdeck can address SparkPost's limitations by providing configurable timeouts, automatic deduplication, dead-letter queues, payload transformation, and comprehensive delivery monitoring without modifying your SparkPost configuration.


Gareth Wilson

Product Marketing

Multi-time founding marketer, Gareth is PMM at Hookdeck and author of the newsletter, Community Inc.