Gareth Wilson

How to Test and Replay OpenAI Webhooks Locally

OpenAI webhooks notify your application when long-running async work finishes — a Batch job completes, a fine-tuning run succeeds, a background Responses call finishes, a Deep Research result lands, a Realtime SIP call comes in. Testing those events end-to-end during development is awkward without a public HTTPS endpoint, and OpenAI doesn't ship a first-party CLI for forwarding webhooks to localhost the way Stripe does.

This guide walks you through setting up a local webhook endpoint, wiring it to OpenAI through Hookdeck, and replaying events on demand using the following steps:

  1. Set up a local webhook endpoint
  2. Create a Hookdeck connection
  3. Configure the webhook endpoint in the OpenAI Dashboard
  4. Test the integration with a sample event
  5. Replay error or failed events

Relaying your OpenAI webhooks through Hookdeck makes it easy to consume, monitor, troubleshoot, and replay events while you're building.

Step 1: Set up a local webhook endpoint

You need a server running on your local machine that OpenAI's webhooks can POST to. Any HTTP server that listens on a port and logs the request body will do. Here's a minimal Express setup:

mkdir openai-webhook-test && cd openai-webhook-test
npm init -y
npm install express

Create index.js:

cat > index.js << 'EOF'
const express = require('express');
const app = express();
// IMPORTANT: keep the raw body as text — signature verification needs the
// exact bytes OpenAI sent, so don't let express.json() parse the payload
app.use(express.text({ type: 'application/json' }));
app.post('/webhooks/openai', (req, res) => {
  console.log('Received:', req.body);
  res.sendStatus(200);
});
app.listen(3000, () => console.log('Listening on :3000'));
EOF

Start it:

node index.js

Your handler is now reachable at http://localhost:3000/webhooks/openai — but OpenAI can't reach localhost. That's where Hookdeck comes in.

Step 2: Create a Hookdeck connection

Hookdeck sits between OpenAI and your local handler. It gives you a stable public HTTPS URL to register with OpenAI, queues every incoming event durably, and lets you replay any event with one click.

Create the Hookdeck Connection with the Hookdeck CLI

Install the Hookdeck CLI with your preferred package manager:

  • npm: npm install hookdeck-cli -g
  • Yarn: yarn global add hookdeck-cli
  • Homebrew (macOS): brew install hookdeck/hookdeck/hookdeck
  • Scoop (Windows): scoop bucket add hookdeck https://github.com/hookdeck/scoop-hookdeck-cli.git, then scoop install hookdeck
  • Linux: download the latest release's tar.gz, extract it with tar -xvf hookdeck_X.X.X_linux_x86_64.tar.gz, and run ./hookdeck

Authenticate (this opens a browser to log in or create a free Hookdeck account):

hookdeck login

Now run hookdeck listen against your local port and pick a name for the source — openai is fine:

hookdeck listen 3000 openai

The CLI prints a public HTTPS URL that points at your local server. It looks like https://hkdk.events/abc123xyz. Anything OpenAI POSTs to that URL is forwarded to localhost:3000, queued in Hookdeck, and visible in the Hookdeck dashboard for inspection and replay.

Step 3: Configure the webhook endpoint in the OpenAI Dashboard

Open the OpenAI Dashboard, navigate to Settings > Project > Webhooks, and click Create.

  • Name: anything you'll recognise later
  • URL: paste the Hookdeck public URL from Step 2
  • Event types: subscribe to the events you want to test — batch.completed, fine_tuning.job.succeeded, response.completed, eval.run.succeeded, realtime.call.incoming, etc.

Save the endpoint. OpenAI generates a whsec_-prefixed signing secret and shows it once — copy it now, because you can't view it again. Store it as OPENAI_WEBHOOK_SECRET in your environment.

Step 4: Test the integration with a sample event

The OpenAI Dashboard webhook settings page lets you trigger a test event with sample data — that's the fastest way to confirm the wiring without kicking off a real Batch job or fine-tuning run. Click Send test event and pick an event type.

In the terminal where you ran hookdeck listen, you'll see the test event arrive. In the Hookdeck dashboard you'll see the same event with full request and response details, signature headers, and forwarding status. Your local handler logs the raw body.

If your local handler returns a non-2xx response, Hookdeck records the failure and keeps the event in the queue, ready to retry.
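Once events are flowing, a real handler usually branches on the event type. Here's a hypothetical sketch of that routing, assuming each payload carries a top-level type and a data.id as suggested by the event types listed in Step 3; the exact payload shape and the actions returned here are placeholders to check against the OpenAI docs:

```javascript
// Hypothetical event router: parse the raw body (after verifying its
// signature) and branch on the event type. The data.id field is an
// assumption about the payload shape, not a documented guarantee.
function routeEvent(rawBody) {
  const event = JSON.parse(rawBody);
  switch (event.type) {
    case 'batch.completed':
      return `fetch results for batch ${event.data.id}`;
    case 'fine_tuning.job.succeeded':
      return `use fine-tuned model from job ${event.data.id}`;
    case 'response.completed':
      return `retrieve background response ${event.data.id}`;
    default:
      // Still acknowledge unknown types with a 2xx so they aren't retried forever.
      return `ignored ${event.type}`;
  }
}

module.exports = { routeEvent };
```

Returning 200 quickly and doing the real work (fetching batch output, downloading a fine-tuned model) asynchronously keeps deliveries from timing out and being retried.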

Step 5: Replay error or failed events

Where Hookdeck pays for itself during development is replay. OpenAI retries failed deliveries for up to 72 hours, but waiting that long while you're iterating on a handler is painful. Open any event in the Hookdeck dashboard and click Retry to re-deliver it to your local server. You can do this as many times as you want, with whatever code changes you've made in between — no need to trigger another live event.

You can also replay any event from the CLI:

hookdeck listen 3000 openai --replay

This is the loop that makes development genuinely productive — trigger once, replay forever, fix bugs without waiting on long-running async OpenAI jobs to complete.

Conclusion

OpenAI webhooks are easy to receive in production but tricky to develop against locally because they require a public HTTPS endpoint, signed payloads (using the Standard Webhooks spec), and an at-least-once delivery model that's hard to debug without observability. OpenAI's docs recommend ngrok or a cloud dev environment, but neither gives you replay or durable inspection. Hookdeck gives you a stable URL, durable queueing, full request inspection, and one-click replay — all without changing your handler code.

Try Hookdeck for free and start building reliable OpenAI webhook integrations from day one. For deeper background, see our guide to OpenAI webhook features and best practices and how to secure and verify OpenAI webhooks.


Gareth Wilson

Product Marketing

A multi-time founding marketer, Gareth is PMM at Hookdeck and author of the newsletter Community Inc.