Building sentiment-aware support replies with Intercom and OpenAI
A customer opens a new conversation in Intercom. Within a second of the message landing, an AI step has classified the sentiment as "frustrated, escalation risk," tagged the conversation as "stability complaint," cross-referenced the customer's account to confirm they're on the Pro plan with two prior tickets this week, drafted a reply that acknowledges the loss of work explicitly (no corporate-speak) and offers a specific next step, and assigned the conversation to the senior agent who handles retention cases — not the front-line queue. By the time the senior agent logs in at 7am, the draft is waiting in Intercom, the context is summarised, and the customer hasn't been left in the standard 12-hour overnight queue alongside "how do I change my password?".
That's the workflow most support teams want to build with Intercom and an LLM — and the part that's deceptively hard isn't the AI step, it's the event flow that triggers it. Intercom fires webhooks generously (new conversations, replies, assignments, tag changes, ratings), volumes spike during product incidents, and the cost of mishandling a single message during an outage is much higher than the cost of mishandling one during a quiet afternoon.
This guide walks through the glue layer end to end: the architecture, the seven concrete steps to wire it up, and the production concerns that show up the moment you're running real support volume.
The flow
flowchart TB
A[Customer message<br/>in Intercom] --> B[Intercom webhook<br/>conversation.user.created<br/>or .replied]
B -->|POST JSON| C[Hookdeck<br/>inbound source]
C -->|filter + transform<br/>+ queue + rate limit| D[OpenAI<br/>sentiment + reply handler]
D -.->|chat completion| E[GPT-4o mini]
D -->|POST triage result| F[Hookdeck<br/>callback source]
F -->|priority routing| G1[Intercom API<br/>draft reply + tags]
F -->|escalation routing| G2[Slack<br/>senior agent ping]
F -->|metrics routing| G3[Data warehouse<br/>sentiment log]
There are two webhook flows that need to be reliable:
- Intercom's message events into the AI step — must survive incident spikes. When your product breaks, the conversation volume goes up tenfold and a day's worth of triage work arrives in a few minutes. That's exactly when you can't afford to drop messages.
- The AI's outputs back into Intercom and other systems — must reach Intercom even when Intercom's own API is rate-limiting you, which is more likely during the same spikes that triggered the work in the first place.
Most teams build this with a POST /webhooks/intercom endpoint that calls OpenAI inline, then calls Intercom's API to update the conversation. That's enough to demo. The reasons it isn't enough to run are the reasons Hookdeck sits in this picture.
What you'll need
- An Intercom workspace with a developer app and webhook topics enabled
- An OpenAI API key
- A Hookdeck Event Gateway account — the free tier covers this workflow at low volume
- Hookdeck CLI installed: npm install hookdeck-cli -g or brew install hookdeck/hookdeck/hookdeck
- A handler endpoint that receives the conversation payload, calls OpenAI, and POSTs the result back
Step 1: Create the Hookdeck Event Gateway source for Intercom
In the Hookdeck Event Gateway dashboard:
- Create Connection → New Source
- Type: HTTP (or Intercom if pre-configured in your Hookdeck Event Gateway account)
- Name: intercom-conversation-events
- Provide your Intercom client secret so Hookdeck can verify the X-Hub-Signature header
Copy the generated source URL.
Step 2: Subscribe Intercom webhooks to the Hookdeck URL
In Intercom's Developer Hub:
- Open your app → Webhooks → Configure
- Endpoint URL: paste the Hookdeck source URL
- Topics: enable conversation.user.created, conversation.user.replied, and any others you want triaged (conversation.rating.created is useful for closing the feedback loop)
- Save
Send a test message into your workspace from a test contact. The full event payload should appear in the Hookdeck Event Gateway dashboard within a second.
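For reference, an abridged conversation.user.created payload looks roughly like this (most fields trimmed — consult Intercom's webhook reference for the full shape):

```json
{
  "type": "notification_event",
  "topic": "conversation.user.created",
  "data": {
    "item": {
      "type": "conversation",
      "id": "123456789",
      "contacts": { "contacts": [{ "email": "customer@example.com" }] },
      "conversation_parts": { "conversation_parts": [] }
    }
  }
}
```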
Step 3: Add the destination — your sentiment + reply handler
The handler:
- Receives the parsed conversation event
- Calls OpenAI with a prompt that returns structured JSON: sentiment, intent, priority, suggested reply, and escalation flag
- POSTs the triage result to a second Hookdeck source for fan-out
A minimal handler:
export default {
async fetch(request, env) {
const event = await request.json();
// The canonical payload (flattened upstream by the Hookdeck transformation)
// carries conversation_history; the latest customer message is the last entry.
const lastMessage = event.conversation_history.slice(-1)[0];
const completion = await fetch('https://api.openai.com/v1/chat/completions', {
method: 'POST',
headers: {
'authorization': `Bearer ${env.OPENAI_API_KEY}`,
'content-type': 'application/json',
},
body: JSON.stringify({
model: 'gpt-4o-mini',
response_format: { type: 'json_object' },
messages: [
{ role: 'system', content: SENTIMENT_PROMPT },
{
role: 'user',
content: JSON.stringify({
customer_plan: event.customer_plan,
prior_ticket_count: event.prior_ticket_count,
conversation_history: event.conversation_history,
latest_message: lastMessage,
}),
},
],
}),
});
const result = await completion.json();
const triage = JSON.parse(result.choices[0].message.content);
await fetch(env.HOOKDECK_CALLBACK_URL, {
method: 'POST',
headers: { 'content-type': 'application/json' },
body: JSON.stringify({
conversation_id: event.conversation_id,
customer_email: event.customer_email,
triage,
}),
});
return new Response('ok', { status: 200 });
},
};
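The handler above references a SENTIMENT_PROMPT constant. A sketch of what it might contain — the exact wording, sentiment categories, and intent labels here are assumptions to adapt to your own taxonomy:

```javascript
// System prompt instructing the model to return structured triage JSON.
// Categories and field names are illustrative, not prescriptive.
const SENTIMENT_PROMPT = `You are a support triage assistant.
Given a customer conversation, respond with JSON only, using exactly these keys:
- "sentiment": one of "positive", "neutral", "frustrated", "angry"
- "intent": a short label such as "billing", "bug_report", "how_to", "cancellation"
- "priority": one of "low", "normal", "high", "urgent"
- "suggested_reply": a draft that acknowledges the customer's situation
  specifically and proposes one concrete next step
- "escalate": true if a senior agent should be pinged immediately
Weigh the customer's plan and prior ticket count when setting priority.`;
```

Keeping the keys stable matters more than the wording: the fan-out filters in later steps match on fields like triage.escalate, so the prompt and the filters have to agree.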
Configure the Hookdeck Event Gateway destination:
- Type: HTTP
- URL: your handler URL
- Authentication: an HTTP header carrying a shared secret
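On the handler side, a minimal check for that shared-secret header could look like this — the header name x-webhook-secret and the SHARED_SECRET env var are placeholders for whatever you configure on the Hookdeck destination:

```javascript
// Reject requests that don't carry the shared secret configured on the
// Hookdeck destination. Header name and env var are illustrative.
function isAuthorized(request, env) {
  const provided = request.headers.get('x-webhook-secret');
  return Boolean(provided) && provided === env.SHARED_SECRET;
}

// In the handler's fetch(), before any work:
//   if (!isAuthorized(request, env)) {
//     return new Response('unauthorized', { status: 401 });
//   }
```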
Step 4: Add filter, transformation, and rate-limit rules
Transformation — flatten Intercom's nested event into a canonical conversation payload. You may also want to enrich here with customer data from your own database:
addHandler('transform', async (request, context) => {
const event = request.body;
if (event.topic !== 'conversation.user.created' &&
event.topic !== 'conversation.user.replied') {
return null; // drop
}
const convo = event.data.item;
const customer = convo.contacts.contacts[0];
// Optional: enrich with internal customer data
const customerData = await fetch(
`${context.secrets.INTERNAL_API}/customers/${customer.email}`,
{ headers: { authorization: `Bearer ${context.secrets.INTERNAL_TOKEN}` } }
).then(r => r.json());
request.body = {
conversation_id: convo.id,
customer_email: customer.email,
customer_plan: customerData.plan,
prior_ticket_count: customerData.tickets_last_30d,
conversation_history: convo.conversation_parts.conversation_parts.map(p => ({
author: p.author.type,
body: p.body,
created_at: p.created_at,
})),
received_at: new Date().toISOString(),
};
return request;
});
Filter — drop events that don't need AI triage. The example below only passes events with a non-empty conversation history; the same mechanism can skip conversations on self-service topics or ones already assigned to a human:
{
"body": {
"conversation_history": {
"$exists": true,
"$not": { "$size": 0 }
}
}
}
Rate limit — keep OpenAI happy during incident spikes:
- Rate: 10 per second
- Burst: 25
Hookdeck Event Gateway queues the rest. At 10 per second, even a 6,000-event spike drains in 10 minutes; the AI step keeps running through the spike, and nothing is dropped.
Retry policy — handle OpenAI and handler transients:
- Initial delay: 30 seconds
- Max attempts: 15
- Max age: 24 hours
- Apply on status codes: 408, 429, 500, 502, 503, 504
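To sanity-check that 15 attempts fit inside a 24-hour max age, here is a rough cumulative-delay calculation. It assumes exponential backoff doubling from the initial delay with each individual wait capped at one hour — Hookdeck's actual schedule may differ:

```javascript
// Sum the waits for a doubling-backoff retry policy, capping each
// individual delay at capSeconds. All numbers are in seconds.
function totalRetryWindowSeconds(initialDelay, maxAttempts, capSeconds) {
  let total = 0;
  let delay = initialDelay;
  for (let i = 0; i < maxAttempts; i++) {
    total += Math.min(delay, capSeconds);
    delay *= 2;
  }
  return total;
}

const hours = totalRetryWindowSeconds(30, 15, 3600) / 3600;
// ≈ 9.1 hours of retrying — comfortably inside the 24-hour max age,
// so max age is the backstop, not the attempt count.
```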
Step 5: Test the inbound leg locally with the CLI
Route the inbound connection to the CLI:
hookdeck login
hookdeck listen 3000 intercom-conversation-events
A local inspector:
// inspect.js
const http = require('http');
http.createServer((req, res) => {
let body = '';
req.on('data', chunk => body += chunk);
req.on('end', () => {
console.log('Canonical conversation:', JSON.parse(body));
res.writeHead(200);
res.end('ok');
});
}).listen(3000);
Trigger a real test conversation in Intercom. Verify the canonical payload looks right and that your customer enrichment lookup worked. Replay events from the Hookdeck dashboard while iterating on the transformation — no need to keep sending fresh test messages.
Step 6: Wire triage results back through Hookdeck
The triage result fans out:
- Intercom — draft a reply, add tags, set priority
- Slack — ping the on-call senior agent if triage.escalate == true
- Data warehouse — log sentiment and priority for reporting
Create a second connection with the source support-triage-results and one destination per downstream.
For the Intercom write-back, a transformation builds the API call:
addHandler('transform', (request, context) => {
const { conversation_id, triage } = request.body;
request.url = `https://api.intercom.io/conversations/${conversation_id}`;
request.method = 'PUT';
request.headers = {
...request.headers,
authorization: `Bearer ${context.secrets.INTERCOM_TOKEN}`,
'content-type': 'application/json',
accept: 'application/json',
'intercom-version': '2.11',
};
request.body = {
custom_attributes: {
ai_sentiment: triage.sentiment,
ai_priority: triage.priority,
ai_intent: triage.intent,
},
};
return request;
});
A second connection from the same source posts the draft reply via Intercom's notes API; a third routes urgent cases to Slack with a filter on triage.escalate:
{
"body": {
"triage": {
"escalate": true
}
}
}
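For the draft-note connection, the transformation body can be sketched as a plain function. The endpoint and body shape follow Intercom's conversation reply API; the INTERCOM_ADMIN_ID secret (the admin the note posts as) and the note format are assumptions:

```javascript
// Build the Intercom "add a note" request from a triage result.
// INTERCOM_TOKEN and INTERCOM_ADMIN_ID are assumed Hookdeck secrets.
function buildNoteRequest(payload, secrets) {
  const { conversation_id, triage } = payload;
  return {
    url: `https://api.intercom.io/conversations/${conversation_id}/reply`,
    method: 'POST',
    headers: {
      authorization: `Bearer ${secrets.INTERCOM_TOKEN}`,
      'content-type': 'application/json',
      'intercom-version': '2.11',
    },
    body: {
      message_type: 'note',          // internal note, not a customer-visible reply
      type: 'admin',
      admin_id: secrets.INTERCOM_ADMIN_ID,
      body: `AI draft (${triage.sentiment}, ${triage.priority}):\n\n${triage.suggested_reply}`,
    },
  };
}
```

Posting as a note keeps a human in the loop: the agent sees the draft in the conversation thread and decides whether to send it.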
Retry policy — aggressive:
- Initial delay: 15 seconds
- Max attempts: 15
- Max age: 72 hours
Step 7: Run the full chain end to end
Send a deliberately frustrated test message from a test contact. You should see:
- Intercom fires conversation.user.created into intercom-conversation-events
- Hookdeck Event Gateway verifies, transforms, filters, and delivers to your handler
- Your handler calls OpenAI and POSTs to support-triage-results
- The Intercom conversation gets tagged and the draft appears as a note
- A Slack message lands in the senior agent channel
- The sentiment row appears in your data warehouse
If anything fails, the Hookdeck Event Gateway dashboard tells you exactly where, with payload and response visibility at every hop.
Why Hookdeck and not just a try/except in your app server?
Three properties of support workflows make a direct integration the wrong choice once you're past the demo:
Support events spike during incidents. This is the defining property of support volume — it's not uniform, and the spikes correlate exactly with the moments your own infrastructure is under stress. A direct Intercom-to-OpenAI handler that runs on the same app server as your product becomes the second thing that falls over during an outage. Hookdeck Event Gateway queues messages during the spike on infrastructure that's independent of yours, so the triage keeps running even when your app is partially down.
AI APIs throttle when you most need them. OpenAI and Anthropic both rate-limit per workspace. During an incident spike, you'll hit those limits faster than usual, exactly when you need every triage to land. Hookdeck respects retry-after, retries with exponential backoff, and keeps queueing until the rate limit recovers — without losing messages.
Audit trail matters in support. When a customer asks "why didn't my urgent ticket get prioritized?", you need an answer. Hookdeck Event Gateway stores every event, every transformation, every retry attempt, and every response. The support manager (not just the engineer) gets a single dashboard that explains the path a specific conversation took through the AI step.
You can build all of this on your own — a queue, a retry worker, a transformation step, an audit log, an observability layer, a replay tool. That's the work Hookdeck collapses into a connection in a dashboard. The hours you don't spend on that are hours you spend on the triage prompt and the customer experience.
Going to production
Observability for support leadership. Hookdeck's Issues feature surfaces failure patterns — repeated retries, signature failures, payload spikes — that map directly onto things support leadership cares about: "are our customers being triaged right now?".
Tune retries against SLAs. If your first-response SLA is 30 minutes, you want aggressive early retries and a fallback to a human queue after, say, three failed attempts. Don't silently retry for 24 hours while a customer waits.
Replay deliberately when the prompt changes. When you update the sentiment categories or the priority rules, Hookdeck's replay lets you re-run the last 24 hours through the new prompt — useful for regression-testing before flipping it on for live traffic.
Handle PII responsibly. Conversation bodies contain account numbers, addresses, occasionally payment details. Configure Hookdeck's payload redaction on sensitive fields before going live so the support team can use the dashboard without seeing PII.
What to build next
This pattern generalizes: Swap Intercom for Front, Help Scout, or Drift. Swap OpenAI for Claude or any other model. Extend to sentiment-aware proactive messaging based on product events.
If you're building any of this, the fastest way to get past the demo phase is to stop maintaining your own webhook infrastructure. Start with the Hookdeck free tier (you can run this entire workflow without paying anything until you hit real volume) and use the CLI to keep your development loop fast.