How to Control Throughput and Queue Events to Your Destination
Receiving events at high volumes can lead to spikes that overwhelm your systems. With Hookdeck, you can configure a Destination delivery rate to control the rate at which events are delivered to your destination, preventing overload and ensuring smooth operation during peak times or unexpected surges in traffic. Events received beyond the configured rate are queued and delivered later, ensuring steady and predictable delivery.
Why Controlling Throughput is Important
Variable or large spikes in events can lead to various issues, including:
- Server Overload: Sudden influxes of traffic can overwhelm servers, leading to slow response times or crashes.
- Dependency limits: Applications often have dependencies, such as database connections or third-party APIs, that have an upper limit on concurrent capacity. Exceeding the limits of a third-party API, for instance, can result in throttled or rejected requests, disrupting your service.
Controlling throughput ensures a smooth and consistent flow of events, maintaining system reliability and performance while offloading the responsibility of queueing events to Hookdeck.
Setting a Max Delivery Rate
To control throughput to your destination, you can set a maximum delivery rate for your events.
The max delivery rate can be set either as a concurrency limit or on a per-second, per-minute, or per-hour basis. When a concurrency limit is set, the effective delivery rate will vary with your destination's response latency. A concurrency limit of 1 is effectively the same as serial delivery.
Setting a Max Delivery Rate ->
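The relationship between a concurrency limit, response latency, and effective throughput can be sketched with a small back-of-the-envelope helper (a hypothetical illustration, not a Hookdeck API):

```python
def effective_rate_per_second(concurrency: int, avg_latency_seconds: float) -> float:
    """Estimate throughput under a concurrency limit.

    With `concurrency` requests in flight and an average destination
    response latency, each slot completes 1 / latency requests per second.
    """
    return concurrency / avg_latency_seconds

# A concurrency limit of 5 against a destination that responds in 250 ms
# sustains roughly 20 events per second; a limit of 1 ("serial" delivery)
# against the same destination sustains only 4.
print(effective_rate_per_second(5, 0.25))  # → 20.0
print(effective_rate_per_second(1, 0.25))  # → 4.0
```

This is why a concurrency limit adapts naturally to a destination that slows down under load, while a fixed per-second rate does not.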
Monitoring Delivery Rate
Monitoring your delivery rate is important to confirm your limit is correctly set. Hookdeck provides several metrics to help you track and analyze the delivery performance of your events:
- Delivery Rate: Tracks the number of events delivered per second, minute, or hour.
- Queue Length: Monitors the number of events waiting in the queue.
- Error Rate: Measures the percentage of events that encountered errors during delivery.
- Response Latency: Measures the time it takes for your destination to respond to incoming events.
- Pending Attempts: Monitors the number of events that are pending (in queue).
- Oldest Attempts: Indicates the age of the oldest event in the queue, which helps identify backpressure.
To monitor these metrics, visit the Metrics page in your Hookdeck dashboard. These metrics can also be exported to Datadog.
Regular monitoring allows you to proactively address potential issues and maintain a steady and reliable flow of events.
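To make two of these metrics concrete, here is how error rate and average response latency could be derived from a set of delivery attempts. The record shape below is hypothetical, chosen only for illustration:

```python
from statistics import mean

# Hypothetical attempt records; not a Hookdeck API response shape.
attempts = [
    {"status": "succeeded", "latency_ms": 120},
    {"status": "failed", "latency_ms": 900},
    {"status": "succeeded", "latency_ms": 150},
    {"status": "succeeded", "latency_ms": 130},
]

# Error rate: fraction of attempts that failed.
error_rate = sum(a["status"] == "failed" for a in attempts) / len(attempts)

# Response latency: average time the destination took to respond.
avg_latency = mean(a["latency_ms"] for a in attempts)

print(f"error rate: {error_rate:.0%}")    # → error rate: 25%
print(f"avg latency: {avg_latency} ms")   # → avg latency: 325 ms
```

A rising average latency is often the first sign that your delivery rate is set too high for the destination.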
Backpressure Management
Backpressure occurs when the receiving system cannot handle the incoming rate of events. If Hookdeck receives more events than your destination's max delivery rate allows, it holds those events in a queue. Backpressure is the amount of time the oldest event in the queue is expected to wait before being delivered. Keeping backpressure in check helps you avoid large delays in event delivery.
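The expected wait described above can be estimated from the queue length and the configured delivery rate. A minimal sketch, assuming a steady per-second rate (the function name is illustrative, not part of Hookdeck):

```python
def estimated_backpressure_seconds(queue_length: int, rate_per_second: float) -> float:
    """Estimate how long the oldest queued event waits before delivery,
    assuming the queue drains at a steady per-second rate."""
    return queue_length / rate_per_second

# 3,000 queued events draining at 10 events/second means the oldest
# event waits about 5 minutes.
print(estimated_backpressure_seconds(3000, 10))  # → 300.0
```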
How Hookdeck Handles Backpressure
When backpressure is detected, Hookdeck can automatically open an Issue to notify your team of the problem. This allows you to take action on the issue and prevent further backpressure. You can also configure notification channels to receive alerts when backpressure issues are detected.
Hookdeck allows you to define issue triggers to configure the backpressure sensitivity and the conditions under which an issue opens.
Create an Issue Trigger ->
Example Scenarios
Scenario 1: Smoothing Out Request Throughput During Traffic Spikes
In this scenario, you experience sudden spikes in traffic. To smooth out the throughput:
- Set Max Delivery Rate: Configure the max delivery rate to handle the average expected traffic.
- Monitor Traffic Patterns: Use Hookdeck’s monitoring tools to identify peak traffic times.
- Adjust Settings: Fine-tune the delivery rate to ensure smooth throughput during spikes.
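The smoothing effect of the steps above can be simulated with a toy queue model: excess events accumulate during the spike, then drain once traffic returns to normal. This sketch assumes a fixed per-second cap and illustrative traffic numbers:

```python
def simulate_queue(incoming_per_second: list[int], max_rate: int) -> list[int]:
    """Track queue depth each second when delivery is capped at max_rate."""
    queue, depths = 0, []
    for incoming in incoming_per_second:
        queue += incoming                 # events arrive
        queue -= min(queue, max_rate)     # deliver up to the cap
        depths.append(queue)
    return depths

# A 3-second spike of 50 events/s against a 20 events/s cap queues the
# excess, then drains at 15/s once traffic drops back to 5 events/s.
print(simulate_queue([50, 50, 50, 5, 5, 5, 5], max_rate=20))
# → [30, 60, 90, 75, 60, 45, 30]
```

The destination only ever sees 20 events per second; the spike is absorbed entirely by the queue.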
Scenario 2: Managing Rate-Limited API Calls
If you depend on an API with strict rate limits:
- Identify Rate Limits: Determine the API’s rate limits (e.g., 60 requests per minute).
- Configure Delivery Rate: Set the max delivery rate below the API’s limit to avoid throttling.
- Monitor API Responses: Ensure the success rate remains high and adjust settings as needed.
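One simple way to pick a delivery rate below the API's limit is to leave fixed headroom for retries and any other callers sharing the same quota. A minimal sketch with an assumed 20% headroom (the helper and default are illustrative):

```python
def safe_delivery_rate(api_limit_per_minute: int, headroom: float = 0.8) -> int:
    """Pick a max delivery rate below a third-party API's rate limit.

    Keeping a fraction of the limit in reserve (here 20%) leaves room
    for retries and other consumers of the same quota.
    """
    return int(api_limit_per_minute * headroom)

# For an API allowing 60 requests/minute, configure at most 48/minute.
print(safe_delivery_rate(60))  # → 48
```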