
Understanding and Resolving the “Trigger Queue Exceeded Maximum Size” Error 

The error:

Trigger queue exceeded maximum size for pipeline <pipeline_name>

means your event-driven trigger is producing events faster than the pipeline can process them. Event-driven triggers place incoming events into an inbound trigger queue. If the pipeline can’t keep up, the queue grows until it reaches its maximum size, at which point the pipeline reports this error for exceeding the maximum inbound event queue size.

This queue limit is hard-coded to 10,000 events and is not user-configurable.

Applies to

  • Event Triggers (event-driven subscriptions). User Guide

  • Flow Triggers configured in event mode (Flows can operate in event or polled mode; the queue behavior applies when operating in event mode). User Guide

Why this happens

A pipeline processes triggered events in sequence. If any stage becomes a bottleneck (often a Write or Write New stage, though any stage can be the cause), the pipeline’s processing rate drops below the incoming event rate. Over time, the inbound trigger queue grows until it hits the maximum, at which point incoming events are dropped until there is room in the queue.
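This failure mode can be modeled as a bounded queue with a producer that outpaces the consumer. The sketch below is illustrative only (not product code); the 10,000-event limit matches the hard-coded queue size described above, while the producer and consumer rates are made-up numbers:

```python
# Illustrative model: a bounded inbound trigger queue that drops events
# once it reaches its maximum size, as described above.
from collections import deque

MAX_QUEUE_SIZE = 10_000  # hard-coded limit from the article

queue = deque()
dropped = 0

def enqueue(event):
    """Accept an event if the queue has room; otherwise drop it."""
    global dropped
    if len(queue) >= MAX_QUEUE_SIZE:
        dropped += 1          # event is lost until the pipeline drains
        return False
    queue.append(event)
    return True

# Hypothetical rates: the trigger emits 12,000 events while the pipeline
# drains only 1 in 8, so the queue fills and later events are dropped.
for i in range(12_000):
    enqueue(i)
    if i % 8 == 0:
        queue.popleft()       # slow consumer: the bottlenecked pipeline
```

Once the queue saturates, every burst beyond the drain rate is lost, which is why the fixes below focus on raising the drain rate or lowering the arrival rate.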

Step 1: Identify the bottleneck (don’t assume)

Before changing the design, determine which stage is taking the longest:

  • Use the Statistics data for each stage to inspect pipeline throughput. To view Statistics, click an individual stage in the pipeline; the data appears in the right nav.

  • Review pipeline troubleshooting guidance for identifying pipeline vs stage errors (including inbound queue errors). User Guide

What to look for

  • A specific stage consistently taking the longest

  • Repeated errors/timeouts/retries in a stage

Step 2: Choose the right fix

Option A (Best Practice): Help the pipeline drain faster (batch before the write)

If the write stage is the bottleneck, add a Size Buffer immediately before the Write stage to batch events and write in larger chunks.

The Size Buffer stage accumulates up to a maximum number of events and emits them as a single array, which is useful when the target system processes chunks more efficiently than one-by-one events. User Guide

Best practices

  • Set a Window Size to batch under load.

  • Set a Timeout so low-traffic periods still flush regularly. User Guide
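The Window Size plus Timeout behavior can be sketched as follows. This is a hypothetical model, not the product's Size Buffer implementation; the class and parameter names are made up for illustration:

```python
# Illustrative sketch: buffer events and emit them as one array when
# either the window size is reached or the timeout elapses, so
# low-traffic periods still flush regularly.
import time

class SizeBuffer:
    def __init__(self, window_size, timeout_s, emit):
        self.window_size = window_size
        self.timeout_s = timeout_s
        self.emit = emit              # downstream write; receives a list
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.window_size:
            self.flush()              # window full: batch under load

    def tick(self):
        """Call periodically; flushes if the timeout has elapsed."""
        if self.buffer and time.monotonic() - self.last_flush >= self.timeout_s:
            self.flush()              # timeout: flush even at low traffic

    def flush(self):
        self.emit(self.buffer)        # one batched write instead of many
        self.buffer = []
        self.last_flush = time.monotonic()

batches = []
buf = SizeBuffer(window_size=100, timeout_s=5.0, emit=batches.append)
for i in range(250):
    buf.add(i)
# 250 events -> two full batches of 100; 50 wait for the timeout flush
```

The key trade-off is latency versus throughput: a larger window drains the queue faster per write, while the timeout bounds how long a small batch can sit waiting.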

Option B: Split or load-balance ingest across multiple pipelines

Load balance across parallel pipelines:

  • Split the trigger/watchlist so each pipeline handles a subset.

  • Or use an ingest pipeline that does minimal work and routes events to multiple worker pipelines.

This reduces queue pressure by increasing parallel processing capacity.
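The ingest-and-route pattern can be sketched as below. This is a hypothetical illustration, not product configuration; the `source` key and worker count are assumptions:

```python
# Illustrative sketch: an ingest step that does minimal work and routes
# each event to one of N worker pipelines, here by hashing a key so
# events from the same source always land on the same worker.
from collections import deque

NUM_WORKERS = 4
worker_queues = [deque() for _ in range(NUM_WORKERS)]

def route(event):
    """Pick a worker from the event's source key and hand the event off."""
    worker = hash(event["source"]) % NUM_WORKERS
    worker_queues[worker].append(event)

# Hypothetical load: 1,000 events from 10 sources, spread over 4 workers.
for i in range(1_000):
    route({"source": f"machine-{i % 10}", "value": i})

# Each worker drains its own queue in parallel, multiplying drain capacity.
```

Splitting the trigger or watchlist achieves the same effect declaratively: each pipeline subscribes to a disjoint subset, so no single inbound queue carries the full event rate.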

Option C: Reduce incoming event volume (when you don’t need every message)

If processing every event is not required:

  • Downsample (e.g., process 1 out of N)

  • Publish only meaningful changes (“on change”) rather than high-frequency full state

  • Reduce upstream noise where possible

This is the cleanest fix when the event rate is higher than the business need.
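The two reduction strategies above can be sketched with hypothetical helpers (the function names and sample readings are made up for illustration):

```python
# Illustrative sketch: two ways to cut event volume before it reaches
# the pipeline - keep 1 out of N events, or forward only on change.

def downsample(events, n):
    """Keep every n-th event (process 1 out of N)."""
    return [e for i, e in enumerate(events) if i % n == 0]

def on_change(events):
    """Forward an event only when its value differs from the previous one."""
    out, last = [], object()   # sentinel so the first event always passes
    for e in events:
        if e != last:
            out.append(e)
            last = e
    return out

readings = [20, 20, 20, 21, 21, 20, 20, 20]
downsample(readings, 4)   # -> [20, 21]
on_change(readings)       # -> [20, 21, 20]
```

On-change publishing is usually preferable when the data is mostly steady state, since it preserves every meaningful transition while discarding the repetition.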

Option D: Optimize the slowest stage

If transforms/enrichment are the bottleneck:

  • Simplify expressions/transforms

  • Reduce per-event external calls

  • Avoid expensive per-event operations when the result can be computed less frequently or cached
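The caching idea in the last bullet can be sketched like this. The `enrich` lookup is hypothetical (a stand-in for any expensive per-event external call):

```python
# Illustrative sketch: cache the result of an expensive per-event
# operation so repeated inputs are computed only once.
from functools import lru_cache

calls = 0  # counts how often the expensive lookup actually runs

@lru_cache(maxsize=1024)
def enrich(asset_id):
    """Stand-in for an expensive external lookup (DB query, REST call)."""
    global calls
    calls += 1
    return {"asset": asset_id, "site": f"site-{asset_id[-1]}"}

# Hypothetical load: 1,000 events over only 10 distinct assets,
# so the expensive lookup runs 10 times instead of 1,000.
for i in range(1_000):
    enrich(f"asset-{i % 10}")
```

Caching only helps when inputs repeat; if every event is unique, reduce the cost of the call itself or move it out of the per-event path.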

Recommended troubleshooting workflow

  1. Confirm the trigger is event-driven (Event Trigger or Flow Trigger in event mode). User Guide

  2. Use Statistics to find which stage is slowing execution. User Guide

  3. Apply fixes:

    • Batch before write (Size Buffer) User Guide

    • Split/load-balance across pipelines

    • Reduce event volume if acceptable
