Trigger.dev
May 1, 2026
trigger.dev v4.4.5

Upgrade

npx trigger.dev@latest update  # npm
pnpm dlx trigger.dev@latest update  # pnpm
yarn dlx trigger.dev@latest update  # yarn
bunx trigger.dev@latest update  # bun

Self-hosted Docker image: ghcr.io/triggerdotdev/trigger.dev:v4.4.5

Release notes

Read the full release notes: https://trigger.dev/changelog/v4-4-5

What's changed

Breaking changes

  • Add server-side deprecation gate for deploys from v3 CLI versions (gated by DEPRECATE_V3_CLI_DEPLOYS_ENABLED). v4 CLI deploys are unaffected. (#3415)

Improvements

  • Add --no-browser flag to init and login to skip auto-opening the browser during authentication. Also error loudly when init is run without --yes under non-TTY stdin (previously the command silently applied defaults and exited, leaving the project half-initialized). Both commands now show an Examples section in --help. (#3483)
  • Add isReplay boolean to the run context (ctx.run.isReplay), derived from the existing replayedFromTaskRunFriendlyId database field. Defaults to false for backwards compatibility. (#3454)
  • Redact the resolveWaitpoint runtime log so it only emits id and type instead of the full completed waitpoint. Previously the log printed the entire waitpoint (including output) to stdout in production runs, which could leak sensitive payloads. The value returned by wait.forToken() is unchanged. (#3490)
  • Add SessionId friendly ID generator and schemas for the new durable Session primitive. Exported from @trigger.dev/core/v3/isomorphic alongside RunId, BatchId, etc. Ships the CreateSessionStreamWaitpoint request/response schemas alongside the main Session CRUD. (#3417)
  • Truncate large error stacks and messages to prevent OOM crashes. Stack traces are capped at 50 frames (keeping top 5 + bottom 45 with an omission notice), individual stack lines at 1024 chars, and error messages at 1000 chars. Applied in parseError, sanitizeError, and OTel span recording. (#3405)
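
As a sketch, the truncation policy above amounts to the following (helper names are illustrative, not the actual parseError/sanitizeError internals):

```typescript
// Illustrative sketch of the truncation policy (helper names are ours,
// not the actual parseError/sanitizeError internals).
const MAX_FRAMES = 50;   // total retained stack frames
const HEAD_FRAMES = 5;   // keep the top of the stack...
const TAIL_FRAMES = 45;  // ...and the bottom, with a notice in between
const MAX_LINE_CHARS = 1024;
const MAX_MESSAGE_CHARS = 1000;

function capLine(line: string): string {
  return line.length > MAX_LINE_CHARS ? line.slice(0, MAX_LINE_CHARS) : line;
}

function truncateFrames(frames: string[]): string[] {
  if (frames.length <= MAX_FRAMES) return frames.map(capLine);
  const omitted = frames.length - MAX_FRAMES;
  return [
    ...frames.slice(0, HEAD_FRAMES),
    `    ... ${omitted} frames omitted ...`,
    ...frames.slice(-TAIL_FRAMES),
  ].map(capLine);
}

function truncateMessage(message: string): string {
  return message.length > MAX_MESSAGE_CHARS
    ? message.slice(0, MAX_MESSAGE_CHARS)
    : message;
}
```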

Server changes

These changes affect the self-hosted Docker image and Trigger.dev Cloud:

  • Add a "Back office" tab to /admin and a per-organization detail page at /admin/back-office/orgs/:orgId. The first action available on that page is editing the org's API rate limit: admins can save a tokenBucket override (refill rate, interval, max tokens) and see a plain-English preview of the resulting sustained rate and burst allowance. Writes are audit-logged via the server logger. (#3434)
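
For reference, the plain-English preview can be computed from the tokenBucket fields like this (field names such as intervalMs are assumptions for illustration):

```typescript
// Sketch of the plain-English preview for a tokenBucket override.
// Sustained rate = refill rate ÷ interval; burst = bucket capacity.
// Field names (intervalMs etc.) are assumptions for illustration.
interface TokenBucketOverride {
  refillRate: number; // tokens added per interval
  intervalMs: number; // refill interval in milliseconds
  maxTokens: number;  // bucket capacity (burst allowance)
}

function describeRateLimit(o: TokenBucketOverride): string {
  const perSecond = o.refillRate / (o.intervalMs / 1000);
  return `~${perSecond.toFixed(1)} requests/sec sustained, bursts up to ${o.maxTokens}`;
}
```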

  • Add an optional DEPLOY_REGISTRY_ECR_DEFAULT_REPOSITORY_POLICY env var to apply a default repository policy when the webapp creates new ECR repos (#3467)

  • Ship the Errors page to all users, with a polish + bug-fix pass: pinned "No channel" item in the Slack alert channel picker, viewer-timezone alert timestamps via Slack's <!date^> token, Activity sparkline peak tooltip, centered loading spinner and bug-icon empty state on the error detail page, ellipsis on the Configure alerts trigger. (#3477)

  • Configure the set of machine presets to build boot snapshots for at deploy time via COMPUTE_TEMPLATE_MACHINE_PRESETS (CSV of preset names, default small-1x). Use COMPUTE_TEMPLATE_MACHINE_PRESETS_REQUIRED (CSV, default = full PRESETS list) to scope which preset failures fail a required-mode deploy. Optional preset failures are logged and don't block the deploy. (#3492)

  • Regenerating a RuntimeEnvironment API key no longer invalidates the previous key immediately. The old key is recorded in a new RevokedApiKey table with a 24 hour grace window, and findEnvironmentByApiKey falls back to it when the submitted key doesn't match any live environment. The grace window can be ended early (or extended) by updating expiresAt on the row. (#3420)

  • Add the Session primitive — a durable, task-bound, bidirectional I/O channel that outlives a single run and acts as the run manager for chat.agent. Ships the Postgres Session + SessionRun tables, ClickHouse sessions_v1 + replication service, the sessions JWT scope, and the public CRUD + realtime routes (/api/v1/sessions, /realtime/v1/sessions/:session/:io) including end-and-continue for server-orchestrated run handoffs and session-stream waitpoints. (#3417)

  • Add KUBERNETES_POD_DNS_NDOTS_OVERRIDE_ENABLED flag (off by default) that overrides the cluster default and sets dnsConfig.options.ndots on runner pods (defaulting to 2, configurable via KUBERNETES_POD_DNS_NDOTS). Kubernetes defaults pods to ndots: 5, so any name with fewer than 5 dots — including typical external domains like api.example.com — is first walked through every entry in the cluster search list (<ns>.svc.cluster.local, svc.cluster.local, cluster.local) before being tried as-is, turning one resolution into 4+ CoreDNS queries (×2 with A+AAAA). Using a lower ndots value reduces DNS query amplification in the cluster.local zone.

    Note: before enabling, make sure no code path relies on search-list expansion for names with dots ≥ the configured value — those names will hit their as-is form first and could resolve externally before falling back to the cluster search path. (#3441)
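
A back-of-envelope model of the amplification (worst case, where every search-domain lookup misses before the as-is name is tried):

```typescript
// Back-of-envelope model of ndots-driven DNS query amplification.
// With ndots: 5, a name with fewer than 5 dots is tried against every
// search domain before its as-is form; each name attempt costs an A and
// an AAAA query. Worst case assumed: all search-domain attempts miss.
function dnsQueryCount(
  name: string,
  ndots: number,
  searchDomains: number
): number {
  const dots = (name.match(/\./g) ?? []).length;
  const nameAttempts = dots >= ndots ? 1 : searchDomains + 1;
  return nameAttempts * 2; // A + AAAA
}
```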

  • Add a Vercel integration option to disable auto promotions (#3376)

  • Make it clear in the admin that feature flags are global and should rarely be changed. (#3408)

  • Admin worker groups API: add GET loader and expose more fields on POST. (#3390)

  • Add 60s fresh / 60s stale SWR cache to getEntitlement in platform.v3.server.ts. Eliminates a synchronous billing-service HTTP round trip on every trigger. Reuses the existing platformCache (LRU memory + Redis) pattern already used for limits and usage. Cache key is ${orgId}. Errors return a permissive { hasAccess: true } fallback (existing behavior) and are also cached to prevent thundering-herd on billing outages. (#3388)
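
A minimal stale-while-revalidate cache of this shape, assuming a plain in-memory map instead of the real LRU + Redis layers:

```typescript
// Minimal stale-while-revalidate cache, assuming a plain in-memory map
// instead of the real LRU + Redis layers. Fresh entries are served
// directly; stale-but-usable entries are served while a background
// refresh runs; anything older is fetched synchronously.
type Entry<T> = { value: T; fetchedAt: number };

function createSwrCache<T>(
  fetcher: (key: string) => Promise<T>,
  freshMs: number,
  staleMs: number
) {
  const entries = new Map<string, Entry<T>>();
  return async function get(key: string): Promise<T> {
    const now = Date.now();
    const hit = entries.get(key);
    if (hit && now - hit.fetchedAt < freshMs) return hit.value;
    if (hit && now - hit.fetchedAt < freshMs + staleMs) {
      // Serve stale immediately; refresh in the background.
      fetcher(key)
        .then((value) => entries.set(key, { value, fetchedAt: Date.now() }))
        .catch(() => {}); // swallow refresh errors, keep the stale value
      return hit.value;
    }
    const value = await fetcher(key);
    entries.set(key, { value, fetchedAt: now });
    return value;
  };
}
```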

  • Show a MicroVM badge next to the region name on the regions page. (#3407)

  • Increase default maximum project count per organization from 10 to 25 (#3409)

  • Merge execution snapshot creation into the dequeue taskRun.update transaction, reducing 2 DB commits to 1 per dequeue operation (#3395)

  • Add per-worker Node.js heap metrics to the OTel meter — nodejs.memory.heap.used, nodejs.memory.heap.total, nodejs.memory.heap.limit, nodejs.memory.external, nodejs.memory.array_buffers, nodejs.memory.rss. Host-metrics only publishes RSS, which overstates V8 heap by the external + native footprint; these give direct heap visibility per cluster worker so NODE_MAX_OLD_SPACE_SIZE can be sized against observed heap peaks rather than RSS. (#3437)
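
These metric names map directly onto Node's built-in memory APIs; a sketch of what the meter reads (illustrative, not the actual OTel exporter code):

```typescript
import v8 from "node:v8";

// Illustrative mapping of the metric names above onto Node's built-in
// memory APIs (not the actual OTel exporter code).
function heapMetrics(): Record<string, number> {
  const mem = process.memoryUsage();
  const heap = v8.getHeapStatistics();
  return {
    "nodejs.memory.heap.used": mem.heapUsed,
    "nodejs.memory.heap.total": mem.heapTotal,
    "nodejs.memory.heap.limit": heap.heap_size_limit,
    "nodejs.memory.external": mem.external,
    "nodejs.memory.array_buffers": mem.arrayBuffers,
    "nodejs.memory.rss": mem.rss,
  };
}
```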

  • Tag Prisma spans with db.datasource: "writer" | "replica" so monitors and trace queries can distinguish the writer pool from the replica pool. Applies to all prisma:engine:* spans (including prisma:engine:connection used by the connection-pool monitors) and the outer prisma:client:operation span. (#3422)

  • Clarify the cross-region intent in the Terraform and AI-prompt helpers on the Add Private Connection page. Both already default supported_regions to ["us-east-1", "eu-central-1"]; added an inline comment / parenthetical so the user understands why both regions are listed (Trigger.dev runs in both, so the service must be consumable from either). (#3465)

  • Add RUN_ENGINE_READ_REPLICA_SNAPSHOTS_SINCE_ENABLED flag (default off) to route the Prisma reads inside RunEngine.getSnapshotsSince through the read-only replica client. Offloads the snapshot polling queries (fired by every running task runner) from the primary. When disabled, behavior is unchanged. (#3423)

  • Stop creating TaskRunTag records and _TaskRunToTaskRunTag join table entries during task triggering. The denormalized runTags string array on TaskRun already stores tag names, making the M2M relation redundant write overhead. (#3369)

  • Stop writing per-tick state (lastScheduledTimestamp, nextScheduledTimestamp, lastRunTriggeredAt) on TaskSchedule and TaskScheduleInstance. The schedule engine now carries the previous fire time forward via the worker queue payload, eliminating ~270K dead-tuple-driven autovacuums per year on these hot tables and the associated IO:XactSync mini-spikes on the writer. Customer-facing payload.lastTimestamp semantics are unchanged. (#3476)

  • Replace the expensive DISTINCT query for task filter dropdowns with a dedicated TaskIdentifier registry table backed by Redis. Environments migrate automatically on their next deploy, with a transparent fallback to the legacy query for unmigrated environments. Also fixes duplicate dropdown entries when a task changes trigger source, and adds active/archived grouping for removed tasks. Moves BackgroundWorkerTask reads in the trigger hot path to the read replica. (#3368)

  • Public Access Tokens (PATs) minted before an API key rotation now keep working during the 24h grace window. validatePublicJwtKey falls back to any non-expired RevokedApiKey rows for the signing environment when the primary signature check against the env's current apiKey fails. The fallback query only runs on the failure path, so the hot success path is unchanged. (#3464)

  • Batch items that hit the environment queue size limit now fast-fail without retries and without creating pre-failed TaskRuns. (#3352)

  • Show the cancel button in the runs list for runs in DEQUEUED status. DEQUEUED was missing from NON_FINAL_RUN_STATUSES so the list hid the button even though the single run page allowed it. (#3421)

  • Reduce 5xx feedback loops on hot debounce keys by quantizing delayUntil, adding an unlocked fast-path skip, and gracefully handling redlock contention in handleDebounce so the SDK no longer retries into a herd. (#3453)
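
Quantizing delayUntil can be sketched as follows (the bucket size here is an assumption, not the actual value):

```typescript
// Illustrative: snapping delayUntil to a coarse bucket means rapid
// re-triggers on the same debounce key compute the same target time, so
// the unlocked fast path can skip redundant writes instead of racing.
// The 500ms bucket is an assumption, not the actual value.
function quantizeDelayUntil(delayUntilMs: number, bucketMs = 500): number {
  return Math.ceil(delayUntilMs / bucketMs) * bucketMs;
}
```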

  • Fix RSS memory leak in the realtime proxy routes. /realtime/v1/runs, /realtime/v1/runs/:id, and /realtime/v1/batches/:id called fetch() into Electric with no abort signal, so when a client disconnected mid long-poll, undici kept the upstream socket open and buffered response chunks that would never be consumed — retained only in RSS, invisible to V8 heap tooling. Thread getRequestAbortSignal() through RealtimeClient.streamRun/streamRuns/streamBatch to longPollingFetch and cancel the upstream body in the error path. Isolated reproducer showed ~44 KB retained per leaked request; signal propagation releases it cleanly. (#3442)

  • Fix memory leak where every aborted SSE connection pinned the full request/response graph on Node 20, caused by AbortSignal.any() in sse.ts retaining its source signals indefinitely (see nodejs/node#54614, nodejs/node#55351). Also clear the setTimeout(abort) timer in entry.server.tsx so successful HTML renders don't pin the React tree for 30s per request. (#3430)

  • Preserve filters on the queues page when submitting modal actions. (#3471)

  • Fix Redis connection leak in realtime streams and broken abort signal propagation.

    Redis connections: Non-blocking methods (ingestData, appendPart, getLastChunkIndex) now share a single Redis connection instead of creating one per request. streamResponse still uses dedicated connections (required for XREAD BLOCK) but now tears them down immediately via disconnect() instead of graceful quit(), with a 15s inactivity fallback.

    Abort signal: request.signal is broken in Remix/Express due to a Node.js undici GC bug (nodejs/node#55428) that severs the signal chain when Remix clones the Request internally. Added getRequestAbortSignal() wired to Express res.on("close") via httpAsyncStorage, which fires reliably on client disconnect. All SSE/streaming routes updated to use it. (#3399)
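
The res.on("close") approach can be sketched as follows (simplified; the real code threads this through httpAsyncStorage):

```typescript
import { EventEmitter } from "node:events";

// Sketch of the workaround: instead of trusting request.signal (severed
// by the undici GC bug referenced above), derive an AbortSignal from the
// response's "close" event, which fires reliably when the client
// disconnects, and pass it to upstream fetch() calls.
function getAbortSignalFromResponse(res: EventEmitter): AbortSignal {
  const controller = new AbortController();
  res.once("close", () => controller.abort());
  return controller.signal;
}
```

An upstream call then becomes fetch(url, { signal }), so undici cancels the long-poll and releases any buffered body when the client goes away.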

  • Prevent dashboard crash (React error #31) when span accessory item text is not a string. Filters out malformed accessory items in SpanCodePathAccessory instead of passing objects to React as children. (#3400)

  • Upgrade Remix packages from 2.1.0 to 2.17.4 to address security vulnerabilities in React Router (#3372)

  • Fix Vercel integration settings page (remove redundant section toggles) and improve the Vercel onboarding flow so the modal closes after connecting a GitHub repo and the marketplace next URL is preserved across the GitHub app install redirect. (#3424)

All packages: v4.4.5

@trigger.dev/build, @trigger.dev/core, @trigger.dev/python, @trigger.dev/react-hooks, @trigger.dev/redis-worker, @trigger.dev/rsc, @trigger.dev/schema-to-json, @trigger.dev/sdk, trigger.dev

Contributors

Eric Allam, @nicktrn, @isshaddad, devin-ai-integration[bot], Matt Aitken, Oskar Otwinowski, @D-K-P, @ThullyoCunha, github-actions[bot], Saadi Myftija

Full changelog: https://github.com/triggerdotdev/trigger.dev/compare/v4.4.4...v4.4.5

Apr 13, 2026
trigger.dev v4.4.4

Upgrade

npx trigger.dev@latest update  # npm
pnpm dlx trigger.dev@latest update  # pnpm
yarn dlx trigger.dev@latest update  # yarn
bunx trigger.dev@latest update  # bun

Self-hosted Docker image: ghcr.io/triggerdotdev/trigger.dev:v4.4.4

Release notes

Read the full release notes: https://trigger.dev/changelog/v4-4-4

What's changed

Highlights

  • Add support for setting TTL (time-to-live) defaults at the task level and globally in trigger.config.ts, with per-trigger overrides still taking precedence (#3196)
  • Large run outputs now go through a new object storage API that supports switching storage providers. (#3275)
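
The TTL precedence (per-trigger over task-level over global) reduces to a simple fallback chain; a sketch with hypothetical parameter names:

```typescript
// Hedged sketch: per-trigger ttl overrides the task-level default, which
// overrides the global default from trigger.config.ts. Parameter names
// are illustrative.
function resolveTtl(
  triggerTtl: string | undefined,
  taskTtl: string | undefined,
  globalTtl: string | undefined
): string | undefined {
  return triggerTtl ?? taskTtl ?? globalTtl;
}
```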

CLI Improvements

  • Add platform notifications support to the CLI. The trigger dev and trigger login commands now fetch and display platform notifications (info, warn, error, success) from the server. Includes discovery-based filtering to conditionally show notifications based on project file patterns, color markup rendering for styled terminal output, and a non-blocking display flow with a spinner fallback for slow fetches. Use --skip-platform-notifications flag with trigger dev to disable the notification check. (#3254)

MCP Server improvements

  • Add get_span_details MCP tool for inspecting individual spans within a run trace. (#3255)
  • Span IDs now shown in get_run_details trace output for easy discovery
  • New API endpoint GET /api/v1/runs/:runId/spans/:spanId
  • get_query_schema — discover available TRQL tables and columns
  • query — execute TRQL queries against your data
  • list_dashboards — list built-in dashboards and their widgets
  • run_dashboard_query — execute a single dashboard widget query
  • whoami — show current profile, user, and API URL
  • list_profiles — list all configured CLI profiles
  • switch_profile — switch active profile for the MCP session
  • start_dev_server — start trigger dev in the background and stream output
  • stop_dev_server — stop the running dev server
  • dev_server_status — check dev server status and view recent logs
  • GET /api/v1/query/schema — query table schema discovery
  • GET /api/v1/query/dashboards — list built-in dashboards
  • --readonly flag hides write tools (deploy, trigger_task, cancel_run) so the AI cannot make changes
  • read:query JWT scope for query endpoint authorization
  • get_run_details trace output is now paginated with cursor support
  • MCP tool annotations (readOnlyHint, destructiveHint) for all tools
  • get_query_schema now requires a table name and returns only one table's schema (was returning all tables)
  • get_current_worker no longer inlines payload schemas; use new get_task_schema tool instead
  • Query results formatted as text tables instead of JSON (~50% fewer tokens)
  • cancel_run, list_deploys, list_preview_branches formatted as text instead of raw JSON
  • Schema and dashboard API responses cached to avoid redundant fetches
  • Adapted the CLI API client to propagate the trigger source via http headers. (#3241)

Bug fixes

  • Fix dev CLI leaking build directories on rebuild, causing disk space accumulation. Deprecated workers are now pruned (capped at 2 retained) when no active runs reference them. The watchdog process also cleans up .trigger/tmp/ when the dev CLI is killed ungracefully (e.g. SIGKILL from pnpm). (#3224)
  • Fix --load flag being silently ignored on local/self-hosted builds. (#3114)
  • Fixed search_docs tool failing due to renamed upstream Mintlify tool (SearchTriggerDev → search_trigger_dev)
  • Fixed list_deploys failing when deployments have null runtime/runtimeVersion fields (#3139)
  • Fixed list_preview_branches crashing due to incorrect response shape access
  • Fixed metrics table column documented as value instead of metric_value in query docs

Server changes

These changes affect the self-hosted Docker image and Trigger.dev Cloud:

  • Add admin UI for viewing and editing feature flags (org-level overrides and global defaults). (#3291)

Other improvements:

  • Add allowRollbacks query param to the promote deployment API to enable version downgrades (#3214)

  • Add automatic LLM cost calculation for spans with GenAI semantic conventions. When a span arrives with gen_ai.response.model and token usage data, costs are calculated from an in-memory pricing registry backed by Postgres and dual-written to both span attributes (trigger.llm.*) and a new llm_metrics_v1 ClickHouse table that captures usage, cost, performance (TTFC, tokens/sec), and behavioral (finish reason, operation type) metrics. (#3213)

  • Add API endpoint GET /api/v1/runs/:runId/spans/:spanId that returns detailed span information including properties, events, AI enrichment (model, tokens, cost), and triggered child runs. (#3255)

  • Multi-provider object storage with protocol-based routing for zero-downtime migration (#3275)

  • Add IAM role-based auth support for object stores (no access keys required). (#3275)

  • Add platform notifications to inform users about new features, changelogs, and platform events directly in the dashboard. (#3254)

  • Add private networking support via AWS PrivateLink. Includes BillingClient methods for managing private connections, org settings UI pages for connection management, and supervisor changes to apply privatelink pod labels for CiliumNetworkPolicy matching. (#3264)

  • Reduce run start latency by skipping the intermediate queue when concurrency is available. This optimization is rolled out per-region and enabled automatically for development environments. (#3299)

  • Extended the search filter on the environment variables page to match on environment type (production, staging, development, preview) and branch name, not just variable name and value. (#3302)

  • Set application_name on Prisma connections from SERVICE_NAME so DB load can be attributed by service (#3348)

  • Fix transient R2/object store upload failures during batchTrigger() item streaming.

    • Added p-retry (3 attempts, 500ms–2s exponential backoff) around uploadPacketToObjectStore in BatchPayloadProcessor.process() so transient network errors self-heal server-side rather than aborting the entire batch stream.
    • Removed x-should-retry: false from the 500 response on the batch items route so the SDK's existing 5xx retry path can recover if server-side retries are exhausted. Item deduplication by index makes full-stream retries safe. (#3331)
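
The server-side retry described in #3331 has this rough shape (a sketch without the p-retry dependency; delays mirror the 500ms–2s exponential backoff, and this is not the actual BatchPayloadProcessor code):

```typescript
// Rough shape of the server-side retry, without the p-retry dependency.
// Delays mirror the 500ms–2s exponential backoff; an illustration, not
// the actual BatchPayloadProcessor code.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  attempts = 3,
  minDelayMs = 500,
  maxDelayMs = 2000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        const delayMs = Math.min(minDelayMs * 2 ** attempt, maxDelayMs);
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}
```
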
  • Concurrency-keyed queues now use a single master queue entry per base queue instead of one entry per key. Prevents high-CK-count tenants from consuming the entire parentQueueLimit window and starving other tenants on the same shard. (#3219)

  • Reduce lock contention when processing large batchTriggerAndWait batches. Previously, each batch item acquired a Redis lock on the parent run to insert a TaskRunWaitpoint row, causing LockAcquisitionTimeoutError with high concurrency (880 errors/24h in prod). Since blockRunWithCreatedBatch already transitions the parent to EXECUTING_WITH_WAITPOINTS before items are processed, the per-item lock is unnecessary. The new blockRunWithWaitpointLockless method performs only the idempotent CTE insert without acquiring the lock. (#3232)

  • Strip secure query parameter from QUERY_CLICKHOUSE_URL before passing to ClickHouse client. This was already done for the main and logs ClickHouse clients but was missing for the query client, causing a startup crash with Error: Unknown URL parameters: secure. (#3204)

  • Fix OrganizationsPresenter.#getEnvironment matching the wrong development environment on teams with multiple members. All dev environments share the slug "dev", so the previous find by slug alone could return another member's environment. Now filters DEVELOPMENT environments by orgMember.userId to ensure the logged-in user's dev environment is selected. (#3273)

All packages: v4.4.4

@trigger.dev/build, @trigger.dev/core, @trigger.dev/python, @trigger.dev/react-hooks, @trigger.dev/redis-worker, @trigger.dev/rsc, @trigger.dev/schema-to-json, @trigger.dev/sdk, trigger.dev

Contributors

Eric Allam, Oskar Otwinowski, Matt Aitken, James Ritchie, @nicktrn, Saadi Myftija, @D-K-P, @isshaddad, github-actions[bot], @chengzp, Dinko Osrecki

Full changelog: https://github.com/triggerdotdev/trigger.dev/compare/v4.4.3...v4.4.4

Mar 10, 2026
trigger.dev v4.4.3

Upgrade

npx trigger.dev@latest update  # npm
pnpm dlx trigger.dev@latest update  # pnpm
yarn dlx trigger.dev@latest update  # yarn
bunx trigger.dev@latest update  # bun

Self-hosted Docker image: ghcr.io/triggerdotdev/trigger.dev:v4.4.3

Release notes

Read the full release notes: https://trigger.dev/changelog/v4-4-3

What's changed

Improvements

  • Add syncSupabaseEnvVars to pull database connection strings and save them as trigger.dev environment variables (#3152)
  • Auto-cancel in-flight dev runs when the CLI exits, using a detached watchdog process that survives pnpm SIGKILL (#3191)

Server changes

These changes affect the self-hosted Docker image and Trigger.dev Cloud:

  • A new Errors page for viewing and tracking errors that cause runs to fail

    • Errors are grouped using error fingerprinting
    • View top errors for a time period, filter by task, or search the text
    • View occurrences over time
    • View all the runs for an error and bulk replay them (#3172)
  • Add sidebar tabs (Options, AI, Schema) to the Test page for schemaTask payload generation and schema viewing. (#3188)
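
Error fingerprinting of the kind described above can be sketched as follows (the normalization rules here are assumptions for illustration, not the actual grouping logic):

```typescript
import { createHash } from "node:crypto";

// Illustrative fingerprint: normalize away volatile parts of the message
// (numbers, hex ids, quoted strings) so recurring errors group together,
// then hash the stable remainder. Normalization rules are assumptions.
function fingerprintError(name: string, message: string): string {
  const normalized = message
    .replace(/0x[0-9a-f]+/gi, "<hex>")
    .replace(/["'][^"']*["']/g, "<str>")
    .replace(/\d+/g, "<n>");
  return createHash("sha256")
    .update(`${name}:${normalized}`)
    .digest("hex")
    .slice(0, 16);
}
```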

All packages: v4.4.3

@trigger.dev/build, @trigger.dev/core, @trigger.dev/python, @trigger.dev/react-hooks, @trigger.dev/redis-worker, @trigger.dev/rsc, @trigger.dev/schema-to-json, @trigger.dev/sdk, trigger.dev

Contributors

Eric Allam, Matt Aitken, James Ritchie, Oskar Otwinowski

Full changelog: https://github.com/triggerdotdev/trigger.dev/compare/v4.4.2...v4.4.3

Mar 4, 2026
trigger.dev v4.4.2

Upgrade

npx trigger.dev@latest update  # npm
pnpm dlx trigger.dev@latest update  # pnpm
yarn dlx trigger.dev@latest update  # yarn
bunx trigger.dev@latest update  # bun

Self-hosted Docker image: ghcr.io/triggerdotdev/trigger.dev:v4.4.2

Release notes

Read the full release notes: https://trigger.dev/changelog/v4-4-2

What's changed

Improvements

  • Add input streams for bidirectional communication with running tasks. Define typed input streams with streams.input<T>({ id }), then consume inside tasks via .wait() (suspends the process), .once() (waits for next message), or .on() (subscribes to a continuous stream). Send data from backends with .send(runId, data) or from frontends with the new useInputStreamSend React hook. (#3146)
  • Add PAYLOAD_TOO_LARGE error to handle graceful recovery of sending batch trigger items with payloads that exceed the maximum payload size (#3137)

Bug fixes

  • Fix slow batch queue processing by removing spurious cooloff on concurrency blocks and fixing a race condition where retry attempt counts were not atomically updated during message re-queue. (#3079)
  • Fix SDK batch triggerAndWait variants returning unknown instead of the correct run.taskIdentifier (#3080)

Server changes

These changes affect the self-hosted Docker image and Trigger.dev Cloud:

  • Two-level tenant dispatch architecture for batch queue processing. Replaces the single master queue with a two-level index: a dispatch index (tenant → shard) and per-tenant queue indexes (tenant → queues). This enables O(1) tenant selection and fair scheduling across tenants regardless of queue count. Improves batch queue processing performance. (#3133)

  • Add input streams with API routes for sending data to running tasks, SSE reading, and waitpoint creation. Includes Redis cache for fast .send() to .wait() bridging, dashboard span support for input stream operations, and s2-lite support with configurable S2 endpoint, access token skipping, and S2-Basin headers for self-hosted deployments. Adds s2-lite to Docker Compose for local development. (#3146)

  • Speed up batch queue processing by disabling cooloff and increasing the batch queue processing concurrency limits on the cloud:

    • Pro plan: increase to 50 from 10.
    • Hobby plan: increase to 10 from 5.
    • Free plan: increase to 5 from 1. (#3079)
  • Move batch queue global rate limiter from FairQueue claim phase to BatchQueue worker queue consumer for accurate per-item rate limiting. Add worker queue depth cap to prevent unbounded growth that could cause visibility timeouts. (#3166)

  • Fix a race condition in the waitpoint system where a run could be blocked by a completed waitpoint but never be resumed because of a PostgreSQL MVCC issue. This was most likely to occur when creating a waitpoint via wait.forToken() at the same moment as completing the token with wait.completeToken(). Other types of waitpoints (timed, child runs) were not affected. (#3075)

  • Fix metrics dashboard chart series colors going out of sync and widgets not reloading stale data when scrolled back into view (#3126)

  • Gracefully handle oversized batch items instead of aborting the stream.

    When an NDJSON batch item exceeds the maximum size, the parser now emits an error marker instead of throwing, allowing the batch to seal normally. The oversized item becomes a pre-failed run with PAYLOAD_TOO_LARGE error code, while other items in the batch process successfully. This prevents batchTriggerAndWait from seeing connection errors and retrying with exponential backoff.

    Also fixes the NDJSON parser not consuming the remainder of an oversized line split across multiple chunks, which caused "Invalid JSON" errors on subsequent lines. (#3137)

  • Require that the real user is an admin during an impersonation session. Previously only the impersonation cookie was checked; now the real user's admin flag is verified on every request. If admin has been revoked, the session falls back to the real user's ID. (#3078)

All packages: v4.4.2

@trigger.dev/build, @trigger.dev/core, @trigger.dev/python, @trigger.dev/react-hooks, @trigger.dev/redis-worker, @trigger.dev/rsc, @trigger.dev/schema-to-json, @trigger.dev/sdk, trigger.dev

Full changelog: https://github.com/triggerdotdev/trigger.dev/compare/v4.4.1...v4.4.2

Patch Changes

  • Add OTEL metrics pipeline for task workers. Workers collect process CPU/memory, Node.js runtime metrics (event loop utilization, event loop delay, heap usage), and user-defined custom metrics via otel.metrics.getMeter(). Metrics are exported to ClickHouse with 10-second aggregation buckets and 1m/5m rollups, and are queryable through the dashboard query engine with typed attribute columns, prettyFormat() for human-readable values, and AI query support. (#3061)
  • Updated dependencies:
    • @trigger.dev/sdk@4.4.1
    • @trigger.dev/build@4.4.1
    • @trigger.dev/core@4.4.1
    • @trigger.dev/schema-to-json@4.4.1

Feb 19, 2026
trigger.dev v4.4.0

Minor Changes

  • Added query.execute() which lets you query your Trigger.dev data using TRQL (Trigger Query Language) and returns results as typed JSON rows or CSV. It supports configurable scope (environment, project, or organization), time filtering via period or from/to ranges, and a format option for JSON or CSV output. (#3060)

    import { query } from "@trigger.dev/sdk";
    import type { QueryTable } from "@trigger.dev/sdk";
    
    // Basic untyped query
    const result = await query.execute("SELECT run_id, status FROM runs LIMIT 10");
    
    // Type-safe query using QueryTable to pick specific columns
    const typedResult = await query.execute<QueryTable<"runs", "run_id" | "status" | "triggered_at">>(
      "SELECT run_id, status, triggered_at FROM runs LIMIT 10"
    );
    typedResult.results.forEach((row) => {
      console.log(row.run_id, row.status); // Fully typed
    });
    
    // Aggregation query with inline types
    const stats = await query.execute<{ status: string; count: number }>(
      "SELECT status, COUNT(*) as count FROM runs GROUP BY status",
      { scope: "project", period: "30d" }
    );
    
    // CSV export
    const csv = await query.execute("SELECT run_id, status FROM runs", {
      format: "csv",
      period: "7d",
    });
    console.log(csv.results); // Raw CSV string

Patch Changes

  • Add maxDelay option to debounce feature. This allows setting a maximum time limit for how long a debounced run can be delayed, ensuring execution happens within a specified window even with continuous triggers. (#2984)

    await myTask.trigger(payload, {
      debounce: {
        key: "my-key",
        delay: "5s",
        maxDelay: "30m", // Execute within 30 minutes regardless of continuous triggers
      },
    });
  • Aligned the SDK's getRunIdForOptions logic with the Core package to handle semantic targets (root, parent) in root tasks. (#2874)

  • Export AnyOnStartAttemptHookFunction type to allow defining onStartAttempt hooks for individual tasks. (#2966)

  • Fixed a minor issue in the deployment command on distinguishing between local builds for the cloud vs local builds for self-hosting setups. (#3070)

  • Updated dependencies:

    • @trigger.dev/core@4.4.0

Patch Changes

  • Fix runner getting stuck indefinitely when execute() is called on a dead child process. (#2978)
  • Add optional timeoutInSeconds parameter to the wait_for_run_to_complete MCP tool. Defaults to 60 seconds. If the run doesn't complete within the timeout, the current state of the run is returned instead of waiting indefinitely. (#3035)
  • Updated dependencies:
    • @trigger.dev/core@4.4.0
    • @trigger.dev/sdk@4.4.0
    • @trigger.dev/build@4.4.0
    • @trigger.dev/schema-to-json@4.4.0