Langfuse has rolled out a simplified architecture that delivers significantly faster product performance at scale. The main table is now observations — every LLM call, tool execution, and agent step is a row you can query directly, replacing the previous trace-centric view.
Key Changes:
- Unified observations table: all inputs, outputs, and context attributes live directly on observations for faster filtering and aggregation
- Observations API v2 and Metrics API v2 designed for faster querying at scale
- Observation-level evaluations now execute in seconds
- Charts, filters, and APIs are much faster across Langfuse Cloud
- trace_id works like any other filter column (session_id, user_id, score) to group related observations
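Because trace_id is now an ordinary filter column, fetching "all observations in one trace" uses the same query shape as filtering by session_id or user_id. A minimal sketch of building such a query with only the standard library — the parameter names (traceId, sessionId, userId) and the endpoint path are illustrative assumptions, so check the Observations API v2 reference for the exact schema:

```python
from urllib.parse import urlencode

BASE_URL = "https://cloud.langfuse.com"  # or your self-hosted Langfuse host

def build_observation_query(trace_id=None, session_id=None, user_id=None, limit=50):
    """Assemble query parameters for an observations request.

    On the new architecture, trace_id is just another filter column,
    so it is passed the same way as session_id or user_id.
    Field names here are assumptions, not the confirmed v2 schema.
    """
    params = {"limit": limit}
    if trace_id is not None:
        params["traceId"] = trace_id
    if session_id is not None:
        params["sessionId"] = session_id
    if user_id is not None:
        params["userId"] = user_id
    return params

# Example: all observations belonging to one trace, as a ready-to-sign URL
# (endpoint path is illustrative; authenticate with your project API keys)
url = BASE_URL + "/api/public/v2/observations?" + urlencode(
    build_observation_query(trace_id="trace-abc123")
)
```

The same helper covers grouping by session or user: swap in `session_id="..."` or `user_id="..."` and the request shape stays identical.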
What's Faster:
- Charts load significantly faster, including reliably over large time ranges
- Faster browsing of traces, users, and sessions
- Quicker filter responses in large projects
- Faster evaluation workflows
Migration Required:
- Upgrade to Python SDK v4 and JS/TS SDK v5 to avoid ingestion delays and see new data in real time
- Use saved views and filters to focus on key operations across your observations table
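Before migrating, it can help to check which SDK major version a Python environment currently has installed. A minimal sketch using only the standard library — the `langfuse` package name and the `>=4` pin reflect the upgrade step above, but confirm the exact version constraints in the migration docs:

```python
from importlib import metadata

def langfuse_major_version():
    """Return the installed langfuse SDK's major version, or None if absent."""
    try:
        return int(metadata.version("langfuse").split(".")[0])
    except metadata.PackageNotFoundError:
        return None

major = langfuse_major_version()
if major is None:
    print("langfuse is not installed: pip install 'langfuse>=4'")
elif major < 4:
    print(f"langfuse v{major} detected: upgrade with pip install --upgrade 'langfuse>=4'")
else:
    print(f"langfuse v{major} detected: ready for the new architecture")
```

Running this in CI before a deploy gives an early warning if an environment is still pinned to an SDK version that will see delayed data.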
See the Fast Preview docs for rollout details, access, and migration steps.