app_version field (relevant for Connect).
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.5.17...inngest@0.5.18
https://github.com/inngest/inngest-py/compare/inngest@0.5.16...inngest@0.5.17
before_send_events error
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.5.15...inngest@0.5.16
https://github.com/inngest/inngest-py/compare/inngest@0.5.14...inngest@0.5.15
"authentication_succeeded": false from GET response
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.5.12...inngest@0.5.13
max_worker_concurrency field for Connect, to set the maximum number of simultaneous requests for a worker.
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.5.11...inngest@0.5.12
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.5.10...inngest@0.5.11
Server-Timing response header.
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.5.9...inngest@0.5.10
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.5.8...inngest@0.5.9
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.5.7...inngest@0.5.8
Timeouts class (as inngest.Timeouts).
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.5.6...inngest@0.5.7
if support in the batch config. This conditional boolean expression, when provided, determines which events get batched together for execution.
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.5.5...inngest@0.5.6
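The SDK evaluates the batch condition server-side; purely to illustrate the idea of "a boolean expression decides which events are batched," here is a plain-Python sketch. The names `partition_for_batching` and `batch_if` are hypothetical, not part of the inngest-py API:

```python
from typing import Callable, Dict, List, Tuple

Event = Dict[str, object]


def partition_for_batching(
    events: List[Event],
    batch_if: Callable[[Event], bool],
) -> Tuple[List[Event], List[Event]]:
    """Sketch: events matching the condition are batched together;
    the rest are executed individually."""
    batched: List[Event] = []
    individual: List[Event] = []
    for event in events:
        (batched if batch_if(event) else individual).append(event)
    return batched, individual


events: List[Event] = [
    {"name": "order.created", "data": {"bulk": True}},
    {"name": "order.created", "data": {"bulk": False}},
    {"name": "order.created", "data": {"bulk": True}},
]
batched, individual = partition_for_batching(
    events, lambda e: e["data"]["bulk"] is True  # type: ignore[index]
)
print(len(batched), len(individual))  # 2 1
```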
parallel_mode kwarg to group.parallel. Setting it to ParallelMode.RACE allows sequential steps in a parallel group to execute independently of other parallel groups, at the expense of more requests to your app.
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.5.4...inngest@0.5.5
PydanticSerializer
public_path arg to serve. This is useful when behind a path-rewriting proxy.
anyio.WouldBlock error when streaming is enabled
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.5.0...inngest@0.5.1
Release blog post here
Full migration guide here
step.infer (docs) -- currently experimental
step into ctx
The step object will be moved to ctx.step.
Before:
@inngest_client.create_function(
    fn_id="provision-user",
    trigger=inngest.TriggerEvent(event="user.signup"),
)
async def fn(ctx: inngest.Context, step: inngest.Step) -> None:
    await step.run("create-user", create_db_user)
After:
@inngest_client.create_function(
    fn_id="provision-user",
    trigger=inngest.TriggerEvent(event="user.signup"),
)
async def fn(ctx: inngest.Context) -> None:
    await ctx.step.run("create-user", create_db_user)
step.parallel will be removed in favor of a new ctx.group.parallel method. This method will behave the same way, so it's a drop-in replacement for step.parallel.
@client.create_function(
    fn_id="my-fn",
    trigger=inngest.TriggerEvent(event="my-event"),
)
async def fn(
    ctx: inngest.Context,
    step: inngest.Step,
) -> None:
    user_id = ctx.event.data["user_id"]

    await ctx.group.parallel(
        (
            lambda: step.run("update-user", update_user, user_id),
            lambda: step.run("send-email", send_email, user_id),
        )
    )
event.user
We're sunsetting event.user. It's already incompatible with some features (e.g. function run replay).
Setting an async on_failure on a non-async Inngest function will throw an error:
async def on_failure(ctx: inngest.Context) -> None:
    pass

@client.create_function(
    fn_id="foo",
    trigger=inngest.TriggerEvent(event="foo"),
    on_failure=on_failure,
)
def fn(ctx: inngest.ContextSync) -> None:
    pass
Setting a non-async on_failure on an async Inngest function will throw an error:
def on_failure(ctx: inngest.ContextSync) -> None:
    pass

@client.create_function(
    fn_id="foo",
    trigger=inngest.TriggerEvent(event="foo"),
    on_failure=on_failure,
)
async def fn(ctx: inngest.Context) -> None:
    pass
step.run
When passing a non-async callback to an async step.run, it will work at runtime, but there will be a static type error.
@client.create_function(
    fn_id="foo",
    trigger=inngest.TriggerEvent(event="foo"),
)
async def fn(ctx: inngest.Context) -> None:
    # Type error because `lambda: "hello"` is non-async.
    msg = await ctx.step.run("step", lambda: "hello")

    # Runtime value is "hello", as expected.
    print(msg)
inngest.Function is generic
The inngest.Function class is now generic over the return type. So if an Inngest function returns str, its type is inngest.Function[str].
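To illustrate what "generic over the return type" means, here is a minimal stand-in class built on typing.Generic. The `Function` class below is a sketch for explanation only, not the real inngest.Function implementation:

```python
from typing import Awaitable, Callable, Generic, TypeVar

R = TypeVar("R")


class Function(Generic[R]):
    """Stand-in for inngest.Function[R]: the type parameter R is the
    wrapped function's return type."""

    def __init__(self, fn: Callable[..., Awaitable[R]]) -> None:
        self.fn = fn


# An Inngest function returning `str` would be typed Function[str],
# so type checkers know what the run produces.
async def greet() -> str:
    return "hello"


greeting_fn: Function[str] = Function(greet)
```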
Use LIFO for the "after" hooks. In other words, when multiple middleware are specified, the "after" hooks run in reverse order.
For example, let's say the following middleware is defined and used:
class A(inngest.MiddlewareSync):
    def before_execution(self) -> None:
        ...

    def after_execution(self) -> None:
        ...

class B(inngest.MiddlewareSync):
    def before_execution(self) -> None:
        ...

    def after_execution(self) -> None:
        ...

inngest.Inngest(
    app_id="my-app",
    middleware=[A, B],
)
The middleware will be executed in the following order for each hook:
before_execution -- A then B.
after_execution -- B then A.
The "before" hooks are:
before_execution
before_response
before_send_events
transform_input
The "after" hooks are:
after_execution
after_send_events
transform_output
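The FIFO/LIFO ordering described above can be shown with a plain-Python sketch. This is not the SDK's middleware engine, just the ordering rule in miniature:

```python
from typing import List


class LoggingMiddleware:
    """Sketch of a middleware with one "before" and one "after" hook."""

    def __init__(self, name: str, log: List[str]) -> None:
        self.name = name
        self.log = log

    def before_execution(self) -> None:
        self.log.append(f"{self.name}.before_execution")

    def after_execution(self) -> None:
        self.log.append(f"{self.name}.after_execution")


def run_hooks(middleware: List[LoggingMiddleware]) -> None:
    # "Before" hooks run in registration order (FIFO)...
    for m in middleware:
        m.before_execution()

    # ...while "after" hooks run in reverse registration order (LIFO).
    for m in reversed(middleware):
        m.after_execution()


log: List[str] = []
run_hooks([LoggingMiddleware("A", log), LoggingMiddleware("B", log)])
print(log)
# ['A.before_execution', 'B.before_execution',
#  'B.after_execution', 'A.after_execution']
```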
before_memoization
after_memoization
inngest.experimental.encryption_middleware (it's now the inngest-encryption package).
experimental_execution option on functions. We won't support native asyncio methods (e.g. asyncio.gather) going forward.
Drop support for Python 3.9.
Bump dependency minimum versions:
httpx>=0.26.0
pydantic>=2.11.0
typing-extensions>=4.13.0
Bump peer dependency minimum versions:
Django>=5.0
Flask>=3.0.0
fastapi>=0.110.0
tornado>=6.4
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.4.21...inngest@0.4.22
inngest.experimental.connect package for Connect. It's experimental, but we don't anticipate the API changing much.
ctx.group to Inngest function args. This is the new recommended approach for parallel steps. We'll remove step.parallel in a future release.
Full Changelog: https://github.com/inngest/inngest-py/compare/inngest@0.4.20...inngest@0.4.21