Ingestion Strategies
Ingestion strategies for Usage Based Billing
Polar offers an ingestion framework built on top of its event ingestion API.
Want to report events for Large Language Model usage, S3 file uploads, or something else? Our ingestion strategies make it as seamless as possible to fire ingestion events for these more involved use cases.
JavaScript SDK
LLM Strategy
Wrap any LLM model from the @ai-sdk/* library to automatically ingest the prompt and completion tokens used by every model call.
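Here is a minimal sketch of wiring the LLM strategy into a route handler. The import path for LLMStrategy, the client() options, and the customer id are illustrative assumptions, so double-check them against the Polar Ingestion SDK README.

```typescript
import { Ingestion } from "@polar-sh/ingestion";
import { LLMStrategy } from "@polar-sh/ingestion/strategies/LLM"; // import path assumed
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// Set up the ingestion pipeline once: wrap the model and name the event.
const llmIngestion = Ingestion({ accessToken: process.env.POLAR_ACCESS_TOKEN })
  .strategy(new LLMStrategy(openai("gpt-4o")))
  .ingest("openai-usage");

export async function POST(req: Request) {
  const { prompt }: { prompt: string } = await req.json();

  // Resolve a wrapped model for this request; the external customer id
  // attributes the resulting usage events to a specific customer.
  const model = llmIngestion.client({
    externalCustomerId: "<CUSTOMER_ID>",
  });

  // Every call through the wrapped model reports its prompt & completion tokens.
  const { text } = await generateText({
    model,
    system: "You are a helpful assistant.",
    prompt,
  });

  return Response.json({ text });
}
```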
Ingestion Payload
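The resulting event looks roughly like this; values and exact metadata field names are illustrative.

```json
{
  "name": "openai-usage",
  "external_customer_id": "<CUSTOMER_ID>",
  "metadata": {
    "promptTokens": 100,
    "completionTokens": 200
  }
}
```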
S3 Strategy
Wrap the official AWS S3 client with our S3 Ingestion Strategy to automatically ingest the number of bytes uploaded.
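A sketch of the S3 strategy in a route handler is shown below. The S3Strategy import path, the client() options, and the bucket/key values are assumptions for illustration; verify them against the Polar Ingestion SDK.

```typescript
import { Ingestion } from "@polar-sh/ingestion";
import { S3Strategy } from "@polar-sh/ingestion/strategies/S3"; // import path assumed
import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3";

// A regular S3 client; credentials are resolved from the environment.
const s3Client = new S3Client({ region: "us-east-1" });

// Set up the ingestion pipeline once: wrap the S3 client and name the event.
const s3Ingestion = Ingestion({ accessToken: process.env.POLAR_ACCESS_TOKEN })
  .strategy(new S3Strategy(s3Client))
  .ingest("s3-uploads");

export async function POST(request: Request) {
  // Resolve a wrapped S3 client; uploads made through it are ingested
  // and attributed to the given customer.
  const s3 = s3Ingestion.client({
    externalCustomerId: "<CUSTOMER_ID>",
  });

  await s3.send(
    new PutObjectCommand({
      Bucket: "my-bucket",
      Key: "my-file.txt",
      Body: Buffer.from("Hello, world!"),
      ContentType: "text/plain",
    }),
  );

  return Response.json({ uploaded: true });
}
```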
Ingestion Payload
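The resulting event looks roughly like this; values and exact metadata field names are illustrative.

```json
{
  "name": "s3-uploads",
  "external_customer_id": "<CUSTOMER_ID>",
  "metadata": {
    "bucket": "my-bucket",
    "key": "my-file.txt",
    "contentType": "text/plain",
    "bytes": 13
  }
}
```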
Stream Strategy
Wrap any Readable or Writable stream of your choice to automatically ingest the bytes consumed.
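A minimal sketch of the Stream strategy, wrapping a file read stream, is below. The StreamStrategy import path and the assumption that the wrapped client behaves like the underlying stream (so it can be piped) should be verified against the Polar Ingestion SDK.

```typescript
import fs from "node:fs";
import { Ingestion } from "@polar-sh/ingestion";
import { StreamStrategy } from "@polar-sh/ingestion/strategies/Stream"; // import path assumed

// Wrap any Readable or Writable stream; here, a file read stream.
const fileStream = fs.createReadStream("my-file.txt");

const streamIngestion = Ingestion({ accessToken: process.env.POLAR_ACCESS_TOKEN })
  .strategy(new StreamStrategy(fileStream))
  .ingest("stream-bytes");

export async function POST(request: Request) {
  // Resolve the wrapped stream; every byte that flows through it is counted
  // and ingested for the given customer.
  const stream = streamIngestion.client({
    externalCustomerId: "<CUSTOMER_ID>",
  });

  // Consume the stream as you normally would.
  stream.pipe(fs.createWriteStream("my-file-copy.txt"));

  return Response.json({ ok: true });
}
```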
Ingestion Payload
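The resulting event looks roughly like this; values and exact metadata field names are illustrative.

```json
{
  "name": "stream-bytes",
  "external_customer_id": "<CUSTOMER_ID>",
  "metadata": {
    "bytes": 1024
  }
}
```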
DeltaTime Strategy
Ingest the delta time of any arbitrary execution. Bring your own now-resolver.
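Here is a rough sketch of the DeltaTime strategy in a route handler. The start/stop closure shape and the DeltaTimeStrategy import path follow the SDK's strategy pattern but are assumptions; check the Polar Ingestion SDK for the exact API.

```typescript
import { Ingestion } from "@polar-sh/ingestion";
import { DeltaTimeStrategy } from "@polar-sh/ingestion/strategies/DeltaTime"; // import path assumed

// Bring your own now-resolver: any monotonically increasing clock works.
const nowResolver = () => performance.now();

const deltaTimeIngestion = Ingestion({ accessToken: process.env.POLAR_ACCESS_TOKEN })
  .strategy(new DeltaTimeStrategy(nowResolver))
  .ingest("execution-time");

export async function POST(request: Request) {
  // Resolve the client for a customer; calling it starts the clock and
  // returns a stop function that ingests the measured delta.
  const start = deltaTimeIngestion.client({
    externalCustomerId: "<CUSTOMER_ID>",
  });

  const stop = start();
  await new Promise((resolve) => setTimeout(resolve, 1000)); // the work you want to meter
  const delta = stop();

  return Response.json({ delta });
}
```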
Ingestion Payload
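The resulting event looks roughly like this; values and exact metadata field names are illustrative.

```json
{
  "name": "execution-time",
  "external_customer_id": "<CUSTOMER_ID>",
  "metadata": {
    "deltaTime": 1000
  }
}
```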
Help us improve
We’re always looking for ways to improve our ingestion strategies. Feel free to contribute to the Polar Ingestion SDK.
Python SDK
Our Python SDK includes an ingestion helper and strategies for common use cases. It’s installed as part of the Polar SDK.
Ingestion helper
The ingestion helper is a simple wrapper around the Polar events ingestion API. It takes care of batching and sending events to Polar in the background, without blocking your main thread.
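As a rough sketch of how the helper is used (the import path and constructor below are assumptions; check the Polar Python SDK for the exact names):

```python
import os

from polar_sdk.ingestion import Ingestion  # import path assumed

# The helper batches events and sends them to Polar in the background,
# so calls like this do not block the main thread.
ingestion = Ingestion(os.environ["POLAR_ACCESS_TOKEN"])

ingestion.ingest({
    "name": "my_event",
    "external_customer_id": "<CUSTOMER_ID>",
    "metadata": {
        "usage": 13.37,
    },
})
```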
PydanticAI Strategy
PydanticAI is an AI agent framework for Python. A common use case in AI applications is to track LLM usage, such as the number of input and output tokens, and bill the customer accordingly.
With our PydanticAI strategy, you can easily track the usage of LLMs and send the data to Polar for billing.
The example below is inspired by the Pydantic Model example in the PydanticAI documentation.
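A minimal sketch of that flow follows. The strategy import path, the ingestion.strategy(...) construction, and the strategy.ingest(...) call shape are assumptions based on the helper's strategy pattern, so verify them against the Polar Python SDK; on older PydanticAI versions, output_type and result.output were result_type and result.data.

```python
import asyncio
import os

from pydantic import BaseModel
from pydantic_ai import Agent

# Import paths below are assumptions; check the Polar Python SDK docs.
from polar_sdk.ingestion import Ingestion
from polar_sdk.ingestion.strategies import PydanticAIStrategy


class CityLocation(BaseModel):
    city: str
    country: str


ingestion = Ingestion(os.environ["POLAR_ACCESS_TOKEN"])
strategy = ingestion.strategy(PydanticAIStrategy)  # construction assumed

agent = Agent("openai:gpt-4o", output_type=CityLocation)


async def main() -> None:
    result = await agent.run("Where were the Olympics held in 2012?")
    # Report the run's token usage to Polar, attributed to a customer
    # (event name, agent result, external customer id; call shape assumed).
    strategy.ingest("ai_usage", result, "<CUSTOMER_ID>")
    print(result.output)


if __name__ == "__main__":
    asyncio.run(main())
```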
Ingestion Payload
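The resulting event looks roughly like this; the metadata fields mirror PydanticAI's usage object, and the exact names and values are illustrative.

```json
{
  "name": "ai_usage",
  "external_customer_id": "<CUSTOMER_ID>",
  "metadata": {
    "requests": 1,
    "request_tokens": 43,
    "response_tokens": 17,
    "total_tokens": 60
  }
}
```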