JavaScript SDK

LLM Strategy

Wrap any LLM model from the @ai-sdk/* packages to automatically fire ingestion events with the prompt and completion tokens used by every model call.

pnpm add @polar-sh/ingestion ai @ai-sdk/openai
import { Ingestion } from "@polar-sh/ingestion";
import { LLMStrategy } from "@polar-sh/ingestion/strategies/LLM";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Set up the LLM ingestion strategy
const llmIngestion = Ingestion({ accessToken: process.env.POLAR_ACCESS_TOKEN })
  .strategy(new LLMStrategy(openai("gpt-4o")))
  .ingest("openai-usage");

export async function POST(req: Request) {
  const { prompt }: { prompt: string } = await req.json();

  // Get the wrapped LLM model with ingestion capabilities
  // Pass the customer ID so ingestion events are annotated with the right customer
  const model = llmIngestion.client({
    customerId: req.headers.get("X-Polar-Customer-Id") ?? "",
  });

  const { text } = await generateText({
    model,
    system: "You are a helpful assistant.",
    prompt,
  });

  return Response.json({ text });
}
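
The route resolves the customer from the X-Polar-Customer-Id header, so clients should send it with each request. A minimal sketch of such a call, assuming the handler above is served at /api/chat (the path and customer ID are placeholders):

// Hypothetical client call; replace the path and customer ID with your own
const response = await fetch("/api/chat", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Read by the route to attribute ingestion events to a specific customer
    "X-Polar-Customer-Id": "123",
  },
  body: JSON.stringify({ prompt: "Say hello" }),
});

const { text } = await response.json();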

Ingestion Payload

{
  "customerId": "123",
  "name": "openai-usage",
  "metadata": {
    "promptTokens": 100,
    "completionTokens": 200
  }
}

Python SDK

Our Python SDK includes an ingestion helper and strategies for common use cases. It’s installed as part of the Polar SDK.

pip install polar-sdk

Ingestion helper

The ingestion helper is a simple wrapper around the Polar events ingestion API. It takes care of batching and sending events to Polar in the background, without blocking your main thread.

import os
from polar_sdk.ingestion import Ingestion

ingestion = Ingestion(os.getenv("POLAR_ACCESS_TOKEN"))

ingestion.ingest({
    "name": "my-event",
    "external_customer_id": "CUSTOMER_ID",
    "metadata": {
        "usage": 13.37,
    }
})
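
Because events are queued and flushed in the background, you can call ingest once per unit of work without adding latency to your request path. A minimal sketch, with an illustrative event name and metadata:

import os
from polar_sdk.ingestion import Ingestion

ingestion = Ingestion(os.getenv("POLAR_ACCESS_TOKEN"))

# Illustrative loop: one event per processed item. The helper batches
# these calls and sends them to Polar in the background.
for item_id in ["a", "b", "c"]:
    ingestion.ingest({
        "name": "item-processed",
        "external_customer_id": "CUSTOMER_ID",
        "metadata": {
            "item_id": item_id,
        },
    })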

PydanticAI Strategy

PydanticAI is an AI agent framework for Python. A common use case in AI applications is to track LLM usage, like the number of input and output tokens, and bill the customer accordingly.

With our PydanticAI strategy, you can easily track the usage of LLMs and send the data to Polar for billing.

import os
from polar_sdk.ingestion import Ingestion
from polar_sdk.ingestion.strategies import PydanticAIStrategy
from pydantic import BaseModel
from pydantic_ai import Agent


ingestion = Ingestion(os.getenv("POLAR_ACCESS_TOKEN"))
strategy = ingestion.strategy(PydanticAIStrategy, "ai_usage")


class MyModel(BaseModel):
    city: str
    country: str


agent = Agent("gpt-4.1-nano", output_type=MyModel)

if __name__ == '__main__':
    result = agent.run_sync("The windy city in the US of A.")
    print(result.output)
    strategy.ingest("CUSTOMER_ID", result)

This example is adapted from the Pydantic Model example in the PydanticAI documentation.

Ingestion Payload

{
  "name": "ai_usage",
  "external_customer_id": "CUSTOMER_ID",
  "metadata": {
    "requests": 1,
    "total_tokens": 78,
    "request_tokens": 58,
    "response_tokens": 20
  }
}