Manual capture

If you're using a different SDK or the API, you can manually capture LLM events by calling the capture method or using the capture API.

A generation is a single call to an LLM.

Event name: `$ai_generation`
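
For example, with PostHog's Python SDK you can send this event through `capture`. The sketch below is illustrative rather than a full integration: the property values are placeholders, and it assumes the SDK's module-level configuration.

```python
import posthog

posthog.project_api_key = "<ph_project_api_key>"
posthog.host = "https://us.i.posthog.com"

# Capture a single LLM call as an $ai_generation event.
posthog.capture(
    distinct_id="user_123",  # the user this generation belongs to
    event="$ai_generation",
    properties={
        "$ai_trace_id": "d9222e05-8708-41b8-98ea-d4a21849e761",
        "$ai_model": "gpt-4o",
        "$ai_provider": "openai",
        "$ai_input": [{"role": "user", "content": "Summarize this text"}],
        "$ai_input_tokens": 150,
        "$ai_output_choices": [{"role": "assistant", "content": "Here is a summary..."}],
        "$ai_output_tokens": 280,
        "$ai_latency": 2.45,
    },
)
```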

Core properties

| Property | Description |
| --- | --- |
| `$ai_trace_id` | The trace ID (a UUID to group AI events), such as a `conversation_id`. Must contain only letters, numbers, and the special characters `-`, `_`, `~`, `.`, `@`, `(`, `)`, `!`, `'`, `:`, `\|`. Example: `d9222e05-8708-41b8-98ea-d4a21849e761` |
| `$ai_span_id` | (Optional) Unique identifier for this generation |
| `$ai_span_name` | (Optional) Name given to this generation. Example: `summarize_text` |
| `$ai_parent_id` | (Optional) Parent span ID, used for grouping in the tree view |
| `$ai_model` | The model used. Example: `gpt-5-mini` |
| `$ai_provider` | The LLM provider. Example: `openai`, `anthropic`, `gemini` |
| `$ai_input` | List of messages sent to the LLM. Example: `[{"role": "user", "content": [{"type": "text", "text": "What's in this image?"}, {"type": "image", "image": "https://example.com/image.jpg"}, {"type": "function", "function": {"name": "get_weather", "arguments": {"location": "San Francisco"}}}]}]` |
| `$ai_input_tokens` | The number of tokens in the input (often found in `response.usage`) |
| `$ai_output_choices` | List of response choices from the LLM. Example: `[{"role": "assistant", "content": [{"type": "text", "text": "I can see a hedgehog in the image."}, {"type": "function", "function": {"name": "get_weather", "arguments": {"location": "San Francisco"}}}]}]` |
| `$ai_output_tokens` | The number of tokens in the output (often found in `response.usage`) |
| `$ai_latency` | (Optional) The latency of the LLM call, in seconds |
| `$ai_http_status` | (Optional) The HTTP status code of the response |
| `$ai_base_url` | (Optional) The base URL of the LLM provider. Example: `https://api.openai.com/v1` |
| `$ai_request_url` | (Optional) The full URL of the request made to the LLM API. Example: `https://api.openai.com/v1/chat/completions` |
| `$ai_is_error` | (Optional) Boolean indicating whether the request resulted in an error |
| `$ai_error` | (Optional) The error message or object |
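
Most of these properties map directly onto the provider's response object. As a rough sketch of wiring this up by hand with the `openai` Python package (the timing and trace-ID handling here are illustrative):

```python
import time
import openai
import posthog

client = openai.OpenAI()

messages = [{"role": "user", "content": "What's in this image?"}]

# Time the call ourselves so we can report $ai_latency.
start = time.time()
response = client.chat.completions.create(model="gpt-4o", messages=messages)
latency = time.time() - start

posthog.capture(
    distinct_id="user_123",
    event="$ai_generation",
    properties={
        "$ai_trace_id": "d9222e05-8708-41b8-98ea-d4a21849e761",
        "$ai_model": "gpt-4o",
        "$ai_provider": "openai",
        "$ai_input": messages,
        # Token counts come from the usage block on the response.
        "$ai_input_tokens": response.usage.prompt_tokens,
        "$ai_output_tokens": response.usage.completion_tokens,
        "$ai_output_choices": [
            {"role": "assistant", "content": response.choices[0].message.content}
        ],
        "$ai_latency": latency,
    },
)
```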

Cost properties

Cost properties are optional, as we can calculate them automatically from the model and token counts. If you prefer, you can provide your own cost properties instead.

| Property | Description |
| --- | --- |
| `$ai_input_cost_usd` | (Optional) The cost in USD of the input tokens |
| `$ai_output_cost_usd` | (Optional) The cost in USD of the output tokens |
| `$ai_total_cost_usd` | (Optional) The total cost in USD (input + output) |
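
If you do provide your own costs, the total is just the sum of the input and output costs. A minimal sketch, using hypothetical per-token prices:

```python
# Hypothetical per-token prices in USD; substitute your provider's actual rates.
INPUT_PRICE_PER_TOKEN = 2.50 / 1_000_000    # e.g. $2.50 per million input tokens
OUTPUT_PRICE_PER_TOKEN = 10.00 / 1_000_000  # e.g. $10.00 per million output tokens

input_tokens, output_tokens = 150, 280
input_cost = input_tokens * INPUT_PRICE_PER_TOKEN
output_cost = output_tokens * OUTPUT_PRICE_PER_TOKEN

cost_properties = {
    "$ai_input_cost_usd": input_cost,
    "$ai_output_cost_usd": output_cost,
    "$ai_total_cost_usd": input_cost + output_cost,
}
```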

Cache properties

| Property | Description |
| --- | --- |
| `$ai_cache_read_input_tokens` | (Optional) Number of tokens read from cache |
| `$ai_cache_creation_input_tokens` | (Optional) Number of tokens written to cache (Anthropic-specific) |

Model parameters

| Property | Description |
| --- | --- |
| `$ai_temperature` | (Optional) Temperature parameter used in the LLM request |
| `$ai_stream` | (Optional) Whether the response was streamed |
| `$ai_max_tokens` | (Optional) Maximum tokens setting for the LLM response |
| `$ai_tools` | (Optional) Tools/functions available to the LLM. Example: `[{"type": "function", "function": {"name": "get_weather", "parameters": {...}}}]` |

Example API call

Terminal
curl -X POST "https://us.i.posthog.com/i/v0/e/" \
-H "Content-Type: application/json" \
-d '{
"api_key": "<ph_project_api_key>",
"event": "$ai_generation",
"properties": {
"distinct_id": "user_123",
"$ai_trace_id": "d9222e05-8708-41b8-98ea-d4a21849e761",
"$ai_model": "gpt-4o",
"$ai_provider": "openai",
"$ai_input": [{"role": "user", "content": [{"type": "text", "text": "Analyze this data and suggest improvements"}]}],
"$ai_input_tokens": 150,
"$ai_output_choices": [{"role": "assistant", "content": [{"type": "text", "text": "Based on the data, here are my suggestions..."}]}],
"$ai_output_tokens": 280,
"$ai_latency": 2.45,
"$ai_http_status": 200,
"$ai_base_url": "https://api.openai.com/v1",
"$ai_request_url": "https://api.openai.com/v1/chat/completions",
"$ai_is_error": false,
"$ai_temperature": 0.7,
"$ai_stream": false,
"$ai_max_tokens": 500,
"$ai_tools": [{"type": "function", "function": {"name": "analyze_data", "description": "Analyzes data and provides insights", "parameters": {"type": "object", "properties": {"data_type": {"type": "string"}}}}}],
"$ai_cache_read_input_tokens": 50,
"$ai_span_name": "data_analysis_chat"
},
"timestamp": "2025-01-30T12:00:00Z"
}'
