Privacy mode

To avoid storing potentially sensitive prompt and completion data, you can enable privacy mode. This excludes the $ai_input and $ai_output_choices properties from being captured.

SDK config

This can be done by setting the privacy_mode config option in the SDK like this:

from posthog import Posthog

posthog = Posthog(
    "<ph_project_api_key>",
    host="https://us.i.posthog.com",
    privacy_mode=True,
)

Request parameter

Privacy mode can also be enabled for an individual request by passing the posthog_privacy_mode parameter as True. The exact setup depends on the LLM platform you're using:

client.responses.create(
    model="gpt-4o-mini",
    input=[...],
    posthog_privacy_mode=True,
)
