Local evaluation in distributed or stateless environments
When using local evaluation, the SDK fetches feature flag definitions and stores them in memory. This works well for single-instance applications, but in distributed or stateless environments (multiple servers, edge workers, lambdas), each instance fetches its own copy, duplicating API calls and adding latency on cold starts.
An external cache provider lets you store flag definitions in shared storage (Redis, database, Cloudflare KV, etc.) so all instances can read from a single source.
This enables you to:
- Share flag definitions across workers to reduce API calls
- Coordinate fetching so only one worker polls at a time
- Pre-cache definitions for ultra-low-latency flag evaluation
Note: External cache providers are currently available only in the Node.js and Python SDKs. This feature is experimental and may change in minor versions.
When to use an external cache
| Scenario | Recommendation |
|---|---|
| Single server instance | SDK's built-in memory cache is sufficient |
| Multiple workers (same process) | SDK's built-in memory cache is sufficient |
| Multiple servers/containers | Use Redis or database caching with distributed locks |
| Edge workers (Cloudflare, Vercel Edge) | Use KV storage with split read/write pattern |
Installation
Import the interface from the SDK:
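For the Node.js SDK, that looks like the following. The import path and type-only exports are assumptions based on the `posthog-node` package; check your SDK version's exports:

```typescript
// Import path and exported type names assumed for illustration (posthog-node).
import type {
  FlagDefinitionCacheProvider,
  FlagDefinitionCacheData,
} from 'posthog-node'
```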
The interface
To create a custom cache, implement the FlagDefinitionCacheProvider interface:
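The sketch below reconstructs the interface shape from the method table that follows. Whether the methods are synchronous or asynchronous may vary by SDK; they are shown async here. Consult the SDK's type definitions for the authoritative signatures:

```typescript
interface FlagDefinitionCacheProvider {
  // Return cached definitions, or undefined if the cache is empty.
  getFlagDefinitions(): Promise<FlagDefinitionCacheData | undefined>

  // Decide whether this instance should fetch from the PostHog API.
  shouldFetchFlagDefinitions(): Promise<boolean>

  // Store definitions after a successful API fetch.
  onFlagDefinitionsReceived(data: FlagDefinitionCacheData): Promise<void>

  // Release locks, close connections, clean up resources.
  shutdown(): Promise<void>
}
```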
When the SDK fetches flag definitions from the API, it passes a FlagDefinitionCacheData object to onFlagDefinitionsReceived() for you to store:
Method details
| Method | Purpose | Return value |
|---|---|---|
| `getFlagDefinitions()` | Retrieve cached definitions. Called when the poller refreshes. | Cached data, or `undefined` if the cache is empty |
| `shouldFetchFlagDefinitions()` | Decide whether this instance should fetch. Use for distributed coordination (e.g., locks). | `true` to fetch, `false` to skip |
| `onFlagDefinitionsReceived(data)` | Store definitions after a successful API fetch. | `void` |
| `shutdown()` | Release locks, close connections, and clean up resources. | `void` |
Note: All methods may throw errors. The SDK catches and logs them gracefully, ensuring cache provider errors never break flag evaluation.
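To make the shape concrete, here is a minimal in-memory provider. It does no distributed coordination (every instance still fetches for itself), so it is purely illustrative:

```typescript
// Minimal illustrative provider: stores definitions in process memory.
// No distributed coordination -- every instance still fetches for itself.
class InMemoryCacheProvider implements FlagDefinitionCacheProvider {
  private cached?: FlagDefinitionCacheData

  async getFlagDefinitions(): Promise<FlagDefinitionCacheData | undefined> {
    return this.cached
  }

  async shouldFetchFlagDefinitions(): Promise<boolean> {
    return true // always fetch; a real provider would check a shared lock here
  }

  async onFlagDefinitionsReceived(data: FlagDefinitionCacheData): Promise<void> {
    this.cached = data
  }

  async shutdown(): Promise<void> {
    // Nothing to release for an in-memory cache.
  }
}
```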
Using your cache provider
Pass your cache provider when initializing PostHog:
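A sketch for the Node.js SDK. The `flagDefinitionCacheProvider` option name is an assumption, so check your SDK version's typings; the in-memory provider sketched above is used as a stand-in:

```typescript
import { PostHog } from 'posthog-node'

const posthog = new PostHog('<ph_project_api_key>', {
  host: 'https://us.i.posthog.com',
  personalApiKey: '<ph_personal_api_key>', // required for local evaluation
  // Option name assumed for illustration; see your SDK's typings.
  flagDefinitionCacheProvider: new InMemoryCacheProvider(),
})
```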
Common patterns
Shared caches with locking
When running multiple server instances with a shared cache like Redis, coordinate fetching so only one instance polls PostHog at a time.
The recommended pattern (sketched in the example after this list):
- One instance owns the lock for its entire lifetime, not just during a single fetch
- Refresh the lock TTL each polling cycle to maintain ownership
- Release on shutdown, but only if you own the lock
- Let locks expire if a process crashes, so another instance can take over
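Here is a sketch of that pattern in TypeScript using ioredis. The key names and TTL are illustrative, as is the assumption that `shouldFetchFlagDefinitions()` is called once per polling cycle:

```typescript
import Redis from 'ioredis'
import { randomUUID } from 'node:crypto'

const LOCK_KEY = 'posthog:flags:lock'         // illustrative key names
const CACHE_KEY = 'posthog:flags:definitions'
const LOCK_TTL_SECONDS = 90                   // longer than one polling cycle

class RedisCacheProvider implements FlagDefinitionCacheProvider {
  // A unique ID marks this instance as the lock owner.
  private readonly instanceId = randomUUID()

  constructor(private readonly redis: Redis) {}

  async getFlagDefinitions(): Promise<FlagDefinitionCacheData | undefined> {
    const raw = await this.redis.get(CACHE_KEY)
    return raw ? JSON.parse(raw) : undefined
  }

  async shouldFetchFlagDefinitions(): Promise<boolean> {
    // Try to take the lock; NX = only set if it doesn't exist yet.
    const acquired = await this.redis.set(
      LOCK_KEY, this.instanceId, 'EX', LOCK_TTL_SECONDS, 'NX'
    )
    if (acquired === 'OK') return true

    // Already the owner? Refresh the TTL to keep ownership for this cycle.
    if (await this.redis.get(LOCK_KEY) === this.instanceId) {
      await this.redis.expire(LOCK_KEY, LOCK_TTL_SECONDS)
      return true
    }
    return false // another instance polls; this one reads from the cache
  }

  async onFlagDefinitionsReceived(data: FlagDefinitionCacheData): Promise<void> {
    await this.redis.set(CACHE_KEY, JSON.stringify(data))
  }

  async shutdown(): Promise<void> {
    // Compare-and-delete via Lua: release the lock only if we still own it.
    await this.redis.eval(
      `if redis.call("get", KEYS[1]) == ARGV[1] then
         return redis.call("del", KEYS[1])
       end
       return 0`,
      1,
      LOCK_KEY,
      this.instanceId
    )
  }
}
```

If the owning process crashes, the lock simply expires after `LOCK_TTL_SECONDS`, and another instance acquires it on its next cycle.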
Redis example
A complete working example, written in Python using Redis with distributed locking, is available in the posthog-python repository. It implements the locking pattern described above.
Caches without locking
Some storage backends like Cloudflare KV don't support atomic locking operations. In these cases, use a split read/write pattern:
- A scheduled job (cron) periodically fetches flag definitions and writes to the cache
- Request handlers read from the cache and evaluate flags locally, with no API calls
This separates the concerns entirely: one process writes, all others read (see the sketch below).
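A compact sketch in TypeScript, assuming Cloudflare's Workers types. The KV binding name, cache key, and environment variable names are hypothetical; the `/api/feature_flag/local_evaluation` endpoint is the one the server SDKs poll, but verify the exact request shape against the current API docs:

```typescript
// Hypothetical names: the FLAG_KV binding, CACHE_KEY, and env vars.
interface Env {
  FLAG_KV: KVNamespace
  POSTHOG_HOST: string          // e.g. https://us.i.posthog.com
  POSTHOG_PROJECT_KEY: string   // project API key
  POSTHOG_PERSONAL_KEY: string  // personal API key, needed for local evaluation
}

const CACHE_KEY = 'posthog-flag-definitions'

export default {
  // Writer: a cron trigger fetches definitions and writes them to KV.
  async scheduled(_controller: ScheduledController, env: Env): Promise<void> {
    const url =
      `${env.POSTHOG_HOST}/api/feature_flag/local_evaluation` +
      `?token=${env.POSTHOG_PROJECT_KEY}`
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${env.POSTHOG_PERSONAL_KEY}` },
    })
    if (res.ok) {
      await env.FLAG_KV.put(CACHE_KEY, await res.text())
    }
  },

  // Readers: request handlers read from KV only -- no PostHog API calls.
  async fetch(_request: Request, env: Env): Promise<Response> {
    const definitions = await env.FLAG_KV.get(CACHE_KEY, 'json')
    // Feed `definitions` into a cache provider whose
    // shouldFetchFlagDefinitions() always returns false in this process.
    return new Response(JSON.stringify({ cached: definitions !== null }))
  },
} satisfies ExportedHandler<Env>
```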
Cloudflare Workers example
A complete working example, written in TypeScript, is available in the posthog-js repository. It uses the split read/write pattern described above: the worker's scheduled job writes flag definitions to KV, and request handlers read from it.
This pattern is ideal for high-traffic edge applications where flag evaluation must be extremely fast and you can tolerate flag updates being slightly delayed.