Does Redis Work With Sanity?
Redis and Sanity work well together: Redis caches Sanity content to reduce API calls and improve response times.
How Redis Works With Sanity
Redis is commonly used as a caching layer in front of Sanity's APIs. When your application fetches content from Sanity, you can store frequently accessed documents in Redis with an appropriate TTL. On subsequent requests, you hit Redis first, dramatically reducing latency and API quota usage. This is especially valuable for high-traffic sites where Sanity's API rate limits might become a concern.
The typical pattern is cache-aside: check Redis before querying Sanity's client library; on a cache miss, fetch from Sanity and populate Redis with the result. For content updates, you can use Sanity's webhooks (or its real-time listeners) to invalidate the affected Redis keys when a document changes. This keeps your cache fresh without waiting for TTLs to expire.
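To sketch the invalidation side of this pattern: the handler below assumes a webhook payload that includes the changed document's `_id` (Sanity webhook payloads are configurable projections, so the exact field set depends on your setup), and it reuses the `sanity:${id}` key convention from the Quick Setup example. `keysToInvalidate` and `handleSanityWebhook` are hypothetical helpers, not part of any library.

```typescript
// Assumed shape of the webhook payload; configure the webhook's projection
// in Sanity to send at least the document _id.
interface SanityWebhookBody {
  _id: string;
  _type: string;
}

// Minimal client interface; both node-redis and ioredis accept an array of
// keys for del().
interface KeyDeleter {
  del(keys: string[]): Promise<number>;
}

// Derive the Redis keys to invalidate for a changed document. The key
// naming mirrors the `sanity:${id}` convention used in Quick Setup.
function keysToInvalidate(body: SanityWebhookBody): string[] {
  return [`sanity:${body._id}`];
}

// Call this from your webhook route handler after verifying the request.
async function handleSanityWebhook(
  body: SanityWebhookBody,
  redis: KeyDeleter,
): Promise<string[]> {
  const keys = keysToInvalidate(body);
  if (keys.length > 0) {
    await redis.del(keys);
  }
  return keys;
}
```

With this in place, a published or updated document evicts its cache entry immediately, and the TTL becomes a safety net rather than the primary freshness mechanism.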
Architecturally, Redis sits between your application layer and Sanity's APIs. For serverless environments like Vercel or Netlify, you'd use a managed Redis service (upstash.com, redis.com, etc.). The developer experience is straightforward—just add a cache-check wrapper around your Sanity queries. No complex setup required.
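The cache-check wrapper mentioned above can be factored out generically, so every Sanity query goes through the same cache logic. A minimal sketch: the `get`/`setEx` method names match node-redis v4, and `withCache` is a hypothetical helper, not part of either library.

```typescript
// Minimal cache interface matching node-redis v4's get/setEx signatures.
interface CacheClient {
  get(key: string): Promise<string | null>;
  setEx(key: string, ttlSeconds: number, value: string): Promise<unknown>;
}

// Returns a function that checks the cache before running `loader`,
// storing the loader's JSON-serialized result on a miss.
function withCache<T>(cache: CacheClient, ttlSeconds: number) {
  return async (key: string, loader: () => Promise<T>): Promise<T> => {
    const hit = await cache.get(key);
    if (hit !== null) return JSON.parse(hit) as T;
    const value = await loader();
    await cache.setEx(key, ttlSeconds, JSON.stringify(value));
    return value;
  };
}
```

Usage then reduces to something like `const cached = withCache(redis, 3600); const post = await cached('sanity:blog-post-123', () => sanity.fetch('*[_id == $id][0]', { id: 'blog-post-123' }));`, keeping the caching policy in one place.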
Quick Setup
npm install redis @sanity/client

import { createClient } from '@sanity/client';
import { createClient as createRedisClient } from 'redis';

const sanity = createClient({
  projectId: 'YOUR_PROJECT_ID',
  dataset: 'production',
  apiVersion: '2024-01-01', // @sanity/client expects an explicit API version
  useCdn: false,
});

const redis = createRedisClient();
await redis.connect();

async function getCachedContent(id: string) {
  const cacheKey = `sanity:${id}`;
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  // Pass the id as a GROQ parameter instead of interpolating it into the
  // query string, which avoids query-injection issues.
  const content = await sanity.fetch('*[_id == $id][0]', { id });
  await redis.setEx(cacheKey, 3600, JSON.stringify(content));
  return content;
}

// Usage
const post = await getCachedContent('blog-post-123');

Known Issues & Gotchas
Cache invalidation timing: Updates in Sanity may not be reflected in Redis until the TTL expires
Fix: Implement webhook listeners from Sanity that actively invalidate Redis keys on document publish/update events instead of relying solely on TTL
Memory limits: Redis stores data in RAM; caching entire Sanity datasets can exceed instance limits
Fix: Cache selectively—prioritize hot data, set appropriate TTLs (300-3600 seconds), and monitor memory usage with Redis INFO commands
Stale data in distributed systems: Multiple instances may have inconsistent cache states
Fix: Use Redis as a centralized cache (not local process cache) and implement consistent key naming conventions across all instances
Cold starts on serverless: Redis connection overhead can impact performance on Vercel/Lambda cold starts
Fix: Reuse a single long-lived client across invocations (both node-redis and ioredis support this), or use a managed Redis service with an HTTP-based API designed for serverless environments
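To act on the memory-monitoring advice above, you can parse the `used_memory` field out of the text returned by Redis's INFO command (with node-redis v4, `redis.info('memory')` returns that section as a string). `usedMemoryBytes` is a hypothetical helper written for this guide.

```typescript
// Extract used_memory (in bytes) from raw INFO output, which contains
// lines like "used_memory:1048576". Returns null if the field is absent.
function usedMemoryBytes(info: string): number | null {
  const match = info.match(/^used_memory:(\d+)/m);
  return match ? Number(match[1]) : null;
}
```

Polling this value (or alerting on it via your Redis provider's dashboard) makes it easy to notice when selective caching has drifted toward caching the whole dataset.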
Alternatives
- Sanity's built-in CDN + Cloudflare Cache: Use Sanity's edge-cached API responses with Cloudflare Workers for automatic geographic caching
- Vercel KV (built on Redis) + Sanity: Leverage Vercel's native Redis integration for simpler serverless deployments without managing separate infrastructure
- Elasticsearch + Sanity: Index Sanity content in Elasticsearch for advanced search and filtering with persistent caching capabilities