Caching Strategies

Caching reduces latency, offloads the database, and increases throughput by storing previously computed results.

Cache-Aside

The application manages the cache directly: read from the cache first; on a miss, read from the database, then write the result back to the cache.

Read: Cache → miss → DB → write to Cache → return
Write: DB → invalidate Cache

Pros: only caches data that's actually used.
Cons: cache-miss penalty, potential stale data.
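The read and write paths above can be sketched in a few lines of Python. This is a minimal illustration, not a production pattern: plain dicts stand in for the real database and for a cache like Redis, and the function names are invented for the example.

```python
db = {"user:1": {"name": "Ada"}}   # stand-in for the real database
cache = {}                          # stand-in for Redis/Memcached

def get(key):
    # 1. Try the cache first
    if key in cache:
        return cache[key]
    # 2. On miss, read from the DB and populate the cache
    value = db.get(key)
    if value is not None:
        cache[key] = value
    return value

def update(key, value):
    # Write to the DB, then invalidate the cached copy so the
    # next read repopulates it with fresh data
    db[key] = value
    cache.pop(key, None)

get("user:1")   # miss: falls through to the DB, then caches
get("user:1")   # hit: served from the cache
```

Invalidating on write (rather than updating the cache) keeps the write path simple and avoids caching values that are never read again.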

Write-Through

Writes go to both the cache and the database at the same time.

Write: Cache + DB (simultaneously)
Read: Cache → always hit (after first write)

Pros: cache and database are always consistent.
Cons: higher write latency, since every write touches both stores.
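A minimal sketch of the write path, under the same assumptions as before (in-memory dicts as stand-ins, illustrative function names). In a real system the two writes would sit inside a transaction or be retried together on failure:

```python
db = {}
cache = {}

def write_through(key, value):
    # Both stores are updated as part of the same write operation,
    # so a subsequent read from the cache always sees current data
    cache[key] = value
    db[key] = value

def read(key):
    # Reads are served from the cache, which mirrors the DB
    return cache.get(key)

write_through("user:1", "Ada")
read("user:1")   # served from the cache
```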

Write-Behind

Write to the cache first; flush to the database asynchronously later.

Write: Cache → async → DB
Read: Cache → hit

Pros: low write latency.
Cons: risk of data loss if the cache crashes before pending writes are flushed.
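One way to sketch the asynchronous flush is a queue drained by a background thread; real implementations typically batch and coalesce writes instead of flushing one by one. As before, the dicts and names are illustrative stand-ins:

```python
import queue
import threading

db = {}
cache = {}
pending = queue.Queue()   # buffered writes waiting to reach the DB

def write_behind(key, value):
    # The write is acknowledged as soon as the cache is updated
    cache[key] = value
    pending.put((key, value))

def flusher():
    # Background worker drains buffered writes into the DB
    while True:
        key, value = pending.get()
        db[key] = value
        pending.task_done()

threading.Thread(target=flusher, daemon=True).start()

write_behind("user:1", "Ada")
pending.join()            # wait for the flush (real systems flush periodically)
```

Anything still sitting in `pending` when the process dies is lost, which is exactly the data-loss risk noted above.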

Strategy        Read speed   Write speed   Consistency   Complexity
Cache-Aside     Medium       Fast          Eventual      Low
Write-Through   Fast         Slow          Strong        Medium
Write-Behind    Fast         Fast          Eventual      High
Eviction Policies

  • LRU (Least Recently Used) - Most common
  • LFU (Least Frequently Used)
  • TTL (Time To Live) - Expire after fixed duration
  • FIFO (First In First Out)
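LRU, the most common policy on the list above, can be sketched with an ordered map: every access moves an entry to the "most recent" end, and the entry at the other end is evicted when capacity is exceeded. A minimal Python sketch (a real cache would also handle TTLs and concurrency):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU eviction: discard the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the LRU entry

c = LRUCache(2)
c.put("a", 1)
c.put("b", 2)
c.get("a")      # "a" is now most recently used
c.put("c", 3)   # capacity exceeded: evicts "b"
```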
Common Cache Technologies

  • Redis - In-memory, supports many data structures
  • Memcached - Simple, fast, key-value only
  • CDN - Cache static assets at edge locations