# Caching Strategies
## Why Caching?

Caching reduces latency, offloads the database, and increases throughput by storing previously computed results.
## Common Strategies

### Cache-Aside (Lazy Loading)

The application manages the cache directly: read from the cache first; on a miss, read from the DB, then write the result to the cache.
- Read: Cache → miss → DB → write to Cache → return
- Write: DB → invalidate Cache
- Pros: only caches data that's actually used
- Cons: cache-miss penalty, potential stale data
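The read/write flow above can be sketched with a plain dict standing in for the cache and another for the database (both are placeholders, not a real cache or DB client):

```python
# Minimal cache-aside sketch. `db` and `cache` are plain dicts standing in
# for a real database and cache store.
db = {"user:1": {"name": "Ada"}}
cache = {}

def get(key):
    if key in cache:          # 1. read from cache first
        return cache[key]
    value = db.get(key)       # 2. on miss, read from the DB
    if value is not None:
        cache[key] = value    # 3. populate the cache for next time
    return value

def update(key, value):
    db[key] = value           # write goes to the DB...
    cache.pop(key, None)      # ...then the cache entry is invalidated
```

Invalidating on write (rather than updating the cache) keeps the logic simple; the next read repopulates the entry, at the cost of one extra miss.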
### Write-Through

Write to both the cache and the DB in the same operation.
- Write: Cache + DB (simultaneously)
- Read: Cache → always hit (after first write)
- Pros: data always consistent
- Cons: higher write latency
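A minimal write-through sketch, again with dicts as stand-ins for the cache and DB; in practice the two writes would be wrapped so the caller only sees success when both stores are updated:

```python
db = {}
cache = {}

def write(key, value):
    cache[key] = value   # write to the cache...
    db[key] = value      # ...and to the DB as part of the same operation

def read(key):
    return cache.get(key)   # served from the cache after the first write
```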
### Write-Behind (Write-Back)

Write to the cache first; the DB is updated asynchronously later.
- Write: Cache → async → DB
- Read: Cache → hit
- Pros: low write latency
- Cons: risk of data loss if the cache crashes
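The deferred write can be sketched with a queue of pending writes; `flush` stands in for the background worker that would drain the queue on a timer or thread:

```python
from collections import deque

db = {}
cache = {}
pending = deque()   # writes queued for asynchronous flushing

def write(key, value):
    cache[key] = value          # fast path: only the cache is touched
    pending.append((key, value))

def flush():
    # Would run asynchronously (background thread, timer, etc.).
    # Any writes still queued here are lost if the cache crashes first.
    while pending:
        key, value = pending.popleft()
        db[key] = value
```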
## Comparison

| Strategy | Read speed | Write speed | Consistency | Complexity |
|---|---|---|---|---|
| Cache-Aside | Medium | Fast | Eventual | Low |
| Write-Through | Fast | Slow | Strong | Medium |
| Write-Behind | Fast | Fast | Eventual | High |
## Cache Eviction Policies

- LRU (Least Recently Used) - most common
- LFU (Least Frequently Used)
- TTL (Time To Live) - Expire after fixed duration
- FIFO (First In First Out)
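As a sketch of the most common policy, LRU can be implemented with `collections.OrderedDict`, moving a key to the end on every access and evicting from the front when capacity is exceeded (the class name and capacity are illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache that evicts the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)   # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used
```

Python's standard library offers the same policy as a function decorator via `functools.lru_cache`.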
## Popular Tools

- Redis - in-memory, supports many data structures
- Memcached - Simple, fast, key-value only
- CDN - Cache static assets at edge locations