Why I Switched from HashiCorp LRU to Ristretto for High-Performance Caching in Go
While working on a recent project, I implemented a caching layer to speed up repeated file reads and downloads. I started with HashiCorp's lru package, which was simple and easy to integrate. But as the system scaled and concurrency increased, it became clear that it couldn't keep up.
This post highlights the issues I encountered and why Ristretto ended up being a much better fit.
Issues with HashiCorp LRU
While the lru package is solid and predictable, I ran into a few limitations:
- Blocking Writes - All Add() operations lock the cache, leading to bottlenecks under concurrent load.
- Fixed Capacity, Not Cost-Based - It evicts based on item count, not memory usage — inefficient when storing large items like files.
- No Native Metrics - Hit/miss tracking must be implemented manually.
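To make the first two limitations concrete, here's a toy count-based LRU (a sketch for illustration, not golang-lru's actual code): every Add serializes on a single mutex, and a 10 MiB file counts the same as a 1-byte entry toward capacity.

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

type entry struct {
	key string
	val []byte
}

// CountLRU is a toy mutex-guarded, count-based LRU.
type CountLRU struct {
	mu    sync.Mutex // every Add serializes here
	cap   int
	ll    *list.List // front = most recently used
	items map[string]*list.Element
}

func NewCountLRU(capacity int) *CountLRU {
	return &CountLRU{cap: capacity, ll: list.New(), items: map[string]*list.Element{}}
}

func (c *CountLRU) Add(key string, val []byte) {
	c.mu.Lock() // concurrent writers queue up on this lock
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		c.ll.MoveToFront(el)
		el.Value.(*entry).val = val
		return
	}
	c.items[key] = c.ll.PushFront(&entry{key, val})
	if c.ll.Len() > c.cap { // eviction counts items, ignores byte size
		oldest := c.ll.Back()
		c.ll.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
}

func main() {
	c := NewCountLRU(2)
	c.Add("big", make([]byte, 10<<20)) // 10 MiB, but costs "1 item"
	c.Add("tiny1", []byte{0})          // 1 byte, also "1 item"
	c.Add("tiny2", []byte{0})          // evicts "big" purely by recency
	_, ok := c.items["big"]
	fmt.Println(c.ll.Len(), ok) // 2 false
}
```

With this shape, two tiny entries can push out one huge one, which is exactly the wrong trade when caching files of wildly different sizes.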
Why Ristretto?
Ristretto, created by Dgraph Labs, offers:
- Non-Blocking Writes - Writes are buffered and processed asynchronously, preventing contention during high loads.
- Cost-Aware Eviction - You can assign a "cost" (e.g., byte size), and the cache evicts based on total cost rather than item count.
- TinyLFU Eviction Strategy - More efficient and accurate for real-world usage patterns than basic LRU.
- Built-In Concurrency Support - Designed to scale with multiple goroutines hitting the cache simultaneously.
cache, _ := ristretto.NewCache(&ristretto.Config[string, []byte]{
	NumCounters: 1e7,     // Frequency-tracking keys
	MaxCost:     1 << 30, // 1GB total cost
	BufferItems: 64,      // Set buffer size
})
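The cost-budget idea itself is simple. Here's a minimal, hypothetical sketch of it in plain Go (evicting oldest-first to keep the code short; real Ristretto admits and evicts using TinyLFU frequency estimates, not insertion order):

```go
package main

import (
	"container/list"
	"fmt"
)

type centry struct {
	key  string
	val  []byte
	cost int64
}

// CostCache evicts by total byte budget, not item count.
// Toy sketch only: oldest-first eviction, no concurrency.
type CostCache struct {
	maxCost, used int64
	ll            *list.List
	items         map[string]*list.Element
}

func NewCostCache(maxCost int64) *CostCache {
	return &CostCache{maxCost: maxCost, ll: list.New(), items: map[string]*list.Element{}}
}

// Set stores val at the given cost, evicting oldest entries until
// the total fits the budget (same shape as ristretto's Set(key, val, cost)).
func (c *CostCache) Set(key string, val []byte, cost int64) {
	for c.used+cost > c.maxCost && c.ll.Len() > 0 {
		oldest := c.ll.Back()
		e := oldest.Value.(*centry)
		c.used -= e.cost
		c.ll.Remove(oldest)
		delete(c.items, e.key)
	}
	c.items[key] = c.ll.PushFront(&centry{key, val, cost})
	c.used += cost
}

func main() {
	c := NewCostCache(100)
	c.Set("big", make([]byte, 90), 90)
	c.Set("small", make([]byte, 5), 5)
	c.Set("new", make([]byte, 20), 20) // 90+5+20 > 100, so "big" goes
	fmt.Println(c.used) // 25
}
```

One large entry now "spends" proportionally more of the budget, which is what keeps memory predictable when items are whole files.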
Improvements After the Switch
- Requests are faster, especially repeated ones — no disk I/O.
- Concurrency issues disappeared — no lock contention on write.
- Memory usage is under control with cost-based eviction.
Caveats
- Avoid calling cache.Wait() in critical paths — it blocks until the write buffer is flushed.
- Eviction is probabilistic, so results may vary slightly.
- You must define the cost meaningfully for your use case.
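The Wait() caveat follows directly from the buffered-write design. This toy model (my own sketch, not Ristretto's internals) shows the pattern: Set only enqueues, a background goroutine applies the write, and Wait blocks until the queue drains, which is why a Get immediately after a Set can legitimately miss.

```go
package main

import (
	"fmt"
	"sync"
)

type write struct {
	key string
	val []byte
}

// BufferedCache models ristretto-style async writes: Set enqueues,
// a background goroutine applies, Wait drains. Sketch only.
type BufferedCache struct {
	mu    sync.RWMutex
	data  map[string][]byte
	queue chan write
	wg    sync.WaitGroup
}

func NewBufferedCache() *BufferedCache {
	c := &BufferedCache{data: map[string][]byte{}, queue: make(chan write, 64)}
	go func() {
		for w := range c.queue {
			c.mu.Lock()
			c.data[w.key] = w.val
			c.mu.Unlock()
			c.wg.Done()
		}
	}()
	return c
}

// Set never takes the map lock on the caller's goroutine; it just enqueues.
func (c *BufferedCache) Set(key string, val []byte) {
	c.wg.Add(1)
	c.queue <- write{key, val}
}

// Wait blocks until every buffered write has been applied — cheap here,
// but in a hot path the caller stalls, hence the caveat above.
func (c *BufferedCache) Wait() { c.wg.Wait() }

func (c *BufferedCache) Get(key string) ([]byte, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.data[key]
	return v, ok
}

func main() {
	c := NewBufferedCache()
	c.Set("k", []byte("v"))
	// A Get right here may or may not see "k" yet: the write is async.
	c.Wait() // after Wait, the write is guaranteed visible
	_, ok := c.Get("k")
	fmt.Println(ok) // true
}
```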
Since hit/miss tracking still has to live in my own code, I wrapped the cache with atomic counters:

import (
	"sync/atomic"

	"github.com/dgraph-io/ristretto/v2"
)

type FileCache struct {
	cache  *ristretto.Cache[string, []byte]
	hits   atomic.Uint64
	misses atomic.Uint64
}

// Get reads through the cache and records a hit or miss atomically.
func (fc *FileCache) Get(key string) ([]byte, bool) {
	val, ok := fc.cache.Get(key)
	if ok {
		fc.hits.Add(1)
	} else {
		fc.misses.Add(1)
	}
	return val, ok
}
Final Thoughts
HashiCorp’s LRU cache is great for simple use cases, but when performance and scalability matter (especially with concurrent file reads/writes), Ristretto is a better fit.
Highly recommend it for high-performance Go applications.
Resources
If you've used Ristretto, let me know how you're using it.