Copy, customize, publish. Perfect for your blog, newsletter, or content pipeline.
Imagine transforming a function that used to take seconds—or even minutes—to execute into one that returns instantly on repeated calls. Enter functools.lru_cache, Python’s built‐in memoization decorator. By caching results for given inputs, it prevents redundant computations, leading to dramatic speedups for many common tasks. In this guide, we’ll explore everything from basic usage to advanced patterns, so you can harness the full power of lru_cache in your projects.
How lru_cache Works
At its heart, lru_cache (Least Recently Used cache) wraps your function with an in-memory store that:
- Keys on arguments
Generates a unique, hashable key from the positional and keyword arguments you pass.
- Stores results
Keeps results in a dictionary for O(1) retrieval.
- Tracks usage
Maintains a doubly‐linked list to know which entries are least recently used.
- Evicts old items
When the cache is full, adding a new entry automatically discards the least recently used one.
- Exposes stats & control
Use .cache_info() to see hits/misses, and .cache_clear() to reset.
from functools import lru_cache

@lru_cache(maxsize=128)
def expensive_operation(x, y):
    # Imagine heavy computation here
    return x ** y

print(expensive_operation(2, 10))
print(expensive_operation.cache_info())  # Cache stats
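To see the LRU policy in action, here's a minimal sketch (the function and sizes are illustrative, not from the example above): with maxsize=2, a third distinct argument evicts whichever entry was used least recently.

from functools import lru_cache

@lru_cache(maxsize=2)
def square(n):
    print(f"computing {n}")  # printed only on a cache miss
    return n * n

square(1)  # miss: computed and cached
square(2)  # miss: cache now holds 1 and 2
square(1)  # hit: no print; 1 becomes most recently used
square(3)  # miss: cache full, evicts 2 (least recently used)
square(2)  # miss again, because 2 was evicted
print(square.cache_info())  # CacheInfo(hits=1, misses=4, maxsize=2, currsize=2)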
Core Use Cases
Optimizing Recursive Algorithms
A classic example is the Fibonacci sequence:
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)
- Naïve recursion re-calculates subproblems many times.
- With caching, each distinct n computes once; all others are instant lookups.
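You can verify this yourself with timeit; the sketch below is illustrative, and the absolute timings will differ on your machine:

import timeit
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n-1) + fib(n-2)

print(timeit.timeit(lambda: fib(35), number=1))  # first call fills the cache
print(timeit.timeit(lambda: fib(35), number=1))  # repeat call is a single lookup
print(fib.cache_info())  # 36 misses (n = 0..35); everything else was a hit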
Caching Expensive API Calls
For functions that hit rate-limited or costly APIs:
import requests
from functools import lru_cache

@lru_cache(maxsize=32)
def get_weather(city):
    # Placeholder endpoint; substitute your actual weather API URL
    resp = requests.get(f"https://api.example.com/weather/{city}")
    return resp.json()

# First call fetches, second call returns cached data
data1 = get_weather("London")
data2 = get_weather("London")
This saves bandwidth, reduces latency, and avoids hitting API limits.
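One caveat: lru_cache has no built-in expiry, so a cached response never refreshes. A common workaround, sketched below with an illustrative get_ttl_hash helper (not part of functools), is to pass an extra time-bucket argument that changes every few minutes, forcing a fresh fetch when the bucket rolls over:

import time
from functools import lru_cache

def get_ttl_hash(seconds=600):
    # Same value within any 600-second window, a new value afterwards,
    # so cached entries effectively expire every 10 minutes.
    return round(time.time() / seconds)

@lru_cache(maxsize=32)
def get_weather(city, ttl_hash=None):
    del ttl_hash  # present only to make the cache key time-sensitive
    # A real implementation would call the weather API here.
    return {"city": city, "fetched_at": time.time()}

# Both calls in the same window share one cached payload.
first = get_weather("London", ttl_hash=get_ttl_hash())
second = get_weather("London", ttl_hash=get_ttl_hash())
print(first is second)  # True until the TTL window rolls over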
Speeding Up Database Queries
In web apps, identical queries often repeat:
@lru_cache(maxsize=64)
def fetch_user_profile(user_id):
    # Example ORM call
    return User.objects.get(id=user_id)
- Before: Every request to the profile page triggers a new DB hit.
- After: The first lookup hits the DB; subsequent calls come from cache.
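The flip side is staleness: a cached profile won't reflect later writes. Since lru_cache offers no per-key eviction, the simple option is to clear the whole cache on updates. The sketch below uses a dict as a stand-in database so it runs on its own; update_user_profile is a hypothetical write path:

from functools import lru_cache

_DB = {1: {"id": 1, "name": "Ada"}}  # stand-in for a real database table

@lru_cache(maxsize=64)
def fetch_user_profile(user_id):
    print(f"DB hit for user {user_id}")  # printed only on a cache miss
    return _DB[user_id]

def update_user_profile(user_id, **fields):
    _DB[user_id].update(fields)
    fetch_user_profile.cache_clear()  # no per-key eviction, so clear everything

fetch_user_profile(1)  # DB hit
fetch_user_profile(1)  # served from cache, no print
update_user_profile(1, name="Grace")
fetch_user_profile(1)  # DB hit again: the cache was cleared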
| Scenario | Without Cache | With lru_cache | Speedup |
|---|---|---|---|
| Fibonacci(35) naive recursion | ~1.2 seconds | ~20 microseconds | ~60,000× |
| Repeated API call (10 calls) | ~500 ms per call | ~0.05 ms per call | ~10,000× |
| Identical DB query (100 calls) | ~10 ms per query | ~0.01 ms per call | ~1,000× |
Advanced Techniques & Customization
Pro tip: Always measure with tools like timeit or your framework's profiling to get real numbers in your environment.
Tuning maxsize
- maxsize=None gives an unbounded cache (beware memory growth).
- A small maxsize keeps memory bounded at the cost of more misses.
Typed caching
- @lru_cache(maxsize=128, typed=True) treats 1 and 1.0 as distinct keys.
Handling mutable args
- Convert lists/dicts to tuples/frozensets before passing (see the sketch after this list).
Manual cache management
- Call expensive_operation.cache_clear() to empty the cache on demand.
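A short sketch of two of these points, with illustrative function names: convert an unhashable list to a tuple at the call site, and note how typed=True splits keys that compare equal:

from functools import lru_cache

@lru_cache(maxsize=128)
def summarize(numbers):
    # numbers must be hashable, hence a tuple rather than a list
    return sum(numbers) / len(numbers)

data = [3, 1, 4, 1, 5]
print(summarize(tuple(data)))  # convert at the call site
print(summarize(tuple(data)))  # cache hit: equal tuples hash alike

@lru_cache(maxsize=128, typed=True)
def identity(x):
    return x

identity(1)
identity(1.0)
print(identity.cache_info())  # misses=2: 1 and 1.0 get separate entries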
Common Pitfalls & Solutions
- High memory usage
Solution: Set a reasonable maxsize.
- Unhashable arguments
Solution: Pre-transform mutable arguments into hashable types.
- Hidden side effects
Solution: Cache only pure functions; avoid on I/O or state-mutating calls.
- Debugging complexity
Solution: Temporarily disable or clear the cache during tests (see the fixture sketch after this list).
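For that last point, one lightweight approach is a pytest fixture that resets the cache around every test; the import path here is a placeholder for wherever your cached function lives:

import pytest

from myapp.compute import expensive_operation  # placeholder import path

@pytest.fixture(autouse=True)
def cold_cache():
    # Start and end each test with an empty cache so results can't leak.
    expensive_operation.cache_clear()
    yield
    expensive_operation.cache_clear()

If you need to bypass the cache entirely, the undecorated function is still reachable as expensive_operation.__wrapped__.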
- Disk-based persistence: Combine lru_cache with a disk-backed caching library, or your own file-backed store, to survive restarts.
- Async functions: Plain lru_cache caches the coroutine object rather than its result, so use an async-aware caching decorator for coroutines (see the sketch after this list).
- Web frameworks: Layer lru_cache inside view functions or service layers for micro-caching.
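To see why plain lru_cache and coroutines don't mix, here's a minimal sketch: the decorator caches the coroutine object itself, and an already-awaited coroutine can't be awaited again.

import asyncio
from functools import lru_cache

@lru_cache(maxsize=None)
async def fetch(x):
    return x * 2

async def main():
    print(await fetch(1))  # works once: the coroutine is created and awaited
    try:
        await fetch(1)     # cache returns the same, already-awaited coroutine
    except RuntimeError as exc:
        print(exc)         # "cannot reuse already awaited coroutine"

asyncio.run(main())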
Further Reading
- Real Python: Caching in Python Using the LRU Cache Strategy
- Medium: Supercharge Your Python Functions with @functools.cache
- Stack Overflow: How does lru_cache from functools work?
functools.lru_cache is more than a convenience—it’s a strategic tool for improving throughput, reducing latency, and scaling your Python applications gracefully. Start by identifying hot-spot functions in your code, apply caching judiciously, and measure the gains. With the advanced tips above, you’ll be ready to tackle even the most demanding performance challenges.
Done-for-You Kits to Make Money Online (Minimal Effort Required)
When you’re ready to scale your content without writing everything from scratch, these done-for-you article kits help you build authority, attract traffic, and monetize—fast.
Each bundle includes ready-to-publish articles.
→ Use these to build blogs, info products, niche sites, or social content—without spending weeks writing.
Featured Kit
Here’s one you can grab directly:
85+ Ready-to-Publish Articles on DIY Computing, Hardware Hacking, and Bare-Metal Tech
Once upon a time, computers weren’t bought—they were built. This is your blueprint to rediscover that lost world. This collection takes you deep into the hands-on world of crafting computers from the ground up. From transistors to assembly language, it’s a roadmap for tinkerers, educators, and digital pioneers who want to understand (or teach) how machines really work.
What You Get