MintCache is a caching engine for Django that lets you get by with stale data while you freshen your breath, so to speak.
The purpose of this caching scheme is to avoid the dog-pile effect. Dog-piling is what normally happens when your cached data takes longer to generate than the interval between incoming requests. In other words, if your data takes 5 seconds to generate and you are serving 10 requests per second, then when the data expires an ordinary cache scheme will spawn 50 attempts at regenerating it before the first request completes. The increased load from the 49 redundant processes may further increase the time it takes to generate the data, and if that happens you are well on your way into a death spiral.
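To put numbers on that, here is a throwaway back-of-the-envelope sketch using the illustrative figures above (nothing MintCache-specific):

requests_per_second = 10     # traffic the server is handling
seconds_to_generate = 5      # time to rebuild the expired data

# With a plain expire-then-miss cache, every request that arrives while the
# first rebuild is still running also sees a miss and starts its own rebuild.
total_attempts = requests_per_second * seconds_to_generate    # 50
redundant_attempts = total_attempts - 1                       # 49 wasted rebuilds
print(total_attempts, redundant_attempts)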
MintCache works to prevent this scenario by using memcached to keep track of not just an expiration date but also a stale date. The first client to request data past the stale date is asked to refresh it, while subsequent requests are given the stale but not-yet-expired data as if it were fresh, with the understanding that it will get refreshed in a 'reasonable' amount of time by that initial request.
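The same idea can be sketched without Django or memcached at all. Here is a toy version over a plain dict, just to illustrate the mechanism; every name in it is made up for this example, and the real backend below additionally gives memcached a much longer hard expiry so that stale reads remain possible:

import time

_store = {}   # stands in for memcached in this toy sketch

def mint_set(key, value, timeout):
    # Record the stale date alongside the value.
    _store[key] = (time.time() + timeout, value)

def mint_get(key, grace=60):
    entry = _store.get(key)
    if entry is None:
        return None                      # genuine miss: this caller regenerates
    stale_after, value = entry
    if time.time() > stale_after:
        # First caller past the stale date: push the stale date forward so
        # only this caller is told to regenerate...
        _store[key] = (time.time() + grace, value)
        return None                      # ...by reporting a miss to it
    return value                         # everyone else gets the value as if it were fresh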
I don't think Django has a mechanism for registering alternative cache engines, or if it does I jumped past it somehow. Here's an excerpt from my cache.py where I've just added it alongside the existing code. You'll have to hook it in yourself for the time being (one possible way to wire it in is sketched after the excerpt). ;-)
More discussion here.
# pickle and time are imported at the top of cache.py; _Cache, scrub_key and
# cache_log are the base class and helpers already defined in my copy of it.
try:
    import memcache
except ImportError:
    _MintCache = None
else:
    class _MintCache(_Cache):
        "Memcached cache backend, the sequel."

        def __init__(self, server, params):
            _Cache.__init__(self, params)
            self._cache = memcache.Client(server.split(';'))

        def get(self, key, default=None):
            key = self.scrub_key(key)
            val = self._cache.get(key)
            if val is None:
                val = default
            else:
                try:
                    stale_after, val = pickle.loads(val)
                    now = time.time()
                    if now > stale_after:
                        cache_log("stale, refreshing")
                        # Push the stale date forward so only this caller
                        # regenerates; the old val will last 60 more seconds.
                        self.set(key, val, 60)
                        val = default
                except Exception:
                    pass
            return val

        def set(self, key, value, timeout=0):
            key = self.scrub_key(key)
            if timeout == 0:
                timeout = self.default_timeout
            now = time.time()
            # Pickle the stale date together with the value, and give memcached
            # a much longer hard expiry so stale reads stay possible.
            val = pickle.dumps((now + timeout, value), 2)
            self._cache.set(key, val, 7 * 86400)

        def delete(self, key):
            key = self.scrub_key(key)
            self._cache.delete(key)
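For what it's worth, here is a rough sketch of one way to hook the class into the backend selection at the bottom of cache.py. It assumes the old single-module cache.py that maps the URI scheme in settings.CACHE_BACKEND to a backend class; the _BACKENDS mapping, the _MemcachedCache name and the 'mintcache' scheme are assumptions about that layout, so adapt them to whatever your copy actually contains.

# Hypothetical wiring; the names below follow the old single-module cache.py
# layout and may differ in your copy.
_BACKENDS = {
    'memcached': _MemcachedCache,   # the stock memcached backend
    'mintcache': _MintCache,        # the class from the excerpt above
}

# and in settings.py, point the cache at the new scheme, for example:
# CACHE_BACKEND = 'mintcache://127.0.0.1:11211/'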
Comments
Note that when a stale value is detected, val is assigned the default value (the val = default right after the self.set(key, val, 60) call in get()).
This makes the caller who gets a near miss think it's an actual miss so that it takes the branch to re-fill the cache.
e.g.
spam = cache.get('key')
if spam is None:
    spam = "SPAM" * 1000
    cache.set('key', spam, 120)
The caller who gets a near miss (but only that caller) takes the "spam is None" branch.
Do you reckon this should get put into trunk, either as an option or as the default? Has anyone submitted a ticket?