
Tag "cache"

Snippet List

JSON instead of pickle for memcached

The standard memcache client uses pickle as its serialization format. It can be handy to use JSON instead, especially when another component (e.g. a backend written in another language) doesn't understand pickle but does understand JSON.
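A minimal sketch of the idea, assuming the python-memcached package (the subclass name is illustrative, not the snippet's):

```python
import json

import memcache  # the python-memcached package


class JSONMemcacheClient(memcache.Client):
    # Values go over the wire as JSON text, so any consumer can parse them.
    def set(self, key, val, **kwargs):
        return super(JSONMemcacheClient, self).set(key, json.dumps(val), **kwargs)

    def get(self, key):
        raw = super(JSONMemcacheClient, self).get(key)
        return json.loads(raw) if raw is not None else None
```

Usage: `mc = JSONMemcacheClient(['127.0.0.1:11211']); mc.set('answer', {'x': 1})`.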

  • memcache
  • cache
  • json
  • memcached
  • pickle
Read More

Cached model property decorator (like @property)

This is a nice decorator for using the cache to save the results of model properties that are expensive to calculate but static per instance. There is also a decorator for when the property value is another model, and the contents of the other model should not be cached across requests. Three levels of caching are implemented:

  • outermost: the Django cache
  • middle: obj._cache (for multiple properties on a single object)
  • innermost: replace the attribute on this object, so we can entirely avoid running the function a second time.
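A minimal sketch of the outer two layers, assuming a Django model with a pk (the decorator name and key format are mine, and it assumes the property value is never None):

```python
from functools import wraps

from django.core.cache import cache


def cached_model_property(ttl=300):
    # Layer 1: the Django cache; layer 2: a per-instance _cache dict.
    def decorator(func):
        @wraps(func)
        def wrapper(self):
            local = getattr(self, '_cache', None)
            if local is None:
                local = self._cache = {}
            if func.__name__ not in local:
                key = 'modelprop:%s:%s:%s' % (
                    self.__class__.__name__, self.pk, func.__name__)
                value = cache.get(key)
                if value is None:  # assumes None is never a real value
                    value = func(self)
                    cache.set(key, value, ttl)
                local[func.__name__] = value
            return local[func.__name__]
        return wrapper
    return decorator
```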

  • property
  • cache
  • decorator
Read More

Get the Django decorator/middleware cache key for given URL

This mimics the keys used internally by the @cache_page decorator and the site-wide CacheMiddleware. Now you can poke them, prod them, delete them, do what you like with them (e.g., delete one after you update some content and you want a specific URL refreshed).
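Django ships a helper that computes exactly these keys, django.utils.cache.get_cache_key; here is a short sketch of the pattern (the function name expire_page_cache is mine):

```python
from django.core.cache import cache
from django.test.client import RequestFactory
from django.utils.cache import get_cache_key


def expire_page_cache(path):
    # Build a request for the URL and look up the cache_page/CacheMiddleware
    # key. Returns True if a cached response was found and deleted.
    request = RequestFactory().get(path)
    key = get_cache_key(request)  # None if nothing was cached for this URL yet
    if key and cache.get(key) is not None:
        cache.delete(key)
        return True
    return False
```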

  • cache
  • key
Read More
Author: s29

Property Attributes in Memcache

Setting the attribute will store the value in memcached; accessing the attribute will retrieve it from memcached. Most useful as a way of simulating memcached "fields" on model instances. See the docstring for details.
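A minimal descriptor sketch of the idea (the class name and key format are assumptions, not the snippet's):

```python
from django.core.cache import cache


class CacheAttribute(object):
    # Reads and writes go straight to the cache, keyed per model instance.
    def __init__(self, name, default=None):
        self.name = name
        self.default = default

    def _key(self, instance):
        return 'attr:%s:%s:%s' % (type(instance).__name__, instance.pk, self.name)

    def __get__(self, instance, owner):
        if instance is None:
            return self
        value = cache.get(self._key(instance))
        return self.default if value is None else value

    def __set__(self, instance, value):
        cache.set(self._key(instance), value)


# Usage: class Article(models.Model): view_count = CacheAttribute('view_count', 0)
```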

  • cache
  • memcached
  • descriptors
Read More
Author: ori

Cache view by user (and anonymous)

Use this decorator on your views to cache the HttpResponse per user, so each user gets their own cache instead of the shared one that `django.views.decorators.cache.cache_page` produces. To use it:

```python
from somewhere import cache_per_user

@cache_per_user(ttl=3600, cache_post=False)
def my_view(request):
    return HttpResponse("LOL %s" % (request.user))
```

The documentation inside the decorator is in Brazilian Portuguese; feel free to translate it to English.
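A rough sketch of how such a decorator can work (not the original code, whose comments are in Portuguese):

```python
from django.core.cache import cache


def cache_per_user(ttl=None, cache_post=False):
    # Key the cached response on the user, so no two users share an entry.
    def decorator(view):
        def wrapper(request, *args, **kwargs):
            if request.method == 'POST' and not cache_post:
                return view(request, *args, **kwargs)
            user_id = (request.user.id
                       if request.user.is_authenticated() else 'anonymous')
            key = 'view_cache_%s_%s' % (request.path, user_id)
            response = cache.get(key)
            if response is None:
                response = view(request, *args, **kwargs)
                cache.set(key, response, ttl)
            return response
        return wrapper
    return decorator
```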

  • views
  • cache
  • decorator
Read More

Effective content caching for mass-load site using redirect feature

Hi all, I would like to share my idea and experience of using the HTTP redirect feature to organize effective content caching and to avoid site blocking under mass requests for not-yet-cached content.

First of all, I should explain that I don't mean the usual template-based pages, for which Django's caching is sufficient in almost all cases. I am talking about content such as dynamically created images, external files and other content that may take an almost unpredictably long time to become ready, up to about 10 seconds. Content that is already in the cache can be returned immediately. The question is: what *should* happen if the requested content is *not yet present* in the cache?

HTML content is not a problem in this case: we can return an intermediate page with meta information asking the client to retry after some period of time. But non-HTML content, such as an image or JSON, is a problem. The usual *blocking* behavior of the Django caching subsystem is not appropriate when the content preparation process takes *more than a fraction* of a second. For the whole period while the content is being prepared (painted, fetched from an external resource and so on), *hundreds* of users will keep *asking for the not-yet-cached* content, until all available application server connections are *occupied* by waiting requests and a new user has to wait even if the content he requested is actually ready to be returned.

The celery package solves this problem *partially*. It allows you to schedule a cache update before the cache expires, asynchronously (I am using the threadpool package for this purpose instead), and it avoids blocking on the content preparation procedure, but the problem of not-yet-available content remains unresolved. We might schedule periodic cache updates to avoid the situation, but that leads to unnecessary resource load and may be entirely inappropriate in some cases, e.g. for a service like CloudMade or Google "staticmap" that paints map content based on the request parameters.

My solution is to *use the HTTP redirect feature*. The ideal solution would be available if most HTTP clients supported the Retry-After header in an HTTP redirect response. Unfortunately, none of the most popular clients (neither Mozilla, nor IE, nor Chrome) does: they request the redirected URL immediately after receiving the redirection response and ignore the Retry-After header entirely. I hope they *will* support this feature eventually, but that is not enough right now. So, instead of delegating the waiting job to the client side, we need to organize the waiting on the server.

For this purpose I have created a small separate application server with its *own connection pool* and delegated the waiting task to it (see the snippet). This server needs one URL (/retry_after in the example) pointing to the view above, installed in urls.py, and an additional DEFAULT_RETRY_TIMEOUT parameter in settings.py. The view uses the request's GET parameters, so Django will not try to cache its results. The only task of this server is to *suspend* redirected requests until the requested time has *actually arrived*, and then redirect the request to the original URL. To make it possible to process many requests simultaneously, it breaks the wait and redirects requests that have waited too long back to itself, based on the default timeout. Using this server is simple.
We instantiate the 'retry' server on a separate URL as a *separate application server* and redirect all requests that need to be delayed to it, passing the original URL and the moment when the requested content should be ready as parameters. The main server, which prepares and returns the content, *never waits*: it returns prepared content from the cache if it is found there, or *starts* the content preparation process *asynchronously* (using threads, the celery package, or any other appropriate technique) and *immediately redirects* the request to the 'retry' server. The following is a fragment of the code using the 'retry' server:

```python
...
# Create your views here.
def jams(request, z, x, y):
    jams_timeout = settings.JAMS_TIMEOUT
    retry_after_server = settings.RETRY_AFTER_SERVER
    try:
        jams_ts, jams_png = request_jams(z, x, y)
    except Http404, ex:
        return HttpResponseNotFound()
    if not jams_png:
        if settings.DEBUG:
            logger.debug("JAMS NOT YET READY FOR: %s" % str((z, x, y)))
        # Tell the client to retrieve the jams later
        time_after = time.time() + settings.DEFAULT_RETRY_TIMEOUT
        if settings.DEBUG:
            logger.debug("SEND RETRY AFTER FOR: %s" % str((z, x, y)))
        loc = reverse(jams, args=(z, x, y))
        if request.get_host():
            loc = request.build_absolute_uri(loc)
        r = HttpResponseRedirect(
            retry_after_server + '?' +
            urlencode({'redirect': loc, 'retry_after': int(time_after)}))
        r['Retry-After'] = settings.DEFAULT_RETRY_TIMEOUT
        return r
    # Return a png taken from the cache
    if settings.DEBUG:
        logger.debug("RETURNING JAMS FOR: %s" %
                     str((z, x, y, http_date(jams_ts), http_date(time.time()))))
    r = HttpResponse(jams_png)
    r['Content-Type'] = "image/png"
    r['Content-Length'] = len(jams_png)
    r['Content-Location'] = request.path  # extra data for cache negotiation
    # The Common and ConditionalGet Django middleware will do the rest
    r['ETag'] = '"%s"' % md5_constructor(jams_png).hexdigest()
    r['Last-Modified'] = http_date(jams_ts)
    r['Expires'] = http_date(jams_ts + jams_timeout)
    return r
...
```

At the application level, the request_jams() call tries to get content from the cache and starts the content update process asynchronously if required. It returns the content taken from the cache along with the timestamp of when the content was last created, or a pair of None, None values if the requested content is not immediately available. The line starting with 'r = HttpResponseRedirect' redirects the request to the 'retry' server when the content is not immediately available.

I am using nginx as a front-end proxy server which passes requests to two separate FastCGI daemons, the 'retry' server and the main Django application server, separating requests by prefix. The following is a fragment of the nginx.conf configuration file:

```nginx
server {
    ...
    location /jams {
        fastcgi_pass unix:/var/tmp/jams.sock;
        ...
    }
    location /retry_after {
        fastcgi_pass unix:/var/tmp/retry.sock;
        ...
    }
    ...
}
```

As you can see, the two locations are passed to *different* application server instances in the production environment, so they *don't block each other*. In fact, both are implemented inside one Django project and started from the same directory as FastCGI daemon instances with different startup parameters (sockets to listen on, log files, etc.). I used this structure so I could configure and debug the server easily: I can either start a single development server for both locations, or start the production server in the 'split' mode described above. Note that both servers (jams and retry) use the same DEFAULT_RETRY_TIMEOUT setting from settings.py.
DEFAULT_RETRY_TIMEOUT plays the role of a load-balancing parameter for requests that are waiting for an asynchronous job to finish. After the retry timeout has passed, control returns to the client via the redirection response (or, for smart clients, the client itself waits out the retry timeout before requesting the resource again), so the server may clean up the resources temporarily acquired by those requests. An additional bonus of this technique is a stable processor load on the server. Even at peak time the load is stable and controlled: the number of application instances and the number of threads in the thread pool (evaluating asynchronous jobs) for each instance are fixed by configuration, the nginx load per request is minimal, and concurrent requests are balanced (between server, network, and clients) by the retry timeout and the redirect responses. You can see the result of this work on our site, e.g. [on this page](http://www.doroga.tv/nnov/apps/map/), where all the (transparent) jam tiles are painted in this manner. The only difference in the production environment is that some top zoom levels of the most popular regions are refreshed by a special daemon (also written with Django, as a management command) that forces tiles to be refreshed in the cache even when no client is requesting them, to decrease response time for the jams map after a long 'sleep' period.
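The retry view itself is not shown above; a minimal sketch of what such a view could look like, following the description (/retry_after with `redirect` and `retry_after` GET parameters, DEFAULT_RETRY_TIMEOUT in settings):

```python
import time

from django.conf import settings
from django.http import HttpResponseBadRequest, HttpResponseRedirect


def retry_after(request):
    # The original URL and the moment the content should be ready.
    redirect = request.GET.get('redirect')
    if not redirect:
        return HttpResponseBadRequest()
    ready_at = float(request.GET.get('retry_after', 0))
    # Wait at most DEFAULT_RETRY_TIMEOUT; if the content still is not due,
    # redirect back to ourselves so this worker's connection is freed.
    deadline = time.time() + settings.DEFAULT_RETRY_TIMEOUT
    while time.time() < min(ready_at, deadline):
        time.sleep(0.1)
    if time.time() >= ready_at:
        return HttpResponseRedirect(redirect)  # content should be ready now
    return HttpResponseRedirect(request.get_full_path())  # keep waiting
```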

  • cache
  • load
  • redirect
  • nginx
  • content
  • mass
Read More

Method Caching

A very simple decorator that caches both on-class and in memcached:

```python
@method_cache(3600)
def some_intensive_method(self):
    return  # do intensive stuff
```

Alternatively, if you just want to keep it per request and forgo memcaching, just do:

```python
@method_cache()
def some_intensive_method(self):
    return  # do intensive stuff
```
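A sketch of how such a decorator can be built (not the original code; it ignores method arguments and assumes results are never None, for brevity):

```python
from functools import wraps

from django.core.cache import cache


def method_cache(seconds=0):
    def decorator(func):
        @wraps(func)
        def wrapper(self):
            attr = '_cached_%s' % func.__name__
            if hasattr(self, attr):                       # "on-class" cache
                return getattr(self, attr)
            key = 'method_cache:%s:%s:%s' % (
                type(self).__name__, getattr(self, 'pk', id(self)), func.__name__)
            result = cache.get(key) if seconds else None  # memcached cache
            if result is None:
                result = func(self)
                if seconds:
                    cache.set(key, result, seconds)
            setattr(self, attr, result)
            return result
        return wrapper
    return decorator
```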

  • memcache
  • cache
  • decorator
  • memcached
  • decorators
  • caching
Read More

Non-pickling locmem (in-process memory) cache backend

You can use this cache backend to cache data in-process and avoid the overhead of pickling. Make absolutely sure you don't modify any data you've stored to or retrieved from the cache. Make deep copies instead if necessary. The backend is basically identical to Django's stock locmem cache (as of r15852 - after 1.3rc1) with pickling removed. It has been tested with that specific Django revision, so basically it's >=1.3 compatible. See [Django ticket #6124](http://code.djangoproject.com/ticket/6124) for some background information.
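A much-simplified sketch of the idea (the real snippet mirrors Django's locmem backend, including its locking; this one has none):

```python
import time

from django.core.cache.backends.base import BaseCache


class NonPicklingLocMemCache(BaseCache):
    # Values are kept as live object references and never pickled, so
    # callers must treat anything stored or retrieved as read-only.
    def __init__(self, location, params):
        BaseCache.__init__(self, params)
        self._data = {}  # key -> (expiry timestamp, value)

    def get(self, key, default=None, version=None):
        key = self.make_key(key, version=version)
        item = self._data.get(key)
        if item is None or item[0] < time.time():
            self._data.pop(key, None)
            return default
        return item[1]

    def set(self, key, value, timeout=None, version=None):
        key = self.make_key(key, version=version)
        timeout = timeout if timeout is not None else self.default_timeout
        self._data[key] = (time.time() + timeout, value)

    def add(self, key, value, timeout=None, version=None):
        if self.get(key, version=version) is None:
            self.set(key, value, timeout=timeout, version=version)
            return True
        return False

    def delete(self, key, version=None):
        self._data.pop(self.make_key(key, version=version), None)
```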

  • cache
  • pickle
  • backend
  • locmem
  • memory
  • process
Read More

New view decorator to only cache pages for anonymous users

This is an addition to Django's cache_page decorator. I wanted to cache pages for anonymous users but not cache the same page for logged in users. There's middleware to do this at a site-wide level, and it also gives you the ability to partition anonymous users, but then you have to tell Django which pages not to cache with @never_cache.
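A sketch of the idea (not the snippet's exact code): apply cache_page only when the visitor is not logged in.

```python
from functools import wraps

from django.views.decorators.cache import cache_page


def cache_page_anonymous(timeout):
    def decorator(view):
        cached_view = cache_page(timeout)(view)

        @wraps(view)
        def wrapper(request, *args, **kwargs):
            if request.user.is_authenticated():  # a property in modern Django
                return view(request, *args, **kwargs)  # never cached
            return cached_view(request, *args, **kwargs)
        return wrapper
    return decorator
```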

  • cache
  • view
  • decorator
  • anonymous
Read More

Run and cache only one instance of a heavy request

I have many heavy views that run slowly when accessed at the same time from multiple threads. I wrote this decorator to allow only one instance of a view to run at a time and to cache the returned result. Other threads wait for the first thread to complete and use the response from the cache once the executing thread has put it there. I also borrowed the MintCache idea of refreshing a stale cache entry while still returning the cached (stale) response while the entry is being refreshed. Usage:

```python
@single_cacheable(cache_timeout=60, stale_timeout=30,
                  key_template='my_heavy_view-{arg1}-{arg2}')
def heavy_view(request, arg1, arg2):
    response = HttpResponse()
    # ... your code here
    # the cache timeout may also be set from inside the view
    response._cache_timeout = cache_time
    return response
```

The "key_template" is a template for the cache key. Some of my views have an additional parameter, "cache_time", which is set from the parent page; request.path differs for these pages, but they need to be cached independently of this parameter. The "key_template" in the example uses named arguments; if you have no named parameters, use 'my_heavy_view-{1}-{2}-{...}' where {1}, {2}, {...} are the positional arguments of the view. The line that sets the "key" variable may be changed to key = "sc_" + request.get_full_path() if you need to use the full URL path as the cache key.
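A rough sketch of the mechanism (not the original code): cache.add() acts as a lock so only one thread rebuilds the response, while the others poll the cache.

```python
import time

from django.core.cache import cache


def single_cacheable(cache_timeout=60, stale_timeout=30, key_template=''):
    def decorator(view):
        def wrapper(request, *args, **kwargs):
            key = 'sc_' + key_template.format(*args, **kwargs)
            response = cache.get(key)
            if response is not None:
                return response
            if cache.add(key + ':lock', 1, stale_timeout):  # we won the lock
                try:
                    response = view(request, *args, **kwargs)
                    cache.set(key, response,
                              getattr(response, '_cache_timeout', cache_timeout))
                finally:
                    cache.delete(key + ':lock')
                return response
            # Another thread is building the response: wait briefly for it.
            for _ in range(50):
                time.sleep(0.1)
                response = cache.get(key)
                if response is not None:
                    return response
            return view(request, *args, **kwargs)  # give up waiting
        return wrapper
    return decorator
```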

  • cache
  • decorator
Read More

Per-Instance On-Model M2M Caching

If you are like me and you find yourself often using M2M fields in tons of other on-model methods, in templates, and in views alike, try this quick and dirty caching. I show the use of a "through" model for the M2M, but that is purely optional. For example, let's say we need to do several different things with our list of beads. The old way is:

```python
# views.py
necklace = Necklace.objects.get(id=1)
bead = Bead.objects.get(id=1)
if bead in necklace.beads.all():
    pass  # this bead is here!
```

```html
<!-- template necklace.html -->
{% for bead in necklace.beads.all %}
    <li>{{ bead }}</li>
{% endfor %}
```

...which would hit the database twice. Instead, we do this:

```python
# views.py
necklace = Necklace.objects.get(id=1)
bead = Bead.objects.get(id=1)
if bead in necklace.get_beads():
    pass  # this bead is here!
```

```html
<!-- template necklace.html -->
{% for bead in necklace.get_beads %}
    <li>{{ bead }}</li>
{% endfor %}
```

which hits the database only once. While we could just as easily have assigned the M2M query to a variable and passed it to the template to do the same thing, the great thing is how you can build extra methods on the model that use the M2M field but share the cache with anything down the line that uses the same field. I'm by no means an expert, so if there is something here I've done foolishly, let me know.
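A minimal sketch of what a get_beads() like the one used above could look like (model fields are illustrative):

```python
from django.db import models


class Bead(models.Model):
    name = models.CharField(max_length=100)


class Necklace(models.Model):
    beads = models.ManyToManyField(Bead)

    def get_beads(self):
        # Per-instance cache: the M2M query runs once; every later call,
        # in views, templates, or other model methods, reuses the list.
        if not hasattr(self, '_bead_cache'):
            self._bead_cache = list(self.beads.all())
        return self._bead_cache
```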

  • models
  • cache
  • m2m
  • caching
Read More

Use memcached to throttle POSTs

Problem: you want to limit POSTs to a view. This can be accomplished with a view decorator that stores hits by IP in memcached, incrementing the cached value and returning 403s when the cached value exceeds a certain threshold for a given IP.
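A sketch of the described approach (the decorator name and limits are assumptions):

```python
from django.core.cache import cache
from django.http import HttpResponseForbidden


def throttle_posts(limit=10, window=60):
    # Count POSTs per IP in the cache; reject once over the limit.
    def decorator(view):
        def wrapper(request, *args, **kwargs):
            if request.method == 'POST':
                key = 'throttle:%s' % request.META.get('REMOTE_ADDR', '')
                added = cache.add(key, 1, window)  # starts a fresh window
                count = 1 if added else cache.incr(key)
                if count > limit:
                    return HttpResponseForbidden('Too many POSTs')
            return view(request, *args, **kwargs)
        return wrapper
    return decorator
```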

  • cache
  • decorator
Read More

Cached template filters

Say you'd like to cache one of your template filters. This decorator acts sort of like memoize, caching a result set based on the arguments passed in (which are used as the cache key).
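A memoize-style sketch of the idea (decorator and filter names are mine; real code should hash the key so long values or spaces don't upset memcached):

```python
from functools import wraps

from django import template
from django.core.cache import cache

register = template.Library()


def cached_filter(timeout=300):
    # The filter's arguments become the cache key.
    def decorator(func):
        @wraps(func)
        def wrapper(*args):
            key = 'filter:%s:%s' % (func.__name__, ':'.join(map(str, args)))
            result = cache.get(key)
            if result is None:
                result = func(*args)
                cache.set(key, result, timeout)
            return result
        return wrapper
    return decorator


@register.filter
@cached_filter(timeout=600)
def shout(value):
    return value.upper()  # stands in for an expensive filter
```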

  • template
  • cache
Read More

Extended db cache backend with 'filter() / LIKE' support (and now scheduled cache clean!)

Because db caching doesn't support atomic operations, it was unsafe to store a list of 'keys' in a single key. So, rather than store the list, I just append a specific tag to each key and then filter for it later. This means I don't need to worry too much about atomic usage with lists (i.e. queued requests). However, I can still think of many cases where I would need atomic usage, so I will probably implement this later on down the line. Hopefully the atomic modifications will be accepted by the core devs. This also contains threaded cache cleaning, which means you no longer need to rely on requests to clean the cache (which would potentially have slowed the user's query down); it removes any cache entries past their expiry date every 3 minutes. Enjoy! Cal
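The tag-and-filter idea can be pictured with a couple of lines of raw SQL against the db backend's table (the table name and key layout here are assumptions, not the snippet's):

```python
from django.db import connection


def delete_tagged(tag):
    # Every key was stored with the tag embedded in it, e.g. 'tag_news:...',
    # so one LIKE query removes the whole group -- no key list needed.
    cursor = connection.cursor()
    cursor.execute(
        "DELETE FROM my_cache_table WHERE cache_key LIKE %s",
        ['%%tag_%s%%' % tag])
```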

  • filter
  • cache
  • db
  • backend
  • like
Read More

78 snippets posted so far.