This function is designed to make it easier to specify client-side query filtering options using JSON. Django has a great set of query operators as part of its database API. However, there's no way I know of to specify them in a way that's serializable, which means they can't be created on the client side or stored.
`build_query_filter_from_spec()` is a function that solves this problem by describing query filters using a vaguely LISP-like syntax. Query filters consist of lists with the filter operator name first, and arguments following. Complicated query filters can be composed by nesting descriptions. Read the doc string for more information.
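As a purely hypothetical illustration of that shape (the actual operator names and semantics are defined in the function's docstring and may differ), a nested filter spec might look like:

    ['or', ['icontains', 'title', 'django'],
           ['and', ['exact', 'status', 'published'],
                   ['gte', 'rating', 3]]]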
To use this function in an AJAX application, construct a filter description in JavaScript on the client, serialize it to JSON, and send it over the wire using POST. On the server side, do something like:
> `from django.utils import simplejson`
> `filterString = request.POST.get('filter', '[]')`
> `filterSpec = simplejson.loads(filterString)`
> `q = build_query_filter_from_spec(filterSpec)`
> `result = Thing.objects.filter(q)`
You could also use this technique to serialize/marshal a query and store it in a database.
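For readers who want a feel for how such a function can be built, here is a rough sketch, assuming the hypothetical spec shape shown above and that leaf operators map directly onto Django field lookups; the real snippet's operator set lives in its docstring and may well differ:

    from django.db.models import Q

    def build_query_filter_from_spec(spec):
        """Hypothetical reimplementation: turn a nested list spec into a Q object."""
        op, args = spec[0], spec[1:]
        if op in ('and', 'or'):
            q = build_query_filter_from_spec(args[0])
            for sub_spec in args[1:]:
                sub_q = build_query_filter_from_spec(sub_spec)
                q = (q & sub_q) if op == 'and' else (q | sub_q)
            return q
        elif op == 'not':
            return ~build_query_filter_from_spec(args[0])
        else:
            # Leaf: treat the operator as a field lookup, so that
            # ['icontains', 'title', 'django'] becomes Q(title__icontains='django')
            field, value = args
            return Q(**{'%s__%s' % (field, op): value})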
- filter
- ajax
- json
- database
- query
Django supports serializing model objects, but it does not support serializing Q objects in the same way:
============================
from django.db.models import Q

q = Q(username__contains="findme")
model0.objects.filter(q)  # using a Q object in a query works fine
serialize(q)  # X -- there is no built-in way to serialize a Q object
============================
so I wrote a little marshaller for Q objects. Here is an example:
============================
from django.contrib.auth import models as django_models
from django.db.models import Q

qs = Q(username__contains="spike") | Q(email__contains="spike")
_m = QMarshaller()  # QMarshaller is the class this snippet provides
a = _m.dumps(qs)  # a now holds the serialized form of qs
============================
When you run similar queries page by page, you don't need to write additional code to recreate the same Q objects for filtering models; just pass the serialized Q in the HTTP query string, then deserialize and apply it on the next page. That keeps life simple.
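For reference, here is a minimal sketch of what such a marshaller could look like, assuming Django's tree-based Q implementation and JSON-serializable lookup values; the snippet's actual QMarshaller may be implemented differently:

    from django.db.models import Q
    from django.utils import simplejson

    class QMarshaller(object):
        """Round-trip a Q object through JSON by walking its tree of children."""

        def dumps(self, q):
            return simplejson.dumps(self._encode(q))

        def loads(self, data):
            return self._decode(simplejson.loads(data))

        def _encode(self, q):
            children = []
            for child in q.children:
                if isinstance(child, tuple):
                    children.append(list(child))          # a (lookup, value) leaf
                else:
                    children.append(self._encode(child))  # a nested Q
            return {'connector': q.connector,
                    'negated': q.negated,
                    'children': children}

        def _decode(self, spec):
            q = Q()
            q.connector = spec['connector']
            q.negated = spec['negated']
            for child in spec['children']:
                if isinstance(child, dict):
                    q.children.append(self._decode(child))
                else:
                    q.children.append(tuple(child))
            return q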
If you've ever wanted to dynamically look up values in the template layer (e.g. `dictionary[bar]`), you've probably realized (or been told) that you should do this in the Python layer. The problem is that you then often have to build a huge 2-D list to hold all of that data.
Here are two solutions to this problem: by using generators, we can stay lazy while still keeping things easy in the Python layer. I'm going to write more documentation later, but here's a quick example:
from django.contrib.auth.models import User
from django.shortcuts import render_to_response

from lazy_lookup import lazy_lookup_dict
from myapp.models import Article  # wherever your Article model lives

def some_view(request):
    users = User.objects.values('id', 'username')
    articles = Article.objects.values('user', 'title', 'body')
    articles = dict([(x['user'], x) for x in articles])
    return render_to_response('some_template.html',
        {'data': lazy_lookup_dict(users,
                                  key=lambda x: x['id'],
                                  article=articles,
                                  item_name='user')})
Then in the template layer you'd write something like:
{% for user_data in data %}
    {{ user_data.user.username }}, {{ user_data.article.title }}
{% endfor %}
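For context, here is a rough sketch of the kind of generator `lazy_lookup_dict` could be built around; the real implementation in `lazy_lookup` may differ:

    class _LazyItem(object):
        """Bundle one primary item and its related lookups together as attributes."""
        def __init__(self, item, item_name, key, lookups):
            setattr(self, item_name, item)
            for name, table in lookups.items():
                setattr(self, name, table.get(key(item)))

    def lazy_lookup_dict(items, key, item_name='item', **lookups):
        # A generator: rows are produced one at a time, so no big 2-D list
        # ever has to be built up front.
        for item in items:
            yield _LazyItem(item, item_name, key, lookups)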
- template
- dynamic
- lookup
- lazy
- dynamic-lookup
Have you ever felt the need to run multiple Django projects on the same memcached server? How about other cache backends? To scope the cache keys, you simply need to prefix them. However, since a lot of Django's internals rely on `django.core.cache.cache`, you cannot easily replace it everywhere.
This snippet automatically patches the `django.core.cache.cache` object when `settings.CACHE_PREFIX` is set to a string and `ScopeCacheMiddleware` is in your middleware list.
A thread discussing the merging of this functionality into Django is available on [the dev mailing list](http://groups.google.com/group/django-developers/browse_thread/thread/d45edaafec56da2a).
However, (as of now) nowhere in the thread does anyone mention the reason why this sort of treatment is needed: Many of Django's internal caching helpers use `django.core.cache.cache`, and will then conflict if multiple sites run on the same cache stores.
Example Usage:
>>> from django.conf import settings
>>> from django.core.cache import cache
>>> from scoped_caching import prefix_cache_object
>>> settings.CACHE_PREFIX
'FOO_'
# Do this once per process (e.g. at import time or in a middleware)
>>> prefix_cache_object(settings.CACHE_PREFIX, cache)
>>> cache.set("pi", 3.14159)
>>> cache.get("pi")
3.14159
>>> cache.get("pi", use_global_namespace=True)
>>> cache.get("FOO_pi", use_global_namespace=True)
3.14159
>>> cache.set("FOO_e", 2.71828, use_global_namespace=True)
>>> cache.get("e")
2.71828
To install: simply add `ScopeCacheMiddleware` to your middleware, define `settings.CACHE_PREFIX`, and enjoy!
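For the curious, here is a minimal sketch of how the two pieces could fit together, assuming old-style class middleware; the snippet's actual code may differ:

    from django.conf import settings
    from django.core.cache import cache
    from django.core.exceptions import MiddlewareNotUsed

    def prefix_cache_object(prefix, cache_obj):
        """Wrap the key-based cache methods so every key is silently prefixed,
        unless the caller passes use_global_namespace=True."""
        def wrap(method):
            def wrapper(key, *args, **kwargs):
                if not kwargs.pop('use_global_namespace', False):
                    key = prefix + key
                return method(key, *args, **kwargs)
            return wrapper
        for name in ('add', 'get', 'set', 'delete', 'has_key'):
            setattr(cache_obj, name, wrap(getattr(cache_obj, name)))

    class ScopeCacheMiddleware(object):
        def __init__(self):
            prefix = getattr(settings, 'CACHE_PREFIX', None)
            if isinstance(prefix, basestring):
                prefix_cache_object(prefix, cache)
            # The patch only needs to happen once per process, so drop this
            # middleware from the stack after it has run.
            raise MiddlewareNotUsed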
- middleware
- cache
- namespace
[A comment on a recent blog entry of mine](http://www.b-list.org/weblog/2008/feb/25/managers/#c63422) asked about a setup where one model has foreign keys pointing at it from several others, and how to write a manager which could attach to any of those models and query seamlessly on the relation regardless of what it's named.
This is a simple example of how to do it: in this case, both `Movie` and `Restaurant` have foreign keys to `Review`, albeit under different names, yet both use `ReviewedObjectManager` to provide a method for querying objects whose review assigned a certain rating. This works because an instance of `ReviewedObjectManager` "knows" what model it's attached to, and can introspect that model, using [Django's model-introspection API](http://www.b-list.org/weblog/2007/nov/04/working-models/), to find the correct name for the relation and then use it to perform the query.
Using model introspection in this fashion is something of an advanced topic, but is extremely useful for writing flexible, reusable code.
**Also**, note that the introspection cannot be done in the manager's `__init__()` method -- at that point, `self.model` is still `None` (it won't be filled in with the correct model until a bit later) -- so it's necessary to come up with some way to defer the introspection. In this case, I'm doing it in a method that's called when the relation name is first needed, and which caches the result in an attribute.
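To make the idea concrete, here is a rough sketch along the lines described above; the field and method names are illustrative rather than those of the original example, and old-style `field.rel` introspection is assumed:

    from django.db import models

    class Review(models.Model):
        rating = models.PositiveIntegerField()

    class ReviewedObjectManager(models.Manager):
        def _review_relation_name(self):
            # Deferred introspection: self.model isn't known yet in __init__(),
            # so look up (and cache) the relation name the first time it's needed.
            if not hasattr(self, '_relation_name'):
                for field in self.model._meta.fields:
                    if field.rel and field.rel.to is Review:
                        self._relation_name = field.name
                        break
            return self._relation_name

        def with_rating(self, rating):
            return self.filter(**{'%s__rating' % self._review_relation_name(): rating})

    class Movie(models.Model):
        movie_review = models.ForeignKey(Review)
        reviewed = ReviewedObjectManager()

    class Restaurant(models.Model):
        restaurant_review = models.ForeignKey(Review)
        reviewed = ReviewedObjectManager()

With this in place, `Movie.reviewed.with_rating(5)` and `Restaurant.reviewed.with_rating(5)` both work, even though the two models name their relation to `Review` differently.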
- managers
- models
- introspection