This is a skeleton unittest framework for an app that writes out a fixture of the test database once the tests have finished. I run this once for all apps, but you can limit which apps get serialized by changing the `self.apps` value from `get_apps` (all apps) to a list of specific apps. By default the script assumes you have a `SVN_DIR` setting pointing to your current Subversion working directory, with a `fixtures` subdirectory where it places `tests.json` upon completion; you may change this location as well. After running `python manage.py test` you can run `python manage.py loaddata fixtures/tests.json` to load all of the test database fixtures into the real database. Feel free to edit at will, let me know of any changes that are helpful, and don't forget to fill in the `...`s.
This recipe uses a modified version of Robin Dunn's fcgi.py module that adapts FastCGI to WSGI and lets you run Django under mod_fcgid. One good thing about mod_fcgid is that it does all process management for you, which makes this setup quite straightforward.
Also, since Robin's module works in both a CGI and a FastCGI context, switching a Django site between CGI and FastCGI is a one-liner in the Apache config, with no changes to Python code or Django settings. CGI may be handy for development, since it loads all code (including changed code) on every request, yet lets you work in an environment that resembles production.
Apache configuration examples are found in the comment at the beginning of the python module.
Expanded version of [snippet 715](http://www.djangosnippets.org/snippets/715/ "Django snippets: Simple View Middleware to allow a Prefilter") to be more flexible.
Updates:
* 2009-04-24: Multiple filters now work correctly
* 2009-03-22: Fixed bug
* 2009-02-03: Simplified process.
The DatabaseStorage class can be used with either FileField or ImageField. It maps filenames to database blobs, so you have to use it with a **special additional table created manually**. The table should contain:
* a primary-key column for filenames (it's probably best to use the same type that FileField uses: nvarchar(100))
* a blob column (an image type, for example)
* a size column (bigint type)
You can't simply add a blob column to the same table where the FileField is defined, since there is no way to find the required row in the save() method.
The size column is required for better performance (see the size() method).
So you can use it with different FileFields, even with different "upload_to" values. It thus implements a kind of root filesystem, where you define directories via "upload_to" on a FileField and store any files in those directories. Beware: saving a file with the same "virtual path" overwrites the old file.
It uses either settings.DB_FILES_URL or the constructor parameter 'base_url' (see __init__()) to build URLs to files. The base URL should be mapped to a view that provides access to the files (see the example in the class docstring). To store files in the same table where the FileField is defined, you would have to define your own field and pass an extra argument (e.g. the pk) to save().
Raw SQL is used for all operations. In the constructor, or in DB_FILES in settings.py, you should specify a dictionary with 'db_table', 'fname_column', 'blob_column', 'size_column' and 'base_url'. For example, I just put the following line in settings.py:

    DB_FILES = {'db_table': 'FILES', 'fname_column': 'FILE_NAME', 'blob_column': 'BLOB', 'size_column': 'SIZE', 'base_url': 'http://localhost/dbfiles/'}
And use it with an ImageField as follows:

    player_photo = models.ImageField(upload_to="player_photos", storage=DatabaseStorage())
DatabaseStorage class uses your settings.py file to perform custom connection to your database.
The reason to use custom connection: http://code.djangoproject.com/ticket/5135
The connection string looks like this:

    cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;DATABASE=testdb;UID=me;PWD=pass')
It's based on the pyodbc module, so it can be used with any database supported by pyodbc.
I've tested it with MS Sql Express 2005.
Note: it returns a special path, which should be mapped to a special view that returns the requested file:
**View and usage Example:**
    def image_view(request, filename):
        import os
        from django.http import HttpResponse
        from django.conf import settings
        from django.utils._os import safe_join
        from filestorage import DatabaseStorage
        from django.core.exceptions import ObjectDoesNotExist

        storage = DatabaseStorage()
        try:
            image_file = storage.open(filename, 'rb')
            file_content = image_file.read()
        except Exception:
            # Fall back to a placeholder image from MEDIA_ROOT.
            filename = 'no_image.gif'
            path = safe_join(os.path.abspath(settings.MEDIA_ROOT), filename)
            if not os.path.exists(path):
                raise ObjectDoesNotExist
            no_image = open(path, 'rb')
            file_content = no_image.read()

        response = HttpResponse(file_content, mimetype="image/jpeg")
        response['Content-Disposition'] = 'inline; filename=%s' % filename
        return response
**Warning:** *If the filename already exists, its blob will be overwritten. To change this, remove get_available_name(self, name), so that Storage.get_available_name(self, name) will be used to generate a new filename.*
For more information see docstrings in the code.
Please, drop me a line if you've found a mistake or have a suggestion :)
This is a somewhat simpler alternative to [http://www.djangosnippets.org/snippets/243/](http://www.djangosnippets.org/snippets/243/) that does not return a 401 response. It's meant to be used along with the login_required decorator as an alternative way to authenticate to REST-enabled views.
Usage:
    @http_basic_auth
    @login_required
    def my_view(request):
        ...
If an HTTP basic auth header is provided, the request will be authenticated before the login_required check happens. Otherwise, the normal redirect to login page occurs.
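The decorator itself needs Django's `authenticate`/`login`, but the header-parsing step it has to perform can be sketched framework-free. A minimal sketch, assuming a helper name (`parse_basic_auth`) of my own invention:

```python
import base64

def parse_basic_auth(header):
    """Split an 'Authorization: Basic ...' header value into
    (username, password), or return None if it is not Basic auth."""
    try:
        scheme, data = header.split(' ', 1)
    except ValueError:
        return None
    if scheme.lower() != 'basic':
        return None
    try:
        decoded = base64.b64decode(data.strip()).decode('utf-8')
        username, password = decoded.split(':', 1)
    except (ValueError, UnicodeDecodeError):
        return None
    return (username, password)
```

In the decorator, a successful parse would then be handed to `django.contrib.auth.authenticate()` before the wrapped view runs.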
A simple way to handle file validation by checking for a proper content type and by making sure that the file does not exceed the maximum upload size.
In the example I check that the file is an image or a video file, then I check that it is below the maximum upload limit. If the conditions are not met, a validation error is raised.
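The two checks can be sketched as a plain function; in an actual Django form you would put this logic in a `clean_<fieldname>` method and raise `forms.ValidationError` instead of `ValueError`. The limit and the type list below are hypothetical placeholders:

```python
# Hypothetical limits and types -- adjust to your own settings.
MAX_UPLOAD_SIZE = 10 * 1024 * 1024  # 10 MB
ALLOWED_TYPES = ('image/jpeg', 'image/png', 'image/gif', 'video/mp4')

def check_upload(content_type, size):
    """Raise ValueError unless the file's declared content type is an
    allowed image/video type and its size is within the upload limit."""
    if content_type not in ALLOWED_TYPES:
        raise ValueError('Unsupported file type: %s' % content_type)
    if size > MAX_UPLOAD_SIZE:
        raise ValueError('File too large: %d bytes (limit %d)'
                         % (size, MAX_UPLOAD_SIZE))
```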
This snippet adds simple partial support to your templates. You can pass data to the partial, and use it as you would in a regular template. It is different from Django's `{% include %}`, because it allows you to pass a custom variable (context), instead of reusing the same context for the included template. This decouples the templates from each other and allows for their greater reuse.
The attached code needs to go into a `templatetags` folder underneath your project. The usage is pretty simple: `{% load ... %}` the tag library, and use `{% partial_template template-name data %}` in your template. This will cause the template passed as **template-name** to be loaded from the **partials** folder. The **.html** extension will be appended to the file name, and the file has to be in one of the template paths accessible to the loader. The template is rendered with **data** as its context, available inside it as an `item` context variable.
You can find more information in the [relevant Django documentation](http://docs.djangoproject.com/en/dev/howto/custom-template-tags/#howto-custom-template-tags)
An example of how to modify the admin user creation form to assign an unusable password to externally authenticated users when they are created.
This code is more intimate with the django.contrib.auth classes than I'd like, but it should be fairly straightforward to maintain should the relevant django.contrib.auth classes change.
In-browser testing frameworks (I'm using [Windmill](http://www.getwindmill.com/)) have trouble testing file uploads because JavaScript's security policy prevents them from setting the value of file input fields. Instead, the tests must issue some sort of "fake" file upload request, but implementing this on an ad-hoc basis quickly gets ugly.
This middleware is designed to support fake file uploads as transparently and as thoroughly as possible. For example, it is careful to properly trigger any file upload handlers so that things like upload progress reporting will work correctly. It can also simulate a slow file upload by sleeping between reads from the file.
From the client-side point of view, each input field of type "file" has a similarly-named hidden field automatically prepended. Test scripts can simply set the value of this hidden field to trigger a fake upload, rather than having to set the value of the file input field itself.
This is a basic stub for a model that you can use to easily add customizable JSON serialization to your models. Make your model inherit from JsonableModel, and then define the models/yourmodel.json template with whatever information from the model that you want to make available.
Given a string, it first lowercases it, then uppercases the first letter of each sentence.
Helpful when dealing with awfully formatted entirely UPPERCASE XML product data feeds.
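A minimal sketch of the described transformation (the function name and the sentence-boundary heuristic of `.`, `!`, `?` are my assumptions, not the snippet's actual code):

```python
import re

def sentence_case(text):
    """Lowercase the whole string, then uppercase the first letter of
    each sentence (sentence ends taken as '.', '!' or '?')."""
    text = text.lower()
    # Uppercase the first alphabetic character of the string and any
    # letter following sentence-ending punctuation plus whitespace.
    return re.sub(r'(^\s*\w|[.!?]\s+\w)',
                  lambda m: m.group(0).upper(), text)
```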
Automatically adds filter methods to your model's manager, named after the display names of the field's choices.
    class Foo(models.Model):
        MOO_CHOICES = ((1, 'foo'), (2, 'bar'))
        moo = models.IntegerField(choices=MOO_CHOICES)
        objects = ChoiceFilterManager('moo', MOO_CHOICES)

    Foo.objects.foo()
    Foo.objects.bar()
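The underlying pattern can be shown without Django: for each `(value, display)` choice, attach a method named after the display name that filters by the value. This is only an illustration of the idea, not the snippet's code; the real manager would subclass `django.db.models.Manager` and have each generated method return `self.filter(**{field_name: value})`:

```python
class ChoiceFilterSketch(object):
    """Framework-free sketch: generate one filter method per choice."""

    def __init__(self, field_name, choices, rows):
        self.field_name = field_name
        self.rows = rows  # stand-in for a queryset
        for value, display in choices:
            setattr(self, display, self._make_filter(value))

    def _make_filter(self, value):
        # Bind `value` per method so each filter keeps its own choice.
        def _filter():
            return [r for r in self.rows if r[self.field_name] == value]
        return _filter
```

Usage mirrors the example above: `ChoiceFilterSketch('moo', ((1, 'foo'), (2, 'bar')), rows).foo()` returns the rows where `moo == 1`.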
This decorator allows you to wrap class methods or module functions and synchronize access to them. It maintains a dict of locks, keyed by the unique combination of module name and method name. (The implementing class name is not available to the decorator; otherwise the key would include the class name too.)
Effectively it functions as a class level method synchronizer, with the lock handling completely hidden from the wrapped function.
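A minimal sketch of such a decorator, assuming one `threading.Lock` per qualified function name (module plus function name), shared across all calls; names here are mine, not the snippet's:

```python
import functools
import threading

_locks = {}                      # one lock per "module.function" key
_locks_guard = threading.Lock()  # protects the dict itself

def synchronized(func):
    """Serialize all calls to the wrapped function behind one lock."""
    key = '%s.%s' % (func.__module__, func.__name__)
    with _locks_guard:
        lock = _locks.setdefault(key, threading.Lock())

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with lock:
            return func(*args, **kwargs)
    return wrapper
```

As noted above, the class name is invisible to a plain decorator, so two same-named methods in different classes of one module would share a lock under this scheme.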
**Summary**
An M2M relation without creating an extra table.
Normally you specify an M2M relation in only *one* model, so the widget for the many-to-many relation is displayed only on whichever model contains the actual ManyToManyField. If you want the widget displayed on both forms, you need some tricks. Intermediary models can help, for example, but then you don't get a multi-select widget (and in the inline case, extra = multi). Alternatively, you can write your own form that takes care of adding the widget (just one line), setting default values, and saving it (more than a few lines of code).
If you simply declare a ManyToManyField in each model with the same db_table, the only problem is syncdb (it will try to create two identical tables). The only thing our class does is prevent creation of the table for the M2M, so in one model you use ManyToManyField and in the other ManyToManyField_NoSyncdb with the same db_table argument.
**Example**
So to have M2M widgets in both forms you can write:
    class User(models.Model):
        #...
        groups = ManyToManyField('Group', related_name='groups',
                                 db_table=u'USERS_TO_GROUPS')

    class Group(models.Model):
        #...
        users = ManyToManyField_NoSyncdb(User, related_name='users',
                                         db_table=u'USERS_TO_GROUPS')