Put this in a base.py file.
And configure these settings:
    DATABASE_ENGINE = 'path.to.module.with.base.file'
    DATABASE_STATEMENT_TIMEOUT = 60000  # milliseconds
(Idea from: http://www.mailinglistarchive.com/html/[email protected]/2009-02/msg00024.html)
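The snippet itself is that base.py; purely as a hedged illustration (not the snippet's actual code, and assuming the old `DATABASE_ENGINE`-style backend API of Django 1.0/1.1), a backend module along these lines wraps the stock psycopg2 backend and applies the timeout on every new cursor:

    # Rough sketch only -- the real base.py referenced above may differ.
    from django.conf import settings
    from django.db.backends.postgresql_psycopg2.base import *
    from django.db.backends.postgresql_psycopg2.base import (
        DatabaseWrapper as Psycopg2DatabaseWrapper,
    )

    class DatabaseWrapper(Psycopg2DatabaseWrapper):
        def _cursor(self, *args, **kwargs):
            cursor = super(DatabaseWrapper, self)._cursor(*args, **kwargs)
            timeout = getattr(settings, 'DATABASE_STATEMENT_TIMEOUT', 0)
            if timeout:
                # abort any statement that runs longer than the configured limit
                cursor.execute('SET statement_timeout = %d' % timeout)
            return cursor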
Have you ever been annoyed that, after setting up an elaborate database schema, you couldn't get **ON DELETE CASCADE ON UPDATE CASCADE** onto the constraints you see in dbshell?
This solves the problem: create the two files and an empty *__init__.py*, and put them somewhere on your path.
Then say DATABASE_ENGINE='postgresql_psycopg2_cascade' in settings.
Really, I'd like this to live in the ForeignKey object itself, whether in upstream Django or in a custom subclass, but that doesn't seem possible.
Ideas on how to make this configurable are more than welcome!
Props go out to Ari Flinkman for the inspiration to do this!
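The two files aren't reproduced here, but the general shape of the trick, sketched against the Django 1.0/1.1 backend API (the wrapping of the creation class and the string surgery below are assumptions, not the snippet's actual code), is to rewrite the pending REFERENCES constraints so they carry the cascade clauses:

    # postgresql_psycopg2_cascade/base.py -- hedged sketch only
    from django.db.backends.postgresql_psycopg2.base import *
    from django.db.backends.postgresql_psycopg2.base import (
        DatabaseWrapper as Psycopg2DatabaseWrapper,
    )

    class DatabaseWrapper(Psycopg2DatabaseWrapper):
        def __init__(self, *args, **kwargs):
            super(DatabaseWrapper, self).__init__(*args, **kwargs)
            original = self.creation.sql_for_pending_references

            def with_cascade(model, style, pending_references):
                # slip the cascade clauses in before the DEFERRABLE part that
                # the PostgreSQL backend appends to each REFERENCES constraint
                return [s.replace(' DEFERRABLE',
                                  ' ON DELETE CASCADE ON UPDATE CASCADE DEFERRABLE')
                        for s in original(model, style, pending_references)]

            self.creation.sql_for_pending_references = with_cascade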
Django does not have a model field that can handle the MySQL TIME type: the built-in DateTimeField cannot represent durations of more than 24 hours. That's why this code was born.
**Simple usage**
    from myapp.models import TimeAsTimeDeltaField, SECONDS_PER_MIN, SECONDS_PER_HOUR, SECONDS_PER_DAY

    class EstimatedTime:
        days = None
        hours = None
        minutes = None
        seconds = None

    class Case(models.Model):
        estimated_time = TimeAsTimeDeltaField(null=True, blank=True)

        def get_estimated_time(self):
            estimated_time = EstimatedTime()
            if self.estimated_time:
                # self.estimated_time is a timedelta; flatten it to whole seconds first
                total_seconds = self.estimated_time.seconds + (self.estimated_time.days * SECONDS_PER_DAY)
                days = total_seconds // SECONDS_PER_DAY
                hours = total_seconds // SECONDS_PER_HOUR - days * 24
                minutes = total_seconds // SECONDS_PER_MIN - hours * 60 - days * 24 * 60
                seconds = total_seconds % SECONDS_PER_MIN
                estimated_time.days = days
                estimated_time.hours = hours
                estimated_time.minutes = minutes
                estimated_time.seconds = seconds
            return estimated_time
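The field itself is the body of this snippet and isn't shown above; purely as an illustration of the shape it might take (a hedged sketch, not the actual implementation), a field of this kind maps a MySQL TIME column, which can exceed 24 hours, onto datetime.timedelta:

    # Rough sketch only; the real TimeAsTimeDeltaField may differ.
    import datetime
    from django.db import models

    SECONDS_PER_MIN = 60
    SECONDS_PER_HOUR = 60 * 60
    SECONDS_PER_DAY = 24 * 60 * 60

    class TimeAsTimeDeltaField(models.Field):
        __metaclass__ = models.SubfieldBase  # old-style Django: run to_python() on DB values

        def db_type(self, connection=None):
            return 'time'  # MySQL TIME can hold durations well beyond 24 hours

        def to_python(self, value):
            if value is None or isinstance(value, datetime.timedelta):
                return value
            # MySQLdb may hand TIME columns back as 'HHH:MM:SS' strings;
            # on the way in, MySQLdb already knows how to serialize timedeltas.
            hours, minutes, seconds = [int(float(part)) for part in str(value).split(':')]
            return datetime.timedelta(hours=hours, minutes=minutes, seconds=seconds)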
Hi,
I made a small custom psycopg2 backend that implements a persistent connection using a global variable. With it I was able to improve throughput from 350 to 1600 requests per second (on a very simple page with a few selects). Just save it in a file called base.py in any directory (e.g. postgresql_psycopg2_persistent) and set `DATABASE_ENGINE` to `projectname.postgresql_psycopg2_persistent` in settings.
This code is thread-safe; however, because Python doesn't use multiple processors with threads, you won't get a big performance boost from threading alone.
I really recommend running it in daemon mode.
In Apache mod_wsgi, just set processes=8 threads=1.
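The base.py in question isn't pasted here, but the core idea, sketched under the assumption of the old `DATABASE_ENGINE` backend API (an illustration, not the snippet's exact code), is to stash the psycopg2 connection in a module-level global and skip closing it between requests:

    # postgresql_psycopg2_persistent/base.py -- hedged sketch only
    from django.db.backends.postgresql_psycopg2.base import *
    from django.db.backends.postgresql_psycopg2.base import (
        DatabaseWrapper as Psycopg2DatabaseWrapper,
    )

    _connection = None  # one connection per process, shared across requests

    class DatabaseWrapper(Psycopg2DatabaseWrapper):
        def _cursor(self, *args, **kwargs):
            global _connection
            if self.connection is None and _connection is not None:
                self.connection = _connection  # reuse the cached connection
            cursor = super(DatabaseWrapper, self)._cursor(*args, **kwargs)
            _connection = self.connection      # cache whatever connection we ended up with
            return cursor

        def close(self):
            # Django calls close() at the end of each request; keeping the
            # connection open is the whole point, so do nothing here.
            pass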
[Based on snippet #513 by obeattie.](http://www.djangosnippets.org/snippets/513/)
**Update 10/10/09:** [Further development is now occurring on GitHub, thanks to Shrubbery Software.](http://github.com/shrubberysoft/django-picklefield)
Incredibly useful for storing just about anything in the database (provided it is Pickle-able, of course) when there isn't a 'proper' field for the job.
`PickledObjectField` is database-agnostic, and should work with any database backend you can throw at it. You can pass in any Python object and it will automagically be converted behind the scenes. You never have to manually pickle or unpickle anything. Also works fine when querying; supports `exact`, `in`, and `isnull` lookups. It should be noted, however, that calling `QuerySet.values()` will only return the encoded data, not the original Python object.
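For instance, with a hypothetical model (the model and field names below are made up for illustration):

    from django.db import models
    from fields import PickledObjectField  # the fields.py from this snippet

    class UserProfile(models.Model):
        # any picklable object can be assigned directly; compress=True is optional
        preferences = PickledObjectField(compress=True)

    # profile.preferences = {'theme': 'dark', 'tags': ['a', 'b']}
    # profile.save()
    # UserProfile.objects.filter(preferences={'theme': 'dark', 'tags': ['a', 'b']})  # exact lookup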
*Please note that this is supposed to be two files, one fields.py and one tests.py (if you don't care about the unit tests, just use fields.py).*
This PickledObjectField has a few improvements over the one in [snippet #513](http://www.djangosnippets.org/snippets/513/).
1. This one solves the `DjangoUnicodeDecodeError` problem when saving an object containing non-ASCII data by base64 encoding the pickled output stream. This ensures that all stored data is ASCII, eliminating the problem.
2. `PickledObjectField` will now optionally use `zlib` to compress (and uncompress) pickled objects on the fly. This can be set per-field using the keyword argument "compress=True". For most items this is probably **not** worth the small performance penalty, but for Models with larger objects, it can be a real space saver.
3. You can also now specify the pickle protocol per-field, using the protocol keyword argument. The default of `2` should always work, unless you are trying to access the data from outside of the Django ORM.
4. Worked around a rare issue when using the `cPickle` and performing lookups of complex data types. In short, `cPickle` would sometimes output different streams for the same object depending on how it was referenced. This of course could cause lookups for complex objects to fail, even when a matching object exists. See the docstrings and tests for more information.
5. You can now use the `isnull` lookup and have it function as expected. A consequence of this is that by default, `PickledObjectField` has `null=True` set (you can of course pass `null=False` if you want to change that). If `null=False` is set (the default for fields), then you wouldn't be able to store a Python `None` value, since `None` values aren't pickled or encoded (this in turn is what makes the `isnull` lookup possible).
6. You can now pass in an object as the default argument for the field without it being converted to a unicode string first. If you pass in a callable though, the field will still call it. It will *not* try to pickle and encode it.
7. You can manually import `dbsafe_encode` and `dbsafe_decode` from fields.py if you want to encode and decode objects yourself. This is mostly useful for decoding values returned from calling `QuerySet.values()`, which are still encoded strings.
The tests have been updated to match the added features, but if you find any bugs, please post them in the comments. My goal is to make this an error-proof implementation.
**Note:** If you are trying to store other django models in the `PickledObjectField`, please see the comments for a discussion on the problems associated with doing that. The easy solution is to put django models into a list or tuple before assigning them to the `PickledObjectField`.
**Update 9/2/09:** Fixed the `value_to_string` method so that serialization should now work as expected. Also added `deepcopy` back into `dbsafe_encode`, fixing #4 above, since `deepcopy` had somehow managed to remove itself. This means that lookups should once again work as expected in **all** situations. Also made the field `editable=False` by default (which I swear I already did once before!) since it is never a good idea to have a `PickledObjectField` be user editable.
A Django model manager capable of using different database connections.
Inspired by:
* [Eric Florenzano](http://www.eflorenzano.com/blog/post/easy-multi-database-support-django/)
* [Kenneth Falck](http://kfalck.net/2009/07/01/multiple-databases-and-sharding-with-django)
There's a more detailed version in Portuguese in my blog:
[Manager para diferentes conexões de banco no Django](http://ricobl.wordpress.com/2009/08/06/manager-para-diferentes-conexoes-de-banco-no-django/)
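This snippet predates Django's built-in multi-database support; on Django 1.2+ the same idea can be expressed (this is a sketch of the modern equivalent, not the snippet itself) as a manager that pins its queryset to a DATABASES alias:

    from django.db import models

    class OnDatabaseManager(models.Manager):
        """Route every query from this manager to a named DATABASES alias."""
        def __init__(self, db_alias):
            super(OnDatabaseManager, self).__init__()
            self.db_alias = db_alias

        def get_queryset(self):  # get_query_set() on Django < 1.6
            return super(OnDatabaseManager, self).get_queryset().using(self.db_alias)

    class LogEntry(models.Model):
        message = models.TextField()

        objects = models.Manager()
        archived = OnDatabaseManager('archive')  # assumes an 'archive' entry in DATABASES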
Sometimes you just need to count things (or create unique-for-your-application IDs). This model class allows you to run as many persistent counters as you like. Basic usage looks like this:
    >>> Counter.next()
    0
    >>> Counter.next()
    1L
    >>> Counter.next()
    2L
That uses the "default" counter. If you want to create and use a different counter, pass its name as a string as the parameter to the method:
    >>> Counter.next('hello')
    0
    >>> Counter.next('hey')
    0
    >>> Counter.next('hello')
    1L
    >>> Counter.next('hey')
    1L
    >>> Counter.next('hey')
    2L
You can also get the value as hex (if you want slightly shorter IDs, for use in URLs for example):
    >>> Counter.next_hex('some-counter-that-is-quite-high')
    40e
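The model behind those calls isn't shown above; a rough sketch of something that would behave this way (the real snippet may guard against concurrent access differently) is:

    from django.db import models

    class Counter(models.Model):
        name = models.CharField(max_length=100, primary_key=True)
        value = models.PositiveIntegerField(default=0)

        @classmethod
        def next(cls, name='default'):
            # NOTE: not race-proof as written; wrap it in a transaction with
            # select_for_update (or use an F() expression) if several processes
            # increment the same counter concurrently.
            counter, created = cls.objects.get_or_create(name=name)
            current = counter.value
            counter.value = current + 1
            counter.save()
            return current

        @classmethod
        def next_hex(cls, name='default'):
            return '%x' % cls.next(name)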
This example shows how to use database views with Django models. The NewestArticle model contains the 100 newest Articles. Remember that the NewestArticle model is read-only. Tested with MySQL.
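A minimal sketch of the pattern (the view, table, and field names below are illustrative, not the snippet's own):

    # The view is created by hand in the database, e.g. in MySQL:
    #   CREATE VIEW newest_article AS
    #       SELECT id, title, created FROM myapp_article
    #       ORDER BY created DESC LIMIT 100;

    from django.db import models

    class NewestArticle(models.Model):
        title = models.CharField(max_length=200)
        created = models.DateTimeField()

        class Meta:
            managed = False               # never let syncdb/migrations touch the view
            db_table = 'newest_article'   # point the model at the view, not a table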
This is an extension of the DecimalField database field that uses my [Currency Object](http://www.djangosnippets.org/snippets/1525/), [Currency Widget](http://www.djangosnippets.org/snippets/1526/), and [Currency Form Field](http://www.djangosnippets.org/snippets/1527/).
I placed my Currency object in the Django\\utils directory, the widget in Django\\forms\\widgets_special.py, and the form field in Django\\forms\\fields_special.py, because I integrated this set of currency objects into the Admin app ( [here](http://www.djangosnippets.org/snippets/1529/) ) and it was just easier to have everything within Django.
UPDATE 08-18-2009: Added 'import decimal' and modified to_python slightly.
The rest of the series: [Currency Object](http://www.djangosnippets.org/snippets/1525/), [Currency Widget](http://www.djangosnippets.org/snippets/1526/), [Currency Form Field](http://www.djangosnippets.org/snippets/1527/), [Admin Integration](http://www.djangosnippets.org/snippets/1529/)
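The actual field is the body of this snippet; just to show the shape of the idea (every import path and class name below is a placeholder standing in for the linked snippets, not their real locations):

    from django.db import models
    from django.utils.currency import Currency    # hypothetical path for the Currency Object
    from django.forms import fields_special       # hypothetical module for the Currency Form Field

    class CurrencyField(models.DecimalField):
        def to_python(self, value):
            # hand values back as Currency objects instead of plain Decimals
            if value in (None, ''):
                return value
            return Currency(value)

        def formfield(self, **kwargs):
            # use the currency form field (and its widget) in ModelForms and the admin
            kwargs.setdefault('form_class', fields_special.CurrencyField)
            return super(CurrencyField, self).formfield(**kwargs)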
I needed the ability to serialize and deserialize my database, which contains millions of objects. The existing XML serializer encountered spurious parse errors; the JSON serializer failed to handle UTF-8 even when it was asked to; and both the JSON and YAML serializers tried to keep all the representations in memory simultaneously.
This custom serializer is the only one that has done the job. It uses YAML's "stream of documents" model so that it can successfully serialize and deserialize large databases.
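The snippet registers as a proper Django serializer; as a rough illustration of the stream-of-documents idea only (this is not the snippet's code), PyYAML's dump_all/load_all accept generators, so only one object needs to be in memory at a time:

    import yaml
    from django.core import serializers
    from django.core.serializers.pyyaml import DjangoSafeDumper  # teaches SafeDumper about Decimals

    def dump_stream(queryset, stream):
        # one YAML document per object, written lazily
        docs = (serializers.serialize('python', [obj])[0]
                for obj in queryset.iterator())
        yaml.dump_all(docs, stream, Dumper=DjangoSafeDumper,
                      explicit_start=True, allow_unicode=True)

    def load_stream(stream):
        # read the documents back one at a time and save them
        for doc in yaml.safe_load_all(stream):
            for deserialized in serializers.deserialize('python', [doc]):
                deserialized.save()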
The DatabaseStorage class can be used with either FileField or ImageField. It maps filenames to database blobs, so you have to use it with a **special additional table created manually**. The table should contain:

* a pk column for filenames (I think it's better to use the same type that FileField uses: nvarchar(100))
* a blob column (an image type, for example)
* a size column (bigint).
You can't just create a blob column in the same table where you defined the FileField, since there is no way to find the required row in the save() method.
A size column is also required for better performance (see the size() method).
You can use it with different FileFields, even with different "upload_to" values. It thus implements a kind of root filesystem, where you define directories with "upload_to" on a FileField and store any files in those directories. Beware: saving a file with the same "virtual path" overwrites the old file.
It uses either settings.DB_FILES_URL or the constructor param 'base_url' (@see __init__()) to build URLs to files. The base URL should be mapped to a view that provides access to the files (see the example in the class docstring). To store files in the same table where the FileField is defined, you would have to define your own field and provide an extra argument (e.g. pk) to save().
Raw SQL is used for all operations. In the constructor, or in DB_FILES in settings.py, you specify a dictionary with db_table, fname_column, blob_column, size_column and base_url. For example, I just put the following line in settings.py:

    DB_FILES = {'db_table': 'FILES', 'fname_column': 'FILE_NAME', 'blob_column': 'BLOB', 'size_column': 'SIZE', 'base_url': 'http://localhost/dbfiles/'}
And use it with an ImageField like this:

    player_photo = models.ImageField(upload_to="player_photos", storage=DatabaseStorage())
The DatabaseStorage class uses your settings.py file to establish a custom connection to your database.
The reason for using a custom connection: http://code.djangoproject.com/ticket/5135
The connection string looks like "cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;DATABASE=testdb;UID=me;PWD=pass')"
It's based on the pyodbc module, so it can be used with any database supported by pyodbc.
I've tested it with MS SQL Server Express 2005.
Note: it returns a special path, which should be mapped to a view that returns the requested file:
**View and usage example:**
    def image_view(request, filename):
        import os
        from django.http import HttpResponse
        from django.conf import settings
        from django.utils._os import safe_join
        from filestorage import DatabaseStorage
        from django.core.exceptions import ObjectDoesNotExist

        storage = DatabaseStorage()
        try:
            image_file = storage.open(filename, 'rb')
            file_content = image_file.read()
        except Exception:
            # fall back to a placeholder image from MEDIA_ROOT
            filename = 'no_image.gif'
            path = safe_join(os.path.abspath(settings.MEDIA_ROOT), filename)
            if not os.path.exists(path):
                raise ObjectDoesNotExist
            no_image = open(path, 'rb')
            file_content = no_image.read()

        response = HttpResponse(file_content, mimetype="image/jpeg")
        response['Content-Disposition'] = 'inline; filename=%s' % filename
        return response
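And a hypothetical URLconf entry matching the base_url used above (old-style patterns() syntax to match the era of this snippet; the view path is made up):

    from django.conf.urls.defaults import patterns

    urlpatterns = patterns('',
        (r'^dbfiles/(?P<filename>.+)$', 'myapp.views.image_view'),
    )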
**Warning:** *If the filename already exists, its blob will be overwritten. To change this, remove get_available_name(self, name) so that Storage.get_available_name(self, name) is used to generate a new filename.*
For more information see docstrings in the code.
Please, drop me a line if you've found a mistake or have a suggestion :)
This script generates a [GraphViz](http://www.graphviz.org/) graph of your database structure from your Django models.
See the usage in the file, underneath the license.
A Django admin command that takes a fixture and makes the target database match that fixture: it deletes objects that are in the database but not in the fixture, updates objects that differ, and inserts missing ones.
Place this code in your_app/management/commands/syncdata.py
You will need to use manage.py (not django-admin.py) for Django to recognise custom commands (see http://www.djangoproject.com/documentation/django-admin/#customized-actions).
This snippet is the 'loaddata' command with this patch applied: http://code.djangoproject.com/ticket/7159 (with minor tweaks).
The intention is that 'dumpdata' on system A followed by 'syncdata' on system B is equivalent to a database copy from A to B. The database structure in A and B must match.
Script to help manage database migrations. Explanation and background can be found in blog post at [paltman.com](http://paltman.com/2008/07/03/managing-database-changes-in-django/).
Warning: this Python script is designed for Django 0.96.
It exports data from models much like the `dumpdata` command, and writes the data to standard output.
It fixes glitches with unicode/ASCII characters: Django 0.96 seemed to handle unicode characters very badly unless you specify an argument that is not available via the command line. The simple usage is:
    $ python export_models.py -a <application1> [application2, application3...]
As a plus, it allows you to export only one or several models inside your
application, and not all of them:
    $ python export_models.py application1.MyModelStuff application1.MyOtherModel
Of course, you can specify the output format (serializer) with the -f
(--format) option.
    $ python export_models.py --format=xml application1.MyModel