This lets you load global fixtures from a directory you set as `FIXTURES_ROOT` in your settings.py, for example `FIXTURES_ROOT = '/path/to/myproject/fixtures/'`.
I like to keep fixtures that should be loaded when a new instance of a site is deployed, but not auto-loaded (so they won't overwrite data created later), in a project-level fixtures directory like this.
Sometimes these fixtures can be useful in your test suites, so this is a convenient way to load them.
Usage:
`load_global_fixtures('sites.json', 'contacts.json')`
`load_global_fixtures('sites.json', 'contacts.json', fixtures_root='/some/other/path', verbosity=1)`
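If you just want the idea, a minimal sketch of such a helper might look like the following (a sketch only, assuming `loaddata` accepts full paths to fixture files; the snippet's actual implementation may differ):

    import os
    from django.conf import settings
    from django.core.management import call_command

    def load_global_fixtures(*fixtures, **kwargs):
        """Load the named fixtures from FIXTURES_ROOT (minimal sketch)."""
        fixtures_root = kwargs.pop('fixtures_root', settings.FIXTURES_ROOT)
        verbosity = kwargs.pop('verbosity', 0)  # default of 0 is an assumption
        paths = [os.path.join(fixtures_root, name) for name in fixtures]
        # loaddata accepts full paths, so FIXTURES_ROOT needn't be in FIXTURE_DIRS
        call_command('loaddata', verbosity=verbosity, *paths)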
**NOTE: This is modified from 1.0's test runner and has only been tested on Django 1.0 + Python 2.5**
This is a test runner that searches inside any app-level "tests" package for Django unit tests. Modules to be searched must have names that start with `test_` (this can be changed by modifying `get_multi_tests()`; the relevant check is `mod.startswith('test_') and mod.endswith('.py')`).
It also allows for arbitrarily nested test packages, with no restrictions on naming, so you could have:
    myapp/
    +- tests/
       +- __init__.py
       +- test_set1.py
       +- category1/
       |  +- __init__.py
       |  +- test_something.py
       |  +- subcat/
       |     +- __init__.py
       |     +- test_foobar.py
       +- category2/
          +- __init__.py
          +- test_other.py
and "manage.py test myapp" would pick up tests in all four test_*.py test modules.
Searching the modules in this way, instead of importing them all into the top-level `__init__.py`, allows you to have "name collisions" with TestCase names -- two modules can each have a TestFooBar class, and they will both be run. Unfortunately, this snippet doesn't allow you to specify a full path to a specific test case or test module ("manage.py test myapp.category1.test_something" and "manage.py test myapp.test_set1.TestFooBar" won't work); using "manage.py test myapp.TestFooBar" will search out all test cases named "TestFooBar" and run them. "manage.py test myapp.TestFooBar.test_baz" will work similarly, returning the test_baz method of each TestFooBar class.
To use, put this code in "`testrunner.py`" in your project, and add `TEST_RUNNER = 'myproject.testrunner.run_tests'` to your `settings.py`.
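The discovery itself boils down to a recursive walk of the `tests` package. As a rough, hypothetical sketch of what `get_multi_tests()` does (the real snippet differs in detail):

    import os

    def find_test_modules(pkg_name, pkg_dir):
        """Recursively collect dotted names of test_*.py modules
        inside a tests package (hypothetical sketch)."""
        modules = []
        for name in sorted(os.listdir(pkg_dir)):
            path = os.path.join(pkg_dir, name)
            if os.path.isdir(path) and \
                    os.path.exists(os.path.join(path, '__init__.py')):
                # Recurse into arbitrarily nested sub-packages.
                modules.extend(find_test_modules(pkg_name + '.' + name, path))
            elif name.startswith('test_') and name.endswith('.py'):
                modules.append(pkg_name + '.' + name[:-3])
        return modules

    # e.g. find_test_modules('myapp.tests', '/path/to/myapp/tests')
    # -> ['myapp.tests.category1.subcat.test_foobar', ...,
    #     'myapp.tests.test_set1']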
"Make fixture" command. Highly useful for making test fixtures.
Use it to pick only few items from your data to serialize, restricted by primary keys.
By default command also serializes foreign keys and m2m relations.
You can turn off related items serialization with `--skip-related` option.
How to use:

    python manage.py makefixture

will display the models that are installed.

    python manage.py makefixture User[:3]

or

    python manage.py makefixture auth.User[:3]

or

    python manage.py makefixture django.contrib.auth.User[:3]

will serialize the users with ids 1 and 2, along with their assigned groups, permissions and content types.

    python manage.py makefixture YourModel[3] YourModel[6:10]

will serialize YourModel with key 3 and keys 6 to 9 inclusive.

You can, of course, serialize whole tables, serialize several tables at once, and use dumpdata's options:

    python manage.py makefixture --format=xml --indent=4 YourModel[3] AnotherModel auth.User[:5] auth.Group
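The interesting bit is parsing the `Model[start:end]` arguments. A hypothetical sketch of how that might be done (the command's actual parsing may differ):

    import re

    MODEL_ARG = re.compile(
        r'^(?P<model>[\w.]+)'
        r'(\[(?P<start>\d*)(?P<colon>:?)(?P<end>\d*)\])?$')

    def parse_model_arg(arg):
        """Parse "Model", "Model[3]" or "Model[start:end]" (sketch)."""
        match = MODEL_ARG.match(arg)
        if match is None:
            raise ValueError('bad model argument: %r' % arg)
        model = match.group('model')
        if match.group('colon'):
            # "Model[a:b]" -> half-open pk range [a, b), like a Python slice
            start = int(match.group('start') or 0)
            end = int(match.group('end')) if match.group('end') else None
            return model, start, end
        if match.group('start'):
            # "Model[3]" -> just the object with pk 3
            pk = int(match.group('start'))
            return model, pk, pk + 1
        return model, None, None  # bare "Model" -> the whole table

    # e.g. parse_model_arg('auth.User[:3]') -> ('auth.User', 0, 3)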
Django's test client is very limited when it comes to testing complex interactions, e.g. forms with hidden or persisted values. Twill excels in this area, and thankfully it is very easy to integrate.
* Use `twill_setup` in your `TestCaseSubClass.setUp()` method
* Use `twill_teardown` in your `TestCaseSubClass.tearDown()` method
* In a test, use something like `make_twill_url()` to generate URLs that will work for twill.
* Use `twill.commands.go()` etc. to control twill, or use `twill.execute_string()` or `twill.execute_script()`.
* Add `twill.set_output(StringIO())` to suppress twill output
* If you want to write half the test and then use twill interactively to write the rest as a twill script, use the example in `unfinished_test()`
Twill raises exceptions when commands fail, which means you will get 'E' (error) rather than 'F' (failure) in the test output. If necessary, it wouldn't be hard to wrap the twill commands so failures are flagged with `TestCase.assert_`-style methods instead.
There are, of course, more advanced ways of using these functions (e.g. a mixin that does the setup/teardown for you), but the basic functions needed are here.
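Putting the pieces together, a test using the snippet's helpers might look something like this (the URL, form index and field name are hypothetical):

    from StringIO import StringIO

    import twill
    from django.test import TestCase

    class ContactFormTest(TestCase):
        def setUp(self):
            twill_setup()                  # the snippet's setup helper
            twill.set_output(StringIO())   # suppress twill's console output

        def tearDown(self):
            twill_teardown()

        def test_contact_form(self):
            # make_twill_url() rewrites the URL to hit the intercepted app
            twill.commands.go(make_twill_url("http://www.example.com/contact/"))
            twill.commands.fv("1", "email", "someone@example.com")  # form 1
            twill.commands.submit()
            twill.commands.code(200)       # raises if the status code differs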
See also:
* [Twill](http://twill.idyll.org/)
* [Twill Python API](http://twill.idyll.org/python-api.html)