Snippet List
For use with `S3BotoStorage`:

    STATICFILES_STORAGE = "storages.backends.s3boto.S3BotoStorage"
    AWS_PRELOAD_METADATA = True
Custom management command that compares the local file's MD5 sum with the ETag from S3 and skips the file copy if the two match.
This makes running collectstatic MUCH faster if you are using git as a source control system, since checkouts update file timestamps and would otherwise force every file to be re-copied.
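For illustration, a minimal sketch of the approach, assuming the django-storages/Django internals of this era (with `AWS_PRELOAD_METADATA = True`, the storage's `entries` dict maps stored paths to boto keys, whose `etag` is the remote MD5); treat this as a sketch, not the snippet's exact code:

    import hashlib

    from django.contrib.staticfiles.management.commands import collectstatic

    class Command(collectstatic.Command):
        def copy_file(self, path, prefixed_path, source_storage):
            # With AWS_PRELOAD_METADATA = True, self.storage.entries maps
            # each stored path to its boto key; the key's etag is the MD5.
            key = self.storage.entries.get(prefixed_path)
            if key is not None:
                with source_storage.open(path) as source_file:
                    local_md5 = hashlib.md5(source_file.read()).hexdigest()
                if key.etag.strip('"') == local_md5:
                    self.log(u"Skipping '%s' (unchanged on S3)" % path)
                    return
            return super(Command, self).copy_file(
                path, prefixed_path, source_storage)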
- s3
- amazon
- aws
- boto
- collectstatic
- storages
This is a 'fixed' version of snippet [1868](http://djangosnippets.org/snippets/1868/)
Changes:

* Correctly handle the Content-Type: Amazon requires the field to be named with a dash, and we can't use dashes in the form attribute declarations.
* Added max_size handling, with the corresponding update to the policy generation (sketched below).
* Added an example usage with some javascript for basic validation.
[See the amazon reference](http://aws.amazon.com/articles/1434?_encoding=UTF8&jiveRedirect=1)
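For context, the policy signing that backs a form like this boils down to roughly the following (S3 browser-based POST uploads; the condition keys follow the Amazon reference above, but the helper and its arguments are illustrative):

    import base64
    import hmac
    import json
    from hashlib import sha1

    def sign_upload_policy(secret_key, bucket, key_prefix, max_size, expiration):
        # expiration is an ISO-8601 string, e.g. "2012-01-01T12:00:00.000Z"
        policy = json.dumps({
            "expiration": expiration,
            "conditions": [
                {"bucket": bucket},
                ["starts-with", "$key", key_prefix],
                {"acl": "public-read"},
                # "Content-Type" contains a dash, so it can't appear as a
                # keyword argument; the list condition syntax sidesteps that.
                ["starts-with", "$Content-Type", ""],
                # Enforces max_size server-side, matching the form's limit.
                ["content-length-range", 0, max_size],
            ],
        })
        encoded_policy = base64.b64encode(policy)
        signature = base64.b64encode(
            hmac.new(secret_key, encoded_policy, sha1).digest())
        # Both values go into hidden "policy" and "signature" form fields.
        return encoded_policy, signature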
- s3
- amazon
- html form
- upload form
You can use this code to sign URLs for streaming distributions, or change it a bit to sign a normal (download) distribution's URLs; a sketch of the signing follows the settings list.
Available settings:

* CLOUDFRONT_KEY - path to the private key file
* CLOUDFRONT_KEY_PAIR_ID - key pair ID
* CLOUDFRONT_EXPIRES_IN - expiration time in seconds
* CLOUDFRONT_DOMAIN - domain name
**General notes:**
- Set MEDIA_URL (or whatever you use for uploaded content) to point to S3 (e.g. MEDIA_URL = "http://s3.amazonaws.com/MyBucket/").
- Put django-storage in project_root/libraries, or change the paths to suit your layout.
- This uses the functionality of django-storage, but *not* as DEFAULT_FILE_STORAGE.
The functionality works like so:
**Getting stuff to S3**
- On file upload for one of the models wired up this way, a copy of the uploaded file is saved to S3.
- On any thumbnail generation, a copy is also saved to S3.
**On a page load** (see the sketch after this list):
1. We check to see if the thumbnail exists locally. If so, we assume it's been sent to S3 and move on.
2. If it's missing, we check to see if S3 has a copy. If so, we download it and move on.
3. If S3 doesn't have it either, we check to see if the source image exists locally. If so, we make a new thumb (which uploads itself to S3), and move on.
4. If the source is also missing locally, we see if it's on S3; if so, we fetch it, thumb it, push the thumb back up, and move on.
5. If all of that fails, somebody deleted the image, or things have gone fubar'd.
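In code, the chain looks roughly like this (a sketch using boto; `make_thumbnail` and the path arguments are placeholders, not the snippet's real API):

    import os

    def ensure_thumbnail(bucket, source_path, thumb_path, make_thumbnail):
        # 1. Thumb exists locally: assume it has already been sent to S3.
        if os.path.exists(thumb_path):
            return thumb_path
        # 2. Missing locally: see if S3 has a copy and pull it down.
        key = bucket.get_key(thumb_path)
        if key is not None:
            key.get_contents_to_filename(thumb_path)
            return thumb_path
        # 3./4. No thumb anywhere: find the source image, locally or on S3.
        if not os.path.exists(source_path):
            source_key = bucket.get_key(source_path)
            if source_key is None:
                # 5. Nothing local or remote: the image is gone.
                raise IOError("source image missing locally and on S3")
            source_key.get_contents_to_filename(source_path)
        # Regenerate the thumb and push it back up to S3.
        make_thumbnail(source_path, thumb_path)
        bucket.new_key(thumb_path).set_contents_from_filename(thumb_path)
        return thumb_path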
**Advantages:**
- Thumbs are checked locally, so everything after the initial creation is very fast.
- You can clear out local files to save disk space on the server (one assumes you needed S3 for a reason), and trust that only the thumbs should ever be downloaded.
- If you want to be really clever, you can delete the original source files, and zero-byte the thumbs. This means very little space cost, and everything still works.
- If you're not actually low on disk space, Sorl Thumbnail keeps working just like it did, except your content is served by S3.
**Problems:**
- My python-fu is not as strong as those who wrote Sorl Thumbnail. I did tweak their code. Something may be wonky. YMMV.
- The relative_source property is a hack, and if the first 7 characters of the filename are repeated somewhere, step 4 above will fail.
- Upload is slow, and the first thumbnailing is slow, because we wait for the transfers to S3 to complete. This isn't django-storage, so things do genuinely take longer.
- image
- thumbnail
- s3
- amazon
- sorl
This is a bastardisation of a few of the Amazon S3 file uploader scripts that are around on the web. It uses Boto, but it's pretty easy to swap in the Amazon-supplied S3 library available for download at [their site](http://developer.amazonwebservices.com/connect/entry.jspa?externalID=134).
It's mostly based on [this](http://www.holovaty.com/blog/archive/2006/04/07/0927) and [this](http://www.davidcramer.net/code/112/writing-a-build-bot.html).
It's fairly limited in what it does (I didn't bother os.walking the directory structure), but I use it to quickly upload updated CSS or javascript. I'm sure it's a mess code-wise, but it does the job.
It will first run the files through the YUI Compressor, then gzip them before uploading to S3. Hopefully someone might find this useful. It will also retain the path structure of the files in your MEDIA_ROOT directory.
To use it, set up your Amazon details, download the [YUI Compressor](http://developer.yahoo.com/yui/compressor/), enter the folder you wish to upload to S3, and run the script: `python /path/to/s3_uploader.py`
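The gzip-and-upload step boils down to something like this (a sketch with boto; the YUI Compressor pass would run on each file first, and the credentials and bucket name are placeholders):

    import gzip
    import mimetypes
    import os

    from boto.s3.connection import S3Connection
    from boto.s3.key import Key

    def upload_gzipped(bucket, local_path, remote_path):
        # Gzip to a temporary file alongside the original.
        gz_path = local_path + '.gz'
        gz_file = gzip.open(gz_path, 'wb')
        gz_file.write(open(local_path, 'rb').read())
        gz_file.close()
        # The Content-Encoding header is what lets browsers transparently
        # decompress the file when it is served from S3.
        content_type = mimetypes.guess_type(local_path)[0] or 'application/octet-stream'
        headers = {'Content-Type': content_type, 'Content-Encoding': 'gzip'}
        key = Key(bucket, remote_path)
        key.set_contents_from_filename(gz_path, headers=headers, policy='public-read')
        os.remove(gz_path)

    conn = S3Connection('AWS_ACCESS_KEY', 'AWS_SECRET_ACCESS_KEY')
    bucket = conn.get_bucket('your-bucket')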
I couldn't find a Python implementation of this, so I threw this class together real quick.
This will let you share "private" files on S3 via a signed request. The link also carries an expiration time, so it is only valid for a limited period.
Example Usage:

    s3 = SecureS3('AWS_ACCESS_KEY', 'AWS_SECRET_ACCESS_KEY')
    s3.get_auth_link('your_bucket', 'your_file')

That would return your secure link, e.g.:

    http://your_bucket.s3.amazonaws.com/your_file?AWSAccessKeyId=AWS_ACCESS_KEY&Expires=1226198694&Signature=IC5ifWgiuOZ1IcWXRltHoETYP1A%3D
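The signing itself is S3's standard query-string authentication (signature version 2): HMAC-SHA1 over a fixed string-to-sign. A minimal sketch of how such a class could work (Python 2 style, matching the era; the method names mirror the usage above):

    import base64
    import hmac
    import time
    import urllib
    from hashlib import sha1

    class SecureS3(object):
        def __init__(self, access_key, secret_key):
            self.access_key = access_key
            self.secret_key = secret_key

        def get_auth_link(self, bucket, filename, expires_in=3600):
            expires = int(time.time()) + expires_in
            # Sign "GET\n\n\n{expires}\n/{bucket}/{key}" with HMAC-SHA1.
            string_to_sign = 'GET\n\n\n%d\n/%s/%s' % (expires, bucket, filename)
            digest = hmac.new(self.secret_key, string_to_sign, sha1).digest()
            signature = urllib.quote_plus(base64.b64encode(digest))
            return ('http://%s.s3.amazonaws.com/%s'
                    '?AWSAccessKeyId=%s&Expires=%d&Signature=%s'
                    % (bucket, filename, self.access_key, expires, signature))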