Snippet List
Beware if you use Amazon Simple Queue Service to execute Celery tasks that send email! SQS messages are sometimes delivered more than once, which results in multiple copies of the email being sent. This is a simple decorator which uses a cache backend to prevent the task from executing twice in a specified period. For example:
@task
@execute_once_in(3600*24*7)
def cron_first_week_follow_up():
    """
    Send a follow-up email to new users!
    """
    pass
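A minimal sketch of what such a decorator can look like, assuming Django's cache framework backed by memcached; the key format and names below are illustrative, not the snippet's actual code:

from functools import wraps
from django.core.cache import cache

def execute_once_in(seconds):
    """Skip the wrapped task if it already ran within `seconds`."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            key = 'execute_once_in:%s' % func.__name__
            # cache.add() only sets the key if it is absent (atomic on memcached),
            # so a duplicate SQS delivery inside the window is silently dropped.
            if cache.add(key, 'running', seconds):
                return func(*args, **kwargs)
        return wrapper
    return decorator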
For more info see
<http://atodorov.org/blog/2013/12/06/duplicate-amazon-sqs-messages-cause-multiple-emails/>
<http://atodorov.org/blog/2013/12/11/idempotent-django-email-sender-with-amazon-sqs-and-memcache/>
- django
- email
- decorator
- amazon
- queue
- celery
For use with S3BotoStorage:
STATICFILES_STORAGE = "storages.backends.s3boto.S3BotoStorage"
and
AWS_PRELOAD_METADATA = True
Custom management command that compares the local file's MD5 sum with the ETag from S3 and skips the file copy if the two match.
This makes running collectstatic MUCH faster if you are using git as a source control system, which updates timestamps.
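A rough sketch of the idea, assuming django-storages' S3BotoStorage with AWS_PRELOAD_METADATA = True (which fills storage.entries with boto keys); the command below is illustrative, not the snippet itself:

import hashlib

from django.contrib.staticfiles.management.commands import collectstatic

class Command(collectstatic.Command):
    """collectstatic variant that skips files whose MD5 matches the S3 ETag."""

    def copy_file(self, path, prefixed_path, source_storage):
        try:
            # entries is populated by S3BotoStorage when AWS_PRELOAD_METADATA = True
            remote_etag = self.storage.entries.get(prefixed_path).etag.strip('"')
            with source_storage.open(path) as source_file:
                local_md5 = hashlib.md5(source_file.read()).hexdigest()
            if local_md5 == remote_etag:
                self.log("Skipping '%s' (unchanged)" % path)
                return
        except (AttributeError, KeyError):
            pass  # metadata unavailable: fall through to a normal copy
        return super(Command, self).copy_file(path, prefixed_path, source_storage)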
- s3
- amazon
- aws
- boto
- collectstatic
- storages
This is a 'fixed' version of snippet [1868](http://djangosnippets.org/snippets/1868/)
Changes:
* Correctly handle the Content-Type, because Amazon requires it to be named with a dash and we can't use dashes in the form attribute declarations.
* Also added max_size handling, with the corresponding update to the policy generation.
* Added an example usage with some javascript for basic validation.
[See the Amazon reference](http://aws.amazon.com/articles/1434?_encoding=UTF8&jiveRedirect=1)
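For context, a minimal sketch of the kind of policy generation and signing the form depends on (legacy signature v2 style); the bucket, key prefix and size limit here are placeholders:

import base64
import hashlib
import hmac
import json

AWS_SECRET_KEY = 'your-secret-key'      # placeholder
MAX_SIZE = 10 * 1024 * 1024             # mirrors the max_size handling mentioned above

def s3_upload_policy(bucket, key_prefix, content_type, expiration):
    """Build and sign the POST policy document Amazon expects for browser uploads."""
    policy = {
        'expiration': expiration,  # e.g. '2030-01-01T00:00:00Z'
        'conditions': [
            {'bucket': bucket},
            ['starts-with', '$key', key_prefix],
            # Amazon's field is 'Content-Type'; the dash is why it can't be
            # declared directly as a form attribute.
            ['starts-with', '$Content-Type', content_type],
            ['content-length-range', 0, MAX_SIZE],
        ],
    }
    encoded = base64.b64encode(json.dumps(policy).encode('utf-8'))
    signature = base64.b64encode(
        hmac.new(AWS_SECRET_KEY.encode('utf-8'), encoded, hashlib.sha1).digest())
    return encoded.decode('utf-8'), signature.decode('utf-8')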
- s3
- amazon
- html form
- upload form
You can use this code to sign URLs for streaming distributions, or change it a bit to sign a normal distribution's URLs.
Available settings:
CLOUDFRONT_KEY - path to private key file
CLOUDFRONT_KEY_PAIR_ID - key pair id
CLOUDFRONT_EXPIRES_IN - expiration time in seconds
CLOUDFRONT_DOMAIN - domain name
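A minimal sketch of signing with a canned policy using those settings, assuming the rsa package and a PKCS#1 PEM private key; the original may well use a different crypto library:

import base64
import time

import rsa  # assumption: any library that can do RSA-SHA1 signing will do
from django.conf import settings

def signed_url(path):
    """Return a CloudFront URL signed with a canned policy."""
    expires = int(time.time()) + settings.CLOUDFRONT_EXPIRES_IN
    url = 'http://%s/%s' % (settings.CLOUDFRONT_DOMAIN, path)
    policy = ('{"Statement":[{"Resource":"%s",'
              '"Condition":{"DateLessThan":{"AWS:EpochTime":%d}}}]}' % (url, expires))
    with open(settings.CLOUDFRONT_KEY, 'rb') as key_file:
        private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
    signature = base64.b64encode(rsa.sign(policy.encode('utf-8'), private_key, 'SHA-1'))
    # CloudFront expects a URL-safe base64 variant
    signature = signature.decode('utf-8').replace('+', '-').replace('=', '_').replace('/', '~')
    return '%s?Expires=%d&Signature=%s&Key-Pair-Id=%s' % (
        url, expires, signature, settings.CLOUDFRONT_KEY_PAIR_ID)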
I am not sure what to say about the state of PyAWS, or its future, what with the multiple forks available and the lack of recent updates. The best version I've found is [this one](http://github.com/IanLewis/pyaws), a spiffed-up version of 0.2.2 by Ian Lewis. I wrote this class on top of PyAWS so I could have more pythonic/django-y calling conventions, and to isolate the calls in case I have to swap libraries or versions down the road.
You may want to familiarize yourself with PyAWS before using this. You'll definitely need Amazon web service login credentials and keys -- they're available [here](http://aws.amazon.com/) for free.
Personally I use it with [these monkeypatching and decorator-decorators](http://www.djangosnippets.org/snippets/1888/) -- at the top of my personal version of the file containing this snippet I use the two (non-silly) examples from that snippet, to make the PyAWS internal Bag collection class work for me.
EXAMPLE USE:
# search Amazon's product database (returns a list of nested dicts)
from amazon import aws
books = aws.search(q='raymond carver')
lenses = aws.search(q='leica summicron', idx='Photo')
# get the data for a specific ASIN/ISBN/EAN/etc ID number
what_we_talk_about_when_we_talk_about_love = aws.fetch(qid='0679723056', idtype='ASIN')
- decorator
- amazon
- amazonapi
- aws
- pyaws
**General notes:**
- Set MEDIA_URL (or whatever you use for uploaded content) to point to S3 (i.e. MEDIA_URL = "http://s3.amazonaws.com/MyBucket/").
- Put django-storage in project_root/libraries, or change the paths to whatever makes you happy.
- This uses the functionality of django-storage, but *not* as DEFAULT_FILE_STORAGE.
The functionality works like so:
**Getting stuff to S3**
- On file upload of a noted model, a copy of the uploaded file is saved to S3.
- On any thumbnail generation, a copy is also saved to S3.
**On a page load** (sketched in code after this list):
1. We check to see if the thumbnail exists locally. If so, we assume it's been sent to S3 and move on.
2. If it's missing, we check to see if S3 has a copy. If so, we download it and move on.
3. If the thumb is missing, we check to see if the source image exists. If so, we make a new thumb (which uploads itself to S3), and move on.
4. If the source is also missing, we see if it's on S3, and if so, get it, thumb it, and push the thumb back up, and move on.
5. If all of that fails, somebody deleted the image, or things have gone fubar'd.
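Roughly, that cascade looks like the following; the s3 helper object and make_thumbnail callable are stand-ins I've assumed for illustration, not the actual sorl or django-storage code:

import os

def ensure_thumbnail(thumb_path, source_path, s3, make_thumbnail):
    """Stand-in for the lookup cascade above; `s3` is any object with
    exists/download/upload, `make_thumbnail` regenerates the thumb locally."""
    if os.path.exists(thumb_path):            # 1. local thumb: assume it's already on S3
        return thumb_path
    if s3.exists(thumb_path):                 # 2. thumb only on S3: pull it down
        s3.download(thumb_path)
        return thumb_path
    if not os.path.exists(source_path):       # 4. source missing locally: try S3
        if not s3.exists(source_path):
            raise IOError('source image is gone everywhere')  # 5. fubar'd
        s3.download(source_path)
    make_thumbnail(source_path, thumb_path)   # 3. regenerate the thumb
    s3.upload(thumb_path)                     # ...and push it back up
    return thumb_path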
**Advantages:**
- Thumbs are checked locally, so everything after the initial creation is very fast.
- You can clear out local files to save disk space on the server (one assumes you needed S3 for a reason), and trust that only the thumbs should ever be downloaded.
- If you want to be really clever, you can delete the original source files, and zero-byte the thumbs. This means very little space cost, and everything still works.
- If you're not actually low on disk space, Sorl Thumbnail keeps working just like it did, except your content is served by S3.
**Problems:**
- My Python-fu is not as strong as that of those who wrote Sorl Thumbnail. I did tweak their code, so something may be wonky. YMMV.
- The relative_source property is a hack, and if the first 7 characters of the filename are repeated somewhere, step 4 above will fail.
- Upload is slow, and the first thumbnailing is slow, because we wait for the transfers to S3 to complete. This isn't django-storage, so things do genuinely take longer.
- image
- thumbnail
- s3
- amazon
- sorl
This is a bastardisation of a few of the Amazon s3 file uploader scripts that are around on the web. It's using Boto, but it's pretty easy to use the Amazon supplied S3 library they have for download at [their site](http://developer.amazonwebservices.com/connect/entry.jspa?externalID=134).
It's mostly based on [this](http://www.holovaty.com/blog/archive/2006/04/07/0927) and [this](http://www.davidcramer.net/code/112/writing-a-build-bot.html).
It's fairly limited in what it does (I didn't bother os.walking the directory structure), but I use it to quickly upload updated CSS or JavaScript. I'm sure it's a mess code-wise, but it does the job.
This will first YUI compress the files, and then gzip them before uploading to s3. Hopefully someone might find this useful. It will also retain the path structure of the files in your MEDIA_ROOT directory.
To use it, set up your Amazon details, download the [YUI Compressor](http://developer.yahoo.com/yui/compressor/), enter the folder you wish to upload to S3, and run the script: python /path/to/s3_uploader.py
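A stripped-down sketch of that workflow using boto 2 and the YUI Compressor jar; the constants below are placeholders, not the script's actual settings:

import gzip
import os
import subprocess
from io import BytesIO

import boto

AWS_KEY, AWS_SECRET, BUCKET = 'key', 'secret', 'my-bucket'   # placeholders
YUI_JAR = '/path/to/yuicompressor.jar'
MEDIA_ROOT = '/path/to/media'

def upload(filename):
    """YUI-compress, gzip and push one css/js file to S3, keeping its
    path relative to MEDIA_ROOT."""
    path = os.path.join(MEDIA_ROOT, filename)
    compressed = subprocess.check_output(['java', '-jar', YUI_JAR, path])
    buf = BytesIO()
    with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
        gz.write(compressed)
    key = boto.connect_s3(AWS_KEY, AWS_SECRET).get_bucket(BUCKET).new_key(filename)
    key.set_metadata('Content-Encoding', 'gzip')
    key.set_contents_from_string(buf.getvalue(), policy='public-read')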