Profiling Django applications – part 1
I have been profiling a Django application in different ways, and this can be very useful for understanding where the bottlenecks are, why code behaves the way it does and where we can trim the fat.
There are multiple places where we can profile Django, and many of the pointers and resources from Python at large can be used when profiling Django. Things like Hotshot, cProfile and timeit are available, and there are helper libraries around these tools.
In this series of posts I am going to be looking at what we can do to look at the performance of our Django application.
Firstly, if you are not using logging in your app, start there. Without logging you are seriously making life more difficult for yourself. The Python logging module is fine; Django 1.3 and onwards can set up logging from the LOGGING dictionary in your settings file, and before that you could get error emails by setting up the email subsystem and ADMINS. Those error emails give you the stack trace of each error, which is better than nothing – you will at least be notified when someone hits an error on your site – but debug-level logs for development, error logs on exceptions, and exception-catching middleware that logs will help you a great deal more.
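A minimal sketch of using the Python logging module in application code. The logger name `shop.views` and the log messages are made up for illustration; the in-memory handler stands in for whatever handlers your Django settings would attach.

```python
import io
import logging

# Module-level logger, named so that handlers can filter per app.
logger = logging.getLogger("shop.views")
logger.setLevel(logging.DEBUG)

# Capture output in memory for this sketch; in Django the handlers
# would come from your settings instead.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)

# Debug-level detail during development.
logger.debug("Checkout started for user %s", 42)

try:
    raise ValueError("payment gateway timeout")
except ValueError:
    # exc_info=True attaches the full stack trace to the record,
    # like the error emails Django sends to ADMINS.
    logger.error("Order processing failed", exc_info=True)

output = buffer.getvalue()
```

The point of naming loggers after modules is that a single configuration can later raise or lower verbosity per app without touching the code.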
If you are logging in a production environment, a good first step is to use the tools already available on the operating system you deploy to. This means using syslogd or rsyslog on BSD and Unix systems (unless you have very long log lines – syslogd truncates them). This gives you automated rollover, the ability to configure central logging daemons, and a logging infrastructure that most sysadmins will understand.
Django's LOGGING setting lets you log to these daemons, and as it uses the Python logging module you can log to several places at once if you need to. For instance, a common pattern I use is to email exceptions to the admins and log normally to syslog, but when an exception occurs also log to a separate file so that the stack trace doesn't get truncated.
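That pattern can be sketched as a LOGGING dictionary in settings.py. This is an illustration, not a drop-in config: the syslog tag `myproject`, the socket path `/dev/log` and the file path `/var/log/myproject/error.log` are assumptions you would adapt to your own deployment.

```python
# settings.py -- sketch of: syslog for normal traffic, a separate file
# for stack traces, and email to ADMINS on errors.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'syslog': {'format': 'myproject: %(levelname)s %(name)s %(message)s'},
        'verbose': {'format': '%(asctime)s %(levelname)s %(name)s %(message)s'},
    },
    'handlers': {
        # Normal traffic goes to the system logging daemon.
        'syslog': {
            'level': 'INFO',
            'class': 'logging.handlers.SysLogHandler',
            'address': '/dev/log',   # assumed local syslog socket
            'formatter': 'syslog',
        },
        # Errors also land in a plain file, so the multi-line
        # stack trace is not truncated by syslogd.
        'error_file': {
            'level': 'ERROR',
            'class': 'logging.FileHandler',
            'filename': '/var/log/myproject/error.log',
            'formatter': 'verbose',
        },
        # And get emailed to the addresses listed in settings.ADMINS.
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['syslog', 'error_file', 'mail_admins'],
            'level': 'INFO',
            'propagate': True,
        },
    },
}
```

Because this is the standard dictConfig format, the same three handlers can be attached to your own app loggers as well as `django.request`.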
Another reason to use a logging daemon and not just write to a file is that logging daemons can accept input from multiple sources. This really helps when you have multiple worker daemons on the same machine (which is most probably true if you are trying to take advantage of a multi-core system). Multiple daemons writing to the same file is not recommended with Python.
External logging services
Moving on from here, if your site needs more logging capability, consider the excellent Django Sentry, which came out of Disqus. It logs to a central database and gives you excellent tools to dive deep into your logs for analysis.
Lastly, if you prefer a hosted solution, consider New Relic – it costs money, but in my testing it has been excellent. Not only does it give you the ability to collect logging information, it also provides centralised profiling and in-depth information on the performance of your applications.