The Cron Commandments - part 1
Although it's a rare Unix machine that doesn't run at least a couple of custom cronjobs, it's an even more special snowflake that runs them properly. Below are some of the more common problems I've seen, and my thoughts on them.
Always use a script, never a bare command line.
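For illustration, here's the kind of entry this rule is aimed at, next to its replacement - the schedule, paths and script name are all hypothetical:

```shell
# The anti-pattern: logic crammed straight into the crontab.
# 0 2 * * * cd /var/www && (find . -name '*.tmp' | xargs rm)

# Better: one testable, version-controlled script.
0 2 * * * /usr/local/bin/cleanup-tmp-files
```

The script version can be run by hand, reviewed and carried between machines; the inline version can only really be debugged by editing the crontab itself.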
A parenthesis-wrapped command line in a crontab sends shivers down my spine. Nothing says "I didn't really think this through" and "I've done the bare minimum to make it work" in quite the same way.

Don't shout about success
A cronjob that completes successfully shouldn't post anything to `stdout` or `stderr`. Most developers have no idea how annoying it is to get a single-line email every minute proclaiming that all's well. It also trains people to delete messages with certain subject lines without reading them, which'll catch you out when a real problem occurs.

Caveat 1: logging that the script finished, and adding some timing information, can often be useful. It's good to have an audit trail of what actually ran and how long it took. By logging to syslog you gain the benefits of centralised logs (you are centralising your log files, right?) and, because it's passive, the sysadmin doesn't get notified about expected completions unless she looks for them.
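A minimal sketch of that pattern using logger(1); the job name used as the syslog tag is hypothetical:

```shell
#!/bin/sh
# On success this job emails nobody: the completion record goes to
# syslog via logger(1), where it's there if anyone goes looking.
set -u

start=$(date +%s)

# ... the job's real work would go here ...

end=$(date +%s)
elapsed=$((end - start))

# -t sets the syslog tag; "nightly-report" is a made-up job name.
logger -t nightly-report "run completed in ${elapsed}s"
```

Failures, by contrast, should still go to `stderr`, so cron's email mechanism can do its job.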
Debug information should be an option
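One common shape for such a switch - the `DEBUG` environment variable below is a convention I'm assuming, not something cron itself provides:

```shell
#!/bin/sh
# Debug output is off unless DEBUG=1 is set in the environment,
# so the script never needs editing to investigate a problem.
DEBUG="${DEBUG:-0}"

debug() {
    # Debug chatter goes to stderr, keeping stdout clean.
    [ "$DEBUG" = "1" ] && echo "DEBUG: $*" >&2
    return 0
}

debug "PATH is $PATH"
debug "running as user $(id -un)"
# ... real work ...
```

While investigating, the crontab line simply becomes `DEBUG=1 /usr/local/bin/myjob` (a hypothetical path); nothing in the script changes.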
A script invoked via cron has a different environment from one run at the command line, so it'll work (and break) in different ways - which you'll want to see. It should be possible to enable additional debug output without making any changes to the script itself: a command-line flag or an environment variable should be enough to trigger it. Often all you'll get is an email containing the error and the debug information, so make sure you can diagnose problems from your own output.

Beware overrunning jobs
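On Linux a widely used guard for this is flock(1): take a non-blocking exclusive lock at startup and, if another instance already holds it, log the fact and exit. A sketch - the lock path and syslog tag are hypothetical, and a real job would more likely keep its lock under `/var/lock`:

```shell
#!/bin/sh
# Serialise runs of this job with an exclusive, non-blocking lock.
LOCKFILE="${TMPDIR:-/tmp}/nightly-report.lock"

# Open the lock file on fd 9 and try to lock it without waiting.
exec 9> "$LOCKFILE"
if ! flock -n 9; then
    logger -t nightly-report "previous run still in progress, skipping"
    exit 0
fi

# ... real work, protected by the lock, goes here ...
# The lock is released automatically when the script exits.
```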
Almost all your cronjobs should check that another instance isn't already running, and exit - after logging the issue - if one is. I've lost track of the number of difficult-to-track bugs caused by a cronjob starting, taking longer to finish than the interval between runs, and then having the next run pile in behind it. This often causes deadlocks, resource conflicts, maxed-out database connections and corrupted data. Some very simple cronjobs don't need this, but when in doubt put it in. And log the fact: it can help you spot growth trends ("it took 2 minutes until we added the extra users").

Beware /dev/null redirects in crontabs
Any cronjob that redirects `stdout`, `stderr` or (worse) both to `/dev/null` is going to cause you headaches and needs some attention. People typically add these redirects when something is wrong and they lack either the skill or the time to fix it. Their presence shows a lack of confidence in the script and should be treated as a red flag. On the plus side, they point you at potential trouble.

Avoid running as root
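One pattern that keeps root out of the picture: the job lives in an unprivileged user's crontab, and any genuinely privileged step goes through a tightly scoped sudoers entry. The user, schedule, paths and commands below are all hypothetical:

```shell
# In appuser's crontab (crontab -e -u appuser) - the job itself runs
# unprivileged:
30 1 * * * /usr/local/bin/rotate-reports

# In /etc/sudoers.d/appuser - only the one privileged command is
# permitted, with no password prompt to block the job:
# appuser ALL=(root) NOPASSWD: /usr/bin/systemctl reload nginx
```

A compromise or bug in the script is then limited to what `appuser` and that single sudo rule allow.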
As in most things, using root is bad. Try writing your cronjobs so they can run as a non-privileged user, with a little `sudo` mixed in if you need it. It'll save you a lot of hassle when something goes wrong and the script tries to eat your file system.

Closing Comments
And to close, a couple of quick points: test your cronjobs from cron, not just interactively. `/etc/` is often backed up while `/var/spool/cron/crontabs/` is often missed, so think about your deployment locations. Make sure your admins know about any cronjobs your packages add. And finally, if you generate your crontabs, always add a newline at the end.

If you at least know why you're breaking some of these rules (and they'd better be good reasons) then you'll be a good few steps above most developers I've worked with. And we'll get on a lot better.
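The "test from cron" point is straightforward to act on: capture the environment cron actually hands your job, then rehearse against something close to it instead of your fully loaded login shell. The paths below are hypothetical:

```shell
# One-off crontab line to capture cron's real environment:
#   * * * * * env > /tmp/cron-env

# Rehearsing with a stripped-down environment exposes assumptions
# (PATH entries, unset variables) that your login shell hides.
env -i HOME="$HOME" PATH=/usr/bin:/bin /bin/sh -c 'echo "PATH is $PATH"'
# prints: PATH is /usr/bin:/bin
```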