How to Monitor Cron Jobs in Python with Cronaman
The silent failure problem
Cron jobs are easy to set up and easy to forget. You write a Python script, add it to crontab, and move on. Weeks later you discover it silently stopped running after a package update — and nobody received an alert, because there was nothing to alert on.
This is the silent failure problem: scheduled jobs have no built-in mechanism to report success or failure. They run (or they don't), and unless you're actively watching logs, you won't know until something downstream breaks — a missing backup, unsent invoices, stale reports reaching your users.
What is heartbeat monitoring?
Heartbeat monitoring is a dead man's switch for scheduled jobs. You configure an expected interval — say, every 24 hours — and Cronaman watches the clock. Your script sends a simple HTTP ping at the end of every successful run. If that ping doesn't arrive within the window, Cronaman transitions the monitor to "late" and then "down", and fires an alert.
There's no agent to install on your server. No SDK to integrate. Your Python script sends one HTTP request at the end of every run. That's the entire integration.
Create a Cronaman monitor
Before adding any code, create a monitor in Cronaman:
- Sign up at cronaman.dev — free plan, no credit card required
- Click New Monitor
- Name it (e.g., "Database backup") and set the interval to match your cron schedule
- Copy your unique ping URL:
https://cronaman.dev/ping/python-backup
That URL is all you need. Every time your script calls it, the monitor resets its clock. Miss the window, and you get an email within minutes.
Ping Cronaman with urllib (zero dependencies)
Python's standard library includes urllib.request, which is all you need to send a ping. No third-party packages required — this works on any Python 3 environment, including minimal Docker images and shared hosting.
```python
import urllib.request

PING_URL = "https://cronaman.dev/ping/python-backup"

def run_backup():
    # Your job logic here
    print("Backing up database...")
    # ... connect to DB, dump files, upload to S3 ...

try:
    run_backup()
    urllib.request.urlopen(PING_URL, timeout=10)
except Exception as e:
    print("Job failed:", str(e))
    raise
```
A few details worth noting:
- Always set a `timeout`. An unbounded HTTP call can hang indefinitely, blocking your cron slot from running again.
- Call `urlopen` after your job completes — the ping only fires on a successful run.
- The `raise` re-raises after logging, keeping the exit code non-zero on failure (important for cron logging).
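A transient network blip at ping time would otherwise make a successful job look late. One way to harden this is a small retry loop around the ping (a sketch, not part of the official integration; `ping` is a helper name invented here):

```python
import time
import urllib.request

PING_URL = "https://cronaman.dev/ping/python-backup"

def ping(url, attempts=3, delay=5):
    """Try the ping a few times before giving up; return True on success."""
    for attempt in range(attempts):
        try:
            urllib.request.urlopen(url, timeout=10)
            return True
        except OSError:
            if attempt < attempts - 1:
                time.sleep(delay)  # brief pause before the next attempt
    return False
```

Call `ping(PING_URL)` in place of the single `urlopen` call; even if every attempt fails, the job's own exit code is unaffected.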
Using the requests library
If your project already uses requests, the pattern is identical — just swap the HTTP call:
```python
import requests

PING_URL = "https://cronaman.dev/ping/python-backup"

def run_backup():
    print("Backing up database...")
    # ... your job logic here ...

try:
    run_backup()
    requests.get(PING_URL, timeout=10)
except Exception as e:
    print("Job failed:", str(e))
    raise
```
Note that, like urllib, the requests library has no default timeout, so it's just as important to set one explicitly.
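If you are already on requests, its transport adapters can also handle retries for you, using urllib3's `Retry` class. This is a sketch of that standard requests machinery; the retry counts and status codes below are arbitrary choices, not Cronaman requirements:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

PING_URL = "https://cronaman.dev/ping/python-backup"

# Retry transient failures (connection errors, 5xx responses) up to 3 times
session = requests.Session()
retries = Retry(total=3, backoff_factor=1, status_forcelist=[502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

def send_ping():
    session.get(PING_URL, timeout=10)
```

With this in place, a single `send_ping()` call transparently retries brief outages before giving up.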
Signaling failure explicitly
If your script crashes before reaching the ping call, Cronaman will detect the missed ping after the grace period and alert you. But you can also send an explicit failure signal by appending /fail to your ping URL. This marks the run as failed immediately — no waiting for the deadline.
```python
import urllib.request

PING_URL = "https://cronaman.dev/ping/python-backup"
FAIL_URL = "https://cronaman.dev/ping/python-backup/fail"

def run_backup():
    print("Backing up database...")
    # ... your job logic here ...

try:
    run_backup()
    urllib.request.urlopen(PING_URL, timeout=10)  # success
except Exception as e:
    print("Job failed:", str(e))
    try:
        urllib.request.urlopen(FAIL_URL, timeout=10)  # signal failure
    except Exception:
        pass  # don't let a network error mask the original failure
    raise
```
The inner try/except around the fail ping is intentional: if Cronaman is temporarily unreachable, you don't want a network error to swallow the original exception.
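If several scripts share this pattern, the success/fail logic can be factored into a small decorator. This is a sketch of one possible refactor, not a Cronaman API; `monitored` is a name invented here:

```python
import functools
import urllib.request

def monitored(ping_url, timeout=10):
    """Wrap a job function so it pings on success and /fail on error."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                result = func(*args, **kwargs)
            except Exception:
                try:
                    urllib.request.urlopen(ping_url + "/fail", timeout=timeout)
                except Exception:
                    pass  # never mask the original failure
                raise
            urllib.request.urlopen(ping_url, timeout=timeout)
            return result
        return wrapper
    return decorator

@monitored("https://cronaman.dev/ping/python-backup")
def run_backup():
    print("Backing up database...")
```

Any function wrapped this way pings on a clean return and hits the /fail endpoint before re-raising on error.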
Scheduling with crontab
Add your script to crontab with crontab -e. Match the cron schedule to the interval you configured in Cronaman:
```
# Run daily at 2:00 AM, log all output
0 2 * * * /usr/bin/python3 /home/user/backup.py >> /var/log/backup.log 2>&1
```
In Cronaman, set the monitor interval to 24 hours and add a grace period of 10–15 minutes. The grace period lets Cronaman tolerate minor timing drift — load spikes, slow startup, clock skew — before alerting. Without it, a job that runs at 2:01 AM instead of 2:00 AM would trigger a false alarm.
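The alerting arithmetic behind this setup is simply last ping + interval + grace. A quick illustration (not Cronaman's actual implementation):

```python
from datetime import datetime, timedelta

last_ping = datetime(2024, 1, 1, 2, 0)  # job last pinged at 2:00 AM
interval = timedelta(hours=24)          # matches the cron schedule
grace = timedelta(minutes=15)           # tolerance for timing drift

alert_deadline = last_ping + interval + grace
print(alert_deadline)  # 2024-01-02 02:15:00
```

A run that pings any time before 2:15 AM the next day resets the clock; anything later triggers the alert.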
Verify your setup
Run the script once manually to confirm the ping fires:
```
python3 backup.py
```
Open your Cronaman dashboard. Within a few seconds, the monitor should show "healthy" with a "Last ping" timestamp. If it stays grey (no pings received), verify that your server can reach cronaman.dev on port 443 and that the ping URL matches exactly.
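The reachability check can itself be scripted with only the standard library; a minimal sketch (`can_reach` is a helper invented for this guide):

```python
import socket

def can_reach(host="cronaman.dev", port=443, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False from your server, the problem is network access (firewall, DNS, proxy) rather than your script.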
From here, Cronaman watches the clock so you don't have to. Miss a scheduled run, and you'll know immediately — before your users do.
More cron monitoring guides
Using a different stack? The same pattern works everywhere:
- How to Monitor Cron Jobs in Node.js — covers native fetch, Axios, and node-cron
- How to Monitor Cron Jobs in PHP — covers file_get_contents, cURL, and Laravel schedulers