Published in Blog / Workload Automation

Running Python jobs: Cron basics and smarter scheduling options

Automating repetitive Python tasks is a common requirement for developers, system admins and data teams. Whether you’re running hourly scripts, nightly backups or scheduled reports, you need a reliable way to make Python jobs run on time without having to click a button.

Written by Editorial Staff | Last Updated: | 7 min read

Here’s how to set up a Python cron job, which scheduling libraries to consider and why you may want to look at modern alternatives for enterprise-scale orchestration.

What is Cron (and crontab)?

Cron is a built-in utility in Unix-like operating systems such as Linux and macOS. It quietly runs in the background, executing commands or scripts at scheduled times. These scheduled jobs are configured in a file called the crontab, which can also be managed through Python using the crontab module.

A crontab entry follows this format:

* * * * * /usr/bin/python3 /path/to/script.py

The five asterisks represent the timing fields: minute, hour, day of month, month and day of week.

So, if you want to run a Python script every day at 3:30 AM, your cron schedule would look like:

30 3 * * * /usr/bin/python3 /home/user/my_script.py

Open the crontab file with the crontab -e command-line utility, or generate entries programmatically with libraries like python-crontab. The cron daemon reads these entries and triggers the appropriate Python scripts at the scheduled times.
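To make the five-field format concrete, here is a minimal sketch that splits a crontab entry into named fields. The helper and the field names are illustrative labels, not part of cron itself:

```python
# Illustrative helper: map the five cron schedule fields to names.
CRON_FIELDS = ["minute", "hour", "day_of_month", "month", "day_of_week"]

def parse_schedule(entry: str) -> dict:
    """Split a crontab line into its five schedule fields and the command."""
    parts = entry.split(None, 5)  # five fields, then everything else is the command
    if len(parts) < 6:
        raise ValueError("expected five schedule fields plus a command")
    schedule = dict(zip(CRON_FIELDS, parts[:5]))
    schedule["command"] = parts[5]
    return schedule

print(parse_schedule("30 3 * * * /usr/bin/python3 /home/user/my_script.py"))
```

Running this on the 3:30 AM example above yields minute "30", hour "3" and asterisks (meaning "every") for the remaining fields.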

Running Python scripts with Cron

Here are the four basic steps for setting up a new cron job that runs a Python script.

1. Write your Python script

Start with a simple script to test your setup. For example:

import datetime
now = datetime.datetime.now()
print("The current time is:", now)

Save this file as print_time.py in a known directory.

2. Set up your cron job

Open the crontab file with crontab -e and add a line like:

0 * * * * /usr/bin/python3 /home/user/print_time.py >> /home/user/cron_output.log 2>&1

This will run the job at the start of every hour and write output to a log file. Adjust paths and times as needed.

3. Test your job

Wait for the next scheduled time, or temporarily set it to run every minute (* * * * *) to check if it works. Use tail -f on the log file to watch output in real time.

4. Customize your schedule

Here are a few common examples:

  • Every weekday at 8 a.m.: 0 8 * * 1-5
  • Every 15 minutes: */15 * * * *
  • Every Sunday at midnight: 0 0 * * 0

Cron syntax works, but it’s unforgiving. There’s no event-driven scheduling or built-in error handling, and no awareness of job dependencies. If you’ve ever chained together three bash scripts just to handle a retry, you know the pain. That’s usually the point when people start looking at Python job scheduler libraries.

Alternatives to Cron: Python job scheduler libraries

If you’re looking for a Python-native way to schedule tasks, several libraries provide more control and cleaner syntax — but each comes with trade-offs.

python-crontab

This library lets you create and manage cron jobs programmatically using Python code. It’s useful for writing automation scripts that install or modify crontab entries without requiring manual edits.

Pros:

  • Easy to use for developers
  • Great for scripting deployments

Cons:

  • Still relies on cron under the hood
  • No visibility into job status or failures

schedule

The schedule library is popular for lightweight task automation in Python scripts. It uses a readable syntax and is easy to install via pip install schedule. It’s often one of the first tools introduced in a Python job scheduling tutorial because of its simplicity.

import schedule
import time

def job():
    print("Hello, world!")

schedule.every(10).minutes.do(job)

while True:
    schedule.run_pending()
    time.sleep(1)

Libraries like this are great for one-off use cases, but what happens when you have a multi-step algorithm or data pipeline? They aren’t ideal when you’re running dozens of jobs with dependencies.

Pros:

  • Clean Pythonic syntax
  • No need to mess with cron

Cons:

  • Doesn’t persist across restarts
  • No dashboard, no logs, no retries

APScheduler

Advanced Python Scheduler (APScheduler) supports more complex scheduling, such as running jobs at specific intervals, dates or even with cron-like expressions. It works with Flask, Django and other frameworks and can be installed as a Python package via pip.

Pros:

  • Flexible scheduling options
  • Can use background threads or persistent stores

Cons:

  • Requires more setup
  • Not built for large-scale orchestration

Beyond libraries: Workload automation platforms

When your Python scripts are part of a bigger picture like a data pipeline, an IT process or a DevOps workflow, you eventually hit the limits of simple schedulers. Your jobs have dependencies, they run on different systems, and you need to know immediately when one fails. That’s where workload automation platforms come in.

These tools act as a central command center for all your automated tasks. A great example is ActiveBatch by Redwood, a platform that handles Python scripts alongside thousands of other integrations, from SQL and bash scripts to SAP, AWS and Kubernetes.

Benefits over Cron and Python libraries:

  • Central monitoring for all scheduled jobs
  • Built-in alerting and retries
  • Cross-platform orchestration for hybrid cloud environments
  • Event triggers to run jobs based on file drops, API calls or database changes
  • No need for manual crontab editing or background loops

Python job scheduling in modern environments

Cron is fine if you’re just running local scripts on Linux or macOS. Move into the cloud, though, and things get messy fast.

  • In Kubernetes, you use CronJobs to manage scheduled tasks as containers
  • On Google Cloud, Cloud Scheduler triggers Python jobs via Cloud Functions or Cloud Run
  • AWS Lambda functions can be run on a schedule using Amazon EventBridge or CloudWatch Events

Each of these environments has its own configuration syntax, SDKs and logging tools. Managing these individually can quickly become complex, especially if you’re trying to coordinate tasks across platforms, trigger downstream jobs or share environment variables between steps.

To make matters worse, each tool has its own approach to initialization and cleanup. Without orchestration, managing the job lifecycle becomes a tangle of scripts, retries and manual handoffs.

Modern DevOps and data teams often need:

  • Job chaining across environments
  • API-driven scheduling
  • Real-time feedback loops
  • Audit logs and dashboard views

A tool like ActiveBatch consolidates all of this into one control plane, giving you visibility, security and scalability without writing custom scripts for each environment.

Monitoring and managing Python jobs

It’s tempting to set up a script and move on. Out of sight, out of mind. Until it fails at 2 AM and nobody notices. Cron and Python libraries don’t give you much visibility; you’ll need to build your own logging and alerts or risk running blind.

Here’s what to consider:

  • Logging: Make sure stdout and stderr are redirected to files or logging services like syslog or external SQL databases.
  • Debugging: If something fails, having job-level logs and error codes makes debugging much faster, especially in production.
  • Retries: Cron doesn’t bother. Use try/except blocks in your code or external tools to handle this.
  • Error alerts: If a job fails silently, how will you know? Platforms like ActiveBatch can send real-time notifications.
  • Dependencies: One job’s success may depend on another. Handling this with cron alone means custom bash scripts and brittle logic.
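As a sketch of the retry point above, a small wrapper (the names are illustrative) that retries a flaky job a few times before letting the failure surface to cron or your alerting:

```python
import time

def run_with_retries(job, attempts=3, delay_seconds=5):
    """Run job(), retrying on any exception up to `attempts` times."""
    for attempt in range(1, attempts + 1):
        try:
            return job()
        except Exception as exc:
            print(f"attempt {attempt} failed: {exc}")
            if attempt == attempts:
                raise  # out of retries: let the caller see the failure
            time.sleep(delay_seconds)

# Example: a job that fails twice, then succeeds on the third attempt.
calls = []
def flaky_job():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient error")
    return "ok"

print(run_with_retries(flaky_job, attempts=3, delay_seconds=0))
```

This is the kind of glue you end up writing yourself around cron; orchestration platforms provide it as configuration instead.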

With enterprise scheduling platforms, you get built-in dashboards, dependency mapping, and job history for all workflows — not just Python.

Get more from Python with ActiveBatch

You’ve seen how to run a Python script on a schedule. But what happens when you have dozens of scripts with complex dependencies, logs to manage and a team that needs to know when things go wrong? That’s where the real challenge begins.

Instead of juggling cron files, standalone scripts and custom error alerts, an orchestration platform like ActiveBatch brings order to the chaos. It connects all the scattered pieces of your automated processes into a single, visible workflow.

You can:

  • Drop Python code directly into workflows
  • Trigger Python jobs based on files, events or endpoints
  • Pass values between steps, even across platforms
  • Integrate with CI/CD tools, data pipelines and ERP systems

ActiveBatch is built for teams who’ve outgrown cron and want fewer surprises in production. Run scripts reliably across cloud, hybrid and on-premises environments without worrying about coordination, retries or manual triggers.

Book a demo and see how teams like yours keep jobs running without late-night firefighting.

