Automate Your Life with Python Scripts
Repetitive tasks are automation waiting to happen. If you find yourself doing the same thing more than a few times - renaming files in a pattern, pulling data from a website, sending the same email - Python can do it for you.
This guide covers practical, real-world scripts you can adapt for your own use. No frameworks, no complex setup - just Python and a few standard libraries.
Getting Set Up
Before anything, make sure you have a clean Python environment. If you haven't done this before, see my guide to Python virtual environments.
python3 -m venv ~/.venvs/automation
source ~/.venvs/automation/bin/activate
Install the libraries we'll use:
pip install requests beautifulsoup4 schedule
File Automation
Organize a Messy Downloads Folder
This script sorts files into subdirectories by type:
import shutil
from pathlib import Path

FOLDER = Path.home() / "Downloads"

FILE_TYPES = {
    "Images": [".jpg", ".jpeg", ".png", ".gif", ".svg", ".webp"],
    "Documents": [".pdf", ".docx", ".doc", ".txt", ".md", ".xlsx"],
    "Archives": [".zip", ".gz", ".tgz", ".rar", ".7z"],
    "Code": [".py", ".js", ".ts", ".html", ".css", ".sh"],
    "Videos": [".mp4", ".mov", ".mkv", ".avi"],
    "Audio": [".mp3", ".wav", ".flac", ".m4a"],
}

def get_category(extension):
    for category, extensions in FILE_TYPES.items():
        if extension.lower() in extensions:
            return category
    return "Other"

def organize_folder(folder):
    for item in folder.iterdir():
        if item.is_file() and not item.name.startswith("."):
            category = get_category(item.suffix)
            dest = folder / category
            dest.mkdir(exist_ok=True)
            shutil.move(str(item), str(dest / item.name))
            print(f"Moved: {item.name} → {category}/")

organize_folder(FOLDER)
print("Done.")

Note that Path.suffix only sees the last extension, so the Archives list matches ".gz" rather than ".tar.gz" - a file named backup.tar.gz still lands in Archives.
Bulk Rename Files
Rename a batch of files with a consistent pattern - useful for photos, exports, or anything with messy names:
from pathlib import Path

def bulk_rename(folder, prefix, extension=None):
    folder = Path(folder).expanduser()
    files = sorted(
        f for f in folder.iterdir()
        if f.is_file() and (extension is None or f.suffix == extension)
    )
    for i, file in enumerate(files, start=1):
        new_name = f"{prefix}_{i:03d}{file.suffix}"
        file.rename(folder / new_name)
        print(f"{file.name} → {new_name}")

# Example: rename all JPGs in a folder
bulk_rename("~/Desktop/photos", "vacation_2026", ".jpg")

The expanduser() call matters: pathlib does not expand ~ on its own, so without it the script would look for a literal folder named "~".
Find and Delete Duplicate Files
import hashlib
from pathlib import Path
from collections import defaultdict

def file_hash(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(folder):
    hashes = defaultdict(list)
    for path in Path(folder).expanduser().rglob("*"):
        if path.is_file():
            hashes[file_hash(path)].append(path)
    return {h: paths for h, paths in hashes.items() if len(paths) > 1}

dupes = find_duplicates("~/Documents")
for hash_val, paths in dupes.items():
    print("\nDuplicate group:")
    for p in paths:
        print(f"  {p}")
This script only finds duplicates; deciding which copies to delete is up to you. It is always safer to review the list before removing anything.
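If you do want to act on the results, here is a minimal sketch of a companion helper (my own addition, not part of the script above) that keeps the first file in each group and removes the rest - defaulting to a dry run so nothing is deleted until you opt in:

```python
from pathlib import Path

def delete_duplicates(duplicates, dry_run=True):
    """Keep the first path (sorted) in each duplicate group; remove the rest.

    With dry_run=True (the default), only print what would be removed.
    Returns the list of paths that were (or would be) deleted.
    """
    removed = []
    for paths in duplicates.values():
        keep, *extras = sorted(paths)
        for extra in extras:
            if dry_run:
                print(f"Would delete: {extra} (keeping {keep})")
            else:
                Path(extra).unlink()
            removed.append(extra)
    return removed
```

Run it once with the default dry run, eyeball the output, then call it again with dry_run=False.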
Web Scraping
Check a Page for Changes
Useful for monitoring a job board, product price, or any page that updates:
import requests
import hashlib
import json
from pathlib import Path

CACHE_FILE = Path.home() / ".page_monitor_cache.json"

def get_page_hash(url):
    response = requests.get(url, timeout=10)
    return hashlib.md5(response.text.encode()).hexdigest()

def check_for_changes(url, name):
    cache = {}
    if CACHE_FILE.exists():
        cache = json.loads(CACHE_FILE.read_text())
    current_hash = get_page_hash(url)
    previous_hash = cache.get(name)
    if previous_hash is None:
        print(f"[{name}] First run - saving baseline.")
    elif current_hash != previous_hash:
        print(f"[{name}] CHANGED! Page content is different from last check.")
    else:
        print(f"[{name}] No changes detected.")
    cache[name] = current_hash
    CACHE_FILE.write_text(json.dumps(cache, indent=2))

check_for_changes("https://example.com/jobs", "example-jobs")
Extract Data from a Table
Pull structured data from an HTML table into a list of dicts:
import requests
from bs4 import BeautifulSoup

def scrape_table(url, table_index=0):
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    tables = soup.find_all("table")
    if not tables:
        print("No tables found on the page.")
        return []
    table = tables[table_index]
    headers = [th.get_text(strip=True) for th in table.find_all("th")]
    rows = []
    for tr in table.find_all("tr")[1:]:
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if cells:
            rows.append(dict(zip(headers, cells)))
    return rows

data = scrape_table("https://example.com/data-table")
for row in data:
    print(row)
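Since scrape_table returns a list of dicts, the rows drop straight into the standard csv module if you want a spreadsheet-friendly file. A small sketch (the helper name and filename are my own, not from the script above):

```python
import csv

def save_rows(rows, path):
    """Write a list of dicts to a CSV file, using the first row's keys as the header."""
    if not rows:
        return
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```

Call it as save_rows(data, "table.csv") after scraping.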
Scheduled Tasks
Run a Script on a Schedule
The schedule library makes recurring jobs simple:
import schedule
import shutil
import time
from datetime import datetime
from pathlib import Path

def morning_report():
    print(f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}] Running morning report...")
    # Add your logic here

def cleanup_temp_files():
    tmp = Path("/tmp/my_app_cache")
    if tmp.exists():
        shutil.rmtree(tmp)
        print("Cleaned up temp files.")

# Schedule jobs
schedule.every().day.at("08:00").do(morning_report)
schedule.every().hour.do(cleanup_temp_files)
schedule.every(30).minutes.do(lambda: print("Still running..."))

print("Scheduler started. Press Ctrl+C to stop.")
while True:
    schedule.run_pending()
    time.sleep(60)
Using cron for Persistent Scheduling
For scripts that need to run even when your terminal isn't open, use cron:
crontab -e
Add a line like:
# Run organize_downloads.py every day at 9 AM
0 9 * * * /Users/you/.venvs/automation/bin/python /Users/you/scripts/organize_downloads.py >> /Users/you/logs/organize.log 2>&1
Cron syntax: minute hour day month weekday command
A few common patterns:
0 * * * * # Every hour
0 9 * * * # Every day at 9 AM
0 9 * * 1 # Every Monday at 9 AM
0 9 1 * * # First day of every month at 9 AM
Working with the Command Line
Accept Arguments in Your Script
Make scripts reusable by accepting arguments:
import argparse

parser = argparse.ArgumentParser(description="Organize files in a folder")
parser.add_argument("folder", help="Path to the folder to organize")
parser.add_argument("--dry-run", action="store_true", help="Preview without making changes")
args = parser.parse_args()

print(f"Organizing: {args.folder}")
if args.dry_run:
    print("Dry run - no files will be moved.")
Run it like:
python organize.py ~/Downloads --dry-run
python organize.py ~/Downloads
Send a Desktop Notification When Done
On macOS:
import subprocess

def notify(title, message):
    subprocess.run([
        "osascript", "-e",
        f'display notification "{message}" with title "{title}"'
    ])

notify("Script Complete", "Your files have been organized.")
On Linux (requires notify-send):
import subprocess

def notify(title, message):
    subprocess.run(["notify-send", title, message])
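If a script might run on either system, one option is a small wrapper that picks the right command at runtime - a sketch of my own, assuming only macOS and Linux and falling back to a plain print elsewhere:

```python
import platform
import subprocess

def notify(title, message):
    """Send a desktop notification on macOS or Linux; print as a fallback."""
    system = platform.system()
    if system == "Darwin":
        # macOS: use AppleScript via osascript
        cmd = ["osascript", "-e",
               f'display notification "{message}" with title "{title}"']
    elif system == "Linux":
        # Linux: requires notify-send (libnotify)
        cmd = ["notify-send", title, message]
    else:
        print(f"{title}: {message}")
        return
    subprocess.run(cmd)
```

Call sites stay identical everywhere: notify("Script Complete", "All done.").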
Tips for Writing Good Automation Scripts
A few things I've learned the hard way:
Log what you do. Silent scripts that run via cron are impossible to debug without logs. Even a simple print() to a log file helps.
Dry-run mode first. Before any script that moves, renames, or deletes files, add a --dry-run flag that prints what would happen without doing it.
Handle errors gracefully. Wrap the main logic in a try/except and log failures. A crashed cron job that fails silently is worse than no automation at all.
Start small. Automate one specific thing, get it working reliably, then expand. Scripts that try to do everything tend to do nothing well.
The best automation scripts are the ones you forget about - they just quietly do their job while you do yours.