Complete Guide to Logging for Python Developers

by SkillAiNest


# Introduction

Most Python developers treat logging as an afterthought. They scatter print() statements around during development, maybe graduate to basic logging calls later, and assume that's enough. But when problems arise in production, they discover they lack the context needed to diagnose issues effectively.

Proper logging gives you visibility into application behavior, performance patterns, and error conditions. With the right approach, you can track user actions, identify bottlenecks, and debug issues without having to reproduce them locally. Good logging turns debugging from guesswork into systematic problem solving.

This article covers essential logging patterns that Python developers can use. You'll learn how to structure log messages for searchability, handle exceptions without losing context, and configure logging for different environments. We'll start with the basics and work up to more advanced logging strategies you can use in projects right now, using only the standard library's logging module.

You can find the code on GitHub.

# Setting up your first logger

Instead of jumping straight into complex configurations, let us understand what a logger actually does. We’ll create a basic logger that writes both to the console and to a file.

import logging

# Create a named logger and let it process all levels
logger = logging.getLogger('my_app')
logger.setLevel(logging.DEBUG)

# Console output: INFO and above
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)

# File output: everything, including DEBUG
file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)

# One shared format for both destinations
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
console_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)

logger.addHandler(console_handler)
logger.addHandler(file_handler)

logger.debug('This is a debug message')
logger.info('Application started')
logger.warning('Disk space running low')
logger.error('Failed to connect to database')
logger.critical('System shutting down')

Here's what each piece of the code does.

The getLogger() function creates a named logger instance. Think of it as creating a channel for your logs. The 'my_app' name helps you identify where log messages come from in larger applications.

We set the logger's level to DEBUG, which means it will process all messages. Then we create two handlers: one for console output and one for file output. Handlers control where log messages go.

The console handler only displays INFO level and above, while the file handler captures everything, including DEBUG messages. This is useful because you want detailed logs in files but cleaner output on the screen.

The formatter determines how your log messages look. Format strings use placeholders such as %(asctime)s for timestamps and %(levelname)s for the severity level.
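
For reference, with the formatter above, a call like logger.info('Application started') produces a line along these lines (your timestamp will differ):

2025-06-10 14:32:01,234 - my_app - INFO - Application started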

# Understanding log levels and when to use each

Python's logging module has five standard levels, and knowing when to use each is important for producing useful logs.

Here is an example:

import logging

logger = logging.getLogger('payment_processor')
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger.addHandler(handler)

def process_payment(user_id, amount):
    logger.debug(f'Starting payment processing for user {user_id}')

    if amount <= 0:
        logger.error(f'Invalid payment amount: {amount}')
        return False

    logger.info(f'Processing ${amount} payment for user {user_id}')

    if amount > 10000:
        logger.warning(f'Large transaction detected: ${amount}')

    try:
        # Simulate payment processing
        success = charge_card(user_id, amount)
        if success:
            logger.info(f'Payment successful for user {user_id}')
            return True
        else:
            logger.error(f'Payment failed for user {user_id}')
            return False
    except Exception as e:
        logger.critical(f'Payment system crashed: {e}', exc_info=True)
        return False

def charge_card(user_id, amount):
    # Simulated payment logic
    return True

process_payment(12345, 150.00)
process_payment(12345, 15000.00)

Let's break down when to use each level:

  • DEBUG is for detailed diagnostic information during development. Use it to trace variable values, loop iterations, or step-by-step execution. These messages are usually disabled in production.
  • INFO marks normal operations you want a record of. Starting a server, completing a task, or a successful transaction goes here. It confirms that your application is working as expected.
  • WARNING indicates something unexpected but not breaking. This includes low disk space, use of deprecated APIs, or unusual but manageable situations. The application keeps running, but someone should investigate.
  • ERROR means an operation failed but the application can continue. Failed database queries, validation errors, or network timeouts belong here.
  • CRITICAL indicates severe problems that may crash the application or lose data. Use it sparingly, for catastrophic failures that require immediate attention.

When you run the code above, you get:

DEBUG: Starting payment processing for user 12345
INFO: Processing $150.0 payment for user 12345
INFO: Payment successful for user 12345
DEBUG: Starting payment processing for user 12345
INFO: Processing $15000.0 payment for user 12345
WARNING: Large transaction detected: $15000.0
INFO: Payment successful for user 12345

Next, let's look at how to log exceptions properly.

# Logging exceptions correctly

When exceptions occur, you need more than just an error message; you need the stack trace. Here's how to log exceptions effectively.

import json
import logging

logger = logging.getLogger('api_handler')
logger.setLevel(logging.DEBUG)

handler = logging.FileHandler('errors.log')
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
handler.setFormatter(formatter)
logger.addHandler(handler)

def fetch_user_data(user_id):
    logger.info(f'Fetching data for user {user_id}')

    try:
        # Simulate API call
        response = call_external_api(user_id)
        data = json.loads(response)
        logger.debug(f'Received data: {data}')
        return data
    except json.JSONDecodeError as e:
        logger.error(
            f'Failed to parse JSON for user {user_id}: {e}',
            exc_info=True
        )
        return None
    except ConnectionError as e:
        logger.error(
            f'Network error while fetching user {user_id}',
            exc_info=True
        )
        return None
    except Exception as e:
        logger.critical(
            f'Unexpected error in fetch_user_data: {e}',
            exc_info=True
        )
        raise

def call_external_api(user_id):
    # Simulated API response
    return '{"id": ' + str(user_id) + ', "name": "John"}'

fetch_user_data(123)

The key here is exc_info=True. This parameter tells the logger to include the full exception traceback in your logs. Without it, you get just the error message, which is often not enough to debug the problem.
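
For example, if call_external_api returned malformed JSON, the errors.log entry would look roughly like this (the file name, paths, and line numbers here are illustrative and will vary):

2025-06-10 14:32:01,234 - api_handler - ERROR - Failed to parse JSON for user 123: Expecting value: line 1 column 1 (char 0)
Traceback (most recent call last):
  File "api_handler.py", line 12, in fetch_user_data
    data = json.loads(response)
  ...
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)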

Notice how we catch specific exceptions first, then fall back to a general Exception handler. The specific handlers let us log context-appropriate error messages. The general handler catches anything unexpected and re-raises it, because we don't know how to handle it safely.

Also notice that we log at ERROR for expected failures (such as network errors) but at CRITICAL for unexpected ones. This distinction helps you prioritize when reviewing logs.
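
As a side note, the logging module also provides logger.exception(), a shorthand for logger.error(..., exc_info=True). A minimal sketch:

import json
import logging

logger = logging.getLogger('api_handler')

try:
    json.loads('not valid json')
except json.JSONDecodeError:
    # logger.exception() always logs at ERROR level and attaches
    # the current traceback; only call it from an except block
    logger.exception('Failed to parse JSON')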

# Creating a reusable log configuration

Copying logger setup code into every file is tedious and error-prone. Let's create a configuration function that you can import anywhere in your project.

# logger_config.py

import logging
import os
from datetime import datetime


def setup_logger(name, log_dir="logs", level=logging.INFO):
    """
    Create a configured logger instance

    Args:
        name: Logger name (usually __name__ from calling module)
        log_dir: Directory to store log files
        level: Minimum logging level

    Returns:
        Configured logger instance
    """
    # Create logs directory if it doesn't exist

    if not os.path.exists(log_dir):
        os.makedirs(log_dir)
    logger = logging.getLogger(name)

    # Avoid adding handlers multiple times

    if logger.handlers:
        return logger
    logger.setLevel(level)

    # Console handler - INFO and above

    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    console_format = logging.Formatter("%(levelname)s - %(name)s - %(message)s")
    console_handler.setFormatter(console_format)

    # File handler - everything

    log_filename = os.path.join(
        log_dir, f"{name.replace('.', '_')}_{datetime.now().strftime('%Y%m%d')}.log"
    )
    file_handler = logging.FileHandler(log_filename)
    file_handler.setLevel(logging.DEBUG)
    file_format = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"
    )
    file_handler.setFormatter(file_format)

    logger.addHandler(console_handler)
    logger.addHandler(file_handler)

    return logger

With logger_config in place, you can use it in any Python script like this:

from logger_config import setup_logger

logger = setup_logger(__name__)

def calculate_discount(price, discount_percent):
    logger.debug(f'Calculating discount: {price} * {discount_percent}%')
    
    if discount_percent < 0 or discount_percent > 100:
        logger.warning(f'Invalid discount percentage: {discount_percent}')
        discount_percent = max(0, min(100, discount_percent))
    
    discount = price * (discount_percent / 100)
    final_price = price - discount
    
    logger.info(f'Applied {discount_percent}% discount: ${price} -> ${final_price}')
    return final_price

calculate_discount(100, 20)
calculate_discount(100, 150)

This setup function handles several important things. First, it creates the log directory if needed, which prevents crashes caused by missing directories.

The function checks whether handlers already exist before adding new ones. Without this check, calling setup_logger multiple times would attach duplicate handlers and produce duplicate log entries.
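
To see the guard in action, here is a quick sketch using the setup_logger above (the 'billing' name is just an example):

from logger_config import setup_logger

log_a = setup_logger('billing')
log_b = setup_logger('billing')  # returns the existing logger unchanged

# Both names refer to the same logger, still with exactly two handlers
assert log_a is log_b
assert len(log_a.handlers) == 2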

We automatically generate dated log filenames. This prevents log files from growing indefinitely and makes it easy to find logs from specific dates.

The file handler records more detail than the console handler, including function names and line numbers. This is invaluable when debugging but would clutter console output.

Using __name__ as the logger name creates a hierarchy that matches your module structure. This lets you control logging independently for specific parts of your application.
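
For instance, dots in logger names create parent-child relationships, so you can quiet one subsystem without touching the rest. A sketch with hypothetical logger names:

import logging

# 'myapp.db' is a child of 'myapp'; children inherit levels from parents
logging.getLogger('myapp').setLevel(logging.INFO)
logging.getLogger('myapp.db').setLevel(logging.WARNING)  # silence noisy DB logs

db_logger = logging.getLogger('myapp.db.queries')  # inherits 'myapp.db' level
db_logger.info('This is filtered out')    # below WARNING, dropped
db_logger.warning('Slow query detected')  # this gets through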

# Structured logging with context

Plain text logs are fine for simple applications, but structured logs with context make debugging much easier. Let’s add contextual information to our logs.

import json
import logging
from datetime import datetime, timezone

class ContextLogger:
    """Logger wrapper that adds contextual information to all log messages"""

    def __init__(self, name, context=None):
        self.logger = logging.getLogger(name)
        self.context = context or {}

        handler = logging.StreamHandler()
        formatter = logging.Formatter('%(message)s')
        handler.setFormatter(formatter)
        # Check if handler already exists to avoid duplicate handlers
        if not any(isinstance(h, logging.StreamHandler) and getattr(h.formatter, '_fmt', None) == '%(message)s' for h in self.logger.handlers):
            self.logger.addHandler(handler)
        self.logger.setLevel(logging.DEBUG)

    def _format_message(self, message, level, extra_context=None):
        """Format message with context as JSON"""
        log_data = {
            'timestamp': datetime.now(timezone.utc).isoformat(),
            'level': level,
            'message': message,
            'context': {**self.context, **(extra_context or {})}
        }
        return json.dumps(log_data)

    def debug(self, message, **kwargs):
        self.logger.debug(self._format_message(message, 'DEBUG', kwargs))

    def info(self, message, **kwargs):
        self.logger.info(self._format_message(message, 'INFO', kwargs))

    def warning(self, message, **kwargs):
        self.logger.warning(self._format_message(message, 'WARNING', kwargs))

    def error(self, message, **kwargs):
        self.logger.error(self._format_message(message, 'ERROR', kwargs))

You can use ContextLogger like this:

def process_order(order_id, user_id):
    logger = ContextLogger(__name__, context={
        'order_id': order_id,
        'user_id': user_id
    })

    logger.info('Order processing started')

    try:
        items = fetch_order_items(order_id)
        logger.info('Items fetched', item_count=len(items))

        total = calculate_total(items)
        logger.info('Total calculated', total=total)

        if total > 1000:
            logger.warning('High value order', total=total, flagged=True)

        return True
    except Exception as e:
        logger.error('Order processing failed', error=str(e))
        return False

def fetch_order_items(order_id):
    return ({'id': 1, 'price': 50}, {'id': 2, 'price': 75})

def calculate_total(items):
    return sum(item['price'] for item in items)

process_order('ORD-12345', 'USER-789')

The ContextLogger wrapper does something useful: it automatically attaches context to every log message. The order_id and user_id appear in every log entry without being repeated in each logging call.

The JSON format makes these logs easy to parse and search.

The **kwargs on each logging method lets you attach extra context to individual messages. The wrapper merges the global context (order_id, user_id) with the local context (item_count, total) automatically.

This pattern is particularly useful in web applications where you want request IDs, user IDs, or session IDs in every log message.
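
Running process_order above, the 'Items fetched' entry comes out as a single JSON object like this (the timestamp will differ):

{"timestamp": "2025-06-10T14:32:01.234567+00:00", "level": "INFO", "message": "Items fetched", "context": {"order_id": "ORD-12345", "user_id": "USER-789", "item_count": 2}}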

# Rotating log files to prevent disk space issues

Log files grow quickly in production. Without rotation, they will eventually fill up your disk. Here’s how to implement automatic log rotation.

import logging
from logging.handlers import RotatingFileHandler, TimedRotatingFileHandler

def setup_rotating_logger(name):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)

    # Size-based rotation: rotate when file reaches 10MB
    size_handler = RotatingFileHandler(
        'app_size_rotation.log',
        maxBytes=10 * 1024 * 1024,  # 10 MB
        backupCount=5  # Keep 5 old files
    )
    size_handler.setLevel(logging.DEBUG)

    # Time-based rotation: rotate daily at midnight
    time_handler = TimedRotatingFileHandler(
        'app_time_rotation.log',
        when='midnight',
        interval=1,
        backupCount=7  # Keep 7 days
    )
    time_handler.setLevel(logging.INFO)

    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )
    size_handler.setFormatter(formatter)
    time_handler.setFormatter(formatter)

    logger.addHandler(size_handler)
    logger.addHandler(time_handler)

    return logger


logger = setup_rotating_logger('rotating_app')

Let's generate some log traffic to see rotation in action:

for i in range(1000):
    logger.info(f'Processing record {i}')
    logger.debug(f'Record {i} details: completed in {i * 0.1}ms')

RotatingFileHandler rotates based on file size. When the log file reaches 10 MB (specified in bytes), it is renamed to app_size_rotation.log.1 and a fresh app_size_rotation.log is started. A backupCount of 5 means the five most recent backup files are kept; anything older is deleted.
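
After a few rotations, the log directory would hold the active file plus up to five numbered backups, with .1 being the most recent:

app_size_rotation.log      # active file
app_size_rotation.log.1    # most recent backup
app_size_rotation.log.2
...
app_size_rotation.log.5    # oldest backup, deleted on the next rotation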

TimedRotatingFileHandler rotates based on time intervals. when='midnight' means it starts a new log file every day at midnight. You can also use 'H' for hourly rotation, 'D' for daily rotation at any time, or 'W0' for weekly rotation on Mondays.

The interval parameter works together with when: with when='H' and interval=6, logs rotate every six hours.
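
For example, a handler that rotates every six hours and keeps four backups (roughly one day of history) would look like this (the filename is just an example):

from logging.handlers import TimedRotatingFileHandler

# Rotate every 6 hours; keep 4 old files
six_hour_handler = TimedRotatingFileHandler(
    'app_6h_rotation.log',
    when='H',
    interval=6,
    backupCount=4,
)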

These handlers are essential in production environments. Without them, your application may crash when logs fill up the disk.

# Logging in different environments

Your logging needs differ between development, staging and production. Here’s how to configure logging that suits each environment.

import logging
import logging.handlers
import os

def configure_environment_logger(app_name):
    """Configure logger based on environment"""
    environment = os.getenv('APP_ENV', 'development')
    
    logger = logging.getLogger(app_name)
    
    # Clear existing handlers
    logger.handlers.clear()
    
    if environment == 'development':
        # Development: verbose console output
        logger.setLevel(logging.DEBUG)
        handler = logging.StreamHandler()
        handler.setLevel(logging.DEBUG)
        formatter = logging.Formatter(
            '%(levelname)s - %(name)s - %(funcName)s:%(lineno)d - %(message)s'
        )
        handler.setFormatter(formatter)
        logger.addHandler(handler)
        
    elif environment == 'staging':
        # Staging: detailed file logs + important console messages
        logger.setLevel(logging.DEBUG)
        
        file_handler = logging.FileHandler('staging.log')
        file_handler.setLevel(logging.DEBUG)
        file_formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(funcName)s - %(message)s'
        )
        file_handler.setFormatter(file_formatter)
        
        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.WARNING)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)
        
        logger.addHandler(file_handler)
        logger.addHandler(console_handler)
        
    elif environment == 'production':
        # Production: structured logs, errors only to console
        logger.setLevel(logging.INFO)
        
        file_handler = logging.handlers.RotatingFileHandler(
            'production.log',
            maxBytes=50 * 1024 * 1024,  # 50 MB
            backupCount=10
        )
        file_handler.setLevel(logging.INFO)
        file_formatter = logging.Formatter(
            '{"timestamp": "%(asctime)s", "level": "%(levelname)s", '
            '"logger": "%(name)s", "message": "%(message)s"}'
        )
        file_handler.setFormatter(file_formatter)
        
        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.ERROR)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)
        
        logger.addHandler(file_handler)
        logger.addHandler(console_handler)
    
    return logger

This environment-based configuration handles each stage differently. Development prints everything to the console with detailed information, including function names and line numbers, which makes debugging faster.

Staging balances the two. It writes detailed logs to a file for investigation but shows only warnings and errors on the console to avoid noise.

Production focuses on performance and structure. It logs only INFO level and above to file, uses JSON formatting for easy parsing, and rotates logs to manage disk space. Console output is limited to errors.

# Set environment variable (normally done by deployment system)
os.environ['APP_ENV'] = 'production'

logger = configure_environment_logger('my_application')

logger.debug('This debug message won\'t appear in production')
logger.info('User logged in successfully')
logger.error('Failed to process payment')
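
With APP_ENV set to production, only the error reaches the console, while production.log receives JSON-formatted entries, roughly:

ERROR: Failed to process payment

# production.log
{"timestamp": "2025-06-10 14:32:01,234", "level": "INFO", "logger": "my_application", "message": "User logged in successfully"}
{"timestamp": "2025-06-10 14:32:01,235", "level": "ERROR", "logger": "my_application", "message": "Failed to process payment"}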

The environment is determined by the APP_ENV environment variable. Your deployment system (Docker, Kubernetes, or another cloud platform) typically sets this variable automatically.

Notice how we clear existing handlers before configuring new ones. This prevents duplicate handlers if the function is called multiple times during the application's lifecycle.

# Wrapping up

Good logging can make the difference between diagnosing problems early and spending hours trying to figure out what went wrong. Start with basic logging using the appropriate severity level, add structured context to make logs searchable, and configure rotation to prevent disk space problems.

The patterns shown here work for applications of any size. Start simple with basic logging, add structured logging when you need better searchability, and implement environment-specific configuration when you deploy to production.

Happy logging!

Bala Priya C is a developer and technical writer from India. She loves working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she is learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.
