Python Project Setup 2026: uv + Ruff + Ty + Polars

by SkillAiNest


# Introduction

Setting up a Python project means making dozens of small decisions before you write the first useful line of code. Which environment manager? Which dependency manager? Which formatter? Which linter? Which type checker? And if your project touches data, which library do you start with: pandas, DuckDB, or something newer?

In 2026, that setup might be a lot easier.

For most new projects, the cleanest default stack is:

  • uv for Python installation, environments, dependency management, locking, and running commands.
  • Ruff for linting and formatting.
  • Ty for type checking.
  • Polars for DataFrame work.

This stack is fast, modern, and tightly integrated. Three of the four tools (uv, Ruff, and Ty) come from the same company, Astral, which means they work seamlessly with each other and with your pyproject.toml.

# Understanding why this stack works

Older setups often looked like this:

pyenv + pip + venv + pip-tools or Poetry + Black + isort + Flake8 + mypy + pandas

This worked, but it created significant overlap, incompatibility, and maintenance overhead. You had separate tools for environment setup, dependency locking, formatting, import sorting, linting, and typing. Every new project started with an explosion of choices. The 2026 default stack collapses most of that list. The end result is fewer tools, fewer configuration files, and less friction when onboarding teammates or wiring up continuous integration (CI). Before jumping into setup, let’s take a quick look at what each tool in the 2026 stack does:

  1. uv: This is the foundation of your project setup. It creates projects, manages Python versions, manages dependencies, and runs your code. Instead of manually configuring a virtual environment and installing packages, uv handles the heavy lifting. It keeps your environment consistent using a lock file and makes sure everything is in sync before running any command.
  2. Ruff: This is your all-in-one tool for code quality. It is extremely fast, checks for problems, fixes many of them automatically, and even formats your code. It replaces tools like Black, isort, Flake8, and others.
  3. Ty: This is a new tool for type checking. It catches errors by checking the types in your code and integrates with editors. While newer than tools like mypy or Pyright, it is built for modern workflows.
  4. Polars: This is a modern DataFrame library. It focuses on efficient data processing using lazy execution, which means it optimizes queries before running them. It is faster and more memory-efficient than pandas, especially for large data operations.

# Reviewing the prerequisites

Setup is pretty easy. Here are some things you need to get started:

  • Terminal: macOS Terminal, Windows PowerShell, or any Linux shell.
  • Internet connection: Needed once to download the uv installer and packages.
  • Code editor: VS Code is recommended because it works well with Ruff and Ty, but any editor is fine.
  • Git: Needed for version control; note that uv initializes a Git repository automatically.

That’s it. You do not need Python pre-installed. You do not need pip, venv, pyenv, or conda. uv handles installation and environment management for you.

# Step 1: Installing uv

uv provides a standalone installer that works on macOS, Linux, and Windows without requiring Python or Rust to be present on your machine.

macOS and Linux:

curl -LsSf https://astral.sh/uv/install.sh | sh

Windows PowerShell:

powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

After installation, restart your terminal and verify:

uv --version

Output:

uv 0.8.0 (Homebrew 2025-07-17)

This single binary now replaces pyenv, pip, venv, pip-tools and Poetry’s project management layer.

# Step 2: Create a new project

Go to your project directory and create a new scaffold:

uv init my-project
cd my-project

uv creates a neat starting structure:

my-project/
├── .python-version
├── pyproject.toml
├── README.md
└── main.py

Convert it to a src/ layout, which improves imports, packaging, test isolation, and type checker configuration:

mkdir -p src/my_project tests data/raw data/processed
mv main.py src/my_project/main.py
touch src/my_project/__init__.py tests/test_main.py

Your structure should now look like this:

my-project/
├── .python-version
├── README.md
├── pyproject.toml
├── uv.lock
├── src/
│   └── my_project/
│       ├── __init__.py
│       └── main.py
├── tests/
│   └── test_main.py
└── data/
    ├── raw/
    └── processed/

If you need a specific version (e.g., 3.12), uv can install and pin it:

uv python install 3.12
uv python pin 3.12

The pin command writes the version to .python-version, ensuring that each team member uses the same interpreter.

# Step 3: Adding Dependencies

Adding a dependency is a single command that resolves, installs, and locks in one step:

uv add polars

uv automatically creates a virtual environment (.venv/) if none exists, resolves the dependency tree, installs packages, and updates uv.lock with exact pinned versions.

For tools needed during development only, use --dev flag:

uv add --dev ruff ty pytest

This keeps them in a separate [dependency-groups] section of pyproject.toml, keeping production dependencies lean. You never have to run source .venv/bin/activate; when you use uv run, it automatically uses the correct environment.
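With the command above, uv records the tools under a dedicated section in pyproject.toml; the result looks roughly like this (versions are illustrative):

```toml
[dependency-groups]
dev = [
    "pytest>=9.0.2",
    "ruff>=0.15.8",
    "ty>=0.0.26",
]
```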

# Step 4: Configuring Ruff (Linting and Formatting)

Ruff is configured directly inside your pyproject.toml. Add the following sections:

[tool.ruff]
line-length = 100
target-version = "py312"

[tool.ruff.lint]
select = ["E4", "E7", "E9", "F", "B", "I", "UP"]

[tool.ruff.format]
docstring-code-format = true
quote-style = "double"

A line length of 100 characters is a good compromise for modern screens. The rule groups flake8-bugbear (B), isort (I), and pyupgrade (UP) add real value without overwhelming a new codebase.

Running Ruff:

# Lint your code
uv run ruff check .

# Auto-fix issues where possible
uv run ruff check --fix .

# Format your code
uv run ruff format .

Note the pattern: uv run <tool>. You never install tools globally or activate environments manually.

# Step 5: Configuring Ty for Type Checking

Ty is also configured in pyproject.toml. Add these sections:

[tool.ty.environment]
root = ["./src"]

[tool.ty.rules]
all = "warn"

[[tool.ty.overrides]]
include = ["src/**"]

[tool.ty.overrides.rules]
possibly-unresolved-reference = "error"

[tool.ty.terminal]
error-on-warning = false
output-format = "full"

This configuration starts Ty in warning mode, which is ideal for gradual adoption. You fix the obvious problems first, then gradually promote rules from warnings to errors. Scoping the overrides to src/** keeps type-checker noise out of non-code directories like data/.
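To see what the possibly-unresolved-reference rule promoted above catches, consider this hypothetical function (invented for illustration):

```python
def describe(score: int) -> str:
    """Return a label for a score."""
    # Buggy variant: binding `label` only inside `if score >= 90:` and then
    # returning it would leave `label` possibly unbound on other paths --
    # exactly what `possibly-unresolved-reference` reports. Binding it on
    # every path fixes the warning:
    label = "excellent" if score >= 90 else "needs work"
    return label

print(describe(95))  # excellent
print(describe(40))  # needs work
```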

# Step 6: Configuring pytest

Add a section for pytest:

[tool.pytest.ini_options]
testpaths = ["tests"]

Run your test suite with:

uv run pytest

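Here is a minimal sketch of tests/test_main.py for pytest to discover (the revenue_per_user helper is hypothetical; in a real project you would import functions from my_project):

```python
"""Sample tests discovered via testpaths = ["tests"]."""

def revenue_per_user(revenue: float, users: int) -> float:
    # Hypothetical helper standing in for real project code.
    if users <= 0:
        raise ValueError("users must be positive")
    return round(revenue / users, 2)

def test_revenue_per_user() -> None:
    assert revenue_per_user(12000, 120) == 100.0

def test_rejects_nonpositive_users() -> None:
    try:
        revenue_per_user(100, 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```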
# Step 7: Testing the Complete pyproject.toml

Here’s what your final configuration looks like: one file, every tool configured, no scattered configuration files:

[project]
name = "my-project"
version = "0.1.0"
description = "Modern Python project with uv, Ruff, Ty, and Polars"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "polars>=1.39.3",
]

[dependency-groups]
dev = [
    "pytest>=9.0.2",
    "ruff>=0.15.8",
    "ty>=0.0.26",
]

[tool.ruff]
line-length = 100
target-version = "py312"

[tool.ruff.lint]
select = ["E4", "E7", "E9", "F", "B", "I", "UP"]

[tool.ruff.format]
docstring-code-format = true
quote-style = "double"

[tool.ty.environment]
root = ["./src"]

[tool.ty.rules]
all = "warn"

[[tool.ty.overrides]]
include = ["src/**"]

[tool.ty.overrides.rules]
possibly-unresolved-reference = "error"

[tool.ty.terminal]
error-on-warning = false
output-format = "full"

[tool.pytest.ini_options]
testpaths = ["tests"]

# Step 8: Writing Code with Polars

Replace the contents of src/my_project/main.py with this code, which exercises the Polars side of the stack:

"""Sample data analysis with Polars."""

import polars as pl

def build_report(path: str) -> pl.DataFrame:
    """Build a revenue summary from raw data using the lazy API."""
    q = (
        pl.scan_csv(path)
        .filter(pl.col("status") == "active")
        .with_columns(
            (pl.col("revenue") / pl.col("users")).alias("rpu")
        )
        .group_by("segment")
        .agg(
            pl.len().alias("rows"),
            pl.col("revenue").sum().alias("revenue"),
            pl.col("rpu").mean().alias("avg_rpu"),
        )
        .sort("revenue", descending=True)
    )
    return q.collect()

def main() -> None:
    """Entry point with sample in-memory data."""
    df = pl.DataFrame(
        {
            "segment": ["Enterprise", "SMB", "Enterprise", "SMB", "Enterprise"],
            "status": ["active", "active", "churned", "active", "active"],
            "revenue": [12000, 3500, 8000, 4200, 15000],
            "users": [120, 70, 80, 84, 150],
        }
    )

    summary = (
        df.lazy()
        .filter(pl.col("status") == "active")
        .with_columns(
            (pl.col("revenue") / pl.col("users")).round(2).alias("rpu")
        )
        .group_by("segment")
        .agg(
            pl.len().alias("rows"),
            pl.col("revenue").sum().alias("total_revenue"),
            pl.col("rpu").mean().round(2).alias("avg_rpu"),
        )
        .sort("total_revenue", descending=True)
        .collect()
    )

    print("Revenue Summary:")
    print(summary)

if __name__ == "__main__":
    main()

Before running, you need a build system in pyproject.toml so uv installs your project as a package. We will use Hatchling:

cat >> pyproject.toml << 'EOF'

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src/my_project"]
EOF

Then sync and run:

uv sync
uv run python -m my_project.main

You should see the formatted Polars table:

Revenue Summary:
shape: (2, 4)
┌────────────┬──────┬───────────────┬─────────┐
│ segment    ┆ rows ┆ total_revenue ┆ avg_rpu │
│ ---        ┆ ---  ┆ ---           ┆ ---     │
│ str        ┆ u32  ┆ i64           ┆ f64     │
╞════════════╪══════╪═══════════════╪═════════╡
│ Enterprise ┆ 2    ┆ 27000         ┆ 100.0   │
│ SMB        ┆ 2    ┆ 7700          ┆ 50.0    │
└────────────┴──────┴───────────────┴─────────┘

# Managing daily workflow

Once the project is set up, the daily loop is straightforward:

# Pull latest, sync dependencies
git pull
uv sync

# Write code...

# Before committing: lint, format, type-check, test
uv run ruff check --fix .
uv run ruff format .
uv run ty check
uv run pytest

# Commit
git add .
git commit -m "feat: add revenue report module"

# Changing the way you write Python with Polars

The biggest mental shift in this stack is on the data side. With Polars, your defaults should be:

  • Expressions over row operations. Polars expressions let the engine vectorize and parallelize work. Avoid user-defined functions (UDFs) when a native expression exists, as UDFs are significantly slower.
  • Lazy execution over eager loading. Use scan_csv() instead of read_csv(). This returns a LazyFrame, which builds a query plan and allows the optimizer to push down filters and eliminate unused columns.
  • Parquet-first workflows over CSV-heavy pipelines. A good model for internal data preparation is to ingest CSV once, convert it to Parquet, and have every downstream job scan the Parquet.

# Assessing when this setup is not the best fit

You may want a different stack if:

  • Your team has a mature mypy workflow that is working well.
  • Your codebase depends heavily on pandas-specific APIs or ecosystem libraries.
  • Your organization is standardized on Pyright.
  • You’re working in a legacy repository where changing tools will cause more disruption than value.

# Applying pro tips

  1. Never activate virtual environments manually. Use uv run for everything to make sure you are in the right environment.
  2. Always commit uv.lock to version control. This ensures the project runs identically on every machine.
  3. Use --frozen in CI. It installs dependencies exactly from the lock file for faster, more reliable builds.
  4. Use uvx for one-off tools. It runs tools without installing them in your project.
  5. Use Ruff’s --fix flag freely. It can automatically fix unused imports, outdated syntax, and more.
  6. Prefer the lazy API by default. Use scan_csv() and only call .collect() at the end.
  7. Centralize configuration. Use pyproject.toml as the single source of truth for all tools.

# Concluding thoughts

The 2026 default Python stack reduces setup effort and encourages better practices: locked environments, a single configuration file, fast feedback, and optimized data pipelines. Try it on your next project; once you experience the speed, you’ll understand why developers are making the switch.

Kanwal Mehreen is a machine learning engineer and a technical writer with a deep passion for AI along with data science and medicine. She co-authored the e-book “Maximizing Productivity with ChatGPT”. As a Google Generation Scholar 2022 for APAC, she is a champion of diversity and academic excellence. She is also recognized as a Teradata Diversity in Tech Scholar, a Mitacs Globalink Research Scholar, and a Harvard WeCode Scholar. Kanwal is a passionate advocate for change, having founded FEMCodes to empower women in STEM fields.
