
Photo by author
# Introduction
AI has moved beyond mere chatting. Large language models (LLMs) are being given arms and legs that let them act in the digital world. The result is often called a Python AI agent: an autonomous program powered by an LLM that can observe its environment, make decisions, use external tools (such as APIs or code execution), and take actions to achieve specific goals without constant human intervention.
If you want to experiment with building your own AI agent but feel stifled by complex frameworks, you’re in the right place. Today, we’re going to take a look at smolagents, a powerful yet incredibly simple library developed by Hugging Face.
By the end of this article, you’ll understand what makes smolagents unique, and more importantly, you’ll have a working code agent that can fetch live data from the Internet. Let’s explore the implementation.
# Understanding Code Agents
Before we start coding, let’s understand the concept. An agent is basically an LLM equipped with tools. You give the model a goal (like “get the current weather in London”), and it decides which tools to use to achieve that goal.
What makes agents in the smolagents library special is their approach to reasoning. Unlike many frameworks that generate JSON or text to decide which tool to use, smolagents agents are code agents. This means they write snippets of Python code to combine their tools and logic.
This is powerful because code is precise. It is the most natural way to express complex instructions such as loops, conditionals, and data manipulation. Instead of inventing a custom format for tool calls, the LLM simply writes a Python script. As an open source agent framework, smolagents is transparent, lightweight, and perfect for learning the basics.
// Prerequisites
To follow along, you’ll need:
- Knowledge of Python. You should be comfortable with variables, functions, and pip installs.
- A Hugging Face account. Since we’re using the Hugging Face ecosystem, we’ll use their free Inference API. You can get a token by signing up at huggingface.co and visiting your settings.
- A Google account (optional). If you don’t want to install anything locally, you can run this code in a Google Colab notebook.
# Setting up your environment
Let’s prepare our workspace. Open your terminal or a new Colab notebook and install the library.
```shell
mkdir demo-project
cd demo-project
```

Next, let’s set up our security token. It’s best to store it as an environment variable. If you are using Google Colab, you can use the Secrets tab in the left panel to add `HF_TOKEN` and then access it through `userdata.get('HF_TOKEN')`.
# Creating Your First Agent: The Weather Feature
For our first project, we will create an agent that can retrieve weather data for a specific city. To do this, the agent needs a tool. A tool is simply a function that the LLM can call. We’ll use a free, public API called wttr.in, which returns weather data as plain text or JSON.
// Install and setup
Create a virtual environment:

```shell
python -m venv env
```

A virtual environment isolates your project’s dependencies from your system. Now, let’s activate it.

Windows:

```shell
env\Scripts\activate
```

macOS/Linux:

```shell
source env/bin/activate
```

You will see `(env)` in your terminal prompt when the environment is active.
Install required packages:
```shell
pip install smolagents requests python-dotenv
```

We are installing `smolagents`, Hugging Face’s lightweight agent framework for building AI agents with tool-calling capabilities; `requests`, an HTTP library for making API calls; and `python-dotenv`, which loads environment variables from a `.env` file.
That’s all with just one command. This simplicity is at the core of the smolagents philosophy.

Figure 1: Installing smolagents
// Setting up your API token
Create a `.env` file in the root of your project and add the line below. Replace the placeholder with your actual token, which you can get from huggingface.co/settings/tokens:

```
HF_TOKEN=your_huggingface_token_here
```

Your project structure should look like this:
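For the curious, here is a stdlib-only sketch of roughly what `python-dotenv` does with a simple `KEY=value` file (the real library also handles quoting, comments, variable interpolation, and more; this writes a demo file to a temporary directory rather than touching your project):

```python
import os
import tempfile

# Create a throwaway .env-style file for demonstration purposes.
env_path = os.path.join(tempfile.gettempdir(), "demo.env")
with open(env_path, "w") as f:
    f.write("HF_TOKEN=your_huggingface_token_here\n")

def parse_env_file(path):
    """Parse KEY=value lines, skipping blanks and comments."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                values[key.strip()] = value.strip()
    return values

print(parse_env_file(env_path)["HF_TOKEN"])  # → your_huggingface_token_here
```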
Figure 2: Project structure
// Importing Libraries
Open your `demo.py` file and paste the following code:
```python
import os

import requests
from dotenv import load_dotenv
from smolagents import tool, CodeAgent, InferenceClientModel

load_dotenv()  # read HF_TOKEN from the .env file into the environment
```

- `requests`: makes the HTTP call to the weather API.
- `os`: reads environment variables safely.
- `load_dotenv()` (from `python-dotenv`): loads the `.env` file; without it, `os.getenv("HF_TOKEN")` would not see the token stored there.
- `smolagents`, Hugging Face’s lightweight agent framework, provides:
  - `@tool`: a decorator that defines the agent’s callable functions.
  - `CodeAgent`: an agent that writes and executes Python code.
  - `InferenceClientModel`: connects to LLMs hosted on Hugging Face.
In smolagents, defining a tool is straightforward. We’ll create a function that takes a city name as input and returns the weather state. Add the following code to your demo.py file:
```python
@tool
def get_weather(city: str) -> str:
    """
    Returns the current weather forecast for a specified city.

    Args:
        city: The name of the city to get the weather for.
    """
    # wttr.in is a lovely free weather service; this format string
    # asks for the condition and temperature as plain text.
    response = requests.get(f"https://wttr.in/{city}?format=%C+%t")
    if response.status_code == 200:
        # The response is plain text like "Partly cloudy +15°C"
        return f"The weather in {city} is: {response.text.strip()}"
    else:
        return "Sorry, I couldn't fetch the weather data."
```

Let’s break it down:
- We import the `tool` decorator from smolagents. This decorator turns our regular Python function into a tool that the agent can understand and use.
- The docstring (`""" ... """`) in `get_weather` is important. The agent reads this description to understand what the tool does and how to use it.
- Inside the function, we make a simple HTTP request to wttr.in, a free weather service that returns plain-text forecasts.
- The type annotation (`city: str`) tells the agent what input to provide.
This tool is a great example of what tools are for: we are giving the agent a brand-new capability.
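To see concretely what an agent framework can “read” from your function, here is a plain-Python sketch (no smolagents required) of the metadata available via the docstring and type hints. The function body is stubbed out, since only the signature matters here:

```python
import inspect

def get_weather(city: str) -> str:
    """
    Returns the current weather forecast for a specified city.

    Args:
        city: The name of the city to get the weather for.
    """
    return ""

# A framework can introspect the signature and docstring to build
# the tool description it shows the LLM.
sig = inspect.signature(get_weather)
print(inspect.getdoc(get_weather).splitlines()[0])
print({name: p.annotation for name, p in sig.parameters.items()})
print(sig.return_annotation)
```

This is why the docstring and annotations are not optional niceties: they are the only description of the tool the model ever sees.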
// Setting up the LLM
```python
hf_token = os.getenv("HF_TOKEN")
if hf_token is None:
    raise ValueError("Please set the HF_TOKEN environment variable")

model = InferenceClientModel(
    model_id="Qwen/Qwen2.5-Coder-32B-Instruct",
    token=hf_token
)
```

An agent needs a brain: a large language model that can reason about tasks. Here we use:

- `Qwen/Qwen2.5-Coder-32B-Instruct`: a powerful code-focused model hosted on Hugging Face.
- `HF_TOKEN`: your Hugging Face API token, stored in a `.env` file for security.
Now, we need to create the agent itself.
```python
agent = CodeAgent(
    tools=[get_weather],
    model=model,
    add_base_tools=False
)
```

`CodeAgent` is a special type of agent that:

- Writes Python code to solve problems.
- Runs this code in a sandboxed environment.
- Can chain multiple tool calls together.

Here, we instantiate a `CodeAgent`, passing it a list containing our `get_weather` tool, plus the model object. The `add_base_tools=False` argument tells it not to add any default tools, keeping our agent simple for now.
// Run the agent
This is the interesting part. Let’s give our agent a task. Run the agent with a specified prompt:
```python
response = agent.run(
    "Can you tell me the weather in Paris and also in Tokyo?"
)
print(response)
```

When you call `agent.run()`, the agent:

- Reads your prompt.
- Reasons about which tools it needs.
- Generates code that calls `get_weather("Paris")` and `get_weather("Tokyo")`.
- Executes the code and returns the result.

Figure 3: The smolagents response
When you run this code, you will see the smolagents magic. The agent receives your request and sees that it has a tool called `get_weather`. It then writes a small Python script in its “mind” (using the LLM) that looks something like this:
This is code the agent generates internally, not code you write:

```python
weather_paris = get_weather(city="Paris")
weather_tokyo = get_weather(city="Tokyo")
final_answer(f"Here is the weather: {weather_paris} and {weather_tokyo}")
```
Figure 4: The agent’s final response
It executes this code, receives the data, and returns a friendly response. You’ve just created a code agent that can reach the web through APIs.
// How it works behind the scenes

Figure 5: Inner workings of an AI code agent
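To make the loop in Figure 5 concrete, here is a deliberately simplified, hypothetical sketch of how a code agent can work: the LLM emits a Python snippet, the framework executes it with the tools in scope, and a special `final_answer()` call ends the run. The real smolagents loop is far more sophisticated (prompt construction, output parsing, sandboxing, retries), so treat this purely as an illustration:

```python
def run_code_agent(generate_code, tools, max_steps=5):
    """Toy agent loop: execute LLM-written snippets until final_answer is called."""
    for _ in range(max_steps):
        code = generate_code()                 # stand-in for the LLM call
        scope = dict(tools)                    # tools visible to the snippet
        answer = {}
        scope["final_answer"] = lambda x: answer.setdefault("value", x)
        exec(code, scope)                      # NOTE: real frameworks sandbox this
        if "value" in answer:
            return answer["value"]
    return None

# A fake "LLM" that always writes the same snippet, plus a fake tool:
fake_tools = {"get_weather": lambda city: f"Sunny in {city}"}
snippet = 'final_answer(get_weather("Paris"))'
print(run_code_agent(lambda: snippet, fake_tools))  # → Sunny in Paris
```

The key design point survives the simplification: because the “decision” is a Python snippet, the model can loop, branch, and combine tool results freely instead of emitting one rigid tool call at a time.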
// Taking it further: Adding more tools
An agent’s power grows with its toolkit. What if we want to save the weather report to a file? We can create another tool.
```python
@tool
def save_to_file(content: str, filename: str = "weather_report.txt") -> str:
    """
    Saves the provided text content to a file.

    Args:
        content: The text content to save.
        filename: The name of the file to save to (default: weather_report.txt).
    """
    with open(filename, "w") as f:
        f.write(content)
    return f"Content successfully saved to {filename}"

# Re-initialize the agent with both tools
agent = CodeAgent(
    tools=[get_weather, save_to_file],
    model=model,
)

agent.run("Get the weather for London and save the report to a file called london_weather.txt")
```

Now, your agent can fetch data and interact with your local file system. This combination of skills is what makes Python AI agents so versatile.
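You can also sanity-check the file-saving logic on its own, without the agent or an LLM in the loop. This standalone sketch mirrors the body of the `save_to_file` tool, but writes to a temporary directory so it does not clutter your project:

```python
import os
import tempfile

def save_to_file(content: str, filename: str) -> str:
    """Write text to a file and report where it went (mirrors the tool above)."""
    with open(filename, "w") as f:
        f.write(content)
    return f"Content successfully saved to {filename}"

path = os.path.join(tempfile.gettempdir(), "london_weather.txt")
print(save_to_file("The weather in London is: Light rain +9C", path))
with open(path) as f:
    print(f.read())
```

Testing a tool as a plain function first is good practice: if it misbehaves when called directly, the agent has no chance of using it correctly.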
# Conclusion
In just a few minutes and with less than 20 lines of basic logic, you’ve created a functional AI agent. We’ve seen how smolagents simplifies the process of creating code agents that write and execute Python to solve problems.
The beauty of this open source agent framework is that it removes the boilerplate, allowing you to focus on the fun part: building tools and defining tasks. You’re no longer just chatting with an AI; you’re collaborating with something that can act. And this is just the beginning. Now you can give your agent access to the Internet through search APIs, connect it to a database, or let it control a web browser.
// References and learning resources
Shittu Olomide is a software engineer and technical writer with a knack for simplifying complex concepts and a keen eye for detail, passionate about leveraging modern technology to craft compelling narratives. You can also find Shittu on Twitter.