Artificial intelligence is changing how we build software. Just a few years ago, writing code that could talk, make decisions, or manipulate external data was a challenge.
Today, thanks to new tools, developers can build smart agents that read messages, reason about them, and call tools on their own.
One framework that makes this easy is LangChain. With LangChain, you can connect language models, tools, and apps together. You can also wrap your agent inside a FastAPI server, then push it to a cloud platform for deployment.
This article walks you through building your first AI agent. You will learn what LangChain is, how to build an agent, how to serve it with FastAPI, and how to deploy it on Sevalla.
What we will cover
What is LangChain?
LangChain is a framework for working with large language models. It helps you build apps that think, reason, and act.

On its own, a model only responds with text, but LangChain lets it do more. It allows a model to call functions, use tools, connect to databases, and follow workflows.
Think of LangChain as a bridge. On one side is the language model. On the other side are your tools, data sources, and business logic. LangChain tells the model what tools are available, when to use them, and how to respond. This makes it ideal for building agents that answer questions, automate tasks, or handle complex flows.
Many developers use LangChain because it is flexible. It supports many AI models, and it fits well with Python.
LangChain also makes it easy to move from prototype to production. Once you learn how to create an agent, you can reuse the pattern for increasingly advanced use cases.
I recently published a detailed LangChain tutorial here.
How to create your first agent with LangChain
Let’s create our first agent. It will answer user queries and call a tool when needed.
We’ll give it a simple weather tool, then ask it about the weather in a city. First, create a file called .env and add your OpenAI API key. LangChain will automatically use this key when making requests to OpenAI.
OPENAI_API_KEY=
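To quickly confirm that the key loads, you can run a small check like this (a minimal sketch, assuming the python-dotenv package is installed and the .env file sits next to your script):
from dotenv import load_dotenv
import os

load_dotenv()  # reads .env and copies its values into the environment
print("OPENAI_API_KEY loaded:", os.getenv("OPENAI_API_KEY") is not None)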
Here is the code for our agent:
from langchain.agents import create_agent
from dotenv import load_dotenv
load_dotenv()
def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_agent(
    model="gpt-4o",
    tools=[get_weather],  # tools are passed as a list
    system_prompt="You are a helpful assistant",
)

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What is the weather in San Francisco?"}]}
)
This small program demonstrates the power of LangChain agents.
First, we import create_agent, which helps us build the agent. Then we write a function called get_weather. It takes the name of a city and returns a friendly string.
This function acts as our tool. A tool is something the agent can use. In real projects, tools might fetch data, store records, or call external APIs.
Next, we call create_agent. We give it three things: the model we want to use, the list of tools, and a system prompt. The system prompt tells the agent who it is and how it should behave.
Finally, we run the agent. We call invoke with a message.
The user asks for the weather in San Francisco. The agent reads this message, sees that the query requires the weather tool, calls get_weather with the city, and returns the answer.
Although this example is short, it captures the main idea. The agent reads the natural language, determines which tool to use, and sends a response.
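If you want to see the answer, you can pull the last message out of the result. This is a minimal sketch, assuming the result is a dictionary with a "messages" list, as used in the FastAPI example later in this article:
# the agent's reply is the last message in the returned list
print(result["messages"][-1].content)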
Later, you can add more tools or replace the weather function with one that connects to a real API, as sketched below. But this is enough for us to wrap and deploy.
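For example, you could swap the stub for a tool that calls a live weather service. Here is a rough sketch using the free Open-Meteo API; the endpoints and response fields are assumptions you should verify against the Open-Meteo documentation, and it requires the requests package:
import requests

def get_weather(city: str) -> str:
    """Get the current temperature for a city (sketch; verify the Open-Meteo endpoints)."""
    # look up the city's coordinates with Open-Meteo's geocoding endpoint
    geo = requests.get(
        "https://geocoding-api.open-meteo.com/v1/search",
        params={"name": city, "count": 1},
        timeout=10,
    ).json()
    if not geo.get("results"):
        return f"Sorry, I couldn't find {city}."
    place = geo["results"][0]
    # fetch the current weather for those coordinates
    weather = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={
            "latitude": place["latitude"],
            "longitude": place["longitude"],
            "current_weather": True,
        },
        timeout=10,
    ).json()
    temp = weather["current_weather"]["temperature"]
    return f"The current temperature in {city} is {temp}°C."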
Wrapping your agent with FastAPI
The next step is to serve our agent. FastAPI helps us expose the agent through an HTTP endpoint, so users and other systems can call it via a URL, send messages, and receive responses.
To get started, install FastAPI and create a simple file called main.py. Inside it, you import FastAPI, load the agent, and write a route.
When someone posts a query, the API sends it to the agent and returns a response. The flow is simple.
The user talks to FastAPI. FastAPI talks to your agent. The agent thinks and responds. Here’s a quick API wrapper for your agent.
from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn
from langchain.agents import create_agent
from dotenv import load_dotenv
import os
load_dotenv()
def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_agent(
    model="gpt-4o",
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

@app.get("/")
def root():
    return {"message": "Welcome to your first agent"}

@app.post("/chat")
def chat(request: ChatRequest):
    result = agent.invoke(
        {"messages": [{"role": "user", "content": request.message}]}
    )
    # the agent's reply is the last message in the returned list
    return {"reply": result["messages"][-1].content}

def main():
    port = int(os.getenv("PORT", 8000))
    uvicorn.run(app, host="0.0.0.0", port=port)

if __name__ == "__main__":
    main()
Here, FastAPI exposes a single /chat endpoint. When a message is posted to it, the server calls our agent. The agent processes it as before, and FastAPI returns a clean JSON response. The API layer hides the complexity behind a simple interface.
At this point, you have a working agent server. You can run it on your machine, call it with Postman or curl, and check the responses. When it works, you’re ready to deploy.
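For example, here is a small test script using the requests library (a sketch that assumes the server is running locally on port 8000 with the /chat route shown above):
import requests

# send a message to the local /chat endpoint and print the agent's reply
response = requests.post(
    "http://localhost:8000/chat",
    json={"message": "What is the weather in San Francisco?"},
    timeout=60,
)
print(response.json()["reply"])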

How to deploy your AI agent on Sevalla
You can choose any cloud provider, such as AWS, DigitalOcean, or others, to host your agent. I’ll use Sevalla for this example.
Sevalla is a developer-friendly PaaS provider. It offers application hosting, databases, object storage, and static site hosting for your projects.
Most platforms will charge you to create cloud resources. Sevalla comes with a $50 credit, so we won’t incur any costs for this example.
Let’s push this project to GitHub so that we can connect our repository to Sevalla. We can also enable auto-deployments so that any new changes to the repository are deployed automatically.
You can also fork my repository from here.
Log in to Sevalla and click Applications -> Create New Application. You will see the option to link your GitHub repository to create a new application.

Use the default settings and click on “Create Application”. Now we need to add our OpenAI API key to the environment variables. Once the application is created, open the “Environment Variables” section and save your OPENAI_API_KEY value as an environment variable.

Now we are ready to deploy our application. Click on “Deploy”, then “Deploy Now”. The deployment will take 2–3 minutes to complete.

Once done, click on “View App”. You will see your app’s URL ending with sevalla.app. This is your new root URL. You can replace localhost:8000 with this URL and test it in Postman.

Congratulations! Your first AI agent with tool calling is now live. You can extend it by adding more tools and capabilities. Push your changes to GitHub, and Sevalla will automatically deploy your application to production.
Conclusion
Building AI agents is no longer a task reserved for experts. With LangChain, you can write a few lines of code and create agents that reason, respond to users, and call tools.
By wrapping the agent with FastAPI, you give it a front door that apps and users can reach. Finally, Sevalla makes it easy to deploy, monitor, and run your agent in production.
This journey from agent idea to deployed service shows what modern AI development looks like. You start small. You build the tools. You wrap them up and deploy them.
Then you iterate: add more capabilities, improve the logic, and plug in real tools. Before long, you have a smart, real agent running online. This is the power of this new wave of technology.
Hope you enjoyed this article. Sign up for my free newsletter at turingtalks.ai for more tutorials on AI. You can also visit my website.