Create a Web Interface with Streamlit (Step by Step) for Your Chatbot – Dataquest

by SkillAiNest

https://www.youtube.com/watch?v=Paiggg8e8lhk

You’ve built a chatbot in Python, but it only runs in your terminal. What if you could give it a sleek web interface that anyone can use? What if you could deploy it online to share with friends, potential employers, or clients?

In this hands-on tutorial, we’ll convert a command-line chatbot into a professional web application using Streamlit. You’ll learn to create an interactive interface with custom personalities and real-time settings controls, and to deploy it directly to the internet.

By the end of this tutorial, you’ll have a web app that showcases your AI development capabilities and demonstrates your ability to create user-facing applications.

Why create a web interface for your chatbot?

A command-line chatbot is impressive to developers, but a web interface speaks to everyone. Portfolio reviewers, potential clients, and non-technical users can see and interact with your work immediately. More importantly, building web interfaces for AI applications is an in-demand skill because businesses want to rapidly deploy AI tools that their teams can actually use.

Streamlit makes this transition smooth. Instead of learning a complex web framework, you’ll use Python syntax you already know to build professional-looking applications in minutes.

What you’ll build

  • An interactive web chatbot with real-time personality switching
  • Customizable controls for AI parameters (temperature, token limits)
  • A professional chat interface that distinguishes user and assistant messages
  • Reset functionality and conversation management
  • A live deployment accessible from any web browser
  • A foundation for more advanced AI applications

Before you start: Prerequisites

To get the most out of this project, follow these preliminary steps:

1. Review the project

Explore the goals and structure of this project: Start the project here

2. Complete your chatbot foundation

Prerequisite: If you haven’t already, complete the previous chatbot project to build your core logic. You’ll need a working chatbot with conversation memory and token management before starting this tutorial.

3. Set up your development environment

Required tools:

  • An IDE (VS Code or PyCharm recommended)
  • OpenAI API key (or Together AI as a free alternative)
  • GitHub account for deployment

We’ll work with standard Python files (.py) instead of Jupyter notebooks, so make sure you’re comfortable coding in your chosen IDE.

4. Install and test Streamlit

Install the required packages:

pip install streamlit openai tiktoken

Check your installation with a simple demo:

import streamlit as st
st.write("Hello Streamlit!")

Save this as test.py and run the following from the command line:

streamlit run test.py

If a browser window opens with the message “Hello Streamlit!”, you’re ready to move on.

5. Confirm your API access

Test that your OpenAI API key works:

import os
from openai import OpenAI

api_key = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=api_key)

# Simple test call
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello!"}],
    max_tokens=10
)

print(response.choices[0].message.content)

6. Access Full Solution

View and download the solution files here.

What you’ll get:

  • starter_code.py – the starting chatbot code we’ll build on
  • final.py – the complete Streamlit application
  • requirements.txt – all the necessary dependencies
  • Deployment configuration files

Starting point: Your chatbot foundation

If you don’t already have a chatbot, create a file called starter_code.py with this foundation:

import os
from openai import OpenAI
import tiktoken

# Configuration
api_key = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=api_key)
MODEL = "gpt-4o-mini"
TEMPERATURE = 0.7
MAX_TOKENS = 100
TOKEN_BUDGET = 1000
SYSTEM_PROMPT = "You are a fed up and sassy assistant who hates answering questions."

messages = [{"role": "system", "content": SYSTEM_PROMPT}]

# Token management functions (collapsed for clarity)
def get_encoding(model):
    try:
        return tiktoken.encoding_for_model(model)
    except KeyError:
        print(f"Warning: Tokenizer for model '{model}' not found. Falling back to 'cl100k_base'.")
        return tiktoken.get_encoding("cl100k_base")

ENCODING = get_encoding(MODEL)

def count_tokens(text):
    return len(ENCODING.encode(text))

def total_tokens_used(messages):
    try:
        return sum(count_tokens(msg["content"]) for msg in messages)
    except Exception as e:
        print(f"(token count error): {e}")
        return 0

def enforce_token_budget(messages, budget=TOKEN_BUDGET):
    try:
        while total_tokens_used(messages) > budget:
            if len(messages) <= 2:
                break
            messages.pop(1)
    except Exception as e:
        print(f"(token budget error): {e}")

# Core chat function
def chat(user_input):
    messages.append({"role": "user", "content": user_input})

    response = client.chat.completions.create(
        model=MODEL,
        messages=messages,
        temperature=TEMPERATURE,
        max_tokens=MAX_TOKENS
    )

    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})

    enforce_token_budget(messages)
    return reply

This gives us a working chatbot with conversation memory and cost control. Now let’s turn it into a web app.
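If you want to sanity-check this chatbot before adding the web layer, a minimal terminal loop like the one below will do. This loop isn’t part of the starter file; it’s just a small sketch that assumes the chat() function and configuration above are defined in the same script.

# Minimal terminal loop (sketch) -- assumes chat() and the config above are in this file
if __name__ == "__main__":
    print("Chatbot ready. Type 'quit' to exit.")
    while True:
        user_input = input("You: ")
        if user_input.strip().lower() == "quit":
            break
        print("Assistant:", chat(user_input))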

Part 1: Your First Streamlit Interface

Create a new file called app.py and copy your starter code into it. Now we’ll add the web interface layer.

Add the Streamlit import at the top:

import streamlit as st

At the bottom of your file, add your first Streamlit elements:

### Streamlit Interface ###
st.title("Sassy Chatbot")

Test your app by running it from your terminal:

streamlit run app.py

Your default browser should open your web app with the title “Sassy Chatbot”. Notice the auto-reload feature: when you save changes, Streamlit prompts you to rerun the app.

Learning insight: Streamlit uses “magic” rendering, so you don’t have to explicitly render elements. Simply calling st.title() automatically displays the title in the web interface.
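As a small, optional illustration of this (a sketch you could drop into test.py), each Streamlit call renders as soon as it runs, and bare expressions on their own line are displayed too:

import streamlit as st

st.title("Sassy Chatbot")                  # renders the title immediately
st.write("Elements appear in call order")  # renders below the title

# Bare expressions on their own line are also rendered (Streamlit's "magic")
"Strings written alone like this show up in the app too."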

Part 2: Building the control panel

Real applications need user controls. Let’s add a sidebar with personality options and parameter controls.

Add this after your title:

# Sidebar controls
st.sidebar.header("Options")
st.sidebar.write("This is a demo of a sassy chatbot using OpenAI's API.")

# Temperature and token controls
max_tokens = st.sidebar.slider("Max Tokens", 1, 250, 100)
temperature = st.sidebar.slider("Temperature", 0.0, 1.0, 0.7)

# Personality selection
system_message_type = st.sidebar.selectbox("System Message",
    ("Sassy Assistant", "Angry Assistant", "Custom"))

Save and watch your sidebar fill with interactive controls. As users adjust them, these sliders automatically store their values in the corresponding variables.

Adding a dynamic personality system

Now let’s make the personality switching work:

# Dynamic system prompt based on selection
if system_message_type == "Sassy Assistant":
    SYSTEM_PROMPT = "You are a sassy assistant that is fed up with answering questions."
elif system_message_type == "Angry Assistant":
    SYSTEM_PROMPT = "You are an angry assistant that likes yelling in all caps."
elif system_message_type == "Custom":
    SYSTEM_PROMPT = st.sidebar.text_area("Custom System Message",
        "Enter your custom system message here.")
else:
    SYSTEM_PROMPT = "You are a helpful assistant."

The Custom option creates a text area where users can write their own personality instructions. Try switching between personalities and see how the interface adapts.

Part 3: Understanding Session State

This is where Streamlit gets tricky. Every time a user interacts with your app, Streamlit reruns the entire script from top to bottom. That would normally reset your chat history on every interaction, which is exactly what we don’t want in a conversation!

Session state solves this by persisting data across the app’s reruns:

# Initialize session state for conversation memory
if "messages" not in st.session_state:
    st.session_state.messages = [{"role": "system", "content": SYSTEM_PROMPT}]

This creates a persistent messages list that survives reruns. Now we need to modify our chat function to use session state:

def chat(user_input, temperature=TEMPERATURE, max_tokens=MAX_TOKENS):
    # Get messages from session state
    messages = st.session_state.messages
    messages.append({"role": "user", "content": user_input})

    enforce_token_budget(messages)

    # Add loading spinner for better UX
    with st.spinner("Thinking..."):
        response = client.chat.completions.create(
            model=MODEL,
            messages=messages,
            temperature=temperature,
            max_tokens=max_tokens
        )

    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

Learning insight: Session state is like a dictionary that persists across the app’s reruns. Think of it as your app’s memory system.

Part 4: Interactive buttons and controls

Let’s add buttons to make the interface more user-friendly:

# Control buttons
if st.sidebar.button("Apply New System Message"):
    st.session_state.messages[0] = {"role": "system", "content": SYSTEM_PROMPT}
    st.success("System message updated.")

if st.sidebar.button("Reset Conversation"):
    st.session_state.messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    st.success("Conversation reset.")

These buttons provide immediate feedback through success messages, making for a more polished user experience.

Part 5: The chat interface

Now for the main event: the actual chat interface. Add this code:

# Chat input using walrus operator
if prompt := st.chat_input("What is up?"):
    reply = chat(prompt, temperature=temperature, max_tokens=max_tokens)

# Display chat history
for message in st.session_state.messages[1:]:  # Skip system message
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

The chat_input widget creates a text box at the bottom of your app. The walrus operator (:=) assigns the user’s input to prompt and checks that it isn’t empty, all in a single line.
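If the walrus operator is new to you, the line above behaves the same as this slightly longer, two-step version (st.chat_input returns None until the user submits something):

# Equivalent to the walrus version above, written in two steps
prompt = st.chat_input("What is up?")
if prompt:  # None (falsy) until the user submits a message
    reply = chat(prompt, temperature=temperature, max_tokens=max_tokens)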

Visual enhancement: When you use the standard role names (“user” and “assistant”), st.chat_message automatically adds user and assistant icons to the chat messages.

Part 6: Testing your full app

Save your file and test the full interface:

  1. Personality test: Switch between the Sassy and Angry assistants, apply the new system message, then chat to see the difference
  2. Memory test: Have a conversation, then refer back to something you said earlier
  3. Parameter test: Drag the Max Tokens slider down to 1 and watch responses get cut off
  4. Reset test: Use the reset button to clear the conversation history

Your full working app should look something like this:

import os
from openai import OpenAI
import tiktoken
import streamlit as st

# API and model configuration
api_key = st.secrets.get("OPENAI_API_KEY") or os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=api_key)
MODEL = "gpt-4o-mini"
TEMPERATURE = 0.7
MAX_TOKENS = 100
TOKEN_BUDGET = 1000
SYSTEM_PROMPT = "You are a fed up and sassy assistant who hates answering questions."

# (Token management functions here - same as starter code)

def chat(user_input, temperature=TEMPERATURE, max_tokens=MAX_TOKENS):
    messages = st.session_state.messages
    messages.append({"role": "user", "content": user_input})
    enforce_token_budget(messages)

    with st.spinner("Thinking..."):
        response = client.chat.completions.create(
            model=MODEL,
            messages=messages,
            temperature=temperature,
            max_tokens=max_tokens
        )

    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

### Streamlit Interface ###
st.title("Sassy Chatbot")
st.sidebar.header("Options")
st.sidebar.write("This is a demo of a sassy chatbot using OpenAI's API.")

max_tokens = st.sidebar.slider("Max Tokens", 1, 250, 100)
temperature = st.sidebar.slider("Temperature", 0.0, 1.0, 0.7)
system_message_type = st.sidebar.selectbox("System Message",
    ("Sassy Assistant", "Angry Assistant", "Custom"))

if system_message_type == "Sassy Assistant":
    SYSTEM_PROMPT = "You are a sassy assistant that is fed up with answering questions."
elif system_message_type == "Angry Assistant":
    SYSTEM_PROMPT = "You are an angry assistant that likes yelling in all caps."
elif system_message_type == "Custom":
    SYSTEM_PROMPT = st.sidebar.text_area("Custom System Message",
        "Enter your custom system message here.")

if "messages" not in st.session_state:
    st.session_state.messages = [{"role": "system", "content": SYSTEM_PROMPT}]

if st.sidebar.button("Apply New System Message"):
    st.session_state.messages[0] = {"role": "system", "content": SYSTEM_PROMPT}
    st.success("System message updated.")

if st.sidebar.button("Reset Conversation"):
    st.session_state.messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    st.success("Conversation reset.")

if prompt := st.chat_input("What is up?"):
    reply = chat(prompt, temperature=temperature, max_tokens=max_tokens)

for message in st.session_state.messages[1:]:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

Part 7: Deploying to the internet

Running locally is great for development, but deployment makes your project shareable and accessible to others. Streamlit Community Cloud offers free hosting directly from your GitHub repository.

Prepare for deployment

First, create the required files in your project folder:

requirements.txt:

openai
streamlit
tiktoken

.gitignore:

.streamlit/

Note that if you’ve stored your API key in a .env file, you should add that to .gitignore as well.

Managing secrets: Create a .streamlit/secrets.toml file locally:

OPENAI_API_KEY = "your-api-key-here"

Important: Add .streamlit/ to your .gitignore so you don’t accidentally commit your API key to GitHub.

GitHub setup

  1. Create a new GitHub repository
  2. Push your code ( .gitignore will protect your secrets)
  3. Your repository should contain: app.py, requirements.txt, and .gitignore

Deploy to Streamlit Community Cloud

  1. Go to share.streamlit.io

  2. Connect your GitHub account

  3. Choose your repository and main branch

  4. Choose your app file (app.py)

  5. In advanced settings, add a secret to your API key:

    OPENAI_API_KEY = "your-api-key-here"
  6. Click “deploy”

Within a few minutes, your app will be live at a public URL you can share with anyone!

Security note: Secrets you add in Streamlit Community Cloud are stored securely and kept out of your repository. Never put API keys directly into your code files.

Understanding key concepts

Session state deep dive

Session state is Streamlit’s memory system. Without it, every interaction would completely reset your app. Think of it as a persistent dictionary that survives the app’s reruns:

# Initialize once
if "my_data" not in st.session_state:
    st.session_state.my_data = []

# Use throughout your app
st.session_state.my_data.append("new item")

Streamlit’s execution model

Streamlit reruns your entire script on every interaction. This “reactive” model means:

  • Your app always shows the current state
  • You need session state for persistence
  • Expensive operations should be cached or minimized (see the caching sketch below)
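For the caching point above, Streamlit provides the st.cache_data decorator. Here’s a minimal sketch; load_knowledge_base and data.txt are hypothetical names used only for illustration:

import streamlit as st

@st.cache_data  # the result is cached; later reruns reuse it instead of recomputing
def load_knowledge_base(path):
    # Hypothetical expensive step, e.g. reading and parsing a large file
    with open(path) as f:
        return f.read().splitlines()

# Called on every rerun, but the body only executes once per unique path
docs = load_knowledge_base("data.txt")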

Widget state management

Widgets (sliders, inputs, buttons) automatically manage their own state:

  • Slider values are always available (see the sketch after this list)
  • Button presses trigger reruns
  • Inputs update in real time
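One related trick: if you give a widget a key, Streamlit also mirrors its current value into session state under that name. A small sketch; the “creativity” key and label are just illustrative names, separate from the sliders already in the app:

import streamlit as st

# A keyed widget stores its value in st.session_state automatically
st.sidebar.slider("Creativity", 0.0, 1.0, 0.7, key="creativity")

# Anywhere later in the script, the current slider value is available as:
current_value = st.session_state.creativity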

Troubleshooting common issues

  • “No module named ‘streamlit’” error: Install Streamlit with pip install streamlit
  • API key errors: Confirm that your environment variables or Streamlit secrets are set correctly
  • App won’t reload: Check your terminal output for syntax errors
  • Session state not working: Make sure you check if "key" not in st.session_state: before initializing
  • Deployment fails: Confirm your requirements.txt contains all required packages

Growing your chatbot app

Quick enhancements

  • File upload: Let users upload documents for the chatbot to discuss
  • Conversation export: Add a download button for the chat history (see the sketch after this list)
  • Usage analytics: Track token usage and costs
  • Multiple chat sessions: Support several conversation threads
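As one way to approach the conversation-export idea, here is a hedged sketch using st.download_button; the JSON format and file name are arbitrary choices, and it assumes the session-state message list from earlier in the tutorial:

import json
import streamlit as st

# Offer the chat history (minus the system message) as a downloadable JSON file
if "messages" in st.session_state:
    history = st.session_state.messages[1:]
    st.sidebar.download_button(
        label="Download chat history",
        data=json.dumps(history, indent=2),
        file_name="chat_history.json",
        mime="application/json",
    )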

Advanced features

  • User authentication: Require a login for personalized experiences
  • Database integration: Store conversations permanently
  • Voice interface: Add speech-to-text and text-to-speech
  • Multi-model support: Let users select different AI models (sketched after this list)
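For example, multi-model support can start as another sidebar selectbox whose value you pass to the API call. A sketch; the model names are assumptions, so use whichever models your API key can access:

import streamlit as st

# Let the user pick a model, then pass model_choice to
# client.chat.completions.create(model=model_choice, ...)
model_choice = st.sidebar.selectbox(
    "Model",
    ("gpt-4o-mini", "gpt-4o", "gpt-3.5-turbo"),  # illustrative options
)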

Business applications

  • Customer service bots: Deploy assistants with company-specific knowledge
  • Interview prep tools: Build domain-specific interview practice bots
  • Educational assistants: Create tutoring bots for specific subjects
  • Content generators: Build specialized writing assistants

Key takeaways

Building a web interface for AI applications shows that you can bridge the gap between technical capability and user accessibility. Through this tutorial, you have learned:

Technical skills:

  • Streamlit fundamentals and the reactive programming model
  • Session state management for persistent applications
  • Web deployment from development to production
  • Integration patterns for AI APIs in a web context

Professional skills:

  • Creating user-friendly interfaces for technical functionality
  • Managing secrets and security in deployed applications
  • Building portfolio projects that demonstrate real-world skills
  • Understanding the path from prototype to production application

Strategic understanding:

  • Why web interfaces matter for AI applications
  • How to make technical projects accessible to non-technical users
  • The importance of user experience in AI application adoption

You now have a deployed chatbot application that demonstrates multiple in-demand skills: AI integration, web development, user interface design, and cloud deployment. This foundation sets you up to build more sophisticated applications and shows your ability to deliver complete, user-facing AI solutions.
