How to Maintain State in Time Series Models with Docker and Redis

by SkillAiNest

Have you ever built an excellent time series model that can forecast sales or predict stock prices, only to watch it fail in the real world? It’s a common frustration. Your model works perfectly on your machine, but the moment you deploy it in a Docker container, it seems to develop amnesia. It forgets everything it learned today, which ruins its predictions for tomorrow.

Don’t worry. This isn’t a flaw in your model. It’s a collision between how time series models work and how Docker containers are designed to operate.

Time series models are all about memory. They need to remember the past to predict the future. But Docker containers are designed to be stateless and forgetful, starting every restart with a blank memory. This fundamental conflict can make a powerful model useless in production.

In this article, we’ll solve this problem. We’re going to give your time series model permanent memory. You’ll learn how to build a production-ready prediction service that uses Redis as an external brain and Docker volumes to ensure that memory survives restarts. We’ll walk through a working example step by step, so you can learn how to create a system that is both intelligent and reliably persistent.


Who is this guide for?

To get the most out of this tutorial, it helps to have a few things under your belt. We’ll be diving into some code and command-line work, so a little preparation goes a long way.

  • The essential tools for this project are Docker and Docker Compose. Make sure you have both installed and running on your computer.

  • It will also be easier to follow along if you’re comfortable with the basics of Docker, Python, and the Flask web framework. A little command-line experience will help for running the commands in this tutorial.

  • But don’t worry if you’ve never used Redis before. For now, you just need to know that it’s a fast, in-memory database. We’ll handle the rest along the way.

Think of this as a guided tour. As long as you’re curious and have the basic tools ready, you’ll be in good shape.

Understanding the problem

Before jumping into solutions, let’s first make clear what a time series model is, and then explore why containers make it so difficult.

So, what is a time series model?

In plain terms, a time series model is a type of model that analyzes data points collected over time to forecast future values. Think of it like a weather forecast. Meteorologists don’t just look at the sky right now. They study the temperature, pressure, and wind patterns from the past hours and days to predict what will happen tomorrow.

Time series models do the same with data, whether it’s website traffic, stock prices, or energy consumption. The key point is that history matters. The sequence of past events provides the context needed to make intelligent predictions about the future.
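To make “history matters” concrete, here is a minimal sketch in plain Python of a naive drift forecaster: it predicts the next value by extending the average step between past observations. This is an illustrative toy, not the model from the repository, though it happens to reproduce the 10, 20, 30 → 40 example used later in this article:

```python
def predict_next(history):
    """Forecast the next value by extending the average step
    between consecutive observations (a naive drift model)."""
    if not history:
        return 0
    if len(history) < 2:
        # Not enough context: the best guess is the last value.
        return history[-1]
    steps = [b - a for a, b in zip(history, history[1:])]
    avg_step = sum(steps) / len(steps)
    return history[-1] + avg_step

# With history [10, 20, 30] the average step is 10, so the
# forecast is 40 -- but only if the model can see that history.
print(predict_next([10, 20, 30]))  # -> 40.0
```

Strip the history away and the model has nothing to extrapolate from. That is exactly what happens inside a stateless container.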

Now, what breaks when you put these models in Docker?

1. Containers are stateless by design

Docker containers are meant to be ephemeral. That works great for most APIs. A user profile endpoint? Stateless. A sentiment analysis model? Stateless. They take an input, return an output, and forget everything in between.

Time series models don’t work like this. They need context from previous observations. Without it, your model is essentially blind.

2. Lost context between predictions

Every prediction happens in isolation. Your model receives a single data point and makes a guess without knowing what came before. This defeats the entire purpose of time series modeling.

You might think: “I’ll just load all the historical data on every request.” But this approach fails for two reasons:

  • It’s slow. Really slow, if you have thousands of data points.

  • It doesn’t scale. With multiple series or high request volume, you’ll hit performance walls fast.

3. Model amnesia on restart

Whenever you deploy a new version or a container crashes, all the accumulated state disappears. Your model starts over from scratch. In production, this is unacceptable.

The solution: an external state store

Instead of keeping state inside the container, we’ll move it outside. Redis becomes the model’s memory.

The pattern looks like this:

Client Request → Flask API → Redis → Prediction with Context

Your containers stay stateless and disposable. But the system as a whole maintains state through Redis.
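The cycle behind that arrow diagram is the same on every request: append new points to the external store, read the recent context back, and predict from it. Here is a sketch of that flow with a plain dict standing in for Redis (the real service uses a Redis client, shown later in the article; the function name and naive last-value forecast here are illustrative):

```python
# A plain dict stands in for Redis so the pattern is easy to see;
# swapping in a real Redis client changes the storage calls, not the flow.
state_store = {}

def handle_prediction_request(series_id, new_points):
    # 1. Append the incoming points to the externally stored history.
    history = state_store.setdefault(series_id, [])
    history.extend(new_points)
    # 2. Read the recent context back out of the store.
    recent = history[-100:]
    # 3. Predict from context. Naive forecast: repeat the last value.
    prediction = recent[-1] if recent else 0
    # The container could die right here; the history lives in the store.
    return {"data_points_used": len(recent), "prediction": prediction}
```

Because the history lives outside the request handler, a second request sees everything the first one stored, which is the whole point of the pattern.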

Implementation

Let’s build it. Clone the demo repository:

git clone 
cd docker-redis-time-series

Start with the broken approach

The docker-compose.initial.yml file shows what not to do:

services:
  api:
    build: ./flask-api
    ports:
      - "5000:5000"

  redis:
    image: redis:alpine

Notice what’s missing? No volumes. Redis stores its data in the container’s filesystem, which means the data is temporary.

Run it:

docker compose -f docker-compose.initial.yml up

Make some predictions:

curl -X POST  \
  -H "Content-Type: application/json" \
  -d '{
    "series_id": "demo",
    "historical_data": (
      {"timestamp": "2024-01-01T12:00:00", "value": 10},
      {"timestamp": "2024-01-01T12:01:00", "value": 20},
      {"timestamp": "2024-01-01T12:02:00", "value": 30}
    )
  }'

You’ll get a response showing that Redis is working:

{
  "data_points_used": 3,
  "prediction": 40,
  "redis_connected": true
}

Now restart the services:

docker compose down
docker compose -f docker-compose.initial.yml up

Make another prediction. Check the data_points_used field: it has reset. All your historical data is gone. This is exactly what we’re trying to avoid.

How to fix it with volume

The proper docker-compose.yml adds persistence:

services:
  api:
    build: ./flask-api
    ports:
      - "5000:5000"
    environment:
      - REDIS_HOST=redis

  redis:
    image: redis:alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data

volumes:
  redis_data:

So, what is a volume, and how does it work?

Think of a Docker volume as a dedicated external hard drive for your container. By default, when a container writes data, it does so in a temporary layer that is destroyed when the container is removed. A volume provides a way to save that data permanently.

How it works:

  1. Docker creates and manages a special storage area on the host machine, completely separate from any container’s filesystem. In our docker-compose.yml, the volumes: redis_data: section at the bottom tells Docker to prepare a named volume called redis_data.

  2. When the Redis container starts, the volumes: - redis_data:/data line “plugs in” this external hard drive. It maps the redis_data volume to the /data directory inside the container.

  3. Now, whenever the Redis process inside the container writes data to the /data directory (which we’ve configured it to do), it’s actually writing to the redis_data volume on the host machine.

  4. When you run docker compose down, the Redis container is destroyed, but the redis_data volume survives. It’s the equivalent of unplugging an external hard drive: the data is still safe. The next time you run docker compose up, a new Redis container is created, the volume is reattached, and Redis finds its old data right where it left off.

This mechanism is the key to giving our stateless service a memory that survives restarts.

Run the corrected version:

docker compose up --build

Send several predictions to build up state:

for i in {1..5}; do
  curl -X POST  \
    -H "Content-Type: application/json" \
    -d "{
      \"series_id\": \"demo\",
      \"historical_data\": ({\"timestamp\": \"2024-01-01T12:0$i:00\", \"value\": $((i*10))})
    }"
done

Now comes the real test. Restart everything:

docker compose down
docker compose up

Make another prediction. Look at data_points_used. It includes all the previous points. The model picks up exactly where it left off.

It works because the volume’s lifecycle is independent of the container’s.

How the code handles state

The Flask API in flask-api/app.py stores each data point in Redis using sorted sets:

def store_data_point(series_id, timestamp, value):
    key = f"ts:{series_id}"
    # Use the timestamp as the sorted-set score so entries stay time-ordered
    redis_client.zadd(key, {json.dumps({"ts": timestamp, "val": value}): timestamp})

When making a prediction, it retrieves the recent history:

def get_recent_data(series_id, limit=100):
    key = f"ts:{series_id}"
    # Fetch the last `limit` members in score (timestamp) order
    data = redis_client.zrange(key, -limit, -1)
    return [json.loads(d) for d in data]

Redis sorted sets give you automatic time ordering. The volume ensures the data persists.
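If you want to see the sorted-set behavior without a running Redis, the same score-ordered semantics can be mimicked in plain Python. A sorted set is essentially a member → score mapping where range queries return members in ascending score order; this toy stand-in (the zadd/zrange helpers here are simplified imitations, not the redis-py API) shows why timestamps as scores give automatic time ordering:

```python
import json

# Toy stand-in for a Redis sorted set: a dict of member -> score.
sorted_set = {}

def zadd(mapping):
    sorted_set.update(mapping)

def zrange(start, stop):
    # Members ordered by ascending score, like Redis ZRANGE.
    # Redis index ranges are inclusive, so adjust the slice accordingly.
    members = [m for m, _ in sorted(sorted_set.items(), key=lambda kv: kv[1])]
    return members[start:] if stop == -1 else members[start:stop + 1]

# Insert out of order; the scores (timestamps) impose the time order.
zadd({json.dumps({"ts": 2, "val": 20}): 2})
zadd({json.dumps({"ts": 1, "val": 10}): 1})
zadd({json.dumps({"ts": 3, "val": 30}): 3})

last_two = [json.loads(m) for m in zrange(-2, -1)]
print([p["val"] for p in last_two])  # -> [20, 30]
```

Even though the points were inserted out of order, the range query comes back chronologically, which is exactly what the prediction code relies on.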

Test the health endpoint

Check that everything is wired up correctly:

curl 

You should see:

{
  "model_loaded": true,
  "redis_connected": true,
  "status": "healthy"
}

If redis_connected is false, check your Docker logs. Common issues are network misconfiguration or Redis not starting properly.

What about scaling?

This setup works well for single-instance deployments. When traffic grows, you have a few options.

Horizontal scaling with Redis Cluster

Split your data across multiple Redis nodes for higher throughput. Redis Cluster handles the sharding automatically.

High availability with Redis Sentinel

Add failover capability so your state store doesn’t become a single point of failure. Sentinel monitors your instances and promotes a replica when the primary fails.

Use managed Redis services

AWS ElastiCache, Azure Cache for Redis, or Google Cloud Memorystore handle the operational load. You focus on your model; they handle Redis reliability.

Key insight: your API containers stay stateless. You scale the state store independently.

Common pitfalls to avoid

I can’t stress this enough: test your persistence before deploying to production.

Don’t assume the volume is configured

Actually restart your containers and confirm the state is intact. I’ve seen deployments fail because someone forgot to mount the volume in production.

Don’t ignore Redis memory limits

Redis keeps everything in memory. Monitor your memory usage. Set maxmemory eviction policies appropriate for your workload. If you run out of memory, Redis will start evicting keys or refusing writes.

Don’t skip monitoring

Add health checks. Monitor the Redis connection status. Track prediction latency. You want to know when things break, not learn about it from angry users.
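A health check like the one tested earlier can be sketched as a small function: ping the store, report on each dependency, and let the caller (or orchestrator) decide what to do. This is an illustrative sketch, not the repository’s actual handler; the ping callable is injected so the logic is easy to test without a live Redis:

```python
def health_check(ping_redis, model_loaded=True):
    """Build the kind of health payload a /health endpoint would return.

    `ping_redis` is any zero-argument callable that raises on failure,
    e.g. a Redis client's ping method.
    """
    try:
        ping_redis()
        redis_ok = True
    except Exception:
        redis_ok = False
    status = "healthy" if (redis_ok and model_loaded) else "unhealthy"
    return {
        "model_loaded": model_loaded,
        "redis_connected": redis_ok,
        "status": status,
    }

# Simulate a reachable store and a broken one.
print(health_check(lambda: None)["status"])  # -> healthy

def broken():
    raise ConnectionError("redis down")

print(health_check(broken)["status"])  # -> unhealthy
```

Wiring this into a Flask route and a Docker Compose healthcheck gives you the “know before the users do” signal the section above argues for.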

Conclusion

Time series models need memory. Docker containers lose memory by default. The solution is simple: separate state from compute.

Use Redis as an external state store. Use a Docker volume to persist that state. Your model stays smart, your containers stay disposable, and your deployments become reliable.

The full working code is available at github.com/ag-chirag/docker-redis-time-series. Clone it, run it, break it, and learn from it.

And remember: the simplest solution that works is usually the right one. You don’t always need Kubernetes and StatefulSets. Sometimes Docker Compose and a single volume are enough.
