

Photo by Editor | ChatGPT
Introduction
Airtable not only offers flexible, spreadsheet-like data storage and analysis, but also provides an API for programmatic interaction. In other words, you can connect it to external tools and technologies (for example, Python) to bring results into your Airtable database (or just "base", in Airtable jargon) and build data pipelines or processing workflows.
This article shows how to create a simple ETL pipeline using the Airtable API. We will stay within the free tier, making sure the approach works without any paid features.
Airtable Dataset Setup
Although the pipeline built in this article can easily be adapted to different datasets, it needs as its starting point a new Airtable base containing a stored dataset. If you are new to Airtable, we recommend first following this recent introductory tutorial to set up the base and table used below.


Customers dataset (table) in Airtable | Image by the author
Building the Airtable Pipeline
In Airtable, go to your user avatar (at the time of writing, this is a circular avatar located in the left-hand corner of the interface) and select the "Builder Hub". On the new screen (see screenshot below), click on "Personal access tokens", then "Create token". Give it a name, and make sure you add at least these two scopes: data.records:read and data.records:write. Likewise, in the "Access" section, select the base where your Customers table is located, so that your token is granted access to that base.


Creating an Airtable API token | Image by the author
Once the token is created, copy it and store it carefully in a safe place, as it will be shown only once; we will need it later. The token begins with pat, followed by a long alphanumeric code.
Another important piece of information we will need to build our Python-based pipeline that interacts with Airtable is the ID of our base. Return to your base in the Airtable web interface; once there, you should see that the browser URL has a syntax like: https://airtable.com/appXXXXXXXX/xxxx/xxxx. The part we are interested in copying is appXXXXXXXX, the ID located between two consecutive slashes (/): this is the base ID we will need.
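As a side note, if you ever need to extract the base ID from such a URL programmatically, here is a minimal sketch using only the standard library (the function name and the URL below are made-up examples):

```python
from urllib.parse import urlparse

def base_id_from_url(url: str) -> str:
    """Return the first path segment of an Airtable URL, which is the base ID."""
    # Airtable base URLs look like https://airtable.com/appXXXXXXXX/...
    return urlparse(url).path.strip("/").split("/")[0]

print(base_id_from_url("https://airtable.com/appFAKE1234567890/tblabc/viwxyz"))
# appFAKE1234567890
```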
With this, and assuming you already have a populated table called "Customers" in your base, we are ready to start coding. I will use a notebook. If you are using an IDE instead, you may prefer to change the section where the three Airtable variables are defined, so as to read them from a .env file instead. In this version, for simplicity and ease of exposition, we will define them directly in our notebook. Let's start by installing the necessary dependencies:
!pip install pyairtable python-dotenv
Next, we define the Airtable configuration variables. Note that for the first two, you need to replace the values with your actual personal access token and base ID, respectively:
import os
from dotenv import load_dotenv  # needed only if reading variables from a .env file
from pyairtable import Api
import pandas as pd

PAT = "pat-xxx"       # paste your PAT (Personal Access Token) here
BASE_ID = "app-xxx"   # paste your Airtable base ID here
TABLE_NAME = "Customers"

api = Api(PAT)
table = api.table(BASE_ID, TABLE_NAME)
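If you are working in an IDE rather than a notebook, a minimal sketch of the .env-file variant mentioned above might look as follows (this assumes a local .env file with AIRTABLE_PAT and AIRTABLE_BASE_ID entries, which are made-up variable names for illustration):

```python
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from a local .env file into the environment

PAT = os.getenv("AIRTABLE_PAT")          # e.g. AIRTABLE_PAT=pat-xxx in .env
BASE_ID = os.getenv("AIRTABLE_BASE_ID")  # e.g. AIRTABLE_BASE_ID=app-xxx in .env
TABLE_NAME = "Customers"
```

This keeps credentials out of the notebook or script itself, which is safer if the code is shared or versioned.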
We have just set up an instance of the Python Airtable API client and created a connection point to the Customers table in our base. Now, let's read the entire dataset from our Airtable table and load it into a Pandas DataFrame. You just need to be careful to use the exact column names from the source table as the string arguments inside the get() method calls:
rows = []
for rec in table.all():  # pyairtable respects Airtable's 5 rps limit and retries on 429s
    fields = rec.get("fields", {})
    rows.append({
        "id": rec["id"],
        "CustomerID": fields.get("CustomerID"),
        "Gender": fields.get("Gender"),
        "Age": fields.get("Age"),
        "Annual Income (k$)": fields.get("Annual Income (k$)"),
        "Spending Score (1-100)": fields.get("Spending Score (1-100)"),
        "Income Class": fields.get("Income Class"),
    })
df = pd.DataFrame(rows)
Once the data is loaded, it is time to apply a simple transformation. We will apply just one for simplicity, but we could apply as many as needed, just as we usually do when preparing or cleaning datasets with pandas. We will create a new binary attribute, called Is High Value, to identify high-value customers, i.e. those whose income and spending score are both high:
def high_value(row):
    try:
        return (row["Spending Score (1-100)"] >= 70) and (row["Annual Income (k$)"] >= 70)
    except TypeError:
        return False  # missing or non-numeric values are treated as not high value

df["Is High Value"] = df.apply(high_value, axis=1)
df.head()
The resulting dataset:


Airtable data transformation with Python and Pandas | Image by the author
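As a side note, the same flag can be computed without apply(), using vectorized comparisons, which is generally faster on larger tables. A minimal sketch on a toy DataFrame (column names match the article's dataset; note that, unlike the try/except version, NaN values here simply compare as False):

```python
import pandas as pd

df = pd.DataFrame({
    "Spending Score (1-100)": [85, 40, 72],
    "Annual Income (k$)": [90, 80, 30],
})

# Vectorized equivalent of the row-wise high_value() check
df["Is High Value"] = (
    (df["Spending Score (1-100)"] >= 70) & (df["Annual Income (k$)"] >= 70)
)
print(df["Is High Value"].tolist())
# [True, False, False]
```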
Finally, it is time to write the transformation back to Airtable by adding the data associated with the new column. One small caveat: we first need to create a new column called "High Value" in our Airtable Customers table, with its type set to "checkbox" (the equivalent of a binary category). Once this empty column exists, run the following code, and the new data will automatically be added to Airtable!
updates = []
for _, r in df.iterrows():
    if pd.isna(r["id"]):
        continue
    updates.append({
        "id": r["id"],
        "fields": {
            "High Value": bool(r["Is High Value"])
        }
    })

if updates:
    table.batch_update(updates)
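For reference, batch_update expects a list of {"id": ..., "fields": {...}} dictionaries, and pyairtable takes care of submitting them in chunks to respect the API's batch limits. Here is a minimal offline sketch of building that payload from a toy DataFrame, with made-up record IDs and no actual API call:

```python
import pandas as pd

# Toy frame with made-up Airtable record IDs (real ones start with "rec")
df = pd.DataFrame({
    "id": ["recAAA111", "recBBB222"],
    "Is High Value": [True, False],
})

# Build the update payload: one dict per record, fields keyed by column name
updates = [
    {"id": r["id"], "fields": {"High Value": bool(r["Is High Value"])}}
    for _, r in df.iterrows()
]
print(updates[0])
# {'id': 'recAAA111', 'fields': {'High Value': True}}
```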
Time to go back to Airtable and see how our source Customers table has changed! If you do not see any change at first glance and the new column still looks empty, do not worry just yet: most customers are not labeled as high value, and you may need to scroll down a bit to find rows flagged with a green check mark.


Updated Customers table | Image by the author
That's it! You have just created your own lightweight, ETL-like data pipeline, based on a two-way interaction between Airtable and Python. Well done!
Wrapping Up
This article focused on showcasing the data capabilities of Airtable, a versatile and user-friendly cloud-based platform for data management and analysis that combines spreadsheet features with relational database and AI-powered functions. In particular, we showed how to run a lightweight data transformation pipeline with the Airtable API that reads data from Airtable, transforms it, and writes it back to Airtable, all within the free tier and its limits.
Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning, and LLMs. He trains and guides others in harnessing AI in the real world.