In event-driven systems, two things need to happen when you process a request: you need to store the data in your database, and you need to publish an event to a message broker to let other services know that something has changed.
Both of these operations look simple, but they hide a dangerous reliability problem. What if the database write succeeds but the message broker is temporarily unreachable? Or what if your service crashes between the two steps? You end up in an inconsistent situation: you have new data in your database, but the rest of the system has never heard of it.
There is a well-established solution to this problem: the Outbox pattern. In this tutorial, you’ll learn what the pattern is, why it works, and how to implement it in Go with PostgreSQL and Google Cloud Pub/Sub.
Prerequisites
Before reading this tutorial, you should be familiar with:
Go programming language basics
SQL and PostgreSQL
The concept of a database transaction
Basic familiarity with event-driven or distributed systems (helpful but not required)
The problem: two operations, no atomicity
To understand why the outbox pattern exists, you need to understand a fundamental challenge in distributed systems: atomicity across different systems.
In a relational database, a transaction lets you group multiple operations so that they either all succeed or all fail together. If you insert one row and update another in the same transaction, you are guaranteed that both will happen, or neither will.
The problem arises when you try to extend this guarantee across two different systems: for example, your database and your message broker (such as Kafka, RabbitMQ, or Pub/Sub). These systems do not share transaction boundaries.
Here’s a typical event-driven flow that breaks down without the outbox pattern:
A customer places an order.
Your service saves the order to the database.
Your service tries to publish an order.created event to the message broker. ❌ (The broker is down.)
The order exists in the database, but downstream services never learn about it.
Or the reverse failure:
Your service publishes the event first.
Your service tries to save the order to the database. ❌ (The database timed out.)
Downstream services are notified about an order that does not exist.
Either scenario leaves your system in an inconsistent state. This is the core problem that the Outbox pattern solves.
Here’s what the process looks like when not using the outbox pattern:

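To see the failure concretely, here is a minimal, self-contained Go sketch. The saveOrder and publishEvent functions are stand-ins for the real database and broker calls, introduced only for this illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// savedOrders stands in for rows in the database; publishedEvents for
// messages the broker has accepted.
var savedOrders []string
var publishedEvents []string

// saveOrder simulates a database write that succeeds.
func saveOrder(id string) error {
	savedOrders = append(savedOrders, id)
	return nil
}

// publishEvent simulates publishing to a broker that may be down.
func publishEvent(id string, brokerUp bool) error {
	if !brokerUp {
		return errors.New("broker unreachable")
	}
	publishedEvents = append(publishedEvents, id)
	return nil
}

func main() {
	// The naive dual write: save first, then publish, with no shared transaction.
	if err := saveOrder("order-42"); err != nil {
		panic(err)
	}
	if err := publishEvent("order-42", false); err != nil {
		// The database write already happened and cannot be undone here:
		// the order exists, but downstream services will never hear about it.
		fmt.Printf("orders saved: %d, events published: %d\n",
			len(savedOrders), len(publishedEvents))
	}
}
```

Running this prints orders saved: 1, events published: 0: the two operations disagree, and nothing in the naive flow can reconcile them.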
How the Outbox Pattern Works
The outbox pattern solves the atomicity problem by keeping both operations inside the database:
Your service stores the business data (for example, a new order) in your database.
In the same database transaction, it writes the event message to a special table called the outbox table.
A separate background process, called a message relay, polls the outbox table and publishes pending messages to the broker.
Once the broker confirms receipt, the relay marks the message as processed.
Because steps 1 and 2 occur in the same database transaction, they are atomic: either both succeed or neither does. You can never end up with saved data but no corresponding event queued, or an event queued for data that was never saved.
The message is never published directly to the broker from your main application code. Instead, the database acts as a trusted staging area.

Outbox table schema
The outbox table stores pending messages until a relay picks them up. Here is a typical PostgreSQL schema:
CREATE TABLE outbox (
    id           uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    topic        varchar(255) NOT NULL,
    message      jsonb NOT NULL,
    state        varchar(50) NOT NULL DEFAULT 'pending',
    created_at   timestamptz NOT NULL DEFAULT now(),
    processed_at timestamptz
);
Let’s go through each column:
id: A unique identifier for each message. Using UUIDs makes it easy to refer to specific messages.
topic: The destination topic or queue name in your message broker (for example, orders.created).
message: The event payload, stored as JSON. This is the data that consumers will receive.
state: Tracks whether a message has been sent. There are two main values: 'pending' (awaiting publication) and 'processed' (published successfully).
created_at: When the message was enqueued. The relay uses this to process messages in order.
processed_at: When the relay successfully published the message.
You may want to add columns to suit your needs: for example, a retry_count column tracking how many times the relay has attempted to send a message, or an error column to log failure reasons.
Message relay
A message relay is a background process (often a goroutine, a sidecar, or a separate service) that bridges the outbox table and the message broker.
Its responsibilities are:
Periodically query the outbox table for messages with state = 'pending'.
Publish each message to the appropriate topic in the broker.
Once the broker confirms delivery, update the row's state to 'processed'.
Handle failures gracefully: if publishing fails, leave the message as 'pending' so it will be retried.
This design gives you at-least-once delivery: a message will always be published eventually, even if the relay crashes and restarts. The trade-off is that a message can occasionally be delivered more than once (more on that below), so your consumers must handle duplicates.
Go and PostgreSQL implementation
Let’s build a concrete example. Imagine you have an order service. When a new order is created, you want to:
Save the order to the orders table in PostgreSQL.
Publish an order.created event to Google Cloud Pub/Sub.
You’ll use pgx as the PostgreSQL driver.
Orders Service
The key insight is that the order insert and the outbox insert happen within the same transaction. If anything goes wrong, both are rolled back.
// orders/main.go
package main

import (
    "context"
    "encoding/json"
    "log"
    "os"

    "github.com/google/uuid"
    "github.com/jackc/pgx/v5"
    "github.com/jackc/pgx/v5/pgxpool"
)

// Order represents a customer order in our system.
type Order struct {
    ID       uuid.UUID `json:"id"`
    Product  string    `json:"product"`
    Quantity int       `json:"quantity"`
}

// OrderCreatedEvent is the payload published to the message broker.
// It contains only the fields that downstream services need to know about.
type OrderCreatedEvent struct {
    OrderID uuid.UUID `json:"order_id"`
    Product string    `json:"product"`
}

// createOrderInTx saves a new order and its outbox event atomically.
// Both operations share the same transaction (tx), so either both succeed
// or both are rolled back — ensuring consistency.
func createOrderInTx(ctx context.Context, tx pgx.Tx, order Order) error {
    // Step 1: Insert the business data (the actual order).
    _, err := tx.Exec(ctx,
        "INSERT INTO orders (id, product, quantity) VALUES ($1, $2, $3)",
        order.ID, order.Product, order.Quantity,
    )
    if err != nil {
        return err
    }
    log.Printf("Inserted order %s into database", order.ID)

    // Step 2: Serialize the event payload that consumers will receive.
    event := OrderCreatedEvent{
        OrderID: order.ID,
        Product: order.Product,
    }
    msg, err := json.Marshal(event)
    if err != nil {
        return err
    }

    // Step 3: Write the event to the outbox table.
    // This does NOT publish to Pub/Sub — it just queues it for the relay.
    _, err = tx.Exec(ctx,
        "INSERT INTO outbox (topic, message) VALUES ($1, $2)",
        "orders.created", msg,
    )
    if err != nil {
        return err
    }
    log.Printf("Inserted outbox event for order %s", order.ID)
    return nil
}

func main() {
    ctx := context.Background()
    pool, err := pgxpool.New(ctx, os.Getenv("DATABASE_URL"))
    if err != nil {
        log.Fatalf("Unable to connect to database: %v", err)
    }
    defer pool.Close()

    // Begin a transaction that will cover both the order insert
    // and the outbox insert.
    tx, err := pool.Begin(ctx)
    if err != nil {
        log.Fatalf("Unable to begin transaction: %v", err)
    }
    // If anything fails, the deferred Rollback is a no-op after a successful Commit.
    defer tx.Rollback(ctx)

    newOrder := Order{
        ID:       uuid.New(),
        Product:  "Super Widget",
        Quantity: 10,
    }
    if err := createOrderInTx(ctx, tx, newOrder); err != nil {
        log.Fatalf("Failed to create order: %v", err)
    }

    // Committing the transaction makes both writes permanent simultaneously.
    if err := tx.Commit(ctx); err != nil {
        log.Fatalf("Failed to commit transaction: %v", err)
    }
    log.Println("Successfully created order and queued outbox event.")
}
Notice that createOrderInTx takes a pgx.Tx (a transaction) rather than a pool or connection. This is intentional: it makes the caller responsible for managing the transaction boundary, which keeps the atomicity guarantee explicit.
Relay service
The relay runs as a separate background process. It polls the outbox table, publishes messages, and marks them as processed.
One important detail here is the use of FOR UPDATE SKIP LOCKED in the SQL query. This PostgreSQL feature lets you run multiple relay instances at the same time without them stepping on each other: when one instance locks a row to process it, other instances skip that row and move on to the next one.
// relay/main.go
package main

import (
    "context"
    "log"
    "os"
    "time"

    "cloud.google.com/go/pubsub"
    "github.com/google/uuid"
    "github.com/jackc/pgx/v5/pgxpool"
)

// OutboxMessage mirrors the columns we need from the outbox table.
type OutboxMessage struct {
    ID      uuid.UUID
    Topic   string
    Message []byte
}

// processOutboxMessages picks up one pending message, publishes it to Pub/Sub,
// and marks it as processed — all within a single database transaction.
func processOutboxMessages(ctx context.Context, pool *pgxpool.Pool, pubsubClient *pubsub.Client) error {
    tx, err := pool.Begin(ctx)
    if err != nil {
        return err
    }
    defer tx.Rollback(ctx)

    // Query for the next pending message.
    // FOR UPDATE SKIP LOCKED ensures that if multiple relay instances are
    // running, they won't try to process the same message simultaneously.
    rows, err := tx.Query(ctx, `
        SELECT id, topic, message
        FROM outbox
        WHERE state = 'pending'
        ORDER BY created_at
        LIMIT 1
        FOR UPDATE SKIP LOCKED
    `)
    if err != nil {
        return err
    }

    var msg OutboxMessage
    found := false
    if rows.Next() {
        if err := rows.Scan(&msg.ID, &msg.Topic, &msg.Message); err != nil {
            rows.Close()
            return err
        }
        found = true
    }
    // Close the row set before issuing further queries on this connection.
    rows.Close()
    if err := rows.Err(); err != nil {
        return err
    }
    if !found {
        // No pending messages — nothing to do.
        return nil
    }

    log.Printf("Publishing message %s to topic %s", msg.ID, msg.Topic)

    // Publish the message to the Pub/Sub topic and wait for confirmation.
    result := pubsubClient.Topic(msg.Topic).Publish(ctx, &pubsub.Message{
        Data: msg.Message,
    })
    if _, err = result.Get(ctx); err != nil {
        // Publishing failed. We return the error here without committing,
        // so the transaction rolls back and the message stays 'pending'.
        // The relay will retry it on the next polling interval.
        return err
    }

    // Mark the message as processed now that the broker has confirmed receipt.
    _, err = tx.Exec(ctx,
        "UPDATE outbox SET state = 'processed', processed_at = now() WHERE id = $1",
        msg.ID,
    )
    if err != nil {
        return err
    }
    log.Printf("Marked message %s as processed", msg.ID)

    // Commit the transaction: the state update becomes permanent.
    return tx.Commit(ctx)
}

func main() {
    ctx := context.Background()

    pool, err := pgxpool.New(ctx, os.Getenv("DATABASE_URL"))
    if err != nil {
        log.Fatalf("Unable to connect to database: %v", err)
    }
    defer pool.Close()

    pubsubClient, err := pubsub.NewClient(ctx, os.Getenv("GOOGLE_CLOUD_PROJECT"))
    if err != nil {
        log.Fatalf("Unable to create Pub/Sub client: %v", err)
    }
    defer pubsubClient.Close()

    // Poll the outbox table every second.
    // Adjust the interval based on your latency requirements.
    ticker := time.NewTicker(1 * time.Second)
    defer ticker.Stop()

    for range ticker.C {
        if err := processOutboxMessages(ctx, pool, pubsubClient); err != nil {
            log.Printf("Error processing outbox: %v", err)
        }
    }
}
The polling interval (1 second in this example) controls the maximum delay between when an event is written to the outbox and when it is published to the broker. For most use cases, 1-5 seconds is perfectly acceptable. If you need lower latency, you can reduce the interval, or consider using PostgreSQL's LISTEN/NOTIFY feature to wake the relay as soon as a new row is inserted.
Why can messages be delivered more than once?
You may be wondering: doesn’t the outbox pattern guarantee exactly-once delivery?
It doesn’t. It guarantees at-least-once delivery. Here is the edge case:
The relay successfully publishes the message to Pub/Sub.
Before it can update the outbox row to 'processed', the relay process crashes.
Upon restart, the relay sees that the message is still 'pending' and republishes it.
This is a rare but possible scenario. The standard way to handle it is to design your message consumers to be idempotent, meaning they can safely receive and process the same message multiple times without causing incorrect behavior.
Common strategies for handling duplicates include:
Using the message id as a deduplication key, and checking whether you’ve already processed it before acting.
Making your operations inherently idempotent: for example, using INSERT ... ON CONFLICT DO NOTHING instead of a plain INSERT.
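The deduplication-key idea can be sketched as follows. This example uses an in-memory set so it is self-contained; in a real consumer you would record processed IDs in a database table (for example with INSERT ... ON CONFLICT DO NOTHING) in the same transaction as the side effect, so the record survives restarts. The Consumer type and its Handle method are illustrative names, not part of any library:

```go
package main

import "fmt"

// Consumer remembers which message IDs it has already handled. In production
// this set would live in the database, not in memory, so it survives restarts.
type Consumer struct {
	processed map[string]bool
	handled   int
}

// Handle applies the side effect only the first time a message ID is seen,
// so redelivery by the relay is harmless.
func (c *Consumer) Handle(messageID, payload string) {
	if c.processed[messageID] {
		fmt.Printf("duplicate %s ignored\n", messageID)
		return
	}
	c.processed[messageID] = true
	c.handled++
	fmt.Printf("processed %s: %s\n", messageID, payload)
}

func main() {
	c := &Consumer{processed: map[string]bool{}}
	// The relay redelivers the same outbox message (at-least-once delivery).
	c.Handle("msg-1", `{"product":"Super Widget"}`)
	c.Handle("msg-1", `{"product":"Super Widget"}`)
	fmt.Println("side effects applied:", c.handled)
}
```

Despite two deliveries, the side effect runs exactly once, which is what makes at-least-once delivery safe for this consumer.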
Alternative: PostgreSQL logical replication
The polling method described above is simple and works well, but it has two drawbacks: it introduces some latency (up to a polling interval), and it issues database queries even when there is nothing to process.
For high-throughput systems where these trade-offs are important, PostgreSQL offers a more advanced alternative: Logical replication through Write Ahead Log (WAL).
Every change made to a PostgreSQL database is first written to WAL – an append-only log used for crash recovery and replication. With logical replication, you can subscribe to changes to specific tables and receive them as a stream in near real time.
Instead of your relay asking “are there any new messages?” on a schedule, PostgreSQL notifies your relay the moment a new row is inserted into the outbox table.
This approach has low latency and is more resource efficient for high-volume workloads. The trade-off is added implementation complexity: you need to manage a replication slot in PostgreSQL and handle the WAL stream properly.
In Go, you can use the pglogrepl library to interact with PostgreSQL’s logical replication protocol.
For more details on how PostgreSQL handles the WAL and change data capture, see the official write-ahead logging documentation.

Conclusion
The outbox pattern solves a fundamental problem in distributed systems: how do you write to a database and publish a message to a broker in a reliably consistent way?
The key idea is to use your database as the source of truth for both business data and pending messages. By writing to the outbox table in the same transaction as your business data, you get atomicity guarantees from the database itself: no distributed transaction protocol is required.
Here’s a quick summary of the key concepts:
The outbox table stores pending events as part of your regular database schema.
A database transaction encloses both the business write and the outbox write, making them atomic.
The message relay is a background process that reads from the outbox table and publishes to the broker.
At-least-once delivery means your consumers must be idempotent.
FOR UPDATE SKIP LOCKED allows multiple relay instances to run safely in parallel.
Logical replication is an advanced alternative that avoids polling for high-throughput systems.
The pattern is simple in concept, but there are several ways to implement it depending on your scale and infrastructure. The polling approach shown in this tutorial is a solid starting point for most applications.