SQLite is a remarkable database. It powers more applications than any other database engine in the world. For prototyping, it is unbeatable: zero configuration, a single file, and full SQL support. We started 0fee.dev with SQLite because it let us build at maximum speed with zero infrastructure overhead.
But SQLite has a fundamental limitation that makes it unsuitable for a production payment platform: it allows only one writer at a time. When concurrent API requests try to insert transactions, update statuses, and record webhook deliveries simultaneously, they serialize. Under load, this single-writer bottleneck becomes a wall.
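The bottleneck is easy to reproduce with nothing but the standard library. A minimal sketch (file path and schema are illustrative): while one connection holds the write lock, a second writer fails immediately instead of making progress.

```python
# Minimal demo of SQLite's single-writer limit: while connection A holds
# a write transaction, connection B is locked out entirely.
import sqlite3
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

a = sqlite3.connect(path, timeout=0)  # timeout=0: fail fast, don't wait
b = sqlite3.connect(path, timeout=0)
a.execute("CREATE TABLE tx (id INTEGER PRIMARY KEY, amount REAL)")
a.commit()

a.execute("BEGIN IMMEDIATE")  # connection A takes the write lock
a.execute("INSERT INTO tx (amount) VALUES (9.99)")

try:
    b.execute("INSERT INTO tx (amount) VALUES (1.50)")  # second writer
    blocked = False
except sqlite3.OperationalError:  # "database is locked"
    blocked = True

a.commit()
print(blocked)  # → True
```

In a real API server, `timeout` is nonzero, so concurrent writers queue instead of erroring -- but they still execute one at a time.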
In Session 081, we migrated 0fee.dev from SQLite to PostgreSQL. 39 SQLAlchemy ORM models. 1,204 rows of data across 39 tables. Every raw SQL query converted to ORM. And a set of bugs that only surfaced because of the migration.
## Why the Migration Was Necessary
The WAL race conditions documented in article 059 were the immediate trigger. But the architectural reasons ran deeper:
| Limitation | SQLite Impact | PostgreSQL Solution |
|---|---|---|
| Single writer | Serialized writes under concurrency | MVCC allows concurrent writers |
| Connection pooling | Not supported | Client-side pooling via SQLAlchemy's engine |
| Type enforcement | Flexible types (stores anything) | Strict types (catches bugs) |
| JSON operations | Limited JSON functions | Full JSONB with indexing |
| Full-text search | Basic FTS5 | Advanced tsvector/tsquery |
| Replication | Not supported | Streaming replication |
| Concurrent reads during writes | WAL stale reads possible | Snapshot isolation |
The concurrent write limitation was the dealbreaker. A payment platform processes payments, records webhooks, updates statuses, and generates invoices simultaneously. Serializing those writes was a ticking time bomb.
## The 39 ORM Models
The migration required converting every table to a proper SQLAlchemy ORM model. Many routes had been using raw SQL -- a pattern that worked with SQLite but needed to change regardless of the database backend (for SQL injection prevention among other reasons).
Using parallel Claude agents, we converted all 39 tables in a single session:
```python
# Example: Transaction model (one of 39)
from sqlalchemy import (
    Column, String, Float, DateTime, ForeignKey,
    JSON, Boolean, Integer, func
)
from sqlalchemy.orm import relationship

from models.base import Base


class Transaction(Base):
    __tablename__ = "transactions"

    id = Column(String, primary_key=True)
    app_id = Column(String, ForeignKey("apps.id", ondelete="RESTRICT"), nullable=False)
    user_id = Column(String, ForeignKey("users.id"), nullable=False)

    # Amount fields (post-currency update)
    source_amount = Column(Float, nullable=False)
    source_currency = Column(String(3), nullable=False)
    destination_amount = Column(Float, nullable=True)
    destination_currency = Column(String(3), nullable=True)

    # Payment details
    provider = Column(String, nullable=True)
    payment_method = Column(String, nullable=True)
    status = Column(String, default="pending")
    reference = Column(String, nullable=False)

    # Provider data
    provider_transaction_id = Column(String, nullable=True)
    meta = Column(JSON, nullable=True)  # Renamed from 'metadata'

    # Timestamps
    created_at = Column(DateTime, server_default=func.now())
    updated_at = Column(DateTime, onupdate=func.now())
    completed_at = Column(DateTime, nullable=True)

    # Relationships
    app = relationship("App", back_populates="transactions")
    user = relationship("User", back_populates="transactions")
    events = relationship("PaymentEvent", back_populates="transaction")
    webhook_deliveries = relationship("WebhookDelivery", back_populates="transaction")
```

The complete model inventory:
| Category | Models | Count |
|---|---|---|
| Core | User, App, AppProvider, ApiKey | 4 |
| Payments | Transaction, PaymentEvent, PaymentLink, Invoice, InvoiceItem | 5 |
| Providers | Provider, PaymentMethod, ProviderHealth, ProviderConfig | 4 |
| Webhooks | Webhook, WebhookDelivery, WebhookEvent | 3 |
| Billing | BillingCycle, Fee, Coupon, CouponUsage, Wallet, WalletTransaction | 6 |
| Auth | OAuthState, RefreshToken, Session | 3 |
| Admin | AuditLog, AdminAction | 2 |
| Features | FeatureRequest, FeatureUpvote, FeatureComment, FeatureSubscriber | 4 |
| i18n | Translation, Language | 2 |
| Config | Currency, Country, PayinMethod, BrandingConfig | 4 |
| SDK | SdkDownload | 1 |
| AI | AiConversation | 1 |
| **Total** | | **39** |
## Converting Raw SQL to ORM
The most tedious part of the migration was converting raw SQL queries to SQLAlchemy ORM queries. Here is a representative example:
```python
# BEFORE: raw SQL built by string concatenation
async def get_user_transactions(user_id: str, status: str = None):
    query = f"""
        SELECT t.*, a.name AS app_name
        FROM transactions t
        JOIN apps a ON t.app_id = a.id
        WHERE t.user_id = ?
    """
    params = [user_id]
    if status:
        query += " AND t.status = ?"
        params.append(status)
    query += " ORDER BY t.created_at DESC"
    result = await db.execute(text(query), params)
    return result.fetchall()
```

```python
# AFTER: SQLAlchemy ORM
async def get_user_transactions(user_id: str, status: str = None):
    query = (
        select(Transaction)
        .options(joinedload(Transaction.app))
        .where(Transaction.user_id == user_id)
    )
    if status:
        query = query.where(Transaction.status == status)
    query = query.order_by(Transaction.created_at.desc())
    result = await db.scalars(query)
    return result.all()
```

The ORM version is type-safe, immune to SQL injection, and produces equivalent SQL. But the conversion was not always straightforward -- complex queries with multiple JOINs, GROUP BYs, and subqueries required careful translation.
## Data Migration: 1,204 Rows
Migrating the data itself was handled by a Python script that read from SQLite and wrote to PostgreSQL:
```python
# scripts/migrate_sqlite_to_postgres.py
import sqlite3
import asyncio

from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession

SQLITE_PATH = "data/0fee.db"
POSTGRES_URL = "postgresql+asyncpg://user:pass@localhost:5432/zerofee"

# Table migration order (respects foreign keys)
MIGRATION_ORDER = [
    "users",
    "apps",
    "providers",
    "payment_methods",
    "app_providers",
    "api_keys",
    "transactions",
    "payment_events",
    "webhooks",
    "webhook_deliveries",
    # ... remaining 29 tables
]


async def migrate_table(table_name: str, sqlite_conn, pg_session: AsyncSession):
    """Migrate a single table from SQLite to PostgreSQL."""
    cursor = sqlite_conn.execute(f"SELECT * FROM {table_name}")
    columns = [desc[0] for desc in cursor.description]
    rows = cursor.fetchall()

    if not rows:
        print(f"  {table_name}: 0 rows (empty)")
        return 0

    # Handle column rename: metadata -> meta
    if "metadata" in columns:
        idx = columns.index("metadata")
        columns[idx] = "meta"

    # Insert each row through the corresponding ORM model
    model = get_model_for_table(table_name)
    for row in rows:
        data = dict(zip(columns, row))
        obj = model(**data)
        pg_session.add(obj)
    await pg_session.flush()

    print(f"  {table_name}: {len(rows)} rows migrated")
    return len(rows)


async def run_migration():
    sqlite_conn = sqlite3.connect(SQLITE_PATH)
    pg_engine = create_async_engine(POSTGRES_URL)

    async with AsyncSession(pg_engine) as session:
        total = 0
        for table in MIGRATION_ORDER:
            count = await migrate_table(table, sqlite_conn, session)
            total += count
        await session.commit()

    print(f"\nTotal: {total} rows migrated across {len(MIGRATION_ORDER)} tables")
    sqlite_conn.close()
```
The migration order was critical. Foreign key constraints in PostgreSQL are strict -- you cannot insert a transaction that references a non-existent app. The script processes tables in dependency order: users first, then apps, then transactions.
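A hand-maintained order like `MIGRATION_ORDER` can also be derived from the foreign-key graph with a topological sort. A sketch using the standard library's `graphlib` (the dependency map here is a hypothetical four-table subset, not the real 39-table graph):

```python
# Derive a safe migration order from FK dependencies via topological sort.
from graphlib import TopologicalSorter

# table -> set of tables it references via foreign keys (illustrative subset)
DEPS = {
    "users": set(),
    "apps": {"users"},
    "transactions": {"apps", "users"},
    "payment_events": {"transactions"},
}

# static_order() emits each table only after everything it references
order = list(TopologicalSorter(DEPS).static_order())
print(order)
```

Referenced tables always come out before the tables that reference them, so inserts never hit a missing foreign key.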
Final migration statistics:

```
users: 23 rows
apps: 31 rows
providers: 53 rows
payment_methods: 117 rows
app_providers: 47 rows
api_keys: 38 rows
transactions: 312 rows
payment_events: 487 rows
webhooks: 19 rows
webhook_deliveries: 34 rows
... (29 more tables)

Total: 1,204 rows across 39 tables
```

## Bugs Surfaced by the Migration
### Branding Persistence
The branding configuration (colors, logos, custom text for checkout pages) was stored in a way that assumed SQLite's flexible typing. When moved to PostgreSQL's strict typing, several fields broke:
```python
# SQLite accepted this (type affinity let the integer through)
branding.primary_color = 0xFF6B35  # an integer, not a string

# PostgreSQL rejected it:
# column 'primary_color' expects VARCHAR, got INTEGER
```

The fix was to ensure all branding values were explicitly cast to strings before storage.
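A sketch of what that cast can look like (the helper and field names are hypothetical, not the actual 0fee.dev code):

```python
# Hypothetical normalizer: coerce branding values to strings before they
# reach a VARCHAR column, rendering integer colors as hex strings.
def normalize_branding(values: dict) -> dict:
    out = {}
    for key, value in values.items():
        if isinstance(value, int):      # e.g. a color given as 0xFF6B35
            out[key] = f"#{value:06X}"  # render as a hex color string
        else:
            out[key] = str(value)
    return out

print(normalize_branding({"primary_color": 0xFF6B35, "title": "Checkout"}))
# → {'primary_color': '#FF6B35', 'title': 'Checkout'}
```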
### PayinMethod Float Types
Payment method configuration included minimum and maximum amounts stored as integers in SQLite. PostgreSQL's stricter type enforcement revealed that some values should have been floats:
```python
# BEFORE: integer amounts caused truncation
class PayinMethod(Base):
    min_amount = Column(Integer)  # 0.50 USD stored as 0
    max_amount = Column(Integer)  # 999.99 stored as 999
```

```python
# AFTER: float amounts preserve precision
class PayinMethod(Base):
    min_amount = Column(Float, default=0.0)
    max_amount = Column(Float, nullable=True)  # None = no limit
```

This was a subtle bug that had been silently corrupting data in SQLite. Minimum amounts of $0.50 were being stored as $0, effectively disabling minimum amount validation.
### The `metadata` to `meta` Rename
As covered in the SQLAdmin article, the metadata column name conflicts with SQLAlchemy's internal MetaData attribute. The migration script handled this rename, but it required updating every query, every API response, every SDK, and every piece of documentation that referenced the field.
## Connection Pooling
PostgreSQL connections are expensive to create. Unlike SQLite (which opens a file), each PostgreSQL connection involves TCP handshake, authentication, and protocol negotiation. Connection pooling reuses established connections:
```python
# database.py
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
from sqlalchemy.orm import sessionmaker

engine = create_async_engine(
    DATABASE_URL,
    pool_size=10,        # keep up to 10 persistent connections
    max_overflow=20,     # allow up to 20 additional connections under load
    pool_timeout=30,     # wait 30s for a connection before raising
    pool_recycle=3600,   # recycle connections after 1 hour
    pool_pre_ping=True,  # check connection health before use
)

async_session = sessionmaker(
    engine, class_=AsyncSession, expire_on_commit=False
)


async def get_db() -> AsyncSession:
    async with async_session() as session:
        yield session  # the context manager closes the session on exit
```

The `pool_pre_ping=True` setting is important for production. It sends a lightweight query (`SELECT 1`) before returning a connection from the pool, so stale connections (killed by network timeouts or server restarts) are detected and replaced.
## Async Driver: asyncpg
We used asyncpg as the async PostgreSQL driver, which is significantly faster than psycopg2 for async workloads:
```python
# Connection URL format for asyncpg
DATABASE_URL = "postgresql+asyncpg://zerofee:password@localhost:5432/zerofee"

# vs psycopg2 (synchronous)
# DATABASE_URL = "postgresql+psycopg2://zerofee:password@localhost:5432/zerofee"
```

asyncpg speaks PostgreSQL's binary protocol and is implemented largely in Cython, making it both fast and fully asyncio-native.
## Testing the Migration
We validated the migration with three checks:
```python
# 1. Row count verification
async def verify_row_counts():
    sqlite_counts = get_sqlite_counts()
    pg_counts = await get_postgres_counts()
    for table in MIGRATION_ORDER:
        assert sqlite_counts[table] == pg_counts[table], \
            f"Row count mismatch in {table}: SQLite={sqlite_counts[table]}, PG={pg_counts[table]}"


# 2. Sample data verification (spot-check 10 rows per table)
async def verify_sample_data():
    for table in MIGRATION_ORDER:
        sqlite_rows = get_sqlite_sample(table, 10)
        pg_rows = await get_pg_sample(table, 10)
        for s_row, p_row in zip(sqlite_rows, pg_rows):
            assert normalize(s_row) == normalize(p_row), \
                f"Data mismatch in {table}"


# 3. API endpoint smoke tests
async def smoke_test_endpoints():
    endpoints = [
        "/api/health",
        "/api/providers",
        "/api/payment-methods",
    ]
    for endpoint in endpoints:
        response = await client.get(endpoint)
        assert response.status_code == 200, f"Failed: {endpoint}"
```

## What We Learned
Start with PostgreSQL if you are building a payment platform. SQLite saved us time in the first few sessions but cost us a full session for the migration plus hours debugging WAL issues. PostgreSQL's strict typing would have caught bugs earlier. Its concurrent write support would have eliminated an entire class of race conditions.
ORM models are worth the upfront investment. Converting 39 tables from raw SQL to ORM was tedious but transformative. It eliminated SQL injection risks, made the codebase type-safe, and enabled SQLAdmin integration. If we had started with ORM, the migration would have been a connection string change.
Strict typing is a feature, not a limitation. SQLite's flexible typing hid bugs. Integer columns silently accepted floats, string columns silently accepted integers. PostgreSQL's refusal to accept type mismatches forced us to fix data integrity issues we did not know we had.
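That flexibility is easy to demonstrate with the stdlib `sqlite3` module: a column declared `INTEGER` accepts whatever it is given.

```python
# SQLite type affinity: an INTEGER column silently accepts a float and
# even a string -- values PostgreSQL would reject at insert time.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE limits (min_amount INTEGER)")
conn.execute("INSERT INTO limits VALUES (0.5)")      # float, stored as REAL
conn.execute("INSERT INTO limits VALUES ('oops')")   # string, stored as TEXT
print(conn.execute("SELECT min_amount FROM limits").fetchall())
# → [(0.5,), ('oops',)]
```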
Test the migration with row counts, spot checks, and smoke tests. Trust, but verify. A migration that completes without errors is not necessarily correct. Verify row counts match, sample data matches, and API endpoints return correct results.
This article is part of the "How We Built 0fee.dev" series. 0fee.dev is a payment orchestrator covering 53+ providers across 200+ countries, built by Juste A. GNIMAVO and Claude from Abidjan with zero human engineers. Follow the series for the complete build story.