
Architecture Decisions: Python, FastAPI, SolidJS, SQLite

The architecture behind 0fee.dev: why we chose Python FastAPI, SolidJS, SQLite, DragonflyDB, and Celery. By Juste A. Gnimavo and Claude.

Thales & Claude | March 25, 2026 · 11 min read · 0fee

Tags: architecture, python, fastapi, solidjs, sqlite, dragonfly, celery

Every technology choice in 0fee.dev was deliberate. We are a team of two -- one CEO and one AI CTO -- building a payment orchestration platform that must be reliable enough to handle financial transactions, fast enough to compete with established players, and simple enough to maintain without a human engineering team. This article explains every major architecture decision, what we considered, and why we chose what we did.

The Full Stack

┌──────────────────────────────────────────────────┐
│                     Clients                      │
│   SDKs (TS, Python, PHP, Ruby, Go, Java, C#)     │
│   Dashboard (SolidJS SPA)                        │
│   Checkout Widget (iframe/redirect)              │
│   CLI Tool                                       │
└─────────────────────────┬────────────────────────┘
                          │ HTTPS
┌─────────────────────────▼────────────────────────┐
│               API Gateway (FastAPI)              │
│   Authentication · Rate Limiting · Routing       │
│   90+ endpoints · OpenAPI/Swagger auto-generated │
├──────────────────────────────────────────────────┤
│                  Core Services                   │
│   Payment Engine · Routing Engine · Webhook Mgr  │
│   Provider Adapters (53+) · Reconciliation       │
├──────────────────────────────────────────────────┤
│                   Data Layer                     │
│   SQLite/PostgreSQL · DragonflyDB · Celery/Redis │
└──────────────────────────────────────────────────┘

Why Python + FastAPI

We evaluated four backend frameworks before writing a single line of code:

| Framework | Language | Async | Type Safety | OpenAPI | Ecosystem |
|---|---|---|---|---|---|
| FastAPI | Python | Native async/await | Pydantic models | Auto-generated | Excellent for fintech |
| Express.js | TypeScript | Callback/Promise | Optional (TS) | Manual (Swagger) | Large but fragmented |
| Go (Gin/Fiber) | Go | Goroutines | Compile-time | Manual | Growing |
| Rust (Actix) | Rust | Tokio async | Compile-time | Manual | Small |

The decision: FastAPI

Reason 1: Pydantic models are a payment platform's best friend.

In a payment system, data validation is not optional -- it is critical. A malformed amount, an invalid currency code, or a missing phone number can result in lost money. Pydantic gives us runtime type validation with zero boilerplate:

```python
from pydantic import BaseModel, Field
from decimal import Decimal
from enum import Enum

class Currency(str, Enum):
    USD = "USD"
    EUR = "EUR"
    XOF = "XOF"
    KES = "KES"
    NGN = "NGN"
    GHS = "GHS"
    # ... 35+ more

class CustomerInfo(BaseModel):
    # Simplified here; the real model carries more fields
    phone: str

class PaymentCreate(BaseModel):
    amount: int = Field(gt=0, description="Amount in smallest currency unit")
    currency: Currency
    country: str = Field(pattern=r"^[A-Z]{2}$")
    method: str = Field(pattern=r"^(PAYIN|PAYOUT)_[A-Z]+(_[A-Z]{2})?$")
    customer: CustomerInfo
    metadata: dict[str, str] = Field(default_factory=dict, max_length=20)
    return_url: str = Field(pattern=r"^https://")
    cancel_url: str | None = None

    class Config:
        json_schema_extra = {
            "example": {
                "amount": 5000,
                "currency": "XOF",
                "country": "CI",
                "method": "PAYIN_ORANGE_CI",
                "customer": {"phone": "+2250700112233"},
                "return_url": "https://yourapp.com/callback"
            }
        }
```

Every incoming request is validated before it reaches business logic. Invalid data returns a structured 422 error with field-level details. This alone prevents an entire class of bugs.
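To make those field-level errors concrete, here is a minimal sketch using a trimmed-down model (not the full PaymentCreate): Pydantic collects one entry per failing field, and FastAPI wraps that same list in its 422 response body.

```python
from pydantic import BaseModel, Field, ValidationError

class PaymentSketch(BaseModel):
    amount: int = Field(gt=0)                    # smallest currency unit
    country: str = Field(pattern=r"^[A-Z]{2}$")  # ISO 3166-1 alpha-2

try:
    PaymentSketch(amount=-5, country="ivory coast")
except ValidationError as exc:
    # One entry per failing field: location, machine-readable type, message
    for err in exc.errors():
        print(err["loc"], err["type"], err["msg"])
```

Both violations are reported in a single pass, so a client can fix its request in one round trip instead of discovering errors one at a time.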

Reason 2: Auto-generated OpenAPI documentation.

FastAPI generates a complete OpenAPI 3.1 specification from our route definitions and Pydantic models. This specification drives:

  • Interactive Swagger UI at /docs for testing
  • ReDoc documentation at /redoc for reading
  • SDK code generation for all seven languages
  • Postman collection generation for manual testing

We never write API documentation manually. The code is the documentation.

Reason 3: Async support for provider calls.

Payment processing involves calling external provider APIs -- operations that can take 200ms to 5 seconds depending on the provider. FastAPI's native async/await support means we can handle thousands of concurrent payment requests without blocking:

```python
@router.post("/payments", response_model=PaymentResponse, status_code=201)
async def create_payment(
    request: PaymentCreate,
    app: Application = Depends(get_current_app),
    db: AsyncSession = Depends(get_db)
):
    # Route to optimal provider (async DB queries)
    provider = await routing_engine.select_provider(
        country=request.country,
        method=request.method,
        currency=request.currency,
        app=app
    )

    # Call provider API (async HTTP)
    result = await provider.adapter.create_payment(
        amount=request.amount,
        currency=request.currency,
        customer=request.customer,
        metadata=request.metadata
    )

    # Persist transaction (async DB write)
    payment = await payment_service.create(db, result, app.id)

    return PaymentResponse.from_orm(payment)
```

Reason 4: Python's fintech ecosystem.

Python has the most mature libraries for financial operations: decimal for precise arithmetic (never use floats for money), pycountry for ISO country/currency validation, phonenumbers for international phone number parsing, cryptography for encryption of provider credentials. We did not have to build any of these from scratch.
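The "never use floats for money" warning is easy to demonstrate; this is standard library behavior, not anything 0fee-specific:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so cents drift
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal does exact base-10 arithmetic, which is what money needs
print(Decimal("0.1") + Decimal("0.2"))  # 0.3

# Storing amounts as integers in the smallest currency unit, as
# PaymentCreate does, sidesteps the problem entirely: 50.00 USD is 5000 cents
print(5000 + 2500)  # 7500
```

This is also why the API accepts `amount` as an integer in the smallest currency unit rather than a decimal string.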

Why SQLite Initially (and the Migration to PostgreSQL)

This is perhaps our most unconventional decision. We started with SQLite for the primary database of a payment platform.

The case for SQLite

| Feature | SQLite | PostgreSQL |
|---|---|---|
| Setup time | Zero (single file) | 15-30 minutes |
| Configuration | None | Tuning required |
| Backup | Copy one file | pg_dump + restore |
| Concurrent reads | Yes (WAL mode) | Yes |
| Writes/second | ~1,000 (WAL mode) | ~10,000+ |
| Deployment | No separate process | Separate server |
| Cost | $0 | $0-50+/month |

For a platform in its first months, SQLite was the pragmatic choice:

  • Zero configuration: no database server to manage, no connection pooling to configure, no authentication to set up.
  • WAL mode: Write-Ahead Logging enables concurrent reads while writes are happening -- essential for a payment system where you are reading transaction status while writing new transactions.
  • Single-file backup: sqlite3 0fee.db ".backup 0fee.db.backup" -- that is the entire backup strategy. (A plain cp is only safe when no writer is active; the .backup command takes a consistent snapshot even under WAL.)
  • Fast reads: SQLite reads are often faster than PostgreSQL for single-machine deployments because there is no network overhead.

```python
# SQLite configuration with WAL mode
from sqlalchemy import event
from sqlalchemy.ext.asyncio import create_async_engine

engine = create_async_engine(
    "sqlite+aiosqlite:///./data/0fee.db",
    connect_args={"check_same_thread": False},
    echo=False
)

# Enable WAL mode for concurrent reads
@event.listens_for(engine.sync_engine, "connect")
def set_sqlite_pragma(dbapi_conn, connection_record):
    cursor = dbapi_conn.cursor()
    cursor.execute("PRAGMA journal_mode=WAL")
    cursor.execute("PRAGMA synchronous=NORMAL")
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.execute("PRAGMA busy_timeout=5000")
    cursor.close()
```

When we outgrew SQLite

SQLite's limitation is write concurrency. With WAL mode, you get one writer at a time. For a payment platform processing increasing transaction volume, this becomes a bottleneck. We migrated to PostgreSQL when:

  • Concurrent write operations started queuing during peak hours.
  • We needed full-text search for the admin transaction explorer.
  • We wanted advisory locks for distributed payment processing.
  • Row-level locking became necessary for idempotency guarantees.

The migration was straightforward because we used SQLAlchemy's ORM layer from day one. Changing the database required updating one connection string and running migrations -- no application code changes.

```python
# PostgreSQL configuration (post-migration)
engine = create_async_engine(
    "postgresql+asyncpg://0fee:password@localhost:5432/0fee",
    pool_size=20,
    max_overflow=10,
    pool_pre_ping=True
)
```

The lesson

Start with the simplest tool that works. SQLite let us ship a working payment platform in weeks. PostgreSQL let us scale it. The abstraction layer (SQLAlchemy) made the transition painless.

Why SolidJS for the Dashboard

The 0fee dashboard is where merchants manage their applications, configure provider credentials, view transactions, and monitor analytics. We evaluated three frontend frameworks:

| Framework | Bundle Size | Reactivity | Learning Curve | Ecosystem |
|---|---|---|---|---|
| SolidJS | ~7 KB | Fine-grained, no virtual DOM | Moderate | Growing |
| React | ~40 KB | Virtual DOM diffing | Low (widespread) | Massive |
| Svelte 5 | ~5 KB (compiled) | Compiler-based | Low | Growing |

The decision: SolidJS

Fine-grained reactivity was the deciding factor. In a payment dashboard, you have tables with hundreds of rows updating in real-time (transaction statuses, webhook deliveries, success rates). SolidJS updates only the specific DOM nodes that change -- no virtual DOM diffing, no re-rendering entire component trees.

```tsx
// SolidJS component: real-time transaction table
import { createSignal, createResource, For } from 'solid-js';

function TransactionTable() {
  const [filter, setFilter] = createSignal({ status: 'all', page: 1 });

  const [transactions] = createResource(filter, async (f) => {
    const res = await fetch(`/api/transactions?status=${f.status}&page=${f.page}`);
    return res.json();
  });

  return (
    <table class="w-full">
      <thead>
        <tr>
          <th>ID</th>
          <th>Amount</th>
          <th>Status</th>
          <th>Provider</th>
          <th>Created</th>
        </tr>
      </thead>
      <tbody>
        <For each={transactions()?.data}>
          {(tx) => (
            <tr>
              <td class="font-mono">{tx.id}</td>
              <td>{formatCurrency(tx.amount, tx.currency)}</td>
              <td><StatusBadge status={tx.status} /></td>
              <td>{tx.provider}</td>
              <td>{formatDate(tx.created_at)}</td>
            </tr>
          )}
        </For>
      </tbody>
    </table>
  );
}
```

The 7 KB bundle size also matters. The dashboard must load fast on African internet connections where latency is higher and bandwidth is lower than in North America or Europe.

Why DragonflyDB for Cache

DragonflyDB is a Redis-compatible in-memory data store that serves as the caching and ephemeral data layer for 0fee. We use it for:

OTP and Session Management

Mobile money OTP codes have a 60-120 second TTL. DragonflyDB handles this natively:

```python
import secrets

# Store OTP with automatic expiry
await cache.set(
    f"otp:{transaction_id}",
    otp_code,
    ex=120  # expires in 120 seconds
)

# Verify OTP (constant-time comparison avoids timing attacks)
stored_otp = await cache.get(f"otp:{transaction_id}")
if stored_otp is None or not secrets.compare_digest(stored_otp, submitted_otp):
    raise InvalidOTPError()
```

Rate Limiting

API rate limits use a fixed-window counter in DragonflyDB (one counter per time window, reset at the boundary):

```python
import time

async def check_rate_limit(api_key: str, limit: int = 100, window: int = 60):
    key = f"rate:{api_key}:{int(time.time()) // window}"
    current = await cache.incr(key)
    if current == 1:
        await cache.expire(key, window)
    if current > limit:
        raise RateLimitExceededError(retry_after=window)
```
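One caveat: a per-window counter resets at window boundaries, so a burst straddling two windows can briefly exceed the limit. A true sliding window tracks individual request timestamps instead. Here is an in-process sketch of the idea (a Redis/DragonflyDB version would typically use a sorted set with ZADD and ZREMRANGEBYSCORE; the class below is illustrative, not 0fee production code):

```python
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests in any trailing `window` seconds."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.hits = deque()  # timestamps of accepted requests, oldest first

    def allow(self, now: float) -> bool:
        # Drop timestamps that have fallen out of the trailing window
        while self.hits and self.hits[0] <= now - self.window:
            self.hits.popleft()
        if len(self.hits) >= self.limit:
            return False
        self.hits.append(now)
        return True

limiter = SlidingWindowLimiter(limit=3, window=60.0)
# Fourth request within 60s is rejected; after the window slides, allowed again
print([limiter.allow(t) for t in (0, 1, 2, 3, 61)])
# [True, True, True, False, True]
```

The trade-off is memory: one entry per request instead of one counter per window, which is why high-volume APIs often accept the fixed-window approximation.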

Idempotency Keys

Payment idempotency is critical -- a network timeout must not result in a double charge. DragonflyDB stores idempotency keys with a 24-hour TTL:

```python
async def check_idempotency(key: str) -> PaymentResponse | None:
    cached = await cache.get(f"idempotency:{key}")
    if cached:
        return PaymentResponse.parse_raw(cached)
    return None

async def set_idempotency(key: str, response: PaymentResponse):
    await cache.set(
        f"idempotency:{key}",
        response.json(),
        ex=86400  # 24 hours
    )
```

Why DragonflyDB over Redis?

DragonflyDB is fully Redis-compatible (same protocol, same commands) but uses a multi-threaded architecture that delivers higher throughput on modern hardware. For our use case, the key advantage is that it runs as a single binary with lower memory overhead than Redis -- important when deploying on a single server.

Why Celery for Background Tasks

Payment processing generates significant background work that cannot block the API response:

| Task | Trigger | SLA |
|---|---|---|
| Webhook delivery | Payment status change | < 5 seconds |
| Webhook retry (exponential backoff) | Previous delivery failed | 5s, 30s, 5m, 30m, 2h, 24h |
| Payment reconciliation | Daily schedule | Once per day |
| Settlement calculation | Provider settlement received | < 1 hour |
| Provider health check | Periodic | Every 60 seconds |
| Credential rotation alerts | Expiry approaching | Daily check |

Celery handles all of this with a Redis-compatible broker (DragonflyDB) and configurable retry policies:

```python
import requests
from celery import Celery
from celery.utils.log import get_task_logger

app = Celery('zerofee', broker='redis://localhost:6379/1')
logger = get_task_logger(__name__)

@app.task(
    bind=True,
    max_retries=6,
    retry_backoff=True,
    retry_backoff_max=86400  # cap any single backoff delay at 24 hours
)
def deliver_webhook(self, webhook_id: str, endpoint_url: str, payload: dict):
    try:
        response = requests.post(
            endpoint_url,
            json=payload,
            headers={
                'Content-Type': 'application/json',
                'X-ZeroFee-Signature': sign_payload(payload),
                'X-ZeroFee-Delivery': webhook_id
            },
            timeout=30
        )
        response.raise_for_status()
        mark_webhook_delivered(webhook_id)

    except requests.RequestException as exc:
        logger.warning(f"Webhook delivery failed: {webhook_id}, attempt {self.request.retries}")
        mark_webhook_failed(webhook_id, str(exc))
        raise self.retry(exc=exc)
```

The exponential backoff with six retries means a failed webhook is retried over a 24-hour period before being marked as permanently failed. This matches industry standards (Stripe retries over 72 hours, PayPal over 24 hours).
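The sign_payload call in the task above is what lets merchants verify the X-ZeroFee-Signature header. The exact scheme is not shown in this article; a common approach, sketched here as an assumption, is HMAC-SHA256 over the serialized body with a per-merchant secret:

```python
import hashlib
import hmac
import json

def sign_payload(payload: dict, secret: bytes) -> str:
    # Serialize deterministically so sender and receiver hash identical bytes
    body = json.dumps(payload, separators=(",", ":"), sort_keys=True).encode()
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_signature(payload: dict, secret: bytes, received: str) -> bool:
    # compare_digest prevents timing attacks on the hex comparison
    return hmac.compare_digest(sign_payload(payload, secret), received)

secret = b"whsec_example_merchant_secret"
event = {"id": "evt_1", "type": "payment.succeeded", "amount": 5000}
sig = sign_payload(event, secret)
print(verify_signature(event, secret, sig))        # True
print(verify_signature(event, secret, "0" * 64))   # False
```

A merchant endpoint that skips verification will happily accept forged webhooks, which is why every provider (Stripe, PayPal, and 0fee alike) documents signature checking as mandatory.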

Deployment: Docker + EasyPanel

The entire 0fee stack deploys as a set of Docker containers managed by EasyPanel.io:

```yaml
# docker-compose.yml (simplified)
services:
  api:
    build: ./backend
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql+asyncpg://...
      - DRAGONFLY_URL=redis://dragonfly:6379
      - ENCRYPTION_KEY=${ENCRYPTION_KEY}
    depends_on:
      - db
      - dragonfly

  worker:
    build: ./backend
    command: celery -A app.worker worker -l info -c 4
    depends_on:
      - dragonfly

  beat:
    build: ./backend
    command: celery -A app.worker beat -l info
    depends_on:
      - dragonfly

  dashboard:
    build: ./frontend
    ports:
      - "3000:3000"

  db:
    image: postgres:17-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data

  dragonfly:
    image: docker.dragonflydb.io/dragonflydb/dragonfly
    ports:
      - "6379:6379"
```

EasyPanel provides the orchestration layer: SSL certificates, domain routing, container health checks, log aggregation, and zero-downtime deployments. It is essentially a self-hosted Heroku that runs on any VPS.

Architecture Decisions We Would Revisit

No architecture is perfect. Here is what we would reconsider:

  1. Starting with SQLite: While pragmatic, we should have started with PostgreSQL. The migration cost was low (thanks to SQLAlchemy), but the time spent on SQLite-specific workarounds (write locking, no advisory locks) was not trivial.
  2. Celery complexity: For a smaller task queue, something like arq (async Redis queue for Python) would have been simpler. Celery's configuration surface area is large.
  3. Monolithic API: As the endpoint count grew past 90, a modular monolith with clear domain boundaries would have been easier to navigate than flat route files.

These are refinements, not regrets. The architecture shipped a working payment platform in 80 days, and every component can be evolved independently.


This article is part of the "How We Built 0fee.dev" series. 0fee.dev is a payment orchestrator covering 53+ providers across 200+ countries, built by Juste A. GNIMAVO and Claude from Abidjan with zero human engineers. Follow the series for the complete build story.
