
What No Human Engineer Would Do: 42 Files in 45 Minutes

Analyzing Session 001 of 0fee.dev: 42 files and 7,900 lines in 45 minutes. What AI does differently from human engineers. By Juste A. Gnimavo.

Thales & Claude | March 25, 2026 | 8 min read

Tags: ai-development, productivity, session-001, code-generation, ceo-ai-cto

Session 001 of 0fee.dev produced 42 files containing 7,900 lines of production-grade Python code in approximately 45 minutes. A complete FastAPI backend with SQLAlchemy models, provider adapters, a routing engine, middleware stack, encryption service, and full project structure.

No human engineering team writes 7,900 lines of production code in under an hour. Not because human engineers are slow -- they are not. But because human engineering involves activities that AI development does not: context switching, architectural debates, code review cycles, meeting interruptions, and the cognitive overhead of holding a 42-file project structure in working memory.

This article analyzes what happened in Session 001, what makes it impossible for human teams, and what the CEO+AI CTO model looks like in practice.

What Session 001 Produced

The session generated a complete backend architecture:

backend/
    main.py                          # FastAPI application entry point
    config.py                        # Environment and configuration
    database.py                      # SQLAlchemy engine and session
    models/
        __init__.py
        user.py                      # User model
        app.py                       # Application model
        transaction.py               # Transaction model
        provider.py                  # Provider model
        payment_method.py            # Payment method model
        webhook.py                   # Webhook configuration model
        api_key.py                   # API key model
    routes/
        __init__.py
        auth.py                      # Authentication endpoints
        apps.py                      # Application CRUD
        payments.py                  # Payment creation and status
        transactions.py              # Transaction listing and detail
        providers.py                 # Provider listing
        webhooks.py                  # Webhook CRUD
    providers/
        __init__.py
        base.py                      # Abstract base provider
        stripe_adapter.py            # Stripe integration
        paypal_adapter.py            # PayPal integration
        hub2_adapter.py              # Hub2 integration
        pawapay_adapter.py           # PawaPay integration
        test_adapter.py              # Test/sandbox provider
    services/
        __init__.py
        routing.py                   # Payment routing engine
        encryption.py                # Credential encryption
        billing.py                   # Fee calculation
        webhook_delivery.py          # Webhook sending
    middleware/
        __init__.py
        auth.py                      # JWT authentication
        rate_limit.py                # Rate limiting
        cors.py                      # CORS configuration
        logging.py                   # Request logging
    utils/
        __init__.py
        ids.py                       # ID generation
        currency.py                  # Currency utilities
    requirements.txt                 # Python dependencies

42 files. Every file was syntactically correct. Every import resolved. The application started without errors. The API endpoints returned valid responses.
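The individual files are not reproduced in this article. As one illustration of the kind of utility module the session produced, here is a minimal sketch of what the major-unit conversions in utils/currency.py might look like; the function names and the currency table are assumptions for illustration, not the session's actual code:

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical table of minor-unit exponents per ISO 4217
# (USD has 2 decimal places, XOF and JPY have 0).
MINOR_UNIT_EXPONENT = {"USD": 2, "EUR": 2, "XOF": 0, "JPY": 0}

def to_minor_units(amount: Decimal, currency: str) -> int:
    """Convert a major-unit amount (e.g. 10.50 USD) to minor units (1050 cents)."""
    exponent = MINOR_UNIT_EXPONENT[currency]
    quantized = amount.quantize(Decimal(1).scaleb(-exponent), rounding=ROUND_HALF_UP)
    return int(quantized.scaleb(exponent))

def to_major_units(minor: int, currency: str) -> Decimal:
    """Convert minor units back to a major-unit Decimal."""
    exponent = MINOR_UNIT_EXPONENT[currency]
    return Decimal(minor).scaleb(-exponent)

print(to_minor_units(Decimal("10.50"), "USD"))  # 1050
print(to_minor_units(Decimal("500"), "XOF"))    # 500
```

Keeping a single conversion boundary like this is what makes a "store amounts in major units" convention enforceable across 42 files.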

What Human Engineers Do Differently

The Architecture Discussion (2-4 hours)

Before a human team writes the first line of code for a payment platform, they have an architecture discussion. Should we use Django or FastAPI? PostgreSQL or SQLite for development? What ORM? What authentication strategy? How should providers be abstracted?

In a typical team, this discussion involves:

- A tech lead presenting 2-3 architectural options
- Senior engineers debating trade-offs
- A product manager asking about timeline implications
- A decision that may take days if there is disagreement

In Session 001, Thales said "payment orchestrator, Python, FastAPI" and Claude produced the architecture. The "debate" was a 30-second prompt, not a multi-day discussion.

The Setup Phase (4-8 hours)

A human team starting a new Python project:

1. Create a virtual environment
2. Install FastAPI, SQLAlchemy, and dependencies
3. Configure the project structure (where do models go? routes? services?)
4. Set up the database connection
5. Write the first model
6. Write the first route
7. Test that it works
8. Commit to git

Each step involves decisions, documentation lookups, and occasional debugging. The setup phase for a 42-file project typically takes a senior engineer 4-8 hours, and that is without writing any business logic.

Claude generates the entire structure -- including configuration, dependencies, and project layout -- in a single response.

The Context Switching Cost (50% overhead)

Human engineers lose context when they switch between files. Writing models/transaction.py requires thinking about the transaction data model. Switching to routes/payments.py requires thinking about the API contract. Switching to providers/stripe_adapter.py requires thinking about Stripe's API.

Research on context switching suggests that it costs 15-25 minutes to fully re-engage after switching tasks. In a 42-file project, if you switch context between each file, the overhead is enormous.

Claude does not context-switch. It generates all 42 files within a single context window. The transaction model, the payment route, and the Stripe adapter are all "in memory" simultaneously. The provider adapter references the exact fields from the transaction model because both exist in the same generation context.

The Code Review Loop (1-3 days)

In a team, Session 001's output would go through code review. A senior engineer would review 42 files, leave comments, request changes, and the author would address them. This loop typically takes 1-3 days for a PR of this size.

In the CEO+AI CTO model, Thales reviews the output by running the application and testing endpoints. Issues are addressed in the same session or the next. The review is functional (does it work?) rather than stylistic (does it follow our conventions?), because the conventions were established by the same AI that wrote the code.

No Meetings

A human team building a payment platform has:

- Daily standups (15 minutes)
- Sprint planning (1-2 hours every 2 weeks)
- Architecture reviews (1-2 hours as needed)
- One-on-ones (30 minutes per team member per week)
- Retrospectives (1 hour every 2 weeks)

For a 5-person team, this is approximately 10-15 hours per week of meetings. Over 80 days, that is 115-170 hours of meetings -- roughly the equivalent of 4 full work weeks.

0fee.dev had zero meetings.

The AI Advantage: No Context Loss

The most significant advantage is not speed. It is context retention within a session.

When Claude generates providers/base.py, it defines an abstract interface:

```python
from abc import ABC, abstractmethod

class BaseProvider(ABC):
    @abstractmethod
    async def create_payment(self, params: PaymentParams) -> PaymentResult:
        pass
```

When it then generates providers/stripe_adapter.py, it implements that exact interface:

```python
class StripeAdapter(BaseProvider):
    async def create_payment(self, params: PaymentParams) -> PaymentResult:
        intent = await stripe.PaymentIntent.create(...)
        return PaymentResult(
            provider_id=intent.id,
            status=self._map_status(intent.status),
        )
```

And when it generates services/routing.py, it calls the interface correctly:

```python
class RoutingEngine:
    async def route_and_process(self, payment_data: dict) -> PaymentResult:
        provider = self.get_best_provider(payment_data)
        result = await provider.create_payment(
            PaymentParams(**payment_data)
        )
        return result
```

All three files are generated with perfect internal consistency because they exist in the same context. A human team would need explicit interface documentation, code reviews, and integration testing to achieve the same consistency.
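That consistency claim can be made concrete. The sketch below fills in everything the three excerpts assume (PaymentParams, PaymentResult, a concrete sandbox adapter, and a minimal routing engine); the class bodies and field names are illustrative assumptions, not the code Session 001 generated:

```python
import asyncio
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Types the excerpts above assume; the fields here are illustrative.
@dataclass
class PaymentParams:
    amount: float
    currency: str

@dataclass
class PaymentResult:
    provider_id: str
    status: str

class BaseProvider(ABC):
    @abstractmethod
    async def create_payment(self, params: PaymentParams) -> PaymentResult: ...

class TestAdapter(BaseProvider):
    """Stand-in for the session's test/sandbox provider."""
    async def create_payment(self, params: PaymentParams) -> PaymentResult:
        return PaymentResult(provider_id="test_123", status="succeeded")

class RoutingEngine:
    def __init__(self, providers: list[BaseProvider]):
        self.providers = providers

    def get_best_provider(self, payment_data: dict) -> BaseProvider:
        # Real routing would weigh fees, currency support, and provider
        # health; this sketch just returns the first provider.
        return self.providers[0]

    async def route_and_process(self, payment_data: dict) -> PaymentResult:
        provider = self.get_best_provider(payment_data)
        return await provider.create_payment(PaymentParams(**payment_data))

engine = RoutingEngine([TestAdapter()])
result = asyncio.run(engine.route_and_process({"amount": 10.0, "currency": "USD"}))
print(result.status)  # succeeded
```

Because the interface, the adapter, and the caller share one definition of `create_payment`, the call chain type-checks end to end; that is the property the single context window gives for free.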

The AI Limitation: No Institutional Memory

But Session 001 also revealed the fundamental limitation: when Session 002 started, Claude had no memory of Session 001. The entire 42-file architecture existed only in the codebase and the session log.

This is why the session log model is essential. Without it, each new session would start from scratch, re-discovering architectural decisions that were already made. The session log serves as Claude's institutional memory -- a role that human engineers perform naturally through their experience on the project.
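The session logs are prose, but each entry functions as a decision record. A hypothetical structured equivalent shows why reading the log can substitute for remembering; every name and entry below is invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical structure; the actual 0fee.dev session logs are prose.
@dataclass(frozen=True)
class DecisionRecord:
    session: int
    decision: str
    rationale: str

log = [
    DecisionRecord(1, "FastAPI over Django", "async-first, minimal boilerplate"),
    DecisionRecord(1, "Adapter pattern for providers", "uniform create_payment interface"),
]

def decisions_for(log: list[DecisionRecord], keyword: str) -> list[str]:
    """A new session 'reads the log' instead of remembering."""
    return [d.decision for d in log
            if keyword.lower() in (d.decision + d.rationale).lower()]

print(decisions_for(log, "provider"))  # ['Adapter pattern for providers']
```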

The practical impact:

| Human Team | CEO+AI CTO |
| --- | --- |
| Engineer remembers the architecture | Claude reads the session log |
| Engineer knows why a decision was made | The session log explains the decision |
| Team has collective memory | No persistent memory; logs are mandatory |
| Knowledge survives employee turnover (partially) | Knowledge survives only in documentation |

The CEO's Role: Decision Velocity

The AI advantage is throughput. The CEO advantage is decision velocity.

In Session 001, Thales made approximately 30 architectural decisions, including:

- Python over Node.js
- FastAPI over Django
- SQLite for development
- JWT for authentication
- Adapter pattern for providers
- Which providers to include
- How to structure routes
- How to handle encryption
- What middleware to include

Each decision was made in seconds. There was no committee, no DACI framework, no "let me think about it and get back to you." The CEO decided; the AI implemented.

This velocity cuts both ways. Good decisions are implemented instantly. Bad decisions are also implemented instantly. The December 12 pricing tier (Session 013) was implemented and deleted within two sessions because the CEO recognized the mistake quickly. In a committee-driven organization, the mistake might have taken weeks to recognize and months to reverse.

Would We Do It Differently?

If we started 0fee.dev over, Session 001 would look similar but with three changes:

  1. Start with PostgreSQL. The SQLite decision saved 10 minutes of setup and cost 15 hours of WAL debugging. Not worth it.
  2. Define currency conventions in the session log. "All amounts stored in major units (dollars, not cents)" -- this one sentence would have prevented the most persistent bug category.
  3. Write the BaseProvider interface test first. Not a full test suite, but a contract test that validates every provider adapter implements the interface correctly. This would have caught the get_provider() vs. get_instance() confusion earlier.
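A contract test of the kind point 3 describes can be very small. The sketch below checks that an adapter class actually overrides every abstract method on BaseProvider; the adapter classes and the check itself are hypothetical, not the test 0fee.dev eventually wrote:

```python
from abc import ABC, abstractmethod

class BaseProvider(ABC):
    @abstractmethod
    async def create_payment(self, params): ...

class GoodAdapter(BaseProvider):
    async def create_payment(self, params):
        return {"status": "ok"}

class BadAdapter(BaseProvider):
    # Forgot to implement create_payment; the contract test should flag it.
    pass

def check_contract(adapter_cls) -> bool:
    """True if the adapter fully implements the BaseProvider contract."""
    # Any method still listed in __abstractmethods__ was never overridden.
    return (
        issubclass(adapter_cls, BaseProvider)
        and not getattr(adapter_cls, "__abstractmethods__", None)
    )

print(check_contract(GoodAdapter))  # True
print(check_contract(BadAdapter))   # False
```

Run against the provider registry at import time, a check like this fails fast on a half-implemented adapter instead of at the first live payment.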

Everything else -- the project structure, the adapter pattern, the middleware stack, the routing engine -- held up across 85 subsequent sessions. The architecture established in 45 minutes was sound. The implementation details needed iteration, but the foundation was solid.

That is what no human engineer would do: produce a sound 42-file architecture in 45 minutes. Not because humans cannot design well, but because the design-implement-review loop in human teams is inherently serial. In the CEO+AI CTO model, that loop is compressed to its theoretical minimum: decide, generate, test.


This article is part of the "How We Built 0fee.dev" series. 0fee.dev is a payment orchestrator covering 53+ providers across 200+ countries, built by Juste A. GNIMAVO and Claude from Abidjan with zero human engineers. Follow the series for the complete build story.
