
151 FlinUI Components Built by AI Agents

How we used parallel AI agents to build 151 FlinUI components in a single extended session -- the orchestration strategy, quality controls, and what it reveals about AI-assisted development.

Thales & Claude | March 25, 2026 | 11 min

On a single day in January 2026, Thales opened a terminal and started what would become the most productive session in ZeroSuite's history. By the time the session ended, 151 FlinUI components had been created -- not prototypes, not stubs, but production-ready components with proper props, variant handling, design token integration, and accessibility attributes.

One hundred fifty-one components. Built by AI agents running in parallel. Orchestrated by a human who understood what his product needed. This article is the story of that sprint -- the orchestration strategy, the quality controls, the failures, and what the session reveals about the future of software development.

The Setup: Why 151 at Once

By Session 037, FlinUI had 70 components. By Session 038, it had 100. But the gap analysis was brutal. A developer trying to build a real SaaS application with FlinUI would hit missing components within minutes. No DataTable with sorting. No DateRangePicker. No OrgChart. No ActivityFeed. No Kanban board. No command palette.

Thales had a spreadsheet. Three columns: Component Name, Category, Priority. One hundred fifty-one rows. Each row represented a component that existed in Material UI, Ant Design, or Chakra UI but did not yet exist in FlinUI.

The decision was not whether to build them. It was how fast.

The Orchestration Strategy

Building 151 components sequentially -- even at the pace of an AI that writes code without pausing to think about syntax -- was never realistic. Each component needs a prop interface, variant handling, responsive behavior, dark mode support, and integration with the design token system. At 15 minutes per component, 151 components would take roughly 38 hours of serial work.

Instead, we used parallel agents in waves:

Wave 1: Specifications (30 minutes)

Before any agent wrote a single line of FLIN code, we generated specifications for all 151 components. Each specification included:

  • Component name
  • Category (basic, layout, data, forms, feedback, navigation, enterprise, pro)
  • Props with types and defaults
  • Variants (visual variations)
  • Size options
  • Behavior description
  • Design token references

A sample specification:

Component: DateRangePicker
Category: forms
Props:
  start_date: time? = none
  end_date: time? = none
  min_date: time? = none
  max_date: time? = none
  placeholder: text = "Select date range"
  format: text = "YYYY-MM-DD"
  disabled: bool = false
  onChange: fn(start, end)
Variants: default, inline
Design tokens: primary, bg-surface, border-color, radius-md, shadow-md

The specifications ensured every agent worked from the same requirements. No ambiguity about what a component should do. No conflicting interpretations of how props should work.
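A spec in this flat format can be parsed into a structured object before being handed to an agent, which makes it easy to validate that every field is present. A minimal Python sketch -- the `ComponentSpec` shape and parser are illustrative, not the actual sprint tooling:

```python
from dataclasses import dataclass

@dataclass
class ComponentSpec:
    """Structured form of one flat-format specification (illustrative)."""
    name: str
    category: str
    props: dict      # prop name -> declaration, e.g. "time? = none"
    variants: list
    tokens: list

def parse_spec(text: str) -> ComponentSpec:
    name, category = "", ""
    props, variants, tokens = {}, [], []
    in_props = False
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("Component:"):
            name, in_props = line.split(":", 1)[1].strip(), False
        elif line.startswith("Category:"):
            category, in_props = line.split(":", 1)[1].strip(), False
        elif line == "Props:":
            in_props = True
        elif line.startswith("Variants:"):
            variants = [v.strip() for v in line.split(":", 1)[1].split(",")]
            in_props = False
        elif line.startswith("Design tokens:"):
            tokens = [t.strip() for t in line.split(":", 1)[1].split(",")]
            in_props = False
        elif in_props and ":" in line:
            # Prop lines look like "start_date: time? = none"
            key, decl = line.split(":", 1)
            props[key.strip()] = decl.strip()
    return ComponentSpec(name, category, props, variants, tokens)

SPEC = """Component: DateRangePicker
Category: forms
Props:
  start_date: time? = none
  end_date: time? = none
  placeholder: text = "Select date range"
  onChange: fn(start, end)
Variants: default, inline
Design tokens: primary, bg-surface, radius-md"""

spec = parse_spec(SPEC)
```

Because every agent consumed the same parsed structure, there was no room for two agents to read the same spec line differently.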

Wave 2: Basic and Layout (6 agents, 45 minutes)

Wave 2's agents built the simplest components -- the ones with few props and no complex interactions:

  • Agent 1: 8 additional basic components (Kbd, Code, Highlight, Mark, Abbr, Sub, Sup, BlockQuote)
  • Agent 2: 5 layout extensions (Section, Header, Footer, Sidebar layout, Holy Grail)
  • Agent 3: 10 typography components (Heading, Paragraph, Caption, Label, Overline, etc.)
  • Agent 4: 8 data display additions (Timeline, Tree, Description, Callout, Metric, etc.)
  • Agent 5: 10 feedback additions (Banner, Snackbar, ProgressToast, LoadingOverlay, etc.)
  • Agent 6: 10 navigation additions (CommandPalette, MegaMenu, ScrollSpy, BackToTop, etc.)

Each agent received its specification list and the design token file. Each agent worked independently. At the end of 45 minutes, 51 new components existed.

Wave 3: Complex Components (6 agents, 90 minutes)

Wave 3 tackled components with complex behavior:

  • Agent 1: DataGrid (with sorting, filtering, pagination, selection, inline editing)
  • Agent 2: Pivot Table (with row/column grouping, aggregation, drill-down)
  • Agent 3: 10 chart components (LineChart, BarChart, PieChart, etc.)
  • Agent 4: 10 form components (DateRangePicker, ColorPicker, FileUpload, etc.)
  • Agent 5: 10 enterprise components (OrgChart, Workflow, AuditLog, etc.)
  • Agent 6: 10 mobile components (SwipeAction, PullToRefresh, BottomSheet, etc.)

These components were more complex, so fewer were assigned per agent. The DataGrid alone is over 400 lines of FLIN code. The Pivot Table required implementing aggregation logic. The chart components required SVG generation with scaling algorithms.

Wave 4: PRO and Specialized (6 agents, 60 minutes)

Wave 4 built the remaining specialized components:

  • Agent 1: 15 AI/Chat components (ChatBubble, TypingIndicator, ModelSelector, etc.)
  • Agent 2: 15 e-commerce components (ProductCard, CartSummary, Checkout, etc.)
  • Agent 3: 10 admin components (PermissionEditor, TenantSwitcher, BulkAction, etc.)
  • Agent 4: 10 content components (ImageGallery, VideoPlayer, AudioPlayer, etc.)
  • Agent 5: 10 notification components (Push, InApp, Badge, Counter, etc.)
  • Agent 6: 10 developer tool components (JSON Viewer, Console, Network Inspector, etc.)

Wave 5: Integration and Testing (2 agents, 30 minutes)

The final wave verified that all components worked together:

  • Agent 1: Created index files for each category, built a demo application using 50+ components on a single page
  • Agent 2: Reviewed all components for design token consistency, prop naming conventions, and accessibility attributes
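The wave structure itself is simple to orchestrate: waves run one after another, while the agents inside a wave run concurrently, so each wave costs only as much wall-clock time as its slowest agent. A Python sketch of that shape, where `run_agent` is a hypothetical stand-in for launching a real agent session:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(agent: str, components: list) -> list:
    # Stand-in for an AI agent session: in the real sprint this would
    # hand the agent its specification list plus the design token file.
    return [(agent, name) for name in components]

def run_wave(assignments: dict) -> list:
    # One worker per agent, all running concurrently; the wave ends
    # when the slowest agent finishes.
    with ThreadPoolExecutor(max_workers=len(assignments)) as pool:
        futures = [pool.submit(run_agent, agent, comps)
                   for agent, comps in assignments.items()]
        built = []
        for f in futures:
            built.extend(f.result())
    return built

# Waves are sequential; a wave's output is available to later waves.
wave2 = run_wave({
    "agent-1": ["Kbd", "Code", "Highlight"],
    "agent-2": ["Section", "Header", "Footer"],
})
```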

Quality Controls

Building fast creates risk. Here is how we maintained quality:

Convention Enforcement

Every component followed the same patterns:

// Every component starts with prop extraction and defaults
label = props.label || ""
variant = props.variant || "default"
size = props.size || "md"
disabled = props.disabled || false

// Every component uses design tokens
<style>
    .component {
        background: var(--flin-bg-surface);
        color: var(--flin-text-primary);
        border: 1px solid var(--flin-border-color);
        border-radius: var(--flin-radius-md);
    }
</style>

// Every component handles the disabled state
<div class="component {if disabled then 'disabled' else ''}">

Agents were instructed to follow these patterns exactly. Deviations were caught in Wave 5's review.
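Checks like these are mechanical enough to automate. A small lint pass in the spirit of Wave 5's review -- the three rules below mirror the patterns above, but the rule set is illustrative, not the actual review script:

```python
def check_conventions(source: str) -> list:
    """Return convention violations for one component's source.
    A real review would check many more rules than these three."""
    issues = []
    # Rule 1: props must be extracted with "||" fallback defaults.
    if "props." not in source or "||" not in source:
        issues.append("missing prop extraction with defaults")
    # Rule 2: styling must go through design tokens.
    if "var(--flin-" not in source:
        issues.append("no design token references")
    # Rule 3: the disabled state must be handled.
    if "props.disabled" not in source:
        issues.append("disabled state not handled")
    return issues

GOOD = """
disabled = props.disabled || false
<style> .c { background: var(--flin-bg-surface); } </style>
"""
BAD = "<style> .c { background: #ff0000; } </style>"
```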

Prop Naming Consistency

All components use the same prop names for the same concepts:

Concept          Prop Name          Type
Visual style     variant            text ("default", "primary", "danger", etc.)
Dimensions       size               text ("sm", "md", "lg", "xl")
Interactability  disabled           bool
Loading state    loading            bool
Click handler    click or onClick   fn
Change handler   onChange           fn
Close handler    onClose            fn
Label text       label              text
Placeholder      placeholder        text
Error state      error              text?

This consistency means a developer who knows how to use <Button variant="primary" disabled={true}> can immediately use <Badge variant="primary">, <Alert variant="primary">, or <Tag variant="primary">. The API is predictable.
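A table like this can double as an enforcement tool: flag any prop whose name is a known synonym of a canonical one. A sketch, where the synonym table is illustrative and far from exhaustive:

```python
# Known synonyms -> canonical prop names (illustrative, not exhaustive).
SYNONYMS = {
    "color": "variant", "kind": "variant", "style": "variant",
    "dimension": "size", "isDisabled": "disabled",
    "handleClick": "onClick", "onPress": "onClick",
}

def nonstandard_props(prop_names: list) -> dict:
    """Map each offending prop name to the canonical name it should use."""
    return {p: SYNONYMS[p] for p in prop_names if p in SYNONYMS}
```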

Design Token Integration

Every component uses CSS custom properties from the design token system. No hardcoded colors, no hardcoded spacing, no hardcoded fonts. This was verified in the Wave 5 review by searching for hex color values in component files:

Hex colors found in components: 0
CSS variable references: 2,847
Design token coverage: 100%

Zero hardcoded colors. Every visual value comes from the token system. This means dark mode, custom themes, and brand customization work for all 151 components without any per-component modifications.
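The audit itself is a few lines of scripting. A sketch of the hex-color scan -- the regexes and report shape are illustrative, not the actual audit script:

```python
import re

HEX = re.compile(r"#[0-9a-fA-F]{3,8}\b")

def token_report(sources: list) -> dict:
    """Tally hardcoded hex colors vs --flin-* references across a list
    of component file contents, as in the Wave 5 audit."""
    hex_count = sum(len(HEX.findall(s)) for s in sources)
    var_count = sum(s.count("var(--flin-") for s in sources)
    clean = sum(1 for s in sources if not HEX.search(s))
    return {
        "hex_colors": hex_count,
        "var_refs": var_count,
        "coverage_pct": 100.0 * clean / len(sources),
    }

report = token_report([
    "color: var(--flin-text-primary); background: var(--flin-bg-surface);",
    "border-color: #ccc;",
])
```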

Accessibility

Each component includes appropriate ARIA attributes:

// Button includes role and aria-disabled
<button role="button" aria-disabled={disabled}>

// Modal includes role and aria-label
<div role="dialog" aria-modal="true" aria-label={props.title}>

// Alert includes role
<div role="alert" aria-live="polite">

// Input includes aria-invalid and aria-describedby
<input aria-invalid={error != none}
       aria-describedby={error != none ? "{id}-error" : none}>

Not every component is perfectly accessible (that would require extensive testing with screen readers, which was not done during the sprint). But the foundation is in place: ARIA roles, labels, and live regions are present. Future accessibility audits will refine them, but the structure does not need to change.
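One way to keep that foundation honest is a static check that each ARIA role carries its expected companion attributes. A simplified sketch -- the role-to-attribute table below is a small assumption, nowhere near a full WAI-ARIA audit:

```python
import re

# Minimal role -> required-attribute table (simplified assumption).
REQUIRED_ARIA = {
    "dialog": ("aria-modal", "aria-label"),
    "alert": ("aria-live",),
}

def missing_aria(source: str) -> list:
    """List (role, attribute) pairs that are required but absent."""
    missing = []
    for role in re.findall(r'role="([\w-]+)"', source):
        for attr in REQUIRED_ARIA.get(role, ()):
            if attr not in source:
                missing.append((role, attr))
    return missing
```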

What Went Wrong

Not everything was perfect. The Wave 5 review caught several issues:

Inconsistent event naming. Some agents used click for click handlers. Others used onClick. The review standardized on click for direct event handlers and onX for callback props (onClose, onChange, onSelect).

Missing size variants. Twelve components forgot to implement the size prop. They rendered at a fixed size regardless of the prop value. The review added the missing size handling.

Duplicate component names. Two agents both created a Metric component -- one in the data display category, one in the enterprise category. They had different props and different purposes. The review renamed the enterprise version to KPIMetric.

Overly complex components. One agent's DataGrid implementation was 800 lines -- far too complex for a single component file. The review split it into a DataGrid orchestrator and several sub-components (DataGridHeader, DataGridBody, DataGridPagination, DataGridFilter).

Missing dark mode testing. Components that used rgba() for transparency effects (shadows, overlays) sometimes produced wrong visual results in dark mode. The review replaced hardcoded rgba values with CSS custom properties that could be theme-adjusted.

These issues affected roughly 15% of the components. The Wave 5 review caught and fixed all of them. The total review time was 30 minutes, which is far less than the time that would have been spent fixing these issues if they had been discovered by users.

The Numbers

Metric                     Value
Total components built     151
Total time                 ~4 hours (all waves)
Agent-hours                ~15 (6 agents x avg 2.5 hours)
Lines of FLIN code         ~18,000
Files created              163 (151 components + 12 index files)
Design token references    2,847
Props defined              ~1,200
Issues found in review     23
Issues fixed               23
Components per hour        ~38

Thirty-eight components per hour. In a traditional development process with human developers, a single component takes 2-4 hours (design, implement, test, document). At that rate, one hour of this sprint's output -- 38 components -- represents 76-152 person-hours, roughly 2-4 weeks of a single developer's time. We did it in an afternoon.

What This Reveals About AI-Assisted Development

Building 151 components with AI agents taught us three things:

First, the human's job is architecture, not code. Thales did not write a single line of FLIN code during this session. He wrote specifications. He defined conventions. He reviewed results. He made decisions about naming, organization, and priority. The agents translated those decisions into code. The code was the easy part. The decisions were the hard part.

Second, parallelism is the multiplier. A single AI agent writing the 151 components would take an estimated 38 hours of serial work. Six agents running in parallel reduced each wave's wall-clock time to that of its slowest agent (90 minutes for the complex components), and the whole sprint to about four hours. The speedup is not linear (there is overhead for specification, review, and integration), but it is substantial.

Third, conventions enable scale. The design token system, the prop naming conventions, the component structure patterns -- these conventions made it possible for six independent agents to produce 151 components that look and feel like they were written by one person. Without conventions, the agents would have produced 151 components with 151 different styles. Conventions are the difference between a library and a collection of files.

The Result

After the session, FlinUI had 251+ components (100 from earlier sessions + 151 from this sprint). Every category was covered. Every common UI pattern was implemented. A developer evaluating FLIN could build a SaaS dashboard, an e-commerce store, a content management system, or an analytics platform without creating a single custom component.

The 151 components were not perfect. Some needed refinement. Some edge cases were not handled. Some accessibility improvements were needed. But they were functional, consistent, and available. A developer could start building immediately and file issues for improvements later.

That is the philosophy: ship first, polish continuously. One hundred fifty-one components built in an afternoon. Each one ready to use on day one. Each one improvable over time. The alternative -- building each component to perfection before moving to the next -- would have taken months and delivered the same end result, just later.

The Commit

feat: FlinUI expansion - 151 new components across 8 categories

163 files changed, 18,247 insertions(+)

One hundred sixty-three files. Eighteen thousand lines. One afternoon in Abidjan.


This is Part 95 of the "How We Built FLIN" series, documenting how a CEO in Abidjan and an AI CTO used parallel AI agents to build 151 UI components in a single extended session.

Series Navigation:

  • [94] The Raw Tag: Escape Hatch for HTML
  • [95] 151 FlinUI Components Built by AI Agents (you are here)

This concludes Arc 8: FlinUI. The next arc covers FLIN's routing and server-side rendering system.
