
The Intent Engine: Natural Language Database Queries

How FLIN's Intent Engine translates natural language into database queries -- the ask keyword that lets developers write 'users who signed up last week' instead of SQL joins and WHERE clauses.

Thales & Claude | March 25, 2026 | 9 min read
Tags: flin, intent, nlp, database, queries

SQL is powerful but verbose. Writing SELECT DISTINCT u.* FROM users u JOIN purchases p ON p.user_id = u.id WHERE u.created_at >= DATE_SUB(NOW(), INTERVAL 7 DAY) AND u.created_at < NOW() to answer the question "users who signed up last week and made a purchase" is the kind of ceremony that makes database interaction feel like a chore.

FLIN's ORM-style queries improve this: User.where(created_at >= last_week).where(purchases.count > 0). Better, but still technical. The developer must know the field names, the comparison operators, and the join semantics.

The Intent Engine goes one step further:

users = ask "users who signed up last week and made a purchase"

One line. Natural language. The Intent Engine translates the intent into a database query, executes it, and returns typed FLIN entities. The developer does not write SQL. They do not write ORM queries. They describe what they want, and FLIN figures out how to get it.

This is not a toy feature. It is the most ambitious part of FLIN's design -- a bridge between human intent and machine execution that makes database interaction as natural as conversation.

The ask Keyword

The ask keyword is a first-class language construct that triggers the Intent Engine:

// Simple queries
users = ask "all active users"
posts = ask "posts from today"
products = ask "products under 100 dollars"

// Complex queries
vips = ask "users who spent more than 1000 in the last month"
trending = ask "most viewed articles this week"
at_risk = ask "customers who haven't ordered in 30 days"

// Aggregations
count = ask "how many users signed up yesterday"
average = ask "average order value this month"

The string inside ask is parsed semantically, not literally. "Users who signed up last week" is understood as a filter on the User entity's created_at field with a date range of the previous 7 days. "Products under 100 dollars" maps to Product.where(price < 100).

How the Translation Pipeline Works

When the FLIN runtime encounters an ask expression, it executes a five-stage pipeline:

Stage 1: INPUT
  ask "users who signed up last week and made a purchase"

Stage 2: SCHEMA ANALYSIS
  Available entities: User, Order, Product
  User fields: name, email, created_at, active
  Order fields: user, total, created_at, status

Stage 3: AI TRANSLATION
  LLM generates query plan:
  {
    "entity": "User",
    "filters": [
      {"field": "created_at", "op": ">=", "value": "last_week"},
      {"join": "orders", "condition": "count > 0"}
    ]
  }

Stage 4: QUERY EXECUTION
  User.where(created_at >= last_week)
      .where(orders.count > 0)

Stage 5: RESULTS
  [User, User, User, ...]

Stage 1: Input

The raw natural language string is extracted from the ask expression.

Stage 2: Schema Analysis

The runtime collects the entity schemas from the current application. Entity names, field names, field types, and relationships are compiled into a schema description that will be sent to the LLM as context.

fn build_schema_context(entities: &HashMap<String, EntitySchema>) -> String {
    let mut context = String::new();
    for (name, schema) in entities {
        context.push_str(&format!("Entity {}:\n", name));
        for field in &schema.fields {
            context.push_str(&format!(
                "  - {}: {} {}\n",
                field.name,
                field.field_type,
                if field.is_relation { "(relation)" } else { "" }
            ));
        }
    }
    context
}

This schema context is what makes ask application-aware. The LLM knows which entities exist, which fields they have, and how they relate to each other.

Stage 3: AI Translation

The schema context and the natural language query are sent to an LLM (configurable, defaulting to the AI provider set in flin.config). The LLM returns a structured query plan:

// The prompt sent to the LLM
// "Given these entities:
//   User: name (text), email (text), created_at (time), active (bool)
//   Order: user (User), total (money), created_at (time), status (text)
//
// Translate this query to a FLIN query plan:
//   'users who signed up last week and made a purchase'
//
// Return JSON with entity, filters, joins, order, and limit."

The LLM's response is a JSON query plan, not raw SQL. This intermediate representation is safe -- it can only reference entities and fields that exist in the schema. There is no way for the LLM to generate arbitrary SQL or access data outside the application's entity model.
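On the runtime side, that JSON plan can be mirrored by a typed structure. Here is a minimal sketch of what such a representation might look like; the type and field names are assumptions for illustration, not FLIN's actual internals:

```rust
// Hypothetical typed mirror of the JSON query plan the LLM returns.
// Because the plan is plain data rather than SQL, the runtime can
// inspect and validate every part of it before execution.
#[derive(Debug, Clone, PartialEq)]
enum Filter {
    // e.g. {"field": "created_at", "op": ">=", "value": "last_week"}
    Comparison { field: String, op: String, value: String },
    // e.g. {"join": "orders", "condition": "count > 0"}
    Join { entity: String, condition: String },
}

#[derive(Debug, Clone, PartialEq)]
struct QueryPlan {
    entity: String,
    filters: Vec<Filter>,
    order: Option<(String, String)>, // (field, direction)
    limit: Option<usize>,
}

// The plan for "users who signed up last week and made a purchase".
fn example_plan() -> QueryPlan {
    QueryPlan {
        entity: "User".into(),
        filters: vec![
            Filter::Comparison {
                field: "created_at".into(),
                op: ">=".into(),
                value: "last_week".into(),
            },
            Filter::Join {
                entity: "orders".into(),
                condition: "count > 0".into(),
            },
        ],
        order: None,
        limit: None,
    }
}
```

Representing the plan as an enum of closed variants, rather than a free-form string, is what makes the safety argument hold: there is simply no variant for "arbitrary SQL".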

Stage 4: Query Execution

The query plan is compiled into FLIN's internal query representation and executed against FlinDB:

fn execute_query_plan(
    plan: &QueryPlan,
    db: &ZeroCore,
) -> Result<Vec<Value>, QueryError> {
    let mut query = db.entity(&plan.entity);

    for filter in &plan.filters {
        match filter {
            Filter::Comparison { field, op, value } => {
                query = query.where_clause(field, op, resolve_value(value));
            }
            Filter::Join { entity, condition } => {
                query = query.join(entity, condition);
            }
        }
    }

    if let Some(order) = &plan.order {
        query = query.order(&order.field, &order.direction);
    }

    if let Some(limit) = plan.limit {
        query = query.limit(limit);
    }

    query.execute()
}

Stage 5: Results

The query results are returned as typed FLIN values -- the same entity instances you would get from User.all or User.where(active == true).

Return Types

The ask keyword infers its return type from the query:

// List queries return entity arrays
users = ask "active users"                    // [User]
orders = ask "orders over 100"               // [Order]

// Aggregation queries return scalars
count = ask "how many users"                  // int
average = ask "average price"                 // float

// Single-result queries return optional entities
newest = ask "most recent user"               // User?

The type inference is based on keywords in the query:

- "how many" or "count" -> int
- "average", "sum", "total" -> float
- "most recent", "newest", "first" -> optional entity
- Everything else -> entity list
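A sketch of how that keyword dispatch could look inside the runtime; the `ReturnKind` enum and the exact matching rules are assumptions mirroring the list above, not FLIN's actual implementation:

```rust
// Possible return shapes for an `ask` expression (assumed names).
#[derive(Debug, PartialEq)]
enum ReturnKind {
    Int,            // counts
    Float,          // averages, sums, totals
    OptionalEntity, // single-result queries
    EntityList,     // everything else
}

// Infer the return kind from keywords in the query, mirroring the
// rules listed above. Matching is case-insensitive; first hit wins.
fn infer_return_kind(query: &str) -> ReturnKind {
    let q = query.to_lowercase();
    if q.contains("how many") || q.contains("count") {
        ReturnKind::Int
    } else if q.contains("average") || q.contains("sum") || q.contains("total") {
        ReturnKind::Float
    } else if q.contains("most recent") || q.contains("newest") || q.contains("first") {
        ReturnKind::OptionalEntity
    } else {
        ReturnKind::EntityList
    }
}
```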

Chaining with Ask

ask results can be further processed with FLIN's standard collection operations:

// Get users, then extract names
users = ask "premium users"
names = users.map(u => u.name)

// Filter further
vips = (ask "active users").where(total_spent > 1000)

// Combine with UI
<div class="dashboard">
    {for user in ask "users who signed up today"}
        <p>{user.name} -- {user.email}</p>
    {/for}
</div>

Caching

Intent translations are cached to avoid repeated LLM calls for the same query:

// First call: AI translation + execution (200-500ms)
users = ask "active premium users"

// Same query later: cached translation, just execution (1-5ms)
users = ask "active premium users"

The cache key is the hash of the query string plus the current schema version. If the schema changes (a new entity or field is added), the cache is invalidated and queries are re-translated.

use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_query(query: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    query.hash(&mut hasher);
    hasher.finish()
}

pub struct IntentCache {
    translations: HashMap<u64, QueryPlan>,
    schema_version: u64,
}

impl IntentCache {
    pub fn get(&self, query: &str, current_version: u64) -> Option<&QueryPlan> {
        if self.schema_version != current_version {
            return None; // Schema changed, invalidate all
        }
        let key = hash_query(query);
        self.translations.get(&key)
    }
}

Ambiguity Handling

Natural language is inherently ambiguous. When the Intent Engine encounters ambiguity, it makes reasonable assumptions based on common patterns:

// "recent" -> last 7 days by default
recent = ask "recent orders"

// "popular" -> ordered by a popularity metric (views, sales, etc.)
popular = ask "popular products"

// "expensive" -> top percentile by price
expensive = ask "expensive items"

These defaults are documented and predictable. A developer who writes ask "recent orders" can expect orders from the last 7 days, not the last 30 minutes or the last year.
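One way to make those defaults both documented and predictable is a fixed lookup table. The sketch below illustrates the idea; the concrete numbers ("top 10%") and metric names ("views") are assumptions, not FLIN's documented values:

```rust
// Illustrative default interpretations for ambiguous vocabulary.
// The specific numbers and metric names are assumptions for the
// sketch; a real implementation would document its own table.
#[derive(Debug, PartialEq)]
enum DefaultRule {
    LastDays(u32),               // temporal shorthand -> date-range filter
    OrderByMetric(&'static str), // "popular" -> sort by a metric
    TopPercentile(u8),           // "expensive" -> top N% by price
}

fn ambiguity_default(word: &str) -> Option<DefaultRule> {
    match word {
        "recent" => Some(DefaultRule::LastDays(7)),
        "popular" => Some(DefaultRule::OrderByMetric("views")),
        "expensive" => Some(DefaultRule::TopPercentile(10)),
        _ => None, // not a recognized shorthand; translate literally
    }
}
```

Keeping the table static (rather than asking the LLM to interpret "recent" fresh each time) is what guarantees the same query always means the same thing.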

For precision-sensitive queries, the ORM-style syntax is always available as a fallback:

// If "recent" is ambiguous, be explicit
orders = Order.where(created_at >= now - 3.days).order(created_at, "desc")

Use Cases

Analytics Dashboard

revenue_today = ask "total revenue today"
new_customers = ask "customers who signed up this week"
top_products = ask "best selling products this month"
at_risk = ask "customers who haven't ordered in 30 days"

<div class="dashboard">
    <Metric label="Revenue" value={revenue_today} />
    <Metric label="New Customers" value={new_customers.count} />
    <List title="Top Products" items={top_products} />
    <Alert title="At Risk" count={at_risk.count} />
</div>

Customer Support

urgent = ask "high priority tickets from this week"
overdue = ask "tickets open for more than 3 days"
unassigned = ask "unassigned tickets"

Content Management

drafts = ask "draft articles by current user"
trending = ask "most viewed articles this month"
scheduled = ask "articles scheduled for tomorrow"

Privacy and Security

The Intent Engine is designed with privacy as a constraint:

Schema only, no data. The entity schemas (names and types) are sent to the LLM for translation. The actual entity data (user emails, passwords, financial records) never leaves the FLIN runtime.

Parameterized execution. The LLM generates a query plan with parameterized values. The values are resolved locally, not by the LLM. This prevents data extraction through prompt engineering.

Allowlist enforcement. The query plan can only reference entities and fields that exist in the schema. The LLM cannot invent entities or access system tables.
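A sketch of what that enforcement could look like: every entity and field the plan names is checked against the schema before anything executes. The function and type names here are assumptions for illustration:

```rust
use std::collections::HashMap;

// Schema as an allowlist: entity name -> its field names.
type Schema = HashMap<String, Vec<String>>;

// Reject any plan that references an unknown entity or field.
// Because validation happens before execution, a hallucinated
// entity or system table can never reach the database.
fn validate_plan(
    schema: &Schema,
    entity: &str,
    fields: &[&str],
) -> Result<(), String> {
    let known = schema
        .get(entity)
        .ok_or_else(|| format!("unknown entity: {}", entity))?;
    for field in fields {
        if !known.iter().any(|f| f == field) {
            return Err(format!("unknown field {} on {}", field, entity));
        }
    }
    Ok(())
}
```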

Offline Fallback

When the AI provider is unavailable, ask falls back to keyword matching:

// Online: full AI translation
users = ask "active users who signed up in January"
// -> User.where(active == true).where(created_at >= january_1).where(created_at < february_1)

// Offline: keyword-based fallback
users = ask "active users"
// -> User.where(active == true)  (matches "active" to boolean field)

The fallback is less sophisticated but handles simple queries correctly. Complex queries that require understanding temporal expressions or joins will fail with a clear error message suggesting the developer use ORM-style queries until AI is restored.
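A minimal sketch of such a keyword fallback, assuming boolean fields can be matched by name; the function shape and error message are hypothetical:

```rust
// Offline fallback: match words in the query against boolean field
// names, so "active users" becomes a filter on the `active` field.
// Queries the matcher cannot resolve fail loudly instead of guessing.
fn keyword_fallback(query: &str, bool_fields: &[&str]) -> Result<Vec<String>, String> {
    let mut filters = Vec::new();
    for word in query.to_lowercase().split_whitespace() {
        if bool_fields.contains(&word) {
            filters.push(format!("{} == true", word));
        }
    }
    if filters.is_empty() {
        Err("no keywords matched; use an ORM-style query instead".to_string())
    } else {
        Ok(filters)
    }
}
```

Failing with an error rather than returning a best guess matters here: a silently wrong result from a dashboard query is worse than a clear message pointing at the ORM syntax.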

The Intent Engine is FLIN's most forward-looking feature. It bridges the gap between what humans think ("show me recent orders") and what databases understand (SELECT * FROM orders WHERE created_at >= '2026-03-19'). In the next article, we explore the other side of this AI integration: semantic search and vector storage.


This is Part 116 of the "How We Built FLIN" series, documenting how a CEO in Abidjan and an AI CTO designed and built a programming language from scratch.

Series Navigation:

- [115] Custom Guards and Security Middleware
- [116] The Intent Engine: Natural Language Database Queries (you are here)
- [117] Semantic Search and Vector Storage
- [118] AI Gateway: 8 Providers, One API
