
File Storage With 4 Backends

How FLIN provides unified file storage across local filesystem, Amazon S3, Cloudflare R2, and Google Cloud Storage -- all with zero configuration.

Thales & Claude | March 25, 2026 | 8 min read
Tags: flin, file-storage, s3, r2, gcs, backends

Every web application eventually needs to store files. User avatars, PDF reports, uploaded spreadsheets, invoice scans -- the list grows quickly. And the moment you need file storage, you face a decision that will haunt you for years: where do the files go?

Store them on the local filesystem and you cannot scale horizontally. Store them on S3 and you are locked into AWS pricing. Store them on Google Cloud and your European customers worry about data residency. Store them on Cloudflare R2 and you save on egress but lose some S3 compatibility edge cases. Every choice has trade-offs, and switching later means rewriting your entire storage layer.

FLIN eliminates this decision. You configure a storage backend in one line, and the entire file system -- uploads, downloads, deduplication, signed URLs, previews, compression -- works identically regardless of where the bytes physically live. Four backends, one API, zero migration pain.

The Problem With File Storage

Most web frameworks treat file storage as an afterthought. You get a multipart parser and maybe a helper to write bytes to disk. Everything else -- deduplication, content-addressable storage, signed URLs, access control -- is your problem. You wire together half a dozen libraries, write glue code, handle errors, and pray that your abstraction holds when you need to switch providers.

FLIN's approach starts from a different premise: file storage is a core language concern, not an application concern. The language runtime knows about files the same way it knows about integers and strings. This means file operations are type-checked, storage is pluggable, and the developer writes the same code whether files land on a local SSD or in a cloud bucket 5,000 kilometers away.

Content-Addressable Storage

Before discussing backends, it is worth understanding the storage model that all four backends share. FLIN uses content-addressable storage (CAS): files are identified by their SHA-256 hash, not by their filename or path.

```flin
entity Document {
    title: text
    file: file
}

// Upload a PDF
doc = Document.create({
    title: "Q4 Report",
    file: body.file
})
save doc
// File stored at: {shard}/{sha256_hash}/data.pdf
```

When you save a file, FLIN computes its SHA-256 hash and stores the content under that hash. The directory structure uses sharding -- the first two hex characters of the hash become a directory prefix -- to prevent any single directory from containing millions of files:

```
.flindb/blobs/
    ab/
        abcd1234ef56.../
            data.pdf
    cd/
        cdef5678ab12.../
            data.jpg
    ...
```

This design gives you automatic deduplication for free. If two users upload the same file, the hash is identical, and FLIN stores only one copy. The second upload detects that the hash already exists and returns immediately, saving both storage space and upload time.
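The hashing and sharding scheme is easy to see in a short sketch. This is illustrative Python, not FLIN's actual Rust implementation, and the `blob_path` helper name is ours:

```python
import hashlib

def blob_path(data: bytes, extension: str) -> str:
    """Content-addressable path: the first two hex characters of
    the SHA-256 hash become the shard directory."""
    digest = hashlib.sha256(data).hexdigest()
    shard = digest[:2]
    return f"{shard}/{digest}/data.{extension}"

# Identical content always maps to the same path, which is
# exactly what makes deduplication automatic.
assert blob_path(b"quarterly report", "pdf") == blob_path(b"quarterly report", "pdf")
```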

The Four Backends

FLIN ships with four storage backends out of the box. Each one implements the same trait, so switching between them requires changing one configuration value and zero application code.

Local Backend

The local backend stores files on the server's filesystem. It is the default, requires no external services, and is the fastest option for development and single-server deployments.

```flin
// flin.config
storage {
    backend: "local"
    directory: ".flindb/blobs"
}
```

The local backend handles content-addressable paths, directory sharding, and HMAC-SHA256 signed URLs for secure time-limited file access. It is production-ready for applications that run on a single server, and it is the backend that FLIN uses during development.
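The signed-URL mechanism for the local backend can be sketched as follows. This is a minimal illustration of HMAC-SHA256 time-limited signing; the function names and the `expires`/`sig` query parameters are our assumptions, not FLIN's actual URL format:

```python
import hashlib
import hmac
import time

def sign_url(path: str, secret: bytes, ttl_seconds: int) -> str:
    """Append an expiry timestamp and an HMAC-SHA256 signature
    covering both the path and the expiry."""
    expires = int(time.time()) + ttl_seconds
    msg = f"{path}:{expires}".encode()
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify_url(path: str, expires: int, sig: str, secret: bytes) -> bool:
    """Reject expired links and links whose signature does not match."""
    if time.time() > expires:
        return False
    msg = f"{path}:{expires}".encode()
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Because the signature covers the expiry, a client cannot extend a link's lifetime by editing the `expires` parameter.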

S3 Backend

The S3 backend stores files on Amazon S3 or any S3-compatible service (MinIO, DigitalOcean Spaces, Backblaze B2). It uses the rust-s3 crate for HTTP communication.

```flin
// flin.config
storage {
    backend: "s3"
    bucket: "my-app-files"
    region: "us-east-1"
    access_key: env("AWS_ACCESS_KEY_ID")
    secret_key: env("AWS_SECRET_ACCESS_KEY")
    prefix: "uploads"
}
```

S3 presigned URLs allow clients to download files directly from the bucket without routing traffic through the application server. For high-traffic applications serving large files, this eliminates the server as a bottleneck.
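The unit tests mentioned later verify key construction, which is simple to illustrate: the configured prefix composes with the shard and content hash to form the object key. A Python sketch (the helper name is ours):

```python
def s3_object_key(prefix: str, hash_hex: str, extension: str) -> str:
    """Object key inside the bucket: optional configured prefix,
    shard directory, then the content hash, mirroring the local
    backend's {shard}/{hash}/data.{ext} layout."""
    shard = hash_hex[:2]
    key = f"{shard}/{hash_hex}/data.{extension}"
    return f"{prefix}/{key}" if prefix else key
```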

Cloudflare R2 Backend

Cloudflare R2 is S3-compatible but charges zero egress fees. For applications that serve a lot of files -- image galleries, document repositories, media platforms -- R2 can reduce storage costs by an order of magnitude compared to S3.

```flin
// flin.config
storage {
    backend: "r2"
    bucket: "my-app-files"
    account_id: env("R2_ACCOUNT_ID")
    access_key: env("R2_ACCESS_KEY")
    secret_key: env("R2_SECRET_KEY")
    prefix: "uploads"
}
```

The R2 backend uses the same rust-s3 crate as the S3 backend, with R2-specific endpoint formatting and path-style addressing. It supports the same presigned URLs and deduplication as every other backend.
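The endpoint formatting is where R2 diverges from stock S3: R2's S3-compatible endpoint is scoped to the Cloudflare account, and with path-style addressing the bucket appears in the URL path rather than the hostname. A sketch (function names are ours; the endpoint format is Cloudflare's documented one):

```python
def r2_endpoint(account_id: str) -> str:
    """Cloudflare R2's S3-compatible endpoint is account-scoped."""
    return f"https://{account_id}.r2.cloudflarestorage.com"

def r2_object_url(account_id: str, bucket: str, key: str) -> str:
    """Path-style addressing: the bucket is a path segment,
    not a subdomain."""
    return f"{r2_endpoint(account_id)}/{bucket}/{key}"
```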

Google Cloud Storage Backend

GCS is the enterprise option. It integrates with Google's IAM, supports fine-grained access control, and offers multi-region replication for organizations that need it.

```flin
// flin.config
storage {
    backend: "gcs"
    bucket: "my-app-files"
    credentials: "/path/to/service-account.json"
    prefix: "uploads"
}
```

Unlike S3 and R2, GCS uses service account authentication with RSA-SHA256 signed JWTs. FLIN handles the entire OAuth2 token exchange -- loading the service account JSON, signing the JWT, exchanging it for an access token, caching the token, and refreshing it before expiry. The developer never touches authentication code.

GCS signed URLs use Google's V4 signing algorithm, which involves building a canonical request, hashing it with SHA-256, signing with RSA, and appending the signature. FLIN implements this algorithm from scratch using the rsa and pkcs8 crates, avoiding heavy SDK dependencies.
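The final step of V4 signing, the string-to-sign, is compact enough to sketch. This mirrors Google's documented format; the function name is ours and the canonical-request construction is elided:

```python
import hashlib

def v4_string_to_sign(timestamp: str, credential_scope: str,
                      canonical_request: str) -> str:
    """GCS V4 string-to-sign: fixed algorithm identifier, request
    timestamp, credential scope, then the hex SHA-256 of the
    canonical request. This string is what gets RSA-signed."""
    request_hash = hashlib.sha256(canonical_request.encode()).hexdigest()
    return "\n".join([
        "GOOG4-RSA-SHA256",
        timestamp,          # e.g. 20260325T120000Z
        credential_scope,   # e.g. 20260325/auto/storage/goog4_request
        request_hash,
    ])
```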

How It All Fits Together

The key to making four backends interchangeable is a Rust trait that defines the contract every backend must fulfill. The application code never references a specific backend -- it works with the trait interface:

```rust
pub trait StorageBackend: Send + Sync {
    fn put(&self, hash: &str, data: &[u8], extension: &str) -> StorageResult<String>;
    fn get(&self, path: &str) -> StorageResult<Vec<u8>>;
    fn delete(&self, path: &str) -> StorageResult<()>;
    fn exists(&self, hash: &str, extension: &str) -> StorageResult<bool>;
    fn url(&self, hash: &str, extension: &str) -> String;
    fn signed_url(&self, hash: &str, extension: &str, duration: Duration) -> StorageResult<String>;
    fn backend_type(&self) -> &'static str;
}
```

A factory function reads the configuration and creates the appropriate backend:

```rust
pub fn create_backend(config: StorageConfig) -> Box<dyn StorageBackend> {
    match config {
        StorageConfig::Local { directory, secret } =>
            Box::new(LocalBackend::new(directory, secret)),
        StorageConfig::S3 { bucket, region, access_key, secret_key, prefix } =>
            Box::new(S3Backend::new(bucket, region, access_key, secret_key, prefix)),
        StorageConfig::R2 { bucket, account_id, access_key, secret_key, prefix } =>
            Box::new(R2Backend::new(bucket, account_id, access_key, secret_key, prefix)),
        StorageConfig::Gcs { bucket, credentials, prefix } =>
            Box::new(GcsBackend::new(bucket, credentials, prefix)),
    }
}
```

From this point forward, the entire FLIN runtime -- the HTTP server, the VM, the file type system, the garbage collector -- works with Box<dyn StorageBackend>. It does not know or care whether files are on disk or in the cloud.
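The dedup-aware upload path through that interface can be sketched with a minimal two-method analogue of the trait. This is illustrative Python (a `Protocol` standing in for the Rust trait), not FLIN's runtime code:

```python
import hashlib
from typing import Protocol

class StorageBackend(Protocol):
    """Minimal analogue of the Rust trait, for illustration."""
    def exists(self, hash_hex: str, extension: str) -> bool: ...
    def put(self, hash_hex: str, data: bytes, extension: str) -> str: ...

def store_file(backend: StorageBackend, data: bytes, extension: str) -> str:
    """Hash first, then skip the write entirely when the content
    is already present -- the deduplication fast path."""
    hash_hex = hashlib.sha256(data).hexdigest()
    if backend.exists(hash_hex, extension):
        return hash_hex
    backend.put(hash_hex, data, extension)
    return hash_hex
```

Because `store_file` only sees the interface, the same logic serves all four backends unchanged.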

The Developer Experience

From the FLIN developer's perspective, none of the backend complexity is visible. You declare a file field, upload files through HTTP, and access them through typed properties:

```flin
entity Invoice {
    client: text
    pdf: file
    amount: money
}

// Upload
route POST "/invoices" {
    validate {
        client: text @required
        pdf: file @required @document @max_size("10MB")
        amount: money @required
    }

    invoice = Invoice.create({
        client: body.client,
        pdf: body.pdf,
        amount: body.amount
    })
    save invoice

    respond { id: invoice.id, url: invoice.pdf.url }
}

// Serve with signed URL
route GET "/invoices/:id/download" {
    invoice = Invoice.find(params.id)
    url = file_signed_url(invoice.pdf, 30.minutes)
    redirect url
}
```

This code works identically on local storage, S3, R2, and GCS. The signed URL format changes -- local uses HMAC query parameters, S3 uses presigned URLs, GCS uses V4 signatures -- but the FLIN code does not change at all. You migrate from local to R2 by changing one line in flin.config.
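Concretely, the migration the paragraph describes amounts to swapping the storage block, drawn from the configurations shown earlier:

```flin
// flin.config -- before
storage {
    backend: "local"
    directory: ".flindb/blobs"
}

// flin.config -- after
storage {
    backend: "r2"
    bucket: "my-app-files"
    account_id: env("R2_ACCOUNT_ID")
    access_key: env("R2_ACCESS_KEY")
    secret_key: env("R2_SECRET_KEY")
    prefix: "uploads"
}
```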

Test Coverage

Each backend comes with comprehensive tests that verify behavior without requiring cloud credentials. The local backend tests actually write and read files. The cloud backend tests verify URL formatting, key construction, hash validation, and path sharding using unit tests that run offline:

| Backend | Unit Tests                 | Integration Tests        |
|---------|----------------------------|--------------------------|
| Local   | 15                         | --                       |
| S3      | (via existing backup.rs)   | --                       |
| R2      | 12                         | 2 (requires credentials) |
| GCS     | 13                         | 2 (requires credentials) |

The integration tests for R2 and GCS are ignored by default and run only when the appropriate environment variables are set. This means cargo test always passes without cloud accounts, but the cloud paths are tested in CI with real credentials.

By the end of Sessions 212 through 218, FLIN's file storage system reached 100% completion on the storage backends milestone: 16 out of 16 tasks done, 3,029 tests passing. The foundation was laid for everything that follows in this arc -- compression, garbage collection, previews, and the entire document intelligence pipeline.

In the next article, we dive deeper into the Rust trait pattern that makes all of this possible: the StorageBackend trait design, the decisions behind its method signatures, and why Send + Sync matters more than you might think.


This is Part 126 of the "How We Built FLIN" series, documenting how a CEO in Abidjan and an AI CTO designed and built a programming language from scratch.

Series Navigation: - [125] Search Analytics and Result Caching - [126] File Storage With 4 Backends (you are here) - [127] The Storage Backend Trait Pattern
