Models

Biom ships with prebuilt scientific models and lets you add any model from a GitHub repository.

Add a model from GitHub

Any public GitHub repository containing a model can be added to Biom. On the Models tab, click the GitHub button and paste the repository URL; Biom builds and configures the model automatically. This lets you bring your own models, or any model you find on GitHub, without writing adapter code.
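Biom handles the build itself, but the repository URL must be well-formed before an import can start. A minimal sketch of that kind of validation, using a hypothetical `parseRepoUrl` helper (Biom's actual checks are not documented here):

```typescript
// Hypothetical helper: validate a GitHub repository URL before import.
// Biom's real validation logic is not public; this is an illustrative sketch.
interface RepoRef {
  owner: string;
  repo: string;
}

function parseRepoUrl(url: string): RepoRef | null {
  // Accept https://github.com/<owner>/<repo>, with or without a trailing ".git".
  const match = url.match(/^https:\/\/github\.com\/([\w.-]+)\/([\w.-]+?)(?:\.git)?\/?$/);
  if (!match) return null;
  return { owner: match[1], repo: match[2] };
}
```

For example, `parseRepoUrl("https://github.com/MouseLand/cellpose")` yields `{ owner: "MouseLand", repo: "cellpose" }`, while a malformed URL yields `null`.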

Prebuilt models

These models come ready to use out of the box:

SAM3

Segment Anything Model 3 — universal segmentation with point, box, and text prompts.

Cellpose

GPU-accelerated cell and nuclei segmentation with automatic diameter detection.

DeepLabCut

Markerless pose estimation for animal tracking in videos.

Suite2p

Calcium imaging — motion correction, ROI detection, spike deconvolution.

SpikeInterface

Spike sorting with 7 sorter backends including Kilosort4.

PlantCV

Automated plant phenotyping — morphology, shape, and color analysis.

SExtractor

Astronomical source extraction — detect stars, galaxies, and measure fluxes.

How models run

Models execute through a two-tier adapter pattern:
  1. Model Adapters define what a model does: parameter validation, input preprocessing, and output parsing
  2. Provider Adapters define where and how it runs: cloud GPU, local Docker, HPC cluster, etc.
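The two tiers can be pictured as interfaces. This is a sketch, not Biom's actual code: the method names `processInput`, `execute`, and `parseOutput` come from this page's execution flow, but every other name and type here is an illustrative assumption.

```typescript
// Sketch of the two-tier adapter pattern. Only the method names
// processInput/execute/parseOutput come from the docs; the rest is assumed.
interface Artifact {
  name: string; // e.g. "masks.tif"
  path: string; // storage location, e.g. an S3 key
}

interface ModelAdapter {
  validateParams(params: Record<string, unknown>): void; // throws on bad input
  processInput(inputPath: string): Promise<string>;      // preprocess the file
  parseOutput(rawOutput: string): Promise<Artifact[]>;   // parse model results
}

interface ProviderAdapter {
  // Run a prepared input somewhere: cloud GPU, local Docker, HPC cluster, etc.
  execute(model: string, preparedInputPath: string): Promise<string>;
}

// Toy model adapter showing where each responsibility lives.
class ToySegmentationAdapter implements ModelAdapter {
  validateParams(params: Record<string, unknown>): void {
    const d = params["diameter"];
    if (typeof d !== "number" || d <= 0) {
      throw new Error("diameter must be a positive number");
    }
  }
  async processInput(inputPath: string): Promise<string> {
    return inputPath; // no-op preprocessing in this sketch
  }
  async parseOutput(rawOutput: string): Promise<Artifact[]> {
    return [{ name: "masks.tif", path: rawOutput }];
  }
}
```

Splitting the two concerns means the same model definition can run on any provider, and a new provider works with every existing model.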

Execution flow

Your file → Job created in database
         → ModelAdapter.processInput() preprocesses the file
         → ProviderAdapter.execute() runs the model
         → ModelAdapter.parseOutput() parses results
         → Artifacts stored to S3 + database
         → Results displayed in your viewer
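The flow above can be sketched as an async pipeline. All names below are illustrative; Biom's real job runner is not shown in this doc, and the adapter steps are passed in as plain functions to keep the sketch self-contained.

```typescript
// Sketch of the execution flow: job created -> processInput -> execute
// -> parseOutput -> artifacts recorded. Names are illustrative assumptions.
type JobStatus = "queued" | "running" | "completed" | "failed";

interface Job {
  id: number;
  status: JobStatus;
  artifacts: string[];
}

async function runJob(
  inputFile: string,
  processInput: (f: string) => Promise<string>,  // ModelAdapter.processInput
  execute: (f: string) => Promise<string>,       // ProviderAdapter.execute
  parseOutput: (f: string) => Promise<string[]>, // ModelAdapter.parseOutput
): Promise<Job> {
  const job: Job = { id: Date.now(), status: "queued", artifacts: [] };
  try {
    job.status = "running";
    const prepared = await processInput(inputFile); // preprocess the file
    const rawOutput = await execute(prepared);      // run the model
    job.artifacts = await parseOutput(rawOutput);   // parse results
    // In Biom, artifacts are then stored to S3 + the database.
    job.status = "completed";
  } catch {
    job.status = "failed";
  }
  return job;
}
```

A failure at any step marks the job `failed` rather than leaving it stuck in `running`.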

Running a model

There are several ways to run a model:
  1. AI Agent — describe what you want in natural language (e.g., “segment all cells”)
  2. Suggested actions — when you drop a file, Biom suggests compatible models
  3. Model panel — open the models panel, select a model, configure parameters, and run
  4. Pipeline builder — add model steps to a multi-step pipeline
  5. GitHub import — paste a GitHub URL on the Models tab to add and run any model

Job management

Each model run creates a job that you can track:
  • Status — queued, running, completed, failed, cancelled
  • Progress — real-time progress updates and log streaming
  • Artifacts — output files (masks, tracks, statistics) are stored and downloadable
  • Cost — estimated compute cost displayed before execution
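The five statuses listed above form a small state machine. A sketch of the legal transitions, assuming a straightforward lifecycle (the transition map itself is an assumption; only the status names come from the docs):

```typescript
// Job status lifecycle. The five statuses come from the docs; which
// transitions are allowed is an illustrative assumption.
type JobStatus = "queued" | "running" | "completed" | "failed" | "cancelled";

const transitions: Record<JobStatus, JobStatus[]> = {
  queued:    ["running", "cancelled"],
  running:   ["completed", "failed", "cancelled"],
  completed: [], // terminal
  failed:    [], // terminal
  cancelled: [], // terminal
};

function canTransition(from: JobStatus, to: JobStatus): boolean {
  return transitions[from].includes(to);
}
```

Terminal states (`completed`, `failed`, `cancelled`) have no outgoing transitions, so a finished job can never be silently restarted.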

Job limits (per user)

| Limit | Default |
| --- | --- |
| Concurrent jobs | 2 |
| Queued jobs | 5 |
| Max input size | 500 MB |
| Max job runtime | 3,600 seconds (1 hour) |
| Daily compute cost | $5 |
| Monthly compute cost | $25 |
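The defaults above imply a simple admission check before a job is accepted. A sketch, assuming the straightforward reading that a new job runs if a concurrent slot is free and otherwise queues (field names are assumptions; only the numeric defaults come from the table):

```typescript
// Client-side sketch of the per-user job limits. Only the numbers
// (2, 5, 500 MB, 3600 s) come from the docs; the logic is assumed.
interface JobLimits {
  concurrentJobs: number;
  queuedJobs: number;
  maxInputBytes: number;
  maxRuntimeSeconds: number;
}

const defaults: JobLimits = {
  concurrentJobs: 2,
  queuedJobs: 5,
  maxInputBytes: 500 * 1024 * 1024, // 500 MB
  maxRuntimeSeconds: 3600,          // 1 hour
};

function canSubmit(running: number, queued: number, inputBytes: number): boolean {
  if (inputBytes > defaults.maxInputBytes) return false; // input too large
  if (running < defaults.concurrentJobs) return true;    // a slot is free
  return queued < defaults.queuedJobs;                   // otherwise it queues
}
```

So with 2 jobs running and 5 queued, a sixth submission is rejected until a slot frees up.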