Pricing

When using Modal Labs (the default cloud provider), compute is billed per second of GPU usage. Biom automatically selects the cheapest GPU that meets the model’s VRAM requirements.

GPU pricing

| GPU       | VRAM  | Price    |
| --------- | ----- | -------- |
| T4        | 16 GB | $0.59/hr |
| L4        | 24 GB | $0.80/hr |
| A10G      | 24 GB | $1.10/hr |
| A100 40GB | 40 GB | $2.10/hr |
| A100 80GB | 80 GB | $2.50/hr |
| H100      | 80 GB | $3.95/hr |
| CPU only  | —     | $0.04/hr |
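The "cheapest GPU that meets the model's VRAM requirements" rule can be sketched as a simple filter-and-minimize over the rate table above. The `GPU_RATES` dict and `cheapest_gpu` helper are illustrative, not Biom's actual API:

```python
# Hourly GPU rates from the pricing table above (USD/hr), with VRAM in GB.
GPU_RATES = {
    "T4":        {"vram_gb": 16, "usd_per_hr": 0.59},
    "L4":        {"vram_gb": 24, "usd_per_hr": 0.80},
    "A10G":      {"vram_gb": 24, "usd_per_hr": 1.10},
    "A100 40GB": {"vram_gb": 40, "usd_per_hr": 2.10},
    "A100 80GB": {"vram_gb": 80, "usd_per_hr": 2.50},
    "H100":      {"vram_gb": 80, "usd_per_hr": 3.95},
}

def cheapest_gpu(required_vram_gb: float) -> str:
    """Return the lowest-rate GPU whose VRAM meets the requirement."""
    candidates = [
        (spec["usd_per_hr"], name)
        for name, spec in GPU_RATES.items()
        if spec["vram_gb"] >= required_vram_gb
    ]
    if not candidates:
        raise ValueError(f"No GPU offers {required_vram_gb} GB of VRAM")
    return min(candidates)[1]

print(cheapest_gpu(20))  # → L4 ($0.80/hr beats A10G's $1.10/hr at 24 GB)
```

Note that for a 20 GB requirement the L4 wins over the A10G despite identical VRAM, because selection is by rate among all GPUs with enough memory.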

Cost estimation

Before running a model, Biom estimates the compute cost:
Cost = (estimated_duration_seconds / 3600) × hourly_rate
The estimate appears in the job confirmation dialog so you know the cost before committing.
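The formula above translates directly into code. This is a minimal sketch (the function name `estimate_cost` is ours, not Biom's):

```python
def estimate_cost(estimated_duration_seconds: float, hourly_rate: float) -> float:
    """Cost = (estimated_duration_seconds / 3600) × hourly_rate."""
    return (estimated_duration_seconds / 3600) * hourly_rate

# A 300-second DeepLabCut job on a T4 at $0.59/hr:
print(f"${estimate_cost(300, 0.59):.3f}")  # → $0.049
```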

Typical costs per model

| Model          | GPU     | Duration | Estimated Cost |
| -------------- | ------- | -------- | -------------- |
| SAM3 (image)   | T4      | ~20 sec  | ~$0.003        |
| SAM3 (video)   | A100    | ~60 sec  | ~$0.035        |
| Cellpose       | T4      | ~30 sec  | ~$0.005        |
| DeepLabCut     | T4      | ~300 sec | ~$0.049        |
| Suite2p        | A10G    | ~600 sec | ~$0.183        |
| SpikeInterface | T4/A10G | ~900 sec | ~$0.147–$0.275 |
| PlantCV        | CPU     | ~30 sec  | ~$0.0003       |
| SExtractor     | CPU     | ~20 sec  | ~$0.0002       |

Per-user limits

| Limit                 | Default      |
| --------------------- | ------------ |
| Concurrent jobs       | 2            |
| Queued jobs           | 5            |
| Max input size        | 500 MB       |
| Max job runtime       | 3600 seconds |
| Daily compute spend   | $5/day       |
| Monthly compute spend | $25/month    |
These limits can be adjusted by workspace administrators.
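A client-side pre-submission check against these defaults might look like the sketch below. The `LIMITS` dict and `violations` helper are hypothetical names for illustration, not part of Biom:

```python
# Default per-user limits from the table above. Hypothetical structure,
# not Biom's actual configuration schema.
LIMITS = {
    "concurrent_jobs": 2,
    "queued_jobs": 5,
    "max_input_mb": 500,
    "max_runtime_s": 3600,
}

def violations(running: int, queued: int, input_mb: float, runtime_s: float) -> list:
    """Return the names of any limits the proposed job would exceed."""
    problems = []
    if running >= LIMITS["concurrent_jobs"]:
        problems.append("concurrent_jobs")
    if queued >= LIMITS["queued_jobs"]:
        problems.append("queued_jobs")
    if input_mb > LIMITS["max_input_mb"]:
        problems.append("max_input_mb")
    if runtime_s > LIMITS["max_runtime_s"]:
        problems.append("max_runtime_s")
    return problems

print(violations(running=2, queued=1, input_mb=120, runtime_s=900))
# → ['concurrent_jobs']
```

The daily and monthly spend caps are omitted here because checking them requires the user's accumulated spend, which lives server-side.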

Other providers

  • Local Docker — free (uses your own hardware)
  • HPC/SLURM — costs determined by your institution
  • User GPU Server — costs determined by your infrastructure
  • HuggingFace Spaces — free for public spaces, paid for private