Pricing
When using Modal Labs (the default cloud provider), compute is billed per second of GPU usage. Biom automatically selects the cheapest GPU that meets the model's VRAM requirements.
GPU pricing
| GPU | VRAM | Price |
|---|---|---|
| T4 | 16 GB | $0.59/hr |
| L4 | 24 GB | $0.80/hr |
| A10G | 24 GB | $1.10/hr |
| A100 40GB | 40 GB | $2.10/hr |
| A100 80GB | 80 GB | $2.50/hr |
| H100 | 80 GB | $3.95/hr |
| CPU only | — | $0.04/hr |
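The selection rule described above can be sketched as follows. This is an illustrative reimplementation, not Biom's actual code; the function name and data layout are assumptions, and the rates come from the pricing table.

```python
# Illustrative sketch of cheapest-GPU selection (not Biom's actual code).
# Tuples are (name, vram_gb, usd_per_hour), taken from the pricing table above.
GPUS = [
    ("T4", 16, 0.59),
    ("L4", 24, 0.80),
    ("A10G", 24, 1.10),
    ("A100 40GB", 40, 2.10),
    ("A100 80GB", 80, 2.50),
    ("H100", 80, 3.95),
]

def cheapest_gpu(required_vram_gb: int) -> str:
    """Return the cheapest GPU with at least the required VRAM."""
    candidates = [g for g in GPUS if g[1] >= required_vram_gb]
    if not candidates:
        raise ValueError(f"no GPU offers {required_vram_gb} GB of VRAM")
    return min(candidates, key=lambda g: g[2])[0]

print(cheapest_gpu(20))  # L4 — cheapest option with at least 20 GB
print(cheapest_gpu(40))  # A100 40GB
```

Note that price order does not always follow VRAM order (the L4 is cheaper than the A10G at the same 24 GB), which is why the selection filters by VRAM first and then minimizes price.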
Cost estimation
Before running a model, Biom estimates the compute cost.
Typical costs per model
| Model | GPU | Duration | Estimated Cost |
|---|---|---|---|
| SAM3 (image) | T4 | ~20 sec | ~$0.003 |
| SAM3 (video) | A100 | ~60 sec | ~$0.035 |
| Cellpose | T4 | ~30 sec | ~$0.005 |
| DeepLabCut | T4 | ~300 sec | ~$0.049 |
| Suite2p | A10G | ~600 sec | ~$0.183 |
| SpikeInterface | T4/A10G | ~900 sec | ~$0.275 |
| PlantCV | CPU | ~30 sec | ~$0.0003 |
| SExtractor | CPU | ~20 sec | ~$0.0002 |
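With per-second billing, each estimate above is simply the hourly rate scaled to the expected runtime. A minimal sketch of that arithmetic (the function name is an assumption, not part of Biom's API):

```python
def estimate_cost(usd_per_hour: float, duration_sec: float) -> float:
    """Per-second billing: hourly rate prorated to the run duration."""
    return usd_per_hour * duration_sec / 3600

# SAM3 (image) on a T4: ~20 sec at $0.59/hr
print(round(estimate_cost(0.59, 20), 4))   # ~0.0033

# SAM3 (video) on an A100 40GB: ~60 sec at $2.10/hr
print(round(estimate_cost(2.10, 60), 3))   # 0.035
```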
Per-user limits
| Limit | Default |
|---|---|
| Concurrent jobs | 2 |
| Queued jobs | 5 |
| Max input size | 500 MB |
| Max job runtime | 3600 seconds |
| Daily compute spend | $5/day |
| Monthly compute spend | $25/month |
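A job must fit within all of these limits before it is accepted. The sketch below illustrates such a pre-submission check under the default limits; the function, its parameters, and the dictionary keys are all hypothetical, not Biom's actual interface.

```python
# Hypothetical pre-submission check against the default per-user limits.
LIMITS = {
    "concurrent_jobs": 2,
    "queued_jobs": 5,
    "max_input_mb": 500,
    "max_runtime_sec": 3600,
    "daily_spend_usd": 5.0,
    "monthly_spend_usd": 25.0,
}

def can_submit(running: int, queued: int, input_mb: float,
               spent_today_usd: float) -> bool:
    """Return True if a new job would stay within the default limits."""
    return (
        running < LIMITS["concurrent_jobs"]
        and queued < LIMITS["queued_jobs"]
        and input_mb <= LIMITS["max_input_mb"]
        and spent_today_usd < LIMITS["daily_spend_usd"]
    )

print(can_submit(running=1, queued=0, input_mb=120, spent_today_usd=0.50))  # True
print(can_submit(running=2, queued=0, input_mb=120, spent_today_usd=0.50))  # False: concurrency
```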
Other providers
- Local Docker — free (uses your own hardware)
- HPC/SLURM — costs determined by your institution
- User GPU Server — costs determined by your infrastructure
- HuggingFace Spaces — free for public spaces, paid for private