
Local AI Server Deployment

Deploy a RegScale AI Inference Server locally to support RegML functionality. This option is intended for air-gapped or private-infrastructure environments; our SaaS platform includes this capability by default for cloud users.

System Requirements

Host Requirements

  • Docker Engine with the NVIDIA Container Toolkit installed.
  • NVIDIA GPU with current drivers supporting CUDA ≥ 12.8.
  • A Linux host.
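Before deploying, you can confirm the host prerequisites above are met. The following is a minimal sketch assuming a Linux host; the `nvidia/cuda:12.8.0-base-ubuntu22.04` image tag is illustrative (any CUDA ≥ 12.8 base image with `nvidia-smi` works):

```shell
# Docker Engine: the daemon must be installed, running, and reachable
if docker info > /dev/null 2>&1; then docker_status=OK; else docker_status=MISSING; fi
echo "Docker Engine: ${docker_status}"

# NVIDIA Container Toolkit: run nvidia-smi inside a CUDA base image.
# Succeeds only if the toolkit is configured and the host driver supports CUDA >= 12.8.
if docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu22.04 nvidia-smi > /dev/null 2>&1; then
  toolkit_status=OK
else
  toolkit_status=MISSING
fi
echo "NVIDIA Container Toolkit: ${toolkit_status}"
```

If either check reports `MISSING`, install Docker Engine and the NVIDIA Container Toolkit per NVIDIA's installation guide before continuing.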

Hardware Requirements

  • GPU: We benchmark on an NVIDIA A100 (40 GB); Ampere or later architectures are supported.
  • RAM: 32 GB
  • CPU: 16+ cores
  • Storage: 50 GB+ free
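The resources above can be checked against the host with standard Linux tools. A minimal sketch, assuming GNU coreutils and that the deployment volume is mounted at `/` (adjust the `df` path otherwise):

```shell
# Report host resources against the recommended minimums
ram_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
cpu_cores=$(nproc)
disk_free_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

echo "RAM: ${ram_gb} GB (recommended: 32 GB)"
echo "CPU cores: ${cpu_cores} (recommended: 16+)"
echo "Free storage on /: ${disk_free_gb} GB (recommended: 50+ GB)"

# GPU check: requires NVIDIA drivers; skipped if nvidia-smi is absent
if command -v nvidia-smi > /dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv,noheader
else
  echo "nvidia-smi not found: install NVIDIA drivers before deploying"
fi
```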

Model Download

We provide a custom version of Microsoft’s Phi-4 model.