Local AI Server Deployment
Deploy a RegScale AI Inference Server locally to support RegML functionality. This option is intended for air-gapped or private-infrastructure environments; the SaaS platform includes this capability by default for cloud users.
System Requirements
Host Requirements
- Docker Engine installed
- NVIDIA GPU with current drivers supporting CUDA ≥ 12.8
- Linux with NVIDIA Container Toolkit
- Windows is not recommended; however, GPU containers can be tested via Docker Desktop GPU support
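The host requirements above can be sanity-checked before deployment. The following is a minimal preflight sketch, assuming a Linux host with a POSIX shell; the CUDA image tag in the comment is an example, not a RegScale-specific artifact:

```shell
# Preflight: confirm Docker and the NVIDIA driver tooling are on the PATH.
for tool in docker nvidia-smi; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
# When both are present, the command below (run manually) confirms the
# NVIDIA Container Toolkit can expose the GPU inside a container; any
# CUDA >= 12.8 base image works:
#   docker run --rm --gpus all nvidia/cuda:12.8.0-base-ubuntu22.04 nvidia-smi
```

If `nvidia-smi` reports a CUDA version below 12.8, update the NVIDIA driver before proceeding.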
Hardware Requirements
- GPU: 24GB+ VRAM
- RAM: 32GB
- CPU: 16+ cores
- Storage: 50GB+ free
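A quick way to compare a host against these figures is shown below. This is a sketch assuming a Linux host with GNU coreutils and procps (`nproc`, `free`, `df`); the thresholds in the output are taken from the list above:

```shell
# Report host sizing against the hardware requirements above.
cores=$(nproc)
ram_gb=$(free -g | awk '/^Mem:/ {print $2}')
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')
echo "CPU cores: ${cores} (want 16+)"
echo "RAM: ${ram_gb} GB (want 32)"
echo "Free disk on /: ${disk_gb} GB (want 50+)"
```

GPU VRAM is not covered here; check it with `nvidia-smi --query-gpu=memory.total --format=csv` once the NVIDIA driver is installed.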
Model Download
We provide a custom version of Microsoft’s Phi-4 model.
