Setup Overview
Step 1: Generate an AI Inference Server API Key
Use the provided utility script to generate an API key for your RegScale AI Inference Server.
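If the utility script is not yet at hand, a strong random token can be generated with `openssl` as a stopgap. This is an assumption, not the official method: confirm with your Solutions Engineer that the Inference Server accepts an arbitrary hex string before using it.

```shell
# Generate a 64-character hex token to use as the API key
# (hypothetical approach -- the provided utility script may produce a different format)
openssl rand -hex 32
```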
Step 2: Update your Environment Variables
Setup and installation are straightforward: a special version of our standalone Docker Compose file, provided by your Solutions Engineer, spins up your RegScale instance, your database, and your new RegScale AI Inference Server.
Each application contains a corresponding environment file:
- ai.env - Environment variables for the RegScale AI Inference Server:
  - LOGLEVEL - Sets the Inference Server's log level.
  - LOCAL_MODELS_PATH - Defines where inside the container the app looks for models. Defaults to "/app/local_models" and should not be changed.
  - LANGUAGE_MODEL_NAME - Defaults to a configuration value for Phi-4 and should not be changed.
  - INFERENCE_API_KEY - Set this to the API key you generated in Step 1.
- atlas.env - Environment variables for the RegScale application. The following AI-specific values must be set:
  - regmlEnabledEnv=true
  - regmlModelSelector=AzureAI.OpenAI.gpt-4o
  - RegmlAiModelRegistryModelsAzureAI.OpenAI.gpt-4o__regmlInferenceApiKey=<Your_Api_Key>
  - RegmlAiModelRegistryModelsAzureAI.OpenAI.gpt-4o__regmlInferenceEndpoint=http://reg-ai-inference:8000/regml/query/
  - CoreSettingsRegmlEmbeddingsApiUrl=http://reg-ai-inference:8000/regml/embeddings/
  - CoreSettingsRegmlEmbeddingsApiKey=<Your_Api_Key>
- db.env - Environment variables for the database.
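Putting the above together, a minimal ai.env might look like the sketch below. The values shown are examples only: `<Your_Api_Key>` is a placeholder for the key from Step 1, and the `LOGLEVEL` value is an assumption (use whatever level your team prefers). `LANGUAGE_MODEL_NAME` is omitted so its Phi-4 default applies.

```shell
# ai.env - RegScale AI Inference Server (example values)
LOGLEVEL=INFO                        # assumed example; set to your preferred level
LOCAL_MODELS_PATH=/app/local_models  # default; should not be changed
INFERENCE_API_KEY=<Your_Api_Key>     # placeholder for the key generated in Step 1
```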
Step 3: Start With Docker Compose
A single command launches all three containers:

```shell
sudo docker compose -f docker-compose-regscale-standalone.yml up -d
```
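To confirm the containers came up cleanly, the standard `ps` and `logs` subcommands of Docker Compose can be run against the same file. The service name `reg-ai-inference` below is an assumption, inferred from the endpoint hostname used in atlas.env; check your compose file for the actual name.

```shell
# List the services defined in the standalone compose file and their status
sudo docker compose -f docker-compose-regscale-standalone.yml ps

# Tail the AI Inference Server's logs (service name assumed to be
# reg-ai-inference, matching the hostname in atlas.env)
sudo docker compose -f docker-compose-regscale-standalone.yml logs -f reg-ai-inference
```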