📦 Installation Guide
This guide provides instructions for deploying the uxopian-ai service. We cover two methods:
- Docker Compose (Recommended): Deploys the full stack (AI Service, Gateway, OpenSearch) using a provided starter kit.
- Java Application: Manual deployment for specific custom environments.
🐳 Docker Deployment (Recommended)
The easiest way to get started is using the provided Docker Compose starter kit. This sets up the AI service, a secured Gateway, and a dedicated OpenSearch node for vector storage.
🔹 Step 1: Download the Starter Kit
Download the `uxopian-ai_docker_example.zip` archive from the release repository.
Archive Structure: Once extracted, you will see the following structure. This folder contains the stack definition and all necessary configuration files.
```
uxopian-ai_docker_example
├── config
│   ├── application.yml           # Core application settings
│   ├── goals.yml                 # Predefined AI goals
│   ├── llm-clients-config.yml    # LLM provider settings (OpenAI, Azure, etc.)
│   ├── mcp-server.yml            # Model Context Protocol settings
│   ├── metrics.yml               # Observability configuration
│   ├── opensearch.yml            # Vector database connection config
│   └── prompts.yml               # System prompt definitions
├── gateway-application.yaml      # Configuration for the API Gateway
└── uxopian-ai-stack.yml          # Docker Compose definition
```
🔹 Step 2: Pull the Docker Images
Ensure you have access to the Artifactory and pull the required images:

```bash
docker pull artifactory.arondor.cloud:5001/uxopian-ai/ai-standalone:latest
docker pull artifactory.arondor.cloud:5001/uxopian-ai/gateway:latest
# OpenSearch is pulled from the public registry automatically by the compose file
```
!!! note "Image Tags"
    You may need to update the `image:` fields in `uxopian-ai-stack.yml` to match the full path of the images you just pulled (e.g., replace `image: 'ai-standalone'` with `image: 'artifactory.arondor.cloud:5001/uxopian-ai/ai-standalone:latest'`).
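For example, the service definitions would then read as follows (a minimal sketch; the service names are illustrative and other fields are elided):

```yaml
services:
  ai-standalone:
    image: 'artifactory.arondor.cloud:5001/uxopian-ai/ai-standalone:latest'
  gateway:
    image: 'artifactory.arondor.cloud:5001/uxopian-ai/gateway:latest'
```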
🔹 Step 3: Configuration
Before starting the stack, you must configure your LLM providers and environment.
1. LLM API Keys
Edit `config/llm-clients-config.yml` to add your API keys (e.g., OpenAI, Anthropic), or pass them as environment variables in `uxopian-ai-stack.yml`, as in the sketch below.
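A minimal sketch of the environment-variable approach (the variable name `OPENAI_API_KEY` and the service name are assumptions; match them to the keys referenced in `llm-clients-config.yml`):

```yaml
services:
  ai-standalone:
    environment:
      # Hypothetical variable name -- align it with llm-clients-config.yml
      - OPENAI_API_KEY=${OPENAI_API_KEY}
```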
2. Service Configuration
The `uxopian-ai-stack.yml` file orchestrates three services:
- OpenSearch: Stores vector embeddings.
- Gateway: Handles routing and exposure (port 8085).
- AI Standalone: The core intelligence engine.
Key environment variables in `uxopian-ai-stack.yml`:

| Variable | Description | Default / Example |
|---|---|---|
| `OPENSEARCH_HOST` | Hostname of the vector DB | `uxopian-ai-opensearch-node1` (internal Docker DNS) |
| `UXOPIAN_AI_PORT` | Internal port for the AI service | `8080` |
| `APP_BASE_URL` | URL where the gateway is reachable | `http://localhost:8085` |
| `SPRING_PROFILES_ACTIVE` | Active Spring profile | `dev` (disables authentication for testing) |
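For reference, here is a minimal sketch of how these variables might be wired in the compose file (the service name `ai-standalone` and the exact values are illustrative; the shipped `uxopian-ai-stack.yml` is authoritative):

```yaml
services:
  ai-standalone:
    environment:
      - OPENSEARCH_HOST=uxopian-ai-opensearch-node1
      - UXOPIAN_AI_PORT=8080
      - APP_BASE_URL=http://localhost:8085
      - SPRING_PROFILES_ACTIVE=dev  # remove for production (see warning below)
```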
!!! warning "Production Warning"
    The example stack uses `SPRING_PROFILES_ACTIVE=dev`, which disables authentication. For production deployments, remove this variable and configure proper security in the gateway.
For a detailed reference of every file inside the config/ directory, please refer to the Configuration Files documentation.
🔹 Step 4: Start the Stack
Navigate to the extracted folder and start the services.
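For example, assuming Docker Compose v2 is available:

```bash
cd uxopian-ai_docker_example
docker compose -f uxopian-ai-stack.yml up -d
```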
Verification:
- Gateway: accessible at `http://localhost:8085`
- Health check: `http://localhost:8085/uxopian-ai/actuator/health`
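Since the health endpoint follows the standard Spring Boot actuator convention, you can check it from the command line; a JSON body containing `"status":"UP"` indicates the service is healthy:

```bash
curl http://localhost:8085/uxopian-ai/actuator/health
# expected: {"status":"UP"}
```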
☕ Java Application Deployment
If you cannot use Docker, you can run the service directly as a Java application.
!!! important "Prerequisites"
    * Java 21 Runtime Environment (JRE) installed.
    * OpenSearch 2.x installed and running separately.
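You can verify the Java requirement before proceeding:

```bash
java -version  # should report a 21.x runtime
```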
🔹 Step 1: Download the Package
Download the installation ZIP file from the Arondor Artifactory:
🔗 ai-standalone-[version].zip
🔹 Step 2: Configure
- Unzip the package.
- Navigate to the `config` directory.
- Edit `opensearch.yml` to point to your existing OpenSearch instance (see the sketch below).
- Edit `llm-clients-config.yml` to provide your API keys.
Detailed configuration options are available here: Configuration Files.
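As a purely illustrative sketch of what `opensearch.yml` holds (the key names below are hypothetical; the Configuration Files reference is authoritative):

```yaml
# Hypothetical key names -- shown only to indicate the kind of settings
# this file carries; consult the Configuration Files reference.
opensearch:
  host: my-opensearch-host   # your existing OpenSearch instance
  port: 9200
  username: admin
  password: changeme
```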
🔹 Step 3: Run
From the root of the unzipped directory, execute:
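Assuming the package ships an executable Spring Boot jar named after the archive (an assumption; keep the `[version]` placeholder matching your download):

```bash
java -jar ai-standalone-[version].jar
```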