Release Notes: uxopian-ai v2026.0.0-ft2
Release Date: February 2026
Version: 2026.0.0-ft2
This release introduces dynamic LLM provider management, a standalone Gateway service, and significant improvements to the admin panel, statistics, and developer experience.
Highlights
Dynamic LLM Provider Configuration
LLM provider configurations are now dynamic entities stored in OpenSearch, replacing the previous static YAML-only approach. Providers, models, and their parameters can be created, updated, and deleted at runtime, per tenant, without restarting the service.
Key changes:
- New provider configuration entity with global settings and per-model overrides.
- Full CRUD via the Admin API (`/api/v1/admin/llm/provider-conf`).
- Per-tenant configuration with merge strategies: `OVERWRITE`, `MERGE`, `CREATE_IF_MISSING`.
- AES-GCM encryption for API secrets at rest.
- YAML bootstrapping still supported: configurations defined in `llm-clients-config.yml` are loaded into OpenSearch at startup, then managed dynamically.
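As a rough sketch of what a bootstrap fragment could look like, the following is illustrative only: the `llm.provider.globals` / `llm.provider.tenants` roots and the merge-strategy names come from this release, but every key below those roots is an assumption; consult Configuration Files → Dynamic Provider Configuration for the authoritative schema.

```yaml
# Hypothetical bootstrap fragment for llm-clients-config.yml.
# Only the llm.provider.globals / llm.provider.tenants roots and the
# merge strategy values are documented; all nested key names are guesses.
llm:
  provider:
    globals:
      - name: openai
        api-secret: ${OPENAI_API_KEY}
        models:
          - model-name: gpt-4o
            temperature: 0.2
    tenants:
      acme:
        merge-strategy: OVERWRITE      # or MERGE, CREATE_IF_MISSING
        providers:
          - name: openai
            models:
              - model-name: gpt-4o-mini
```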
See Configuration Files → Dynamic Provider Configuration and LLM Provider Management.
Standalone Gateway Service
The BFF Gateway is now a standalone service, deployed independently from the AI service. The runtime architecture is unchanged: the Gateway authenticates requests, injects headers, and proxies to the AI service. Because it ships separately, the security layer can now be scaled and updated on its own, which also simplifies deployment.
See Security Model.
Statistics Improvements
The statistics API has been expanded from a single endpoint to five dedicated endpoints, each with a configurable time-interval parameter:
- `GET /api/v1/admin/stats/global` – Aggregate counters
- `GET /api/v1/admin/stats/timeseries?interval=DAY` – Time-series trends
- `GET /api/v1/admin/stats/llm-distribution` – Model usage breakdown
- `GET /api/v1/admin/stats/top-prompts-time-saved` – ROI ranking
- `GET /api/v1/admin/stats/feature-adoption` – Advanced feature usage rates
Supported intervals: `HOUR`, `DAY`, `WEEK`, `MONTH`, `YEAR`.
See Statistics & ROI and REST API Reference.
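As a minimal sketch of how a client might target these endpoints, the snippet below builds (but does not send) a request with the interval parameter. The base URL and the absence of auth headers are assumptions; adapt both to your deployment.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class StatsRequestExample {
    // Assumed base URL for illustration; your Gateway address will differ.
    static final String BASE = "http://localhost:8080";

    // Builds a GET request for one of the stats endpoints, optionally
    // appending the interval query parameter (HOUR, DAY, WEEK, MONTH, YEAR).
    static HttpRequest statsRequest(String endpoint, String interval) {
        URI uri = URI.create(BASE + "/api/v1/admin/stats/" + endpoint
                + (interval == null ? "" : "?interval=" + interval));
        return HttpRequest.newBuilder(uri).GET().build();
    }

    public static void main(String[] args) {
        // Time-series trends bucketed by week.
        HttpRequest req = statsRequest("timeseries", "WEEK");
        System.out.println(req.uri());
        // -> http://localhost:8080/api/v1/admin/stats/timeseries?interval=WEEK
    }
}
```

Sending the request with `java.net.http.HttpClient` would additionally require whatever authentication the Gateway enforces.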
New Features
Prompt Tester
The admin panel now includes a Prompt Tester that lets you execute prompts directly from the UI:
- Automatically detects Thymeleaf variables in the prompt template.
- Provides input fields for each variable (text or image).
- Executes the prompt against the configured LLM and displays the result.
- Generates the equivalent cURL command for easy reproduction.
See Prompt Management → Prompt Tester.
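The variable-detection step above can be sketched with a simple scan. This assumes template variables use Thymeleaf's `${...}` expression syntax; the Prompt Tester's actual detection logic is not documented here and may differ.

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TemplateVariableScan {
    // Matches ${name} expressions; an assumption about the template syntax.
    static final Pattern VAR = Pattern.compile("\\$\\{([a-zA-Z_][a-zA-Z0-9_]*)\\}");

    // Returns variable names in order of first appearance, without duplicates.
    static Set<String> variables(String template) {
        Set<String> names = new LinkedHashSet<>();
        Matcher m = VAR.matcher(template);
        while (m.find()) {
            names.add(m.group(1));
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(variables("Summarize ${document} for ${audience}."));
        // -> [document, audience]
    }
}
```

Each detected name would then map to one input field (text or image) in the tester UI.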
LLM Provider Admin UI
A complete management interface for LLM provider configurations:
- Provider List – Table with search, filter, and CRUD actions.
- Provider Editor – Form to configure provider identity, global settings, and per-model overrides.
- Connection Tester – Test connectivity per model with live status badges.
Fast2 Authentication Provider
New built-in `Fast2Provider` for the Gateway. It validates JWT tokens issued by Fast2 by fetching the public key from a configurable remote endpoint. Configure it as `provider: Fast2Provider` in the Gateway route configuration.
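A route entry might look roughly like the fragment below. Only `provider: Fast2Provider` is documented in this release; the surrounding route keys and the public-key endpoint key are assumptions, so check the Gateway's `application.yml` reference for the exact names.

```yaml
# Hypothetical Gateway route -- only the provider value is documented;
# every other key name here is illustrative.
gateway:
  routes:
    - id: ai-service
      uri: http://ai-service:8080
      provider: Fast2Provider
      fast2:
        public-key-endpoint: http://fast2-host/api/security/public-key
```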
Swagger / OpenAPI Improvements
- All admin controllers now use consistent `Admin -` tag prefixes for better organization.
- Complete request/response schema documentation on all endpoints.
- Swagger UI is publicly accessible (no authentication required), which is convenient for API exploration during development.
Technical Changes
ModelProvider Interface Update
The `ModelProvider` interface has been simplified:
- Before: `createChatModelInstance(String modelName)`, `getDefaultModelName()`, `getSupportedModels()`
- After: `createChatModelInstance(LlmModelConf params)`, `createStreamingChatModelInstance(LlmModelConf params)`
The `getDefaultModelName()` and `getSupportedModels()` methods have been removed; model metadata is now managed via dynamic provider configurations. Custom providers should extend `AbstractLlmClient` and use `params.getModelName()`, `params.getApiSecret()`, etc.
See Adding a New LLM Provider.
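The shape of the new contract can be illustrated with a self-contained sketch. The stub types below (`LlmModelConf` with only two fields, a one-method `ChatModel`, the `EchoProvider` itself) are simplifications for illustration; the real types live in the uxopian-ai codebase and carry more state, and custom providers would extend `AbstractLlmClient` rather than implement the interface directly.

```java
// Minimal stand-ins for illustration; the real LlmModelConf carries the
// full per-model configuration resolved from OpenSearch.
record LlmModelConf(String modelName, String apiSecret) {
    String getModelName() { return modelName; }
    String getApiSecret() { return apiSecret; }
}

interface ChatModel {
    String chat(String prompt);
}

// New interface shape: both factory methods take the resolved model config
// instead of a bare model-name String.
interface ModelProvider {
    ChatModel createChatModelInstance(LlmModelConf params);
    ChatModel createStreamingChatModelInstance(LlmModelConf params);
}

public class EchoProvider implements ModelProvider {
    @Override
    public ChatModel createChatModelInstance(LlmModelConf params) {
        // A real provider would build an HTTP client using params.getApiSecret().
        return prompt -> "[" + params.getModelName() + "] " + prompt;
    }

    @Override
    public ChatModel createStreamingChatModelInstance(LlmModelConf params) {
        return createChatModelInstance(params);
    }

    public static void main(String[] args) {
        ChatModel model = new EchoProvider()
                .createChatModelInstance(new LlmModelConf("demo-model", "sk-test"));
        System.out.println(model.chat("hello"));
        // -> [demo-model] hello
    }
}
```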
Parameter Precedence (5 Levels)
The parameter resolution hierarchy has been extended from 3 to 5 levels:
1. API Call Parameters – Values passed directly in the request.
2. Prompt Defaults – `defaultLlmModel`, `defaultLlmProvider` on the Prompt entity.
3. Provider Model Config – Per-model settings in `LlmModelConf`.
4. Provider Global Config – Shared settings in `LlmProviderConf.globalConf`.
5. YAML Global Defaults – `llm.default.*` in `llm-clients-config.yml`.
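The resolution rule behaves like "first non-null value wins, walking the levels in order". The sketch below shows that idea for a single parameter; it is illustrative only and not the actual resolver code.

```java
import java.util.Arrays;
import java.util.Objects;

public class ParamResolution {
    // Walks candidate values in precedence order and returns the first
    // one that is set (non-null). Purely illustrative.
    @SafeVarargs
    static <T> T resolve(T... levels) {
        return Arrays.stream(levels)
                .filter(Objects::nonNull)
                .findFirst()
                .orElse(null);
    }

    public static void main(String[] args) {
        Double apiCall = null;        // 1. not set on the request
        Double promptDefault = null;  // 2. not set on the Prompt entity
        Double modelConf = 0.2;       // 3. set in the per-model config
        Double globalConf = 0.7;      // 4. shared provider setting
        Double yamlDefault = 1.0;     // 5. llm.default.* fallback
        System.out.println(resolve(apiCall, promptDefault, modelConf, globalConf, yamlDefault));
        // -> 0.2 (the per-model config wins because levels 1 and 2 are unset)
    }
}
```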
Dependency Upgrades
| Dependency | Previous | Current |
|---|---|---|
| Spring Boot | 3.5.x | 3.5.10 |
| LangChain4J | 1.x | 1.11.0 |
| OpenSearch Client | 2.x | 3.5.0 |
| Docker Base Image | 1.0.x | 1.0.4 |
Frontend Changes
- Improved state management for better performance and responsiveness.
- Fixed edge cases in markdown rendering within chat responses.
- Improved auto-scroll behavior during streaming responses.
Migration Notes
From v2026.0.0-ft1-rc2
- **LLM Configuration**: The `llm-clients-config.yml` format has changed. The previous `supported-models` lists under each provider section are no longer supported. You must migrate your provider and model definitions to the new `llm.provider.globals` / `llm.provider.tenants` structure. These configurations are loaded into OpenSearch at startup and can then be managed dynamically via the Admin API or UI. See Configuration Files → Dynamic Provider Configuration for the new format and a full YAML example.
- **Gateway Deployment**: The Gateway is now deployed as a separate service. Update your Docker Compose file to use the dedicated Gateway image (`uxopian-ai/gateway-service`). The configuration format (`application.yml` with routes) remains the same.
- **Custom LLM Providers**: If you have custom `ModelProvider` implementations, update them to accept `LlmModelConf` instead of `String` in their factory methods. Extend `AbstractLlmClient` for convenience. See the updated guide.
- **API Secret Encryption**: Set the `APP_SECURITY_SECRET_KEY` environment variable (a Base64-encoded AES key) to enable encryption of provider API secrets stored in OpenSearch. If it is not set, secrets are stored in clear text.
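To illustrate what AES-GCM encryption of a provider secret involves, here is a self-contained sketch using the JDK's `javax.crypto` API. It is not the product's implementation: the IV layout, tag length, and key handling shown here are assumptions; only "AES-GCM" and "Base64-encoded AES key" come from the release notes. The all-zero demo key is for the example only; never use one in practice.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class SecretEncryption {
    static final SecureRandom RNG = new SecureRandom();

    // Encrypts with AES-GCM (128-bit tag), prepending a random 12-byte IV
    // to the ciphertext and Base64-encoding the result.
    static String encrypt(String plaintext, SecretKey key) throws Exception {
        byte[] iv = new byte[12];
        RNG.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return Base64.getEncoder().encodeToString(out);
    }

    // Reverses encrypt(): splits off the 12-byte IV, then decrypts and
    // authenticates the remainder.
    static String decrypt(String encoded, SecretKey key) throws Exception {
        byte[] in = Base64.getDecoder().decode(encoded);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(128, Arrays.copyOf(in, 12)));
        return new String(cipher.doFinal(in, 12, in.length - 12), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        // A Base64-encoded 256-bit key, as APP_SECURITY_SECRET_KEY would hold.
        String b64Key = Base64.getEncoder().encodeToString(new byte[32]); // demo key only
        SecretKey key = new SecretKeySpec(Base64.getDecoder().decode(b64Key), "AES");
        String token = encrypt("sk-provider-secret", key);
        System.out.println(decrypt(token, key));
        // -> sk-provider-secret
    }
}
```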
Ready to start? Check out the Quick Start or the full Installation Guide.