Welcome to uxopian-ai

uxopian-ai is a complete, standalone framework designed to accelerate and simplify the integration of powerful AI features into any enterprise application.

Built on a solid foundation of Java 21 LTS and Spring Boot 3.5, it goes far beyond a simple library by providing a full suite of tools — from backend services to frontend components — to create sophisticated, reliable, and scalable AI solutions.


The uxopian-ai Advantage: More Than Just a Library

While uxopian-ai uses the excellent Langchain4j library as its core for LLM interactions, it builds a complete enterprise-ready ecosystem around it. Here's the added value:

  • Standalone Service, Not Just Code: A pre-packaged, deployable service that saves you months of development and infrastructure setup.
  • Ready-to-Use UI Components: Instantly integrate AI with Web Components (compiled as IIFE bundles, with scoped CSS), plus plug-and-play integration scripts.
  • Advanced Orchestration Engine: The unique Goal system enables dynamic prompt selection based on context — no need to build this from scratch.
  • Complete Conversation Management: Persistent conversations with cost tracking, response regeneration, and user feedback support.
  • Data-Driven Insights: A comprehensive admin panel to monitor ROI, token usage, and adoption trends.

Key Features at a Glance

Effortless & Scalable Integration

  • Standalone Service: Deployable via Docker or as a Java 21 application.
  • Multi-Tenant Architecture: Designed for internal deployments with clear logical separation and distinct tenant management.
  • Web-Component UI: Lightweight, embeddable components for any web app.
  • Rich REST API: Fully documented (Swagger) for seamless integration.
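
As an illustration of the REST integration, here is a minimal sketch of building a chat request with the JDK's built-in HTTP client. The endpoint path, tenant header name, and payload shape are assumptions made for illustration only — consult the Swagger documentation of your deployment for the actual contract.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ChatRequestExample {
    public static void main(String[] args) {
        // Hypothetical payload shape -- check the Swagger UI of your
        // deployment for the real endpoint paths and request schemas.
        String body = """
                {"goal": "summarize", "message": "Summarize this contract."}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/v1/conversations"))
                .header("Content-Type", "application/json")
                .header("X-Tenant-Id", "acme")   // multi-tenant header (assumed name)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        System.out.println(request.method() + " " + request.uri());
    }
}
```

Sending the request is then a one-liner with `HttpClient.newHttpClient().send(...)`; it is omitted here because it requires a running instance.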

Powerful Admin & Analytics

  • Granular Token Monitoring: Visualize input and output token consumption globally, by specific users, or per conversation.
  • ROI & Efficiency Tracking: Dedicated metrics show how often each prompt is used and estimate the total time saved.
  • Usage Trends: Analyze activity over time (requests per week), monitor LLM model distribution, and track the adoption of advanced features like multi-modal capabilities.
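
The per-user breakdown described above boils down to grouping usage records and summing token counts. A minimal plain-Java sketch of that aggregation — record and field names are illustrative, not the framework's actual data model:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TokenUsageExample {
    // Simplified usage record: the real admin panel reads these figures
    // from persisted conversations; the shape here is illustrative.
    record Usage(String user, long inputTokens, long outputTokens) {}

    public static void main(String[] args) {
        List<Usage> log = List.of(
                new Usage("alice", 1200, 300),
                new Usage("bob",    800, 150),
                new Usage("alice",  500, 120));

        // Per-user total consumption, the kind of breakdown the dashboard charts.
        Map<String, Long> perUser = log.stream().collect(Collectors.groupingBy(
                Usage::user,
                Collectors.summingLong(u -> u.inputTokens() + u.outputTokens())));

        System.out.println(perUser.get("alice")); // 2120
        System.out.println(perUser.get("bob"));   // 950
    }
}
```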

Intelligent Orchestration

  • Goal System: Define context-aware workflows using filters and priorities. Example: a "comparison" goal automatically picks a legal-specific prompt for contracts and a generic one for other documents.
  • Templating Engine: Dynamic data injection, custom Java services, and conditional logic with Thymeleaf.
  • Template Helpers: Add your own Java functions to enrich prompts.
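
To make the Goal idea concrete, here is a small plain-Java sketch of context-aware prompt selection by filter and priority. The types and names are assumptions for illustration, not uxopian-ai's actual API:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.function.Predicate;

public class GoalSelectionExample {
    // Illustrative model: a goal maps context filters to prompts, with a
    // priority used to pick the most specific match.
    record PromptRule(String promptId, int priority, Predicate<Map<String, String>> filter) {}

    static Optional<String> resolve(List<PromptRule> rules, Map<String, String> ctx) {
        return rules.stream()
                .filter(r -> r.filter().test(ctx))              // keep rules whose filter matches
                .max(Comparator.comparingInt(PromptRule::priority)) // most specific wins
                .map(PromptRule::promptId);
    }

    public static void main(String[] args) {
        List<PromptRule> comparisonGoal = List.of(
                new PromptRule("compare-legal",   10, c -> "contract".equals(c.get("docType"))),
                new PromptRule("compare-generic",  0, c -> true));

        System.out.println(resolve(comparisonGoal, Map.of("docType", "contract")).orElseThrow());
        System.out.println(resolve(comparisonGoal, Map.of("docType", "invoice")).orElseThrow());
    }
}
```

The first lookup selects the legal prompt, the second falls back to the generic one — the same shape of decision the Goal system makes from its configured filters.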

Robust LLM Interaction

  • Broad Support: Compatible with many LLM providers out of the box.
  • Custom Connectors: Add private or fine-tuned models easily.
  • Advanced Features: Native support for function calling, multi-modal requests (text + image), and both streaming and non-streaming responses.
  • MCP Client: Acts as a client for Model Context Protocol (MCP) servers.
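
Function calling, at its core, means the model names a registered function, the framework executes it, and the result is fed back to the model. A stdlib-only sketch of that dispatch step — the tool names and registry shape are illustrative, not the framework's API:

```java
import java.util.Map;
import java.util.function.Function;

public class FunctionCallExample {
    public static void main(String[] args) {
        // When the LLM requests a tool call, the framework looks up the
        // registered Java function and runs it with the model's arguments.
        Map<String, Function<String, String>> tools = Map.of(
                "get_document_title", docId -> "Q3 Supplier Agreement",     // stubbed lookup
                "word_count", text -> String.valueOf(text.split("\\s+").length));

        // Pretend the model answered with a tool request such as:
        //   {"tool": "word_count", "argument": "..."}
        String requestedTool = "word_count";
        String argument = "This clause limits liability to direct damages";

        String result = tools.get(requestedTool).apply(argument);
        System.out.println(result);
    }
}
```

In the real flow this result is appended to the conversation and the model is invoked again so it can incorporate the tool output into its answer.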

Complete Conversation Management

  • Persistent History: Conversations and messages are stored with full context.
  • Feedback Loop: Gather specific user feedback (Good/Bad/Neutral) on responses to improve prompt quality.
  • Rich UX: Regenerate, copy, and manage conversation content easily.
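
Response regeneration can be pictured as replacing the latest assistant message in a persisted conversation rather than appending a new one. A minimal in-memory sketch, with class and method names invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class ConversationExample {
    record Message(String role, String content) {}

    // In-memory stand-in for the persistent conversation store.
    static class Conversation {
        private final List<Message> messages = new ArrayList<>();

        void add(String role, String content) {
            messages.add(new Message(role, content));
        }

        // Regeneration replaces the last assistant message in place,
        // so the history keeps one answer per exchange.
        void regenerateLastAnswer(String newContent) {
            for (int i = messages.size() - 1; i >= 0; i--) {
                if ("assistant".equals(messages.get(i).role())) {
                    messages.set(i, new Message("assistant", newContent));
                    return;
                }
            }
        }

        int size() { return messages.size(); }
        Message last() { return messages.get(messages.size() - 1); }
    }

    public static void main(String[] args) {
        Conversation c = new Conversation();
        c.add("user", "Summarize the contract.");
        c.add("assistant", "Draft summary v1");
        c.regenerateLastAnswer("Draft summary v2");
        System.out.println(c.size() + " " + c.last().content());
    }
}
```

The history still holds two messages after regeneration: the user turn and the replaced answer.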

Reading Paths

Choose the path that matches your role:

New to uxopian-ai?

  1. Quick Start — Your first AI exchange in 5 minutes.
  2. Core Concepts — Understand Prompts, Goals, and Conversations.
  3. Architecture Overview — See how the components fit together.

Operator / DevOps?

  1. Deploy with Docker — Set up the full stack.
  2. Configuration Files — YAML reference for all config files.
  3. Environment Variables — Quick reference for Docker deployments.
  4. Backup and Recovery — Protect your data.

Integrator?

  1. Architecture Overview — Understand the BFF pattern.
  2. Embedding in a Web Page — Add AI to any web app.
  3. Integrating with ARender — Add AI buttons in ARender.
  4. Integrating with FlowerDocs — Add AI features in FlowerDocs.

Java Developer?

  1. Core Concepts — Understand the domain model.
  2. The Templating Engine — Master dynamic prompt authoring.
  3. Creating Custom Helpers — Inject your own data into prompts.
  4. Creating Custom Tools — Give the LLM the ability to take actions.