Creating custom AI providers using AIChatProvider
If you need an AI model or service that Brightspot does not currently support (for example, an alternative to Amazon Bedrock or Google Vertex), you can integrate it yourself by implementing the AIChatProvider interface. This guide walks through the interface and provides code examples for each of its methods.
1. Implementing AIChatProvider
To create a custom AI provider, implement the AIChatProvider interface and override its methods.
Example implementation:
@Recordable.DisplayName("my-custom-provider")
public class MyCustomAIProvider extends Record implements AIChatProvider {

    @ToolUi.ValueGeneratorClass(AIChatProviderModelIdValueGenerator.class)
    private String model;

    public String getModel() {
        return model;
    }

    public void setModel(String model) {
        this.model = model;
    }

    @Override
    public Set<String> getModelNames() {
        // Return a set of supported model names.
        return Set.of("model-v1", "model-v2");
    }

    @Override
    public void invokeLlm(AIChatRequest request) {
        // Simulate invoking an LLM synchronously.
        String prompt = getPrompt(request);
        String response = "Simulated response for: " + prompt;
        request.getResponse().setText(response);
        request.getChat().save();
    }

    @Override
    public CompletableFuture<?> invokeLlmAsync(AIChatRequest request) {
        // Simulate invoking an LLM asynchronously.
        return CompletableFuture.runAsync(() -> {
            String prompt = getPrompt(request);
            String response = "Simulated async response for: " + prompt;
            request.getResponse().setText(response);
            request.getChat().save();
        });
    }

    @Override
    public Message templatedContextMessage(Prompt prompt, Message contextMessage) {
        // Create a templated context message based on the prompt and context.
        Message message = new Message();
        message.setText(prompt.getText());
        message.setTemplatedText("User question: " + prompt.getText()
                + " with context: " + contextMessage.getText());
        return message;
    }

    private String getPrompt(AIChatRequest request) {
        // Simulate prompt generation from the conversation history.
        return request.getConversationHistory().stream()
                .map(m -> m.getUser().toString() + " " + m.getTemplatedText())
                .collect(Collectors.joining("\n"));
    }
}
2. Method details
getModelNames
The getModelNames method returns a set of supported model names. These names are tied to the @ToolUi.ValueGeneratorClass annotation, which enables the models to appear as selectable options in the UI.
Key integration:
@ToolUi.ValueGeneratorClass(AIChatProviderModelIdValueGenerator.class)
private String model;
- Annotation purpose—The @ToolUi.ValueGeneratorClass annotation ensures that the values returned by getModelNames populate the dropdown or selection options in the UI for the model field.
- User interaction—In the UI, users can choose from the set of models defined by the getModelNames method.
Example usage:
AIChatProvider provider = new MyCustomAIProvider();
System.out.println("Supported models: " + provider.getModelNames());
Output:
Supported models: [model-v1, model-v2]
invokeLlm
The invokeLlm method invokes the LLM synchronously and populates the response on the AIChatRequest.
Example usage:
AIChatRequest request = AIChatRequest.newBuilder()
        .newChat()
        .withUserPrompt("Hello, AI!")
        .build();

AIChatProvider provider = new MyCustomAIProvider();
provider.invokeLlm(request);

System.out.println("LLM Response: " + request.getResponse().getText());
Output:
LLM Response: Simulated response for: Hello, AI!
invokeLlmAsync
The invokeLlmAsync method invokes the LLM asynchronously and returns a CompletableFuture that completes once the response has been populated.
Example usage:
AIChatRequest request = AIChatRequest.newBuilder()
        .newChat()
        .withUserPrompt("Hello, async AI!")
        .build();

AIChatProvider provider = new MyCustomAIProvider();
CompletableFuture<?> future = provider.invokeLlmAsync(request);

future.thenRun(() -> {
    System.out.println("Async LLM Response: " + request.getResponse().getText());
});
Output:
Async LLM Response: Simulated async response for: Hello, async AI!
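Note that thenRun only registers a callback. In a short-lived program or test, you may need to call future.join() afterward to block until the response is populated before the JVM exits.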
templatedContextMessage
The templatedContextMessage method allows you to format and customize how the context is inserted into a prompt. This ensures the AI receives structured and relevant information to generate accurate responses.
Example usage:
Prompt prompt = new Prompt("What are some good Italian restaurants?");
Message contextMessage = new Message();
contextMessage.setText("I am in downtown San Francisco and prefer outdoor seating.");

AIChatProvider provider = new MyCustomAIProvider();
Message templatedMessage = provider.templatedContextMessage(prompt, contextMessage);

System.out.println("Text: " + templatedMessage.getText());
System.out.println("TemplatedText: " + templatedMessage.getTemplatedText());
Output:
Text: What are some good Italian restaurants?
TemplatedText: User question: What are some good Italian restaurants? with context: I am in downtown San Francisco and prefer outdoor seating.
3. Additional notes
- Thread safety—Ensure thread safety if your implementation shares resources, such as a lazily created API client, across requests (first sketch below).
- Asynchronous handling—Handle CompletableFuture cancellations gracefully in invokeLlmAsync (second sketch below).
- Custom templating—Customize templatedContextMessage to match your AI provider's requirements and templating logic (third sketch below).
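For the thread-safety note, a common pitfall is lazily creating a shared client inside invokeLlm. A minimal sketch of safe lazy initialization, assuming a hypothetical MyModelClient class that is not part of Brightspot:

public class MyCustomAIProvider extends Record implements AIChatProvider {

    // Hypothetical shared client; volatile plus double-checked locking keeps
    // concurrent requests from constructing it twice.
    private transient volatile MyModelClient client;

    private MyModelClient getClient() {
        MyModelClient local = client;
        if (local == null) {
            synchronized (this) {
                local = client;
                if (local == null) {
                    client = local = new MyModelClient();
                }
            }
        }
        return local;
    }

    // ... remaining methods as in the example above
}

For the asynchronous note, cancelling a CompletableFuture does not interrupt the task behind it, so a provider that wants to honor cancellation can check the returned future before committing work. A sketch reworking the example's invokeLlmAsync; the placement of the cancellation check is illustrative:

@Override
public CompletableFuture<?> invokeLlmAsync(AIChatRequest request) {
    CompletableFuture<Void> future = new CompletableFuture<>();

    CompletableFuture.runAsync(() -> {
        try {
            String prompt = getPrompt(request);
            // Cancelling a CompletableFuture does not interrupt this task,
            // so check for cancellation before saving a response.
            if (future.isCancelled()) {
                return;
            }
            request.getResponse().setText("Simulated async response for: " + prompt);
            request.getChat().save();
            future.complete(null);
        } catch (Exception e) {
            future.completeExceptionally(e);
        }
    });

    return future;
}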
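For the templating note, some models follow context more reliably when it is wrapped in explicit delimiters. A sketch of an alternative templatedContextMessage; the delimiter format is an assumption, not a Brightspot convention:

@Override
public Message templatedContextMessage(Prompt prompt, Message contextMessage) {
    Message message = new Message();
    message.setText(prompt.getText());
    // Wrap the context in delimiters; the exact format is provider-specific.
    message.setTemplatedText(
            "<context>\n" + contextMessage.getText() + "\n</context>\n\n"
                    + "Question: " + prompt.getText());
    return message;
}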
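Whichever templating format you choose, keep setText as the user-facing prompt and reserve setTemplatedText for the provider-facing string, as the example implementation above does.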
With this guide and examples, you can implement your own custom AI provider using the AIChatProvider interface.