Use GenAI Models

This handler (com.rierino.handler.langchain4j.LangChainModelEventHandler) provides the ability to use LangChain models from various providers.

Handler Parameters

| Parameter | Definition | Example | Default |
| --- | --- | --- | --- |
| model.state | Name of the state manager with model configurations | genai_model | - |
| model.domain | Name of the model domain to include | chat | - |
| saga.handler | Name of the saga event handler that executes tool sagas | - | saga |
| saga.state | Name of the state manager with saga definitions (used as tools) | - | saga |
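For illustration, a minimal sketch of how these parameters might be supplied to the handler is shown below. The surrounding structure (the handler entry and its parameters object) is an assumption made for readability; only the parameter keys and sample values come from the table above.

```jsonc
{
  // Hypothetical handler registration entry; only the parameter keys/values are documented
  "handler": "com.rierino.handler.langchain4j.LangChainModelEventHandler",
  "parameters": {
    "model.state": "genai_model", // state manager holding model configurations
    "model.domain": "chat",       // model domain to include
    "saga.handler": "saga",       // saga event handler that executes tool sagas
    "saga.state": "saga"          // state manager with saga definitions used as tools
  }
}
```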

Models

Models used by this event handler are stored in a regular state manager with the following model parameters:

  • Class: Java class name for the LLM provider model (e.g. dev.langchain4j.model.openai.OpenAiChatModel).

  • Methods: List of methods and their parameters to call for initializing the LLM provider model (e.g. { "apiKey": ["TBD"] }).

  • Memory: State manager that should serve as the memory for assistants.

  • Memory Size: Maximum number of messages to retain in the assistant's memory.

Method parameters support value and secret injection using ${{}} and #{{}} notation, similar to event runner configuration loaders, allowing the use of Secret entries for securing credentials.
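For example, a chat model record in the model state manager could look like the sketch below. The field names mirror the parameters listed above (shown here in a simplified lowercase form), and the OPENAI_API_KEY secret name, the modelName method, and the genai_memory state manager are illustrative assumptions.

```jsonc
{
  "id": "chatgpt_chat",
  "class": "dev.langchain4j.model.openai.OpenAiChatModel",
  "methods": {
    // Each entry is an initializer method with its list of arguments
    "apiKey": ["#{{OPENAI_API_KEY}}"], // secret injection; assumed secret name
    "modelName": ["gpt-4o-mini"]       // assumed provider-specific method
  },
  "memory": "genai_memory", // assumed state manager name serving as assistant memory
  "memorySize": 10          // maximum number of messages retained
}
```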

Actions

LLMChat

Performs a chat interaction with a target LLM provider. Event metadata fields applicable to this action are as follows:

| Field | Definition | Example | Default |
| --- | --- | --- | --- |
| Domain | ID of the model to use | chatgpt_chat | |
| Input Element | Json path for the input in request event payload | message | - |
| Output Element | Json path for the output in response event payload | output | - |

The input element can include the following fields:

Base64 contents can also be passed as data URIs (e.g. data:image/png;base64,...), with the correct data prefixes.

Additional event metadata parameters for this action are as follows:

| Parameter | Definition | Example | Default |
| --- | --- | --- | --- |
| Message Pattern | Used for tool sagas, allowing merging data from the original event into the saga call (with arguments as the agent-generated input and payload as the original payload) | merge(arguments, {user: payload.user}) | - |
| Full Result | Whether the response should include full AI call details (e.g. token counts, tool executions) | true | false |
| Json Response | Whether the model response should be automatically parsed as a json object | true | false |
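Putting these together, a hedged sketch of an LLMChat request event is shown below. The event envelope (action/metadata/payload keys) is illustrative only; the metadata names and sample values follow the tables above.

```jsonc
{
  "action": "LLMChat",
  "metadata": {
    "Domain": "chatgpt_chat",    // model to use
    "Input Element": "message",  // json path of the user input in the payload
    "Output Element": "output",  // json path of the model reply in the response payload
    "Json Response": "false",    // set to true to parse the reply as a json object
    "Full Result": "false"       // set to true to include token counts, tool executions, etc.
  },
  "payload": {
    "message": "Summarize the status of order 12345 in one sentence."
  }
}
```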

LLMGenerateImage

Performs image generation with a target LLM provider. Event metadata fields applicable to this action are as follows:

| Field | Definition | Example | Default |
| --- | --- | --- | --- |
| Domain | ID of the model to use | dalle_gen | |
| Input Element | Json path for the input in request event payload | {prompt: "", imageCount: ""} | - |
| Output Element | Json path for the output in response event payload | imageBase64 | - |

Additional event metadata parameters for this action are as follows:

| Parameter | Definition | Example | Default |
| --- | --- | --- | --- |
| Full Result | Whether the response should include full AI call details (e.g. token counts) | true | false |
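As an illustrative sketch, an LLMGenerateImage request could look like the following. The envelope keys are assumptions, the Input Element mapping is taken from the example above, and it is assumed that each mapped entry points at a json path in the payload.

```jsonc
{
  "action": "LLMGenerateImage",
  "metadata": {
    "Domain": "dalle_gen",
    // Assumed reading: each field of the mapping refers to a json path in the payload
    "Input Element": "{prompt: \"prompt\", imageCount: \"imageCount\"}",
    "Output Element": "imageBase64" // response path receiving the generated image(s)
  },
  "payload": {
    "prompt": "A minimalist line-art logo of a coffee cup",
    "imageCount": 1
  }
}
```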

LLMEditImage

Performs an image edit with a target LLM provider. Event metadata fields applicable to this action are as follows:

| Field | Definition | Example | Default |
| --- | --- | --- | --- |
| Domain | ID of the model to use | dalle_gen | |
| Input Element | Json path for the input in request event payload | {prompt: "", image: {revisedPrompt: "", base64Data: "", mimeType: "", url: ""}, mask: ""} | - |
| Output Element | Json path for the output in response event payload | output | - |

Additional event metadata parameters for this action are as follows:

| Parameter | Definition | Example | Default |
| --- | --- | --- | --- |
| Full Result | Whether the response should include full AI call details (e.g. token counts) | true | false |
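A hedged sketch of an LLMEditImage request is shown below. The envelope keys are assumptions, the Input Element mapping mirrors the documented input structure, and it is assumed that each mapped entry points at a json path in the payload.

```jsonc
{
  "action": "LLMEditImage",
  "metadata": {
    "Domain": "dalle_gen",
    // Assumed reading: each field of the mapping refers to a json path in the payload
    "Input Element": "{prompt: \"prompt\", image: {base64Data: \"image.base64Data\", mimeType: \"image.mimeType\"}, mask: \"mask\"}",
    "Output Element": "output"
  },
  "payload": {
    "prompt": "Replace the background with a sunset over the sea",
    "image": {
      "base64Data": "iVBORw0KGgo...", // source image as base64 (or a data URI)
      "mimeType": "image/png"
    },
    "mask": "iVBORw0KGgo..." // base64 mask marking the editable region
  }
}
```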
