Home
AI Tasks is an open source, AI-integrated workflow orchestration tool for Liferay DXP. It integrates large language models into task management with a low-code approach built around a visual designer, streamlining the management of even complex business operations. Its goal is to ease, simplify and accelerate AI integration in Liferay, eliminating boilerplate code to lower costs and reduce time to market.
- Workflow Orchestration: Design workflows with a visual editor. Export & import as JSON configuration files.
- Liferay Integration: Manage access, permissions and policies with Liferay. Trigger and consume the tasks through REST API.
- AI Integration: Integrate with Vertex AI, OpenAI, Ollama or HuggingFace large language models.
- Data locality: Local LLMs supported through Ollama.
- 3rd Party Integration: Integrate with 3rd party workflow automation tools, and let 3rd party tools integrate with AI Tasks via REST APIs.
- Open Source: Extend and customize the task node types and function callbacks to your needs
- Create (specialized) chatbots on Liferay
- Add AI translation capabilities to Liferay pages, display templates or applications
- Add AI summarization capabilities to Liferay pages, display templates or applications
- Add image & content generation capabilities to Liferay pages, display templates or applications
- Create RAG chatbots grounded in local Liferay semantic search or, for example, in Google Search
- Integrate Liferay OOTB workflows with AI
- Create chat interfaces. Let a chatbot or voice agent operate Liferay services; for example, let users manage their user account, create content or fetch available PTO days from an HR system through a chat prompt
- Create an AI Task, for example a simple LLM chat, using the visual designer
- From the provided test chatbot, a Liferay display template, a client extension, your own frontend or from anywhere else, consume the AI Task through its `generate` REST endpoint.
For example, to send the text `Please tell me a joke` to an AI Task with external reference code `sampleJokingChatbotUsingVertexAI`, one would send a payload like this:
```shell
curl -X 'POST' \
  'http://localhost:8080/o/ai-tasks/v1.0/generate/sampleJokingChatbotUsingVertexAI' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -H 'x-csrf-token: kPNn0CrS' \
  -d '{
    "input": {
      "text": "Please tell me a joke"
    }
  }'
```
If everything was set up correctly, the response, with debug on, should look something like this:
```json
{
  "debugInfo": {
    "1": {
      "userMessage": "Please tell me a joke",
      "executionTime": "1921ms",
      "inputTokenCount": 21,
      "systemMessage": "You are a polite assistant and try to help people as good as you can.",
      "finishReason": "STOP",
      "totalTokenCount": 72,
      "outputTokenCount": 51
    }
  },
  "output": {
    "text": "Why did the golfer wear two pants? Because he's afraid he might get a \"Hole-in-one.\" \n\nThis is a classic punny joke! Let me know if you'd like to hear another one. 😊"
  },
  "took": "1921ms"
}
```
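From any backend or script, the response shape above can be handled with a few lines of JSON parsing. The sketch below is a minimal illustration in Python (not part of AI Tasks itself), assuming the response shape shown above:

```python
import json

def extract_output(response_body: str) -> tuple[str, str]:
    """Parse a generate endpoint response and return (text, took).

    Works on the response shape shown above; the debugInfo object
    is optional and only present when debug is enabled.
    """
    data = json.loads(response_body)
    return data["output"]["text"], data["took"]

# Example with a trimmed-down response body:
sample = '{"output": {"text": "Why did the golfer..."}, "took": "1921ms"}'
text, took = extract_output(sample)
print(text)
print(took)  # prints "1921ms"
```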
- Liferay DXP 7.4
- Java 17
- External or local AI services / accounts set up
AI Tasks has been built and tested with DXP 7.4 U128. Although U128+ is not strictly required, it is recommended, especially when building RAG workflows grounded in Liferay semantic search, because U128 adds OpenAI and Vertex AI embeddings support.
AI Tasks uses LangChain4J for LLM integration. Currently supported language models are:
Image models:
(Deployable artifacts coming soon)
- Clone the repository:

```shell
git clone git@github.com:peerkar/ai-tasks.git
```
- Set up the Liferay bundle. If using the workspace bundle, go to the `liferay-workspace` folder and run:

```shell
gw initBundle
```
- Start up the DXP and deploy the license
- In both the `modules` and `client-extensions` directories, run:

```shell
gw deploy
```
- Verify from the Liferay logs that all the modules were deployed and started successfully
- Access the AI Tasks Admin app in the Applications menu
Set up the LLM API or service as needed:
- Ollama
- HuggingFace
- OpenAI API
- Vertex AI. In a local development environment, the gcloud CLI is required.
After setting up the Liferay bundle, AI Tasks modules and the desired LLM service, the provided samples can be tested:
- Go to the AI Tasks Admin app
- Use the `Import` function to import AI Tasks from the provided samples
- Open the imported task for editing
- Right-click the task nodes and set up at least the API keys and any other necessary information.
- Use either the chat preview in the AI Tasks Admin app or the local API test page to test the workflow.
- Use the provided Chat test application (client extension) to test the task
(Coming soon)
The API keys and any text values in the configuration can be set as environment variables for better protection. To use an environment variable, prefix the value in the task node configuration with `env:`.
For example, to configure the OpenAI authentication key as an environment variable, one could enter in the task configuration:

```
env:OPEN_AI_AUTHENTICATION_KEY
```
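The convention can be illustrated with a short sketch. This mimics the `env:` prefix behavior described above; it is an illustration only, not the actual (Java-based) AI Tasks implementation:

```python
import os

ENV_PREFIX = "env:"

def resolve_config_value(value: str) -> str:
    """Return a task node configuration value, reading values prefixed
    with 'env:' from the environment instead of using the literal string."""
    if value.startswith(ENV_PREFIX):
        return os.environ[value[len(ENV_PREFIX):]]
    return value

# A value with the prefix is looked up in the environment;
# any other value is used as-is.
os.environ["OPEN_AI_AUTHENTICATION_KEY"] = "sk-example"
print(resolve_config_value("env:OPEN_AI_AUTHENTICATION_KEY"))  # prints "sk-example"
print(resolve_config_value("a literal value"))                 # prints "a literal value"
```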
(Coming soon)
At the moment, there are two options to get details of the task execution:
- Set `debug: true` in the configuration. This will add LLM debug output to the HTTP response.
- Set `logRequests: true` and/or `logResponses: true` for a task node. This will output the LLM communication to the Liferay log.
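For illustration, a debug-enabled configuration might contain a fragment like the one below. The key names come from the options above, but the exact nesting depends on the JSON exported by the visual designer, so treat this as a hypothetical sketch:

```json
{
  "debug": true,
  "nodes": [
    {
      "id": "chat",
      "logRequests": true,
      "logResponses": true
    }
  ]
}
```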
Currently there are three extension points:
- Nodes
- Tools
- Chat model listeners
See the SPI module.
Always use quotas when integrating with external LLM service providers.
LLM calls can take a long time to execute, which means blocked threads. This issue will be mitigated in https://github.com/peerkar/ai-tasks/issues/2
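In the meantime, callers can protect themselves with a client-side timeout. A minimal Python sketch follows; the endpoint path matches the earlier curl example, while authentication/CSRF headers are omitted and depend on your setup:

```python
import json
import urllib.request

def generate(erc: str, text: str,
             base_url: str = "http://localhost:8080",
             timeout: float = 60.0) -> str:
    """Call the AI Tasks generate endpoint with an explicit timeout,
    so a slow LLM call cannot block the caller indefinitely.
    Add authentication/CSRF headers as your Liferay setup requires."""
    req = urllib.request.Request(
        f"{base_url}/o/ai-tasks/v1.0/generate/{erc}",
        data=json.dumps({"input": {"text": text}}).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "accept": "application/json"},
    )
    # urlopen raises an OSError subclass on timeout or connection failure,
    # so the caller can fail fast instead of blocking.
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)["output"]["text"]
```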
AI Tasks uses the LangChain4J AI Services, which don't currently support multimodal input. Please see https://github.com/langchain4j/langchain4j/issues/938
Authentication to Vertex AI doesn't work if the gcloud authentication is not done before starting the DXP. To authenticate, enter on the command line:

```shell
gcloud auth application-default login
gcloud auth login
```

If the authorization key has timed out, the portal needs to be shut down before reauthenticating.