LLM Node
LLM nodes let you automate tasks with an LLM, either by running a single prompt or by delegating to an agent.
Single Prompt
This method instructs the LLM to perform an action directly through explicit instructions. It works best for clear, well-defined single tasks such as translation or summarization.
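Conceptually, a single-prompt node amounts to one model call. Here is a minimal Python sketch, assuming a hypothetical `chat()` helper that wraps your LLM provider's API (the helper and the model name are illustrative, not part of the app builder):

```python
def run_single_prompt_node(chat, model: str, prompt: str, user_input: str) -> str:
    """Run the selected prompt against the user's input in a single call."""
    messages = [
        {"role": "system", "content": prompt},     # the prompt selected in step 2
        {"role": "user", "content": user_input},   # e.g. text to translate or summarize
    ]
    return chat(model=model, messages=messages)

# Example: a one-shot translation task.
# run_single_prompt_node(chat, "model-name", "Translate into English.", "Bonjour !")
```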

1. Select an LLM model
Choose an LLM model suitable for the task you want to perform.
Select based on each model's characteristics and performance.
2. Select a prompt
Choose the prompt to use when running the LLM model.
A prompt instructs the model on the task to perform or the content to generate, and you can create a new one in the prompt management tab.
3. Enable options
3.1 Use previous messages as context
Insert information from previous conversations into the LLM prompt to enable accurate answers for ongoing dialogues and follow-up questions.
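In effect, enabling this option prepends the stored conversation turns to the request. A rough sketch (the `history` shape and the `chat()` helper are illustrative assumptions):

```python
def run_with_context(chat, model, prompt, history, user_input):
    """Prepend earlier turns so the model can resolve follow-up questions."""
    messages = [{"role": "system", "content": prompt}]
    messages += history  # e.g. [{"role": "user", ...}, {"role": "assistant", ...}]
    messages.append({"role": "user", "content": user_input})
    return chat(model=model, messages=messages)
```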
3.2 Reply to the user with the generated result
You can either output the LLM execution result directly to the user or save it to a variable for use in subsequent actions.
How to enable:
Direct output: Enable the "Reply to the user with the generated result" option.
Save to variable: Disable that option and then save the result to a specific variable.
3.3 Specify output format as JSON
Specify the LLM's output format as JSON to make it useful for data processing and system integration.
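For example, a sentiment-analysis prompt with JSON output lets a later node read individual fields instead of parsing free text. A minimal sketch, again assuming the hypothetical `chat()` helper:

```python
import json

def run_json_node(chat, model, prompt, user_input):
    """Ask for a JSON object and parse it for downstream nodes."""
    messages = [
        {"role": "system", "content": prompt + "\nRespond only with a JSON object."},
        {"role": "user", "content": user_input},
    ]
    raw = chat(model=model, messages=messages)
    return json.loads(raw)  # e.g. {"sentiment": "positive", "score": 0.9}
```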
4. Save to a variable
Save the LLM execution result to a specific variable to use in subsequent actions. You can create variables in the project settings menu or select and save a variable directly from the dropdown menu.
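In data-flow terms, saving to a variable looks like this small sketch (reusing the hypothetical `run_single_prompt_node` helper and `chat` client from the first sketch; the variable names are illustrative):

```python
variables = {}  # project variables, keyed by name

# Disable "Reply to the user with the generated result" and store the output.
variables["summary"] = run_single_prompt_node(
    chat, "model-name", "Summarize the text.", "...long article text..."
)

# A subsequent node can then use the stored value as its input.
variables["draft"] = run_single_prompt_node(
    chat, "model-name", "Write a short announcement from this summary.",
    variables["summary"],
)
```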
Agent
This approach delegates planning to the LLM: it determines the user's intent and calls various tools to perform the appropriate actions. It is effective for complex, multi-step problem solving, such as customer-support chatbots, data analysis, and report writing.
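Under the hood, an agent is a loop: the model either answers or requests a tool call, the tool runs, and its result is fed back until the model produces a final answer. A minimal sketch (the `chat()` helper, its `tool_calls` reply shape, and the tools themselves are illustrative assumptions, not the app builder's actual internals):

```python
def run_agent(chat, model, tools, user_input, max_steps=5):
    """tools: dict mapping tool name -> Python callable."""
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_steps):
        reply = chat(model=model, messages=messages, tools=list(tools))
        messages.append(reply)
        if not reply.get("tool_calls"):
            return reply["content"]           # final answer, no tool needed
        for call in reply["tool_calls"]:      # run each tool the model requested
            result = tools[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "name": call["name"], "content": str(result)})
    raise RuntimeError("Agent did not finish within max_steps")
```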

1. Select an agent
Choose the agent you want to use.
Agent creation is not currently available within the app builder itself; new agents are delivered through periodic updates. If you need a specific agent, please contact your account manager.
2. Select an LLM model
Choose an LLM model suitable for the task you want to perform.
Agents can only be used with models that support function calls, so the available models are limited. Models that can be selected are labeled "Agent compatible"; check the label before choosing.
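"Function calls" means the model can emit structured requests that match a declared schema, rather than free text. A typical tool declaration looks roughly like this (purely illustrative; the app builder's agents ship with their tools predefined):

```python
# A tool declaration in the common JSON-schema style. Models without
# function-call support cannot produce calls matching such a schema,
# which is why only "Agent compatible" models are selectable.
search_orders_tool = {
    "name": "search_orders",
    "description": "Look up a customer's recent orders by email address.",
    "parameters": {
        "type": "object",
        "properties": {
            "email": {"type": "string", "description": "Customer email address"},
        },
        "required": ["email"],
    },
}
```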
3. Enable options
3.1 Reply to the user with the generated result
You can either output the LLM execution result directly to the user or save it to a variable for use in subsequent actions.
How to enable:
Direct output: Enable the "Reply to the user with the generated result" option.
Save to variable: Disable that option and then save the result to a specific variable.
3.2 Specify output format as JSON
Specify the LLM's output format as JSON to make it useful for data processing and system integration.
4. Save to a variable
Save the LLM execution result to a specific variable to use in subsequent actions. You can create variables in the project settings menu or select and save a variable directly from the dropdown menu.