Upload documents and use nodes that generate answers from documents

Using the Answer Generation node, you can leverage an LLM (large language model) to generate high-quality answers based on internal documents, Q&A, and external data. To use the Answer Generation node with documents, you must first upload documents in the Documents tab of the Knowledge Base menu.

Uploading documents is simple: in the Documents tab, click the 'Upload Document' button. When uploading many documents, you can use the 'Add New Folder' button to group and organize them by type or permission.

When creating a folder, you can grant edit and view permissions via the access list settings.

Click a file name to display its detailed information on the right.

Preview screen

Uploaded documents can be viewed through the preview screen.

Now let's learn how to create an interactive app based on the documents you uploaded.

Go to the 'App Management' tab in the Dashboard menu and click +Create New App > Create Interactive App at the top right.

Click the Start button and select the 'Answer Generation' button from the node list.

The Answer Generation node has various fields and features. Let's go through them one by one.

There are two types of Answer Generation nodes: 'Agent' and 'Group Prompt'.

Use Agent when you want the model to interact with the user, asking any necessary questions to derive the optimal answer. Use Group Prompt when you want a one-time answer generated from the uploaded documents based on a specified prompt (you can set multiple constraints according to internal rules).

[When selecting Agent]

Question input method : This option lets you choose whether the user types a question in the conversation or the question is supplied automatically via a variable.

Question input : You can enter the message to display to the customer. For example, you might write: 'Hello. This is the internal chatbot AI. Please enter your question.'

+ Always display message when entering node: An option to repeatedly display the entered message or question each time the node is used.

Default model : You can select the LLM model the Agent will use. Because the Agent operates in multi-turn rather than single-turn mode, only models that support function calling can be used. These models are labeled *Agent Compatible, so please select one of them.

Agent : Currently, only the Alli Works Agent is provided. Using this agent, the model can ask the user for required information or ask follow-up questions in a multi-turn conversation. You can also view the Agent's intermediate reasoning (how it arrived at the answer) within the conversation.
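As a rough illustration of why function calling matters here: in a multi-turn flow, the model must be able to decide at each step whether to ask the user a follow-up question or to invoke a search over the documents. The sketch below is conceptual only; the function names and message format are hypothetical stand-ins, not Alli's actual API.

```python
# Conceptual sketch only. search_documents and the message format are
# hypothetical stand-ins, not Alli's actual API.

def search_documents(query: str) -> str:
    """Stand-in for the knowledge-base search the Agent can invoke."""
    return f"Top passages for: {query}"

def run_agent_turn(user_message: str, history: list) -> str:
    """One multi-turn step: the model either asks the user a follow-up
    question or calls a function (tool) to search the documents."""
    history.append({"role": "user", "content": user_message})
    # A function-calling model chooses between replying directly and
    # calling a tool; here that choice is faked with a length check.
    if len(user_message.split()) < 3:
        reply = "Could you give me more detail about your question?"
    else:
        context = search_documents(user_message)  # the "function call"
        reply = f"Answer drafted from: {context}"
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
print(run_agent_turn("vacation policy", history))
print(run_agent_turn("How many vacation days do new employees get?", history))
```

A model without function calling could only produce the final text reply; it has no way to signal "call the search tool first," which is why only *Agent Compatible models can be selected.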

Search source : An option to search for answers from Q&A and documents uploaded to the account or from websites (external). It lets you specify the scope of where to search and generate answers from.

Q&A : If Q&A is set as the search source, you can choose whether to include or exclude specific hashtags and variables (the variable will be used if its value matches the hashtag). Leave blank to search the entire Q&A list.

Documents : If Documents is set as the search source, you can choose whether to include or exclude configured folders and specific hashtags and variables (the variable will be used if its value matches the hashtag). Leave blank to search all documents.

Internet : An option to include or exclude web search as a search source and target. Detailed features like specifying or excluding particular web pages will be added later.
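Conceptually, the include/exclude hashtag filtering described for the Q&A and Documents sources works like the sketch below. The data shapes and field names are illustrative only, not Alli's actual schema.

```python
# Conceptual sketch of include/exclude hashtag filtering; the item
# structure and field names are illustrative, not Alli's schema.

def filter_sources(items, include=None, exclude=None):
    """Keep items whose hashtags match `include` (if given) and do not
    match `exclude`. Empty filters mean 'search everything'."""
    include = set(include or [])
    exclude = set(exclude or [])
    result = []
    for item in items:
        tags = set(item["hashtags"])
        if include and not (tags & include):
            continue  # an include filter is set but nothing matches
        if tags & exclude:
            continue  # an exclude hashtag matches
        result.append(item)
    return result

docs = [
    {"name": "hr_policy.pdf", "hashtags": ["#hr"]},
    {"name": "sales_deck.pptx", "hashtags": ["#sales"]},
]
print(filter_sources(docs, include=["#hr"]))  # only hr_policy.pdf
print(filter_sources(docs))                   # empty filters: all docs
```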

Save answer to the following variable : You can save the answer to the specified variable. Variables can be created in the Project Settings menu or directly created via the dropdown menu.
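For illustration, a saved answer variable can then be referenced by later nodes in the flow. The `{{variable}}` placeholder syntax below is purely hypothetical and only meant to show the idea of substituting a stored value into a later message.

```python
# Conceptual sketch: substituting a saved variable into a later message.
# The {{name}} placeholder syntax is hypothetical, not Alli's syntax.
import re

variables = {"generated_answer": "New employees get 15 vacation days."}

def render(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with stored variable values;
    unknown placeholders are left untouched."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: variables.get(m.group(1), m.group(0)),
                  template)

print(render("Here is what I found: {{generated_answer}}", variables))
```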

After answer generation : You can choose whether to repeat the current node or move on to the next node.

[When selecting Group Prompt]

Question input method : This option lets you choose whether the user types a question in the conversation or the question is supplied automatically via a variable.

Question input : You can enter the message to display to the customer. For example, you might write: 'Hello. This is the internal chatbot AI. Please enter your question.'

+ Always display message when entering node: An option to repeatedly display the entered message or question each time the node is used.

Default model : You can select the LLM model to use for answer generation.

Group Prompt : You can select the prompt to use for answer generation. The default is the 'Answer Generation' prompt. You can also modify this prompt to customize it.

Search source : An option to search for answers from Q&A and documents uploaded to the account or from websites (external). It lets you specify the scope of where to search and generate answers from.

Q&A : If Q&A is set as the search source, you can choose whether to include or exclude specific hashtags and variables (the variable will be used if its value matches the hashtag). Leave blank to search the entire Q&A list.

Documents : If Documents is set as the search source, you can choose whether to include or exclude configured folders and specific hashtags and variables (the variable will be used if its value matches the hashtag). Leave blank to search all documents.

Internet : An option to include or exclude web search as a search source and target. Detailed features like specifying or excluding particular web pages will be added later.

Save answer to the following variable : You can save the answer to the specified variable. Variables can be created in the Project Settings menu or directly created via the dropdown menu.

After answer generation : You can choose whether to repeat the current node or move on to the next node.

Add branch option when correct answer not found : An option to add a separate branch for cases where the response 'Answer cannot be found' is generated. If you turn this option ON, you can specify what happens when the LLM model cannot find the correct answer.
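Putting the last two options together, the routing after answer generation can be sketched as follows. The node names and the sentinel string are illustrative only.

```python
# Conceptual sketch of post-generation routing; node names and the
# NOT_FOUND sentinel are illustrative, not Alli's internals.

NOT_FOUND = "Answer cannot be found"

def next_node(answer: str, after_generation: str,
              branch_on_not_found: bool) -> str:
    """Decide where the conversation flow goes after answer generation."""
    if branch_on_not_found and answer == NOT_FOUND:
        return "not_found_branch"   # e.g. hand off to a human agent
    if after_generation == "repeat":
        return "answer_generation"  # stay on the current node
    return "next_node"              # move on in the flow

print(next_node(NOT_FOUND, "repeat", True))       # not_found_branch
print(next_node("Here is ...", "repeat", True))   # answer_generation
```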
