Alli Generative Answer API

The Generative Answer API understands a question and generates an accurate, comprehensive response by searching large collections of source documents.

Please note that Alli Generative Answer is not available to all clients. If you are interested in using it, please reach out to your designated account manager for further information and guidance.

Getting the REST API Key

Please provide your API key in the request header API-KEY. Your REST API key can be found in your dashboard Settings menu, under the General tab.

Providing User Information

You can add user information in the request header to specify who makes the call.

A user ID can be provided in the request header OWN-USER-ID. The user ID can be either a new one or an existing one; if a new user ID is provided, Alli will create a new user with that ID. Any future API calls with the same OWN-USER-ID header will be treated as coming from the same user.

If you want to update the user's email address at the same time, you can provide the email address in the request header USER-EMAIL. Below is an example.

-H 'OWN-USER-ID: 5f1234567a409876c082487z' \
-H 'USER-EMAIL: user@example.com' \

You cannot use non-ASCII characters in OWN-USER-ID. If the user ID includes any non-ASCII characters, encode the ID to base64 and use base64:CONVERTED_ID.
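As a sketch, the conversion above can be done like this (the helper name and the sample ID are ours, not part of the API):

```python
import base64

def own_user_id_header(user_id: str) -> str:
    """Return a value safe to send in the OWN-USER-ID header.

    ASCII IDs pass through unchanged; non-ASCII IDs are base64-encoded
    and prefixed with "base64:" as the docs require.
    """
    try:
        user_id.encode("ascii")
        return user_id
    except UnicodeEncodeError:
        encoded = base64.b64encode(user_id.encode("utf-8")).decode("ascii")
        return "base64:" + encoded

# An ASCII ID is sent as-is; a non-ASCII ID such as "田中" becomes
# "base64:55Sw5Lit".
```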

You can find saved user IDs and email information in the Customers menu of your Alli dashboard.

Error Messages

If you don't get the response you expect, please read the error message. For example, if you use the wrong HTTP method for the API, you'll get this type of error as the response:

{"errors": "Method Not Allowed PUT: /webapi/generative_answer"}

Generative Answer


The Generative Answer API finds an answer from your unstructured text documents, Q&As, and even complex tables.





API-KEY (required)

Your API key can be found in your dashboard Settings menu, under the General tab.

OWN-USER-ID (optional)


If you want to specify a user, you can provide the user's ID here. Existing user IDs can be found in the Conversations menu in your dashboard. If you use a user ID that does not exist, a new user will be created. If OWN-USER-ID is not provided, the default user ID per project is used and the threadId feature is disabled.

Request Body




query (string)

This is the query string for the question you want answered.



model (string)

Utilize the LLM of your choice when generating answers. The default model is GPT-3.5 Turbo. Please see the table below for available options.



Determines the format of the response for easier integration. Acceptable values are DRAFTJS and MARKDOWN. The default format is DRAFTJS.



isStateful (boolean)

To use follow-up questions, previous conversation history is required. The conversation history is managed by threadId. If you set the isStateful option to True and provide a threadId, the query will be rewritten with reference to the previous conversation history. Default = False.


threadId (string, UUID)

threadId is used when isStateful = True.

The first time you start a conversation, leave it empty; from the next query on, send the threadId returned in the output.

However, if you want to set the threadId from the beginning, generate a UUID and send it.

Example UUID - 36e7bb2b-1063-47ec-

Default = None
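A minimal sketch of building the request body across the two cases above (the helper name is ours, not part of the API):

```python
import uuid

def build_stateful_body(query, thread_id=None):
    """Build a request body for a stateful (isStateful = True) conversation.

    On the first turn, either omit thread_id (the server creates one and
    returns it in the output), or pre-set one as a fresh UUID; on later
    turns, pass the threadId returned in the previous response.
    """
    body = {"query": query, "isStateful": True}
    if thread_id is not None:
        body["threadId"] = thread_id
    return body

# Setting the threadId from the beginning with a generated UUID:
first_turn = build_stateful_body("What is Hong Gil-dong's hometown?",
                                 thread_id=str(uuid.uuid4()))
```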



Select which group prompt from your project to use when generating responses.

This is a very helpful option for a project with multiple group prompts that are tailored to specific tasks. The group prompt ID is located within the URL when viewing the group prompt on the Settings page.



mode (string)

Whether to output data as a stream or synchronously. Currently available values: sync or stream (default = sync).

In stream mode, JSON strings with the same output format as sync are emitted as a stream.
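A client-side sketch of consuming stream mode, assuming each chunk (for example, each line of the HTTP response) is a complete JSON string; this framing is an assumption, not confirmed by this page:

```python
import json

def parse_stream(chunks):
    """Yield parsed objects from streamed JSON strings.

    Assumes one complete JSON document per chunk, each with the same
    shape as the sync output; empty chunks are skipped.
    """
    for chunk in chunks:
        chunk = chunk.strip()
        if chunk:
            yield json.loads(chunk)
```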



clueText (boolean)

Whether to include the text of the document used as a clue. Default = False.

Only works if clues is enabled.



clues (boolean)

Whether to include the clues used to create the generative answer in the output. Default = False.



Utilize this feature to scope the search via hashtags on documents or Q&As. You can include or exclude certain hashtags, and choose and/or logic for the selected hashtags. See the example below:

{
    "qnaInclude": ["hash name", "hash name2"],
    "qnaIncludeOption": "and"/"or",
    "qnaExclude": ["hash name", "hash name2"],
    "qnaExcludeOption": "and"/"or",
    "docsInclude": ["hash name", "hash name2"],
    "docsIncludeOption": "and"/"or",
    "docsExclude": ["hash name", "hash name2"],
    "docsExcludeOption": "and"/"or"
}



Specifies the range of source data Generative Answer searches. Currently available values are web, qna, and document; enter the data in the form of a list.

For example -> ["web", "qna"]

You will get the result as the following JSON format:

answer: The answer extracted from the documents uploaded in the dashboard.

answerHash: This is used to identify the answer to give or cancel feedback.

confidence: The confidence value from the AI model, shown as a number between 0 and 1.

effectiveConfidence: When you provide feedback on a query result, it automatically trains the AI model. Effective confidence is the score adjusted with this user and agent feedback, shown as a number between 0 and 1.

documentId: The document's ID where the answer is extracted from.

documentName: The document's name where the answer is extracted from.

hashtags: The hashtags attached to the document.

pageNo: The page number in the document where the answer was derived.

createdDate: The date when the document was uploaded.

agentFeedback: The feedback from agents (through the dashboard and REST calls) about this document.

userFeedback: The feedback from users about this document.

body: If returnPreview is true in the request, the html body of the document search preview is displayed.

css: If returnPreview is true in the request, the css of the document search preview is displayed.

pdfPreview: If returnPdfPreview is true in the request, the URL to a PDF preview of the document with the answer highlighted is displayed. Note that the URL can be restricted using the Download IP Allowlist setting.

folder: If the answer is found from a folder under the Documents database, the folder name will be returned here.

editor: List of the emails of agents who have editor access to the document (if set in the Alli dashboard).

viewer: List of the emails of agents who have viewer access to the document (if set in the Alli dashboard).

The number of search results and the threshold follows your dashboard setting. Please check your setting in Documents > Settings.

{
    "result": [
        {
            "answer": "ANSWER_1",
            "answerHash": "ANSWER_HASH_1",
            "confidence": CONFIDENCE_1,
            "effectiveConfidence": EFFECTIVE_CONFIDENCE_1,
            "documentId": "DOCUMENT_ID_1",
            "documentName": "DOCUMENT_NAME_1",
            "hashtags": [],
            "pageNo": PAGE_NUMBER,
            "createdDate": "DATE",
            "agentFeedback": {
                "positiveCount": 0,
                "negativeCount": 0
            },
            "userFeedback": {
                "positiveCount": 0,
                "negativeCount": 0
            },
            "body": "PREVIEW_HTML_BODY",
            "css": "PREVIEW_CSS",
            "pdfPreview": "PDF_PREVIEW_URL",
            "folder": "folder 1",
            "editor": ["", ""],
            "viewer": [""]
        }
    ]
}
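On the client side you might rank the returned entries by effectiveConfidence, falling back to confidence when it is absent. A minimal sketch (the helper name is ours, not part of the API):

```python
def best_result(results):
    """Return the search result with the highest effectiveConfidence,
    falling back to confidence when effectiveConfidence is absent."""
    return max(
        results,
        key=lambda r: r.get("effectiveConfidence", r.get("confidence", 0.0)),
    )
```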
Available LLM Models


OPENAI GPT3.5 TURBO 16K (turbo_16k)

OPENAI GPT3.5 TURBO INSTRUCT (turbo_instruct)

OPENAI GPT4 (gpt4)

OPENAI GPT4 TURBO (gpt4_turbo)

OPENAI GPT4 TURBO VISION (gpt4_turbo_vision)

OPENAI GPT4 32K (gpt4_32k)

OPENAI GPT-4o (gpt4_o)

AZURE GPT3.5 TURBO (azure_turbo)

AZURE GPT3.5 TURBO 16K (azure_turbo_16k)

AZURE GPT3.5 TURBO JA (azure_turbo_ja)

AZURE GPT4 (azure_gpt4)

AZURE GPT4 TURBO (azure_gpt4_turbo)

GOOGLE PALM2 (vertex_text_bison)

NAVER HYPER CLOVA X (hyper_clova_x_lk_0)

ANTHROPIC CLAUDE 2 (anthropic_claude_2)

ANTHROPIC CLAUDE 3 OPUS (anthropic_claude_3_opus)

ANTHROPIC CLAUDE 3 SONNET (anthropic_claude_3_sonnet)

ANTHROPIC CLAUDE 3 HAIKU (anthropic_claude_3_haiku)

GEMINI PRO (gemini_pro)

GEMINI PRO VISION (gemini_pro_vision)

ALPHA F V2 EEVE (alpha_f_v2_eeve)

Output Items
answer : The answer generated as a result of running Generative Answer API

intent : Generative Answer's intent (either SEARCH or END_OF_CONVERSATION)

clues (list) : List of clues used to generate the answer (only if clues in PARAMETER is True).

clueId : Unique ID of the clue managed by Alli.

source : Clue's type (DOCUMENT or FAQ).

title : The title of the document or FAQ stored in Alli.

pageNo : (Only when type is document) Page number of the document.

kbId : (Only when type is document) The unique id of the document, as managed by Alli.

faqId : (Only when type is FAQ) The unique ID of the FAQ, managed by Alli.

text : (Only when type is document and clueText is true) The text of the document used as the clue.

threadId : If a threadId is given as input (only if isStateful in PARAMETER is True), the value is returned as is; otherwise a new threadId is created and returned.

fuQuestion : If a new query was created using the query and response results within the same threadId (only if isStateful in PARAMETER is True), the value is returned.

For example, if the question "What is Hong Gil-dong's hometown?" was asked immediately before within the same threadId, and the question "What is his age?" is asked afterwards, a fuQuestion such as "What is Hong Gil-dong's age?" will be created, and that question will be used as the query.

Request Example

Please replace REST_API_KEY in the example below with your own key. See the Getting the REST API Key section.

curl -X POST -d '{
  "query": "can I disclose the composite ratings?",
  "model": "gpt4",
  "isStateful": "True",
  "threadId": "UUID",
  "mode": "sync" }' \
-H 'Content-Type: application/json' \
-H 'API-KEY: REST_API_KEY' \
'https://YOUR_ALLI_DOMAIN/webapi/generative_answer'

Response Example

{
    "answer": {
        "blocks": [
            {
                "key": "k0",
                "text": "You can disclose the composite ratings to bank management after the examiner-in-charge (EIC) has discussed the recommended component and composite ratings with senior management and, when appropriate, the board of directors near the conclusion of the examination [1]. Generally, in these situations, examiners should contact the regional office overseeing the institution and discuss the proposed ratings with the case manager or assistant regional director prior to disclosing the ratings to management or the board [3].",
                "inlineStyleRanges": [],
                "entityRanges": [
                    {
                        "key": 0,
                        "offset": 263,
                        "length": 3
                    },
                    {
                        "key": 1,
                        "offset": 517,
                        "length": 3
                    }
                ],
                "type": "unstyled"
            }
        ],
        "entityMap": {
            "0": {
                "type": "LINK",
                "mutability": "MUTABLE",
                "data": {
                    "url": "YOUR URL ADDRESS"
                }
            },
            "1": {
                "type": "LINK",
                "mutability": "MUTABLE",
                "data": {
                    "url": "YOUR URL ADDRESS"
                }
            }
        }
    },
    "threadId": "UUID", // For the next question, send this in the body so previous questions are remembered.
    "fuQuestion": null
}
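When the response format is DRAFTJS, the answer arrives as Draft.js blocks like those above. A minimal sketch (the helper name is ours, not part of the API) for flattening the answer into plain text, ignoring inline styles and entity ranges:

```python
def draftjs_to_text(answer):
    """Join the 'text' of each Draft.js block into a plain string."""
    return "\n".join(block.get("text", "") for block in answer.get("blocks", []))
```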
