# Guardrail features

<figure><img src="https://2300099500-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FP12tuGvLWO89lcuqjaYH%2Fuploads%2FkXMDqEwl5NizzBadZPd8%2F%E1%84%89%E1%85%B3%E1%84%8F%E1%85%B3%E1%84%85%E1%85%B5%E1%86%AB%E1%84%89%E1%85%A3%E1%86%BA%202026-02-26%20%E1%84%8B%E1%85%A9%E1%84%92%E1%85%AE%201.35.29.png?alt=media&#x26;token=665a0d14-85c6-4860-928e-03960fd19ded" alt=""><figcaption></figcaption></figure>

**What is the guardrail feature?**

A guardrail is a security feature that checks and blocks user-input messages in advance if they contain harmful or inappropriate content. Detected problematic content is not passed to the LLM, providing a safer and more reliable service environment.

#### Types of guardrails <a href="#pdf-page-bcf4f4d2b9ee31bab7ff97425d5d3ef5a15e18d1-types-of-guardrails" id="pdf-page-bcf4f4d2b9ee31bab7ff97425d5d3ef5a15e18d1-types-of-guardrails"></a>

At Alli, we offer the following three types of guardrails.

**1. Keyword-based guardrail**

Pre-registered **specific words or phrases** are used as the basis for detecting content.

* Usage examples: specific words, banned words, expressions prohibited by internal policy, etc.
* Keywords are not provided by default; to add new keywords you must create a new guardrail.

***

**2. Regular expression (Regex)-based guardrail**

Regular expressions (Regex) are used **to detect input values of a specific format**.

* Usage examples: phone numbers, resident registration numbers, specific code patterns, etc.
* Regular expressions provided by default in Alli cannot be modified; only activation/deactivation can be adjusted.
* If additional edits or new patterns are needed, you must create a new guardrail.

***

**3. AI-based guardrail**

The input content is **automatically analyzed by AI** to determine whether it is harmful. **Verification strength** can be set per category.

**Categories**

* Violence
* Sexual content
* Self-harm
* Hate

AI-based guardrails cannot be newly created; only the provided items can be adjusted and edited.

#### How to add a keyword-based guardrail <a href="#pdf-page-bcf4f4d2b9ee31bab7ff97425d5d3ef5a15e18d1-how-to-add-a-keyword-based-guardrail" id="pdf-page-bcf4f4d2b9ee31bab7ff97425d5d3ef5a15e18d1-how-to-add-a-keyword-based-guardrail"></a>

<figure><img src="https://2300099500-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FP12tuGvLWO89lcuqjaYH%2Fuploads%2Fosoxa7MaKvyrSojoYIpG%2F%E1%84%89%E1%85%B3%E1%84%8F%E1%85%B3%E1%84%85%E1%85%B5%E1%86%AB%E1%84%89%E1%85%A3%E1%86%BA%202026-02-26%20%E1%84%8B%E1%85%A9%E1%84%92%E1%85%AE%201.35.58.png?alt=media&#x26;token=3de46e92-e869-44de-a797-a2007ea31c3e" alt=""><figcaption></figcaption></figure>

1. Click the + Add button at the top right and select the keyword-based option.

<figure><img src="https://2300099500-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FP12tuGvLWO89lcuqjaYH%2Fuploads%2FEpjPhiRmMGZ3R8KvRth0%2F%E1%84%89%E1%85%B3%E1%84%8F%E1%85%B3%E1%84%85%E1%85%B5%E1%86%AB%E1%84%89%E1%85%A3%E1%86%BA%202026-02-26%20%E1%84%8B%E1%85%A9%E1%84%92%E1%85%AE%201.36.48.png?alt=media&#x26;token=ae9a409f-2160-4f13-9571-ed1f61c18b7a" alt=""><figcaption></figcaption></figure>

2. Choose whether to activate the guardrail.

* When activated, the guardrail will be applied to apps within the project that use the LLM.

3. Enter a name for the guardrail.
4. Register the keywords to block.

* You can register multiple keywords at the same time. There is no limit to the number of keywords you can register.

5. Enter a description for the guardrail. (Optional)
6. Click the Confirm button.

<figure><img src="https://2300099500-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FP12tuGvLWO89lcuqjaYH%2Fuploads%2FDNgmfzuVZFjRu86Ve6cp%2F%E1%84%89%E1%85%B3%E1%84%8F%E1%85%B3%E1%84%85%E1%85%B5%E1%86%AB%E1%84%89%E1%85%A3%E1%86%BA%202026-02-26%20%E1%84%8B%E1%85%A9%E1%84%92%E1%85%AE%201.38.09.png?alt=media&#x26;token=bf70b90a-4fe8-4174-8def-a5b5be350a1a" alt=""><figcaption></figcaption></figure>

<figure><img src="https://2300099500-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FP12tuGvLWO89lcuqjaYH%2Fuploads%2F7Z9DqwfhSySgFqSIu5ft%2F%E1%84%89%E1%85%B3%E1%84%8F%E1%85%B3%E1%84%85%E1%85%B5%E1%86%AB%E1%84%89%E1%85%A3%E1%86%BA%202026-02-26%20%E1%84%8B%E1%85%A9%E1%84%92%E1%85%AE%201.38.43.png?alt=media&#x26;token=a5e8ab2a-b3bf-41db-901c-a8d9960839a7" alt=""><figcaption></figcaption></figure>

7. You can verify the actual behavior during app testing and app execution. If a registered keyword is detected, the send button is disabled, a warning message is displayed, and the user is prompted to enter a new message.
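The blocking behavior described above can be pictured with a short sketch. This is not Alli's internal implementation; the keyword list, function name, and matching logic are illustrative assumptions.

```python
# Illustrative sketch only -- not Alli's actual implementation.
# A keyword-based guardrail blocks a message if it contains any registered keyword.

BLOCKED_KEYWORDS = ["banned word", "internal codename"]  # example entries

def is_blocked(message: str) -> bool:
    """Return True if the message contains any registered keyword."""
    lowered = message.lower()
    return any(keyword in lowered for keyword in BLOCKED_KEYWORDS)

# When is_blocked(...) is True, the UI would disable the send button
# and show a warning instead of forwarding the message to the LLM.
print(is_blocked("This mentions a banned word."))  # True
print(is_blocked("A perfectly normal question."))  # False
```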

#### How to edit and add regex-based guardrails <a href="#pdf-page-bcf4f4d2b9ee31bab7ff97425d5d3ef5a15e18d1-how-to-edit-and-add-regex-based-guardrails" id="pdf-page-bcf4f4d2b9ee31bab7ff97425d5d3ef5a15e18d1-how-to-edit-and-add-regex-based-guardrails"></a>

**Edit the default provided regular expressions**

<figure><img src="https://2300099500-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FP12tuGvLWO89lcuqjaYH%2Fuploads%2FYgzuCIz81nIqxiGfR02n%2F%E1%84%89%E1%85%B3%E1%84%8F%E1%85%B3%E1%84%85%E1%85%B5%E1%86%AB%E1%84%89%E1%85%A3%E1%86%BA%202026-02-26%20%E1%84%8B%E1%85%A9%E1%84%92%E1%85%AE%201.39.24.png?alt=media&#x26;token=85bcd0df-7e70-4ed8-9b07-7249d574c4b5" alt=""><figcaption></figcaption></figure>

Alli provides a total of three regular expression patterns by default. Default provided regular expressions can only be selected for activation or deactivation.

1. To activate or deactivate a specific default regular expression, click its Edit button.

<figure><img src="https://2300099500-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FP12tuGvLWO89lcuqjaYH%2Fuploads%2FBEVXH7uvcAxXdhuceZwu%2F%E1%84%89%E1%85%B3%E1%84%8F%E1%85%B3%E1%84%85%E1%85%B5%E1%86%AB%E1%84%89%E1%85%A3%E1%86%BA%202026-02-26%20%E1%84%8B%E1%85%A9%E1%84%92%E1%85%AE%201.40.00.png?alt=media&#x26;token=65e11ccb-8315-4a46-9918-74747e6d888e" alt=""><figcaption></figcaption></figure>

2. After choosing whether to activate the guardrail, click the Confirm button to apply the settings to the project.

**Add a regular expression**

<figure><img src="https://2300099500-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FP12tuGvLWO89lcuqjaYH%2Fuploads%2FQnKSGWXjSg2H9R0zYiKG%2F%E1%84%89%E1%85%B3%E1%84%8F%E1%85%B3%E1%84%85%E1%85%B5%E1%86%AB%E1%84%89%E1%85%A3%E1%86%BA%202026-02-26%20%E1%84%8B%E1%85%A9%E1%84%92%E1%85%AE%201.40.42.png?alt=media&#x26;token=6e05a6fb-30c5-45d4-87e4-41d725406077" alt=""><figcaption></figcaption></figure>

1. Click the + Add button at the top right and select the regular expression-based option.

<figure><img src="https://2300099500-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FP12tuGvLWO89lcuqjaYH%2Fuploads%2F97QXoFbN7vf1CHfO456d%2F%E1%84%89%E1%85%B3%E1%84%8F%E1%85%B3%E1%84%85%E1%85%B5%E1%86%AB%E1%84%89%E1%85%A3%E1%86%BA%202026-02-26%20%E1%84%8B%E1%85%A9%E1%84%92%E1%85%AE%201.40.59.png?alt=media&#x26;token=5362c5fa-8d00-4bca-8bd4-658b7b811fa3" alt=""><figcaption></figcaption></figure>

2. Choose whether to activate the guardrail.

* When activated, the guardrail will be applied to apps within the project that use the LLM.

3. Enter a name for the guardrail.
4. Register the regular expression to block.

* You can use [regular expression validation services](https://www.regextester.com/) and similar tools to write and test the regular expressions you want to use.

***

**Examples of using regular expressions**

Regular expressions (Regex) are used to detect or filter strings with specific patterns. When setting a guardrail, if an input message matches a regular expression pattern, that rule is applied.

**1. Check whether a specific word is included**

```
Banned word
```

* Detects whether the string `Banned word` appears anywhere in the message.

**2. Detect if any one of multiple words is included**

```
(curse1|curse2|curse3)
```

* Detects whether any one of `curse1`, `curse2`, or `curse3` is included.

**3. Detect regardless of case**

```
(?i)badword
```

* Detects `badword`, `BadWord`, `BADWORD`, etc., regardless of letter case.

**4. Detect number patterns (e.g., phone numbers)**

```
\d{3}-\d{4}-\d{4}
```

* Detects phone numbers in the format `010-1234-5678`.

**5. Detect email addresses**

```
[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}
```

* Detects common email address formats.

**6. When the message starts with a specific phrase**

```
^I would like to make an inquiry.
```

* Detects only when the message starts with `I would like to make an inquiry.`

**7. When the message ends with a specific phrase**

```
Thank you$
```

* Detects only when the message ends with `Thank you`.

**Notes**

* Regular expressions operate purely on pattern matching, without understanding context.
* Be careful with overly broad patterns, as normal messages may also be detected.
* Regular expressions can express a wide variety of patterns, so it helps to learn from real examples; refer to regex example resources as needed.
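The example patterns above can be exercised locally with Python's `re` module before registering them as guardrails. This is just a quick testing sketch; the sample messages are made up.

```python
import re

# Each (pattern, sample) pair mirrors one of the examples above.
checks = [
    (r"(curse1|curse2|curse3)", "this contains curse2 somewhere"),
    (r"(?i)badword", "watch out for BADWORD here"),
    (r"\d{3}-\d{4}-\d{4}", "call me at 010-1234-5678"),
    (r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}", "mail user@example.com"),
    (r"^I would like to make an inquiry", "I would like to make an inquiry about pricing"),
    (r"Thank you$", "That was helpful. Thank you"),
]

for pattern, sample in checks:
    # re.search finds the pattern anywhere in the string, which matches
    # how a guardrail scans the whole input message.
    print(bool(re.search(pattern, sample)))  # True for every pair
```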

***

5. Enter a description for the guardrail. (Optional)
6. Click the Confirm button.

**How to edit AI-based guardrails**

AI-based guardrails cannot be newly created; you can only change the settings of the four categories provided by default.

<figure><img src="https://2300099500-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FP12tuGvLWO89lcuqjaYH%2Fuploads%2FcwNH2E4MQkEBb6gnJWGs%2F%E1%84%89%E1%85%B3%E1%84%8F%E1%85%B3%E1%84%85%E1%85%B5%E1%86%AB%E1%84%89%E1%85%A3%E1%86%BA%202026-02-26%20%E1%84%8B%E1%85%A9%E1%84%92%E1%85%AE%201.41.42.png?alt=media&#x26;token=53b2f9b0-ae2b-49bf-b777-15b7d98712fb" alt=""><figcaption></figcaption></figure>

1. Select the AI-based guardrail you want to edit.

<figure><img src="https://2300099500-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FP12tuGvLWO89lcuqjaYH%2Fuploads%2FxSSxgzIWbkhBCmbI2Lat%2F%E1%84%89%E1%85%B3%E1%84%8F%E1%85%B3%E1%84%85%E1%85%B5%E1%86%AB%E1%84%89%E1%85%A3%E1%86%BA%202026-02-26%20%E1%84%8B%E1%85%A9%E1%84%92%E1%85%AE%201.41.56.png?alt=media&#x26;token=e92ab125-b599-4b6c-a1c5-79cc4cc0d51c" alt=""><figcaption></figcaption></figure>

2. Choose whether to activate the guardrail.
   * When activated, the guardrail will be applied to apps within the project that use the LLM.
3. Choose a verification strength of Low / Medium / High; this determines how strictly the AI judges content.
4. Click the Confirm button.
5. Depending on the verification strength set for the AI-based guardrail, if the AI analyzes the message content and judges there is a possibility of a policy violation, the send button is disabled and a warning message is displayed. The user is then prompted to enter a new message. **Due to the nature of AI-based evaluation, judgments can vary depending on surrounding context, word combinations, or memory. Results therefore cannot be strictly dichotomized or guaranteed to be 100% accurate.**
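One way to picture verification strength is as a threshold applied to a harm score per category. This is purely hypothetical: Alli does not expose its AI guardrail scoring, and the threshold values below are invented for illustration.

```python
# Hypothetical illustration -- Alli's actual AI guardrail scoring is not exposed.
# Higher verification strength = lower tolerance for a category's harm score.

THRESHOLDS = {"Low": 0.9, "Medium": 0.6, "High": 0.3}  # invented values

def violates(harm_score: float, strength: str) -> bool:
    """Block when the model's harm score meets or exceeds the strength's threshold."""
    return harm_score >= THRESHOLDS[strength]

print(violates(0.5, "Low"))   # False: Low strength tolerates more
print(violates(0.5, "High"))  # True: High strength blocks earlier
```

The point of the sketch is only the ordering: raising the strength from Low to High makes the same message more likely to be blocked.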

**Filter lookup**

<figure><img src="https://2300099500-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FP12tuGvLWO89lcuqjaYH%2Fuploads%2F6Mf0NcTmVKbGC69b9xdG%2F%E1%84%89%E1%85%B3%E1%84%8F%E1%85%B3%E1%84%85%E1%85%B5%E1%86%AB%E1%84%89%E1%85%A3%E1%86%BA%202026-02-26%20%E1%84%8B%E1%85%A9%E1%84%92%E1%85%AE%201.42.20.png?alt=media&#x26;token=e8637f5c-c57c-4869-8951-495d102f30c4" alt=""><figcaption></figcaption></figure>

1. Use the filter feature to view guardrails by type.
2. You can also check which guardrails are applied within the project and which are not.

**Applicable scope**

<figure><img src="https://2300099500-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FP12tuGvLWO89lcuqjaYH%2Fuploads%2FX71b7pmNZVlc0C73SCp4%2F%E1%84%89%E1%85%B3%E1%84%8F%E1%85%B3%E1%84%85%E1%85%B5%E1%86%AB%E1%84%89%E1%85%A3%E1%86%BA%202026-02-26%20%E1%84%8B%E1%85%A9%E1%84%92%E1%85%AE%201.44.36.png?alt=media&#x26;token=bf2d69eb-3237-45ac-8fac-5f69cfeaaec2" alt=""><figcaption></figcaption></figure>

**Guardrails operate by detecting and filtering content at the moment messages are delivered to the LLM. Therefore, they are not applied at all nodes, but only at the specific nodes that actually call the LLM.**

For example, guardrails are applied in the following cases.

* **Answer generation node**: a user inputs a message and that message is sent to the model to generate a response
* **Question node + LLM execution node**: user input is received in a question node and the model is then called via an LLM execution node based on that content
* **Deep research**: research and analysis tasks where an internal LLM call occurs when a user inputs a message

Conversely, please note that guardrails do not operate on nodes where no LLM call occurs.
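The scope rule can be sketched as a tiny pipeline in which only the node that calls the LLM runs the guardrail check. Node names and functions here are invented for illustration, not Alli's actual node API.

```python
# Hypothetical sketch: guardrails run only on the path into an LLM call.
# All names here are invented for illustration.

def apply_guardrails(message: str) -> str:
    """Stand-in for the full set of keyword/regex/AI guardrail checks."""
    if "banned word" in message.lower():
        raise ValueError("Guardrail triggered: message blocked")
    return message

def llm_node(message: str) -> str:
    safe = apply_guardrails(message)       # checked: this node calls the LLM
    return f"LLM response to: {safe}"      # stand-in for the actual model call

def formatting_node(text: str) -> str:
    return text.strip()                    # no LLM call, so no guardrail runs

print(llm_node("What are your opening hours?"))
print(formatting_node("  just reformatting text  "))  # passes: no LLM here
```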
