
Manage a Workspace

Workspaces are logical containers that host your Conversational AI bot/VA. Morfeus supports multi-tenancy, so you can create multiple bots with logical separation and different access controls for all bot-related configuration, including data, workflows and fulfillments, settings, and customer data.

A Workspace can be thought of as a place where the user journey for different scenarios is described. It includes the training data for the AI engine and workflow to execute the Actions/Fulfillments.

Workspace Types

Morfeus supports two types of workspaces.

You can configure your bot to respond only to user queries by selecting an FAQ Workspace, or to perform transactions as well by selecting a Conversational Workspace.
If you do not select a workspace type, an FAQs-only workspace is created by default.

Create a CognitiveQnA (FAQs) Workspace

In this type of workspace, you can configure only FAQs and Small Talk; this is apt for information-only Conversational AI bots. This workspace does not support transaction flows via intents.

Dashboard create new workspace
Dashboard post login create a new workspace

You can create a CognitiveQnA (FAQs) Workspace by following these steps:

  1. Click on "Add workspace".
  2. Mention the name of the workspace in the prompted window.
  3. Do not select any domain.
  4. Click on 'Create a workspace'.

Of all the Conversational AI modules, FAQ Workspaces require only SmallTalk, FAQs, Spell Checker, and Rule Validator.

Create a Conversational Workspace

Conversational Workspaces enable bots to provide an on-par human chat experience, performing transactions along with responding to user queries. This type of workspace is very useful for creating a conversational AI experience.

You can create a Conversational Workspace by following these steps:

  1. Click on "Add workspace".
  2. Mention the name of the workspace in the prompted window.
  3. Select "Banking (Retail Banking, Corporate Banking or Retail lending) / Insurance / Trading / Custom" domain.
  4. Click on 'Create a workspace'.


Workspace creation loads base and default data into the workspace, as configured in the following file:

Path of file admin-backend-api/src/main/resources/properties/config.properties

For Conversational Workspaces, all the AI data modules will be in use as mentioned under Conversational AI Modules.

Select a Workspace

  1. Log in to the Admin portal to access/create a workspace.
login screen image
Figure 4 Dashboard login screen
  2. Select your workspace.
Dashboard post login workspaces screen image
Figure 5 Dashboard post login workspace selection screen

Export/Import Workspace

If you want to migrate or clone your workspace configuration, you can export and import your workspace. The export ZIP file contains all the artefacts of the workspace, including Channels, Functions, Hooks, Rules, AI Data, Templates, Workflows, etc. If you import into an existing workspace, the existing configuration will be overwritten.

Importing a Workspace

You can import a workspace configuration by following these steps:

  1. Go to the Morfeus Admin portal
  2. Click on 'Add Workspace' under My Workspace
  3. Enter the workspace name
  4. Select the product type
  5. Click on 'Create Workspace'
  6. Go to the dashboard
  7. Click on the menu icon of the created workspace, then click on 'Import Workspace'
  8. Upload a ZIP file that contains Configurations, Rules, Hooks, Functions, Templates, etc.
  9. Click 'Yes' on the popup. (Are you sure you want to overwrite?)

Exporting a Workspace

You can export the workspace by following these steps:

  1. Go to the Morfeus Admin portal
  2. Go to the Dashboard
  3. Navigate to the workspace you want to export
  4. Click on the menu icon on that workspace
  5. Click on 'Export Workspace'

It will download a ZIP file containing Channels, Functions, Hooks, Products, Rules, AI Data, Templates, Workflows, etc.
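Since an import overwrites the target workspace, it can be worth inspecting an exported archive before importing it. A minimal sketch, assuming only that the export is a standard ZIP file (the internal folder names may differ from the ones shown here):

```python
import zipfile

def list_workspace_artefacts(path: str) -> list[str]:
    """Return the entry names inside an exported workspace ZIP.

    Handy for verifying that an export contains the expected artefacts
    (Channels, Functions, Hooks, Rules, AI Data, Templates, Workflows, ...)
    before importing it into another workspace.
    """
    with zipfile.ZipFile(path) as zf:
        return zf.namelist()
```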

Deleting a Workspace

If you no longer need a workspace, you can delete it by following these steps:

  1. Go to the Morfeus Admin portal
  2. Go to the Dashboard
  3. Navigate to the workspace you want to delete
  4. Click on the menu icon on that workspace
  5. Click on 'Delete Workspace'
  6. Click 'Delete' on the popup. (Are you sure you want to delete workspace?)

This deletes all the configuration of your workspace, and the workspace will no longer be available on your dashboard.



Managing Admin Users

This feature allows you to manage the users of your workspace: adding users, managing their access, and so on. This section takes you through how to add and manage users.

Eg; If you want to share your workspace with colleagues, or with someone for testing or other purposes, you can add them as users to the workspace and manage those users.

Note: A super admin, a workspace admin, or a user with the appropriate security profile can add or manage users.

Eg; You can give users access to your workspace as Admin, Security, Operations, Customer Support, etc., to test or contribute to your workspace and make your bot respond better.

Adding Users/ User ACL

This feature will allow you to add a user to your workspace. You can add users to the workspace by following these steps:

  1. Go to the Morfeus portal
  2. Click on 'Security'
  3. Click on 'Add User'
  4. Enter the details (Name, phone number, email, country)
  5. Provide access 'role' (Admin, Business, Customer support, Data(Smalltalk/FAQ), Data, Security, Operations, Reports, Tech operations)
  6. Select 'User Verify Type' (Internal, Active Directory)
  7. Select 'Authorization type' (Root, Maker, Checker)
  8. Assign a workspace (you can assign for multiple & all also)
  9. Click on Add


Enabling and Disabling Users

You can Enable/Disable a user for your workspace by following these steps:

  1. Go to the Morfeus portal
  2. Click on 'Security'
  3. Click on the edit icon (For which user you want to enable/disable)
  4. Click on Enable/Disable


User Audit Trail

The user audit trail feature audits violations/access issues that occur on the APIs. You can check which APIs were violated, along with the user's name, IP address, and date & time. You can also export the audit trail by clicking on the 'Export' button.

Eg; If a user has tried to hit some API and the API refused the connection or returned an unauthorized error, etc.

  1. Go to the Morfeus portal
  2. Click on 'Security'
  3. Navigate to Audit Trail


User Authentications

You can add users to access or manage your workspace. From an authentication perspective, you can manage each user's authentication type, i.e. how they are allowed to log in to and manage your workspace. Two types of authentication (User Verify Type) are supported: Internal and Active Directory.


You can set the authentication type for users, either while adding them or afterwards, as:

  1. Internal: Authorizes the user based on the credentials stored for that user.

  2. Active Directory: Authorizes the user based on the AD/LDAP configured in the AD/LDAP 'Configuration' section under 'Settings'.


Azure AD SSO

Let's find out how to integrate the admin portal with Azure AD using a SAML implementation.

We can divide this into three parts:

1) Azure AD portal Configuration

2) SAML-SSO configuration

3) Admin configuration

Azure AD Portal Configuration

Refer to the quick start here

Once done with the configuration, kindly add User Attributes and Claims as shared in the screenshot below, as we use the same to create a session.

adid is added as an additional property; you can map any value to it, which we map into the employeeId of the admin user.

*The Reply URL should always match the endpoint you are requesting in saml-sso


SAML-SSO configuration

  1. Get the WAR from Artifactory
  2. If you want to customize, clone the project from Bitbucket
  3. Deploy it to the same place where morfeusadmin exists
  4. Refer to the properties below and update them as explained

    1. saml.discovery.entity-id=https://sts.windows.net/1ea4687b-53b1-4285-babf-3f92fe915792/
    2. spring.thymeleaf.cache=false
    3. spring.thymeleaf.enabled=true
    4. spring.security.saml2.network.read-timeout=10000
    5. spring.security.saml2.network.connect-timeout=5000
    6. spring.security.saml2.service-provider.basePath=https://localhost:8443/saml-sso/
    7. spring.security.saml2.service-provider.sign-metadata=false
    8. spring.security.saml2.service-provider.sign-requests=false
    9. spring.security.saml2.service-provider.want-assertions-signed=true
    10. spring.security.saml2.service-provider.single-logout-enabled=true
    11. spring.security.saml2.service-provider.encrypt-assertions=false
    12. spring.security.saml2.service-provider.name-ids=urn:oasis:names:tc:SAML:2.0:nameid-format:persistent, urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress, urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified
    13. spring.security.saml2.service-provider.keys.active.name=sp-signing-key-1
    14. spring.security.saml2.service-provider.providers[0].alias=enter-into-saml-sso-alias
    15. spring.security.saml2.service-provider.providers[0].metadata=https://login.microsoftonline.com/1ea4687b-53b1-4285-babf-3f92fe915792/federationmetadata/2007-06/federationmetadata.xml?appid=8e34231d-e07e-44bd-a5b1-732aa0be5974
    16. spring.security.saml2.service-provider.providers[0].skip-ssl-validation=true
    17. spring.security.saml2.service-provider.providers[0].link-text=enter-into-saml-sso-link-text
    18. spring.security.saml2.service-provider.providers[0].authentication-request-binding=urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST
    19. app-context-path=/morfeusweb/#/dashboard


    saml.discovery.entity-id: Copy the value from the Azure AD Identifier and replace it as shown in the image above.

    spring.security.saml2.service-provider.basePath: Path at which the application is deployed.

    e.g. if the application is deployed on the server https://localhost:8443 and the WAR name is saml-sso, mention https://localhost:8443/saml-sso/

    spring.security.saml2.service-provider.providers[0].metadata: Copy the value from the App Federation Metadata Url and replace it as shown in the image above.

    app-context-path: Where the application redirects after successful authentication.

    You can also give the complete endpoint, as shown below:

    https://localhost:8553/morfeusweb/#/dashboard

    The rest are optional configurations.

  5. All the above are the default values. After updating the properties, you can name the file saml-sso.properties, place it anywhere on the server, and mention its path in catalina.sh as shown below:

    JAVA_OPTS="$JAVA_OPTS -Dsaml.sso.resources=/Users/userName/Documents/active_apps/develop/properties"

    where the properties folder should contain the file named saml-sso.properties that you created with the updated values.

Admin configuration (https://localhost:8443/morfeusweb/)


  1. Workspace settings > SSO Tab > search for 'SSO URL' and provide the value at which saml-sso is hosted, something like https://localhost:8443/saml-sso/ (Default Value: NA)

  2. Workspace settings > SSO Tab > search for 'Default Role' and choose from the dropdown based on your requirement. The table below shows accessibility based on roles (Default Value: Admin)

Role Accessibility
Admin Can access everything
Business Analyse, Manage Product, Manage Template, Manage Campaign
Customer Support Customer Support
Data - SmallTalk/FAQ Small Talk, FAQ, Self Learning
Data Manage AI
Security Security Page
Operations Deploy (AI and Workspace Configuration)
Reports Analyse
Tech Operator Configure Workspace

Maker Checker

Maker checker prevents certain data from being modified directly. There are two types of users: a maker and a checker. A maker creates/updates data, but the change first goes to a checker, who can either approve or reject the request.

There are 3 classifications for maker checker, i.e. Live, Pending Approval & Archive.

maker checker headers
Figure 1 Maker Checker Headers

Actions Supported On Maker/Checker

Filter: Various filters are available for both maker and checker users to filter data; they can be accessed by clicking on the icon shown below.

maker checker headers
Figure 1.1 Maker Checker Filters

We have different filters for each category.

Actions Supported On Data

Once you perform any action as a maker user, the data is sent to the checker user for approval. Note: a checker user is not allowed to perform any of the above actions.
This data can be seen by both maker and checker under Pending Approval.
Four options are available specifically for the checker user, i.e. Approve, Approve All, Decline & Decline All, and two options are available for the maker user, i.e. Delete & Delete All.
If an update action was performed, the data can also be compared with the older data.
Once any of the above highlighted actions is performed, the data is processed further (added, deleted, or updated) and can be seen on the Archive screen.
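The hand-off described above can be modelled roughly as follows. This is a hypothetical sketch purely to illustrate the maker/checker flow; class and field names are not Morfeus internals.

```python
from enum import Enum

class State(Enum):
    PENDING = "pending_approval"   # awaiting checker review
    ARCHIVED = "archived"          # processed (approved or declined)

class Change:
    """A maker's create/update request flowing through maker-checker review."""

    def __init__(self, maker: str, payload: dict):
        self.maker = maker
        self.payload = payload
        self.state = State.PENDING  # maker edits always start as pending
        self.approved = None

    def review(self, checker: str, approve: bool):
        # Only a checker (never the maker of the change) may approve or decline.
        if checker == self.maker:
            raise ValueError("maker cannot review their own change")
        self.approved = approve
        self.state = State.ARCHIVED  # processed changes move to the Archive screen
```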

Modules Supported

Smalltalk

smalltalk 1
Figure 2 Smalltalk Live Page
smalltalk 2
Figure 3 Smalltalk Pending Approval Maker
smalltalk 3
Figure 4 Smalltalk Pending Approval Checker
smalltalk 4
Figure 5 Smalltalk Compare
smalltalk 5
Figure 6 Smalltalk Archive

FAQ

faq 1
Figure 7 FAQ Live Page
faq 2
Figure 8 FAQ Pending Approval Maker
faq 3
Figure 9 FAQ Pending Approval Checker
faq 4
Figure 10 FAQ Compare
faq 5
Figure 11 FAQ Archive

Bot Message

bot msg 1
Figure 12 Bot Message Live Page
bot msg 2
Figure 13 Bot Message Pending Approval Maker
bot msg 3
Figure 14 Bot Message Pending Approval Checker
bot msg 4
Figure 15 Bot Message Compare
bot msg 5
Figure 16 Bot Message Archive

Manage Workspace Rules & Manage AI rules

In Rules, we can only modify the rules; the addition and deletion options are not supported.

bot msg 1
Figure 17 Manage Workspace Rules & Manage AI rules Live Page
bot msg 2
Figure 18 Manage Workspace Rules & Manage AI rules Pending Approval Maker
bot msg 3
Figure 19 Manage Workspace Rules & Manage AI rules Pending Approval Checker
bot msg 4
Figure 20 Manage Workspace Rules & Manage AI rules Compare
bot msg 5
Figure 21 Manage Workspace Rules & Manage AI rules Archive

Managing FAQs

Overview

The FAQs (Frequently Asked Questions) are customer-specific questions that users might ask. FAQs are usually about business-oriented product offerings and are usually interrogative in nature. You can add FAQs to your workspace as per your requirements so that the bot responds to those FAQs when users ask them.

The FAQ screen has been revamped and has new functionality for FAQ data, with enhancements to data curation, duplicate checking, search, the grid tab, and filters. The changes are listed below.

faq_git_sync_progress
This message appears when git sync is in progress

Details

When we create a workspace, a git sync happens at the backend. The new screen shows a status update for in progress, error, or git sync completion.

faq_git_sync_failure
In case git sync fails at the backend

Create FAQs and responses

The FAQ creation UI has changed: all the responses, the variants per language, and the response type are in one place. Multi-response and multi-channel can be configured from a single place.

Click on the add button at the top to create a new FAQ.

faq_git_sync_progress
Create a new FAQ
faq_git_sync_progress
Create a new FAQ page
faq_git_sync_progress
All the template types are listed

From the above page, we can configure responses of various types by selecting the Fulfilment type. The order is text, template, and workflow response. The customer segment response can also be configured from here, and FAQs can also be added with existing workflows.

A multi-response can also be added by clicking on 'Add following response', which opens the same editor as shown, to add more responses.

Similarly, the Edit FAQ screen has changed to match the above.

faq_git_sync_progress
Edit FAQ screen

Filter page

Here, the category, date, and modified-by criteria can be applied together to filter. When ontology is enabled, the product, product name, and product type filter options appear; these are independent of category and date.

Filter FAQs
Filter FAQs

When we search for any question, variant, answer, or FAQ ID, and the Elasticsearch index has been created for the workspace, the response is served from Elasticsearch. It acts like a suggestion for whatever text is entered.

Search FAQs
Search FAQs

The Manage Grid option navigates to the Manage Knowledge Grid. Run Duplicate Check triggers the duplicate check on the FAQs; this is detailed later in this module.

Manage GRID
Manage GRID


DEDUP feature

Description

The dedup feature helps identify duplicate FAQs present in the data. It depends on the Sniper dedup API and the Elasticsearch URL, and it exports all the data from the Elasticsearch index for the workspace. When we import or add new FAQs, the check runs on "Run Duplicate Check". The metrics used to find duplicates are Similarity and Variant/Non-Variant: Similarity is the percentage of how similar one question is to an existing one, while Variant/Non-Variant classifies a question as a main question or a variant.
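As an illustration only, a simple character-level ratio can stand in for the Similarity metric. The actual Sniper dedup API is a separate service and likely uses a more sophisticated model; the 90% threshold below is also an assumption for the sketch.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Percentage similarity between two questions (illustrative stand-in
    for the Sniper dedup API's Similarity metric)."""
    return round(SequenceMatcher(None, a.lower(), b.lower()).ratio() * 100, 1)

def find_duplicates(faqs: list[str], threshold: float = 90.0):
    """Flag pairs of FAQs whose similarity meets or exceeds the threshold."""
    pairs = []
    for i in range(len(faqs)):
        for j in range(i + 1, len(faqs)):
            score = similarity(faqs[i], faqs[j])
            if score >= threshold:
                pairs.append((faqs[i], faqs[j], score))
    return pairs
```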

The feature is explained in more detail in the steps below:

  1. Import/add FAQs on the FAQ screen
Import FAQs
Import FAQs
  2. Once the import is done, click on Run Duplicate Check
Run duplicate checks
Run duplicate checks
  3. If duplicates are present in the data, the message shown below appears
Duplicate FAQs present
Duplicate FAQs present
  4. To view duplicates, click on View Duplicates under the three dots above, or click on the notification bell, which is highlighted because duplicates are present
View Duplicate FAQs
View Duplicate FAQs
Run Duplicate Checks
Run Duplicate Checks
  5. Once we click on View Duplicates, a popup appears with a list of all the duplicate combinations from the data.

On the right-hand side is the question and on the left is the respective duplicate. There are four user actions:
• Delete Question - deletes the question and retains the duplicate as the main question.
• Retain Duplicate - retains the duplicate as the main question in the database and Elasticsearch.
• Merge Duplicate - merges the duplicate as a variant into the question on the left.
• Delete Duplicate - deletes the duplicate question, which is present in the database as an independent question.

Note - All four of the above actions make changes in the database and the Elasticsearch index for the respective workspace.

List of all Duplicates
List of all Duplicates
Delete Question
Delete Question
Merge Question
Merge the duplicate into the question on the left
Merge Question
Delete the duplicate, which is an independent FAQ in the data
  6. Once we retain a duplicate, it is stored in the history. On a second check, you are asked either to clear the history and proceed or to proceed without clearing it; if the history is not cleared, retained questions are not sent for the dedup check. There is also an option to clear the retained-question history.
Merge Question
On a second check, the clear-history option appears if FAQs were retained on the first
Merge Question
Clear retained question history

Note -

  1. When we delete all FAQs, the duplicates and the history are also cleaned.
  2. On import of data, the previous history and the duplicates are all removed.
  3. Before import, it is mandatory to update the Elasticsearch URL, otherwise the feature will not work.
  4. On a dedup check, the possible messages are as follows:
    • ES index not created yet - the index was not created on import, or creation is in progress.
    • At least 20 FAQs must be present for the dedup check - this is a constraint of the feature, requiring a minimum of 20 FAQs.
    • DeDup Rules not configured - the bot rules are checked for whether the Sniper dedup API and similarity index rules are configured. These can also be customised from the admin portal: go to Configure Workspace -> Manage Workspace Rules -> General tab -> Configuration. The two rules are Similarity index for FAQs and Duplication check Url.

Formatting Responses

The response is rendered on the bot in a textual format by default. You can add a text response to your FAQs by entering the response in the bot Response section. If you do not select any template, the default is Text.

Templates

The template editor supports formatting the bot's FAQ responses to give them a good look and feel, so that conversations are more interactive and user-friendly.

You can format the responses as Text, Card, Image, Carousel, List, Button, Video & Custom by following these steps:

  1. Select your workspace and Click on Manage AI -> Click Setup FAQs -> Select Add FAQ.
  2. Enter FAQ ID, FAQ Question, select Response Type as Text
  3. Enter the response
  4. Click Add.

The following are the types of template responses for FAQs:

  1. Text Template
  2. Card Template
  3. Carousel Template
  4. Button Template
  5. List Template
  6. Image Template
  7. Video Template
  8. Custom Template

Workflow

A workflow helps define step-by-step conversation journeys. Deriving intents and entities is mandatory to identify the correct response for the user, but all the needed information may not be available all the time; in these cases, a workflow can be configured to prompt the user for the additional input needed to respond correctly.

Ex: If a user asks "How to apply for a debit card?", the defined workflow can ask for a card selection, such as Rupay Card, Master Card, Visa Card, etc.

In a workflow, each entity is handled by a node. A node has at least a prompt and a connection: a prompt asks for user input, and a connection links to another node. In a typical workflow that handles n entities, there are n+2 nodes (one node per entity, plus a start and a cancel node).

Workflows expect user inputs in a sequence, but by design a workflow can handle the entities in any order, or all in a single statement. Out of the box, workflows support out-of-context scenarios.
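The n+2 counting rule can be illustrated with a small, hypothetical data model; the names and structure are purely illustrative and not Morfeus internals.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    prompt: str = ""  # asks the user for input
    connections: list = field(default_factory=list)  # links to other nodes

def build_workflow(entities: list[str]) -> list[Node]:
    """One node per entity, plus a start and a cancel node: n + 2 total."""
    start, cancel = Node("start"), Node("cancel")
    nodes = [start]
    prev = start
    for entity in entities:
        node = Node(entity, prompt=f"Please provide your {entity}")
        prev.connections.append(node)      # sequential flow between entities
        node.connections.append(cancel)    # any step can cancel out of context
        nodes.append(node)
        prev = node
    nodes.append(cancel)
    return nodes
```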

You can configure a workflow by following these steps:

  1. Select your workspace and Click on Manage AI -> Click Setup FAQs -> Select Add FAQ.
  2. Enter FAQ ID, FAQ Question, select Response Type as Workflow
  3. Configure the desired workflow
  4. Click Save on the template editor
  5. Click Add on the FAQ popup screen.

Keyphrases

A keyphrase is a word or phrase that must be present in the user's query for a question to be considered a potential candidate for deriving the desired response. For example, "How to pay by credit card" can have "credit card" as the keyphrase.

Since users may type the keyphrase in different ways, you may also want to declare "credit crd" and "CC" as variants of these keyphrases so they are all treated equally.

Keyphrases are used to narrow down the potential candidates against which the CognitiveQnA module runs matches. Quality keyphrases are important for accurate bot responses.

Keyphrase Guidelines

  1. After adding/modifying/deleting keyphrases, training the AI data is required
  2. A maximum of three words is permitted per keyphrase, and single space(s) should be replaced with _ (underscore)
  3. The root value of the keyphrase should not contain single or multiple spaces; these should be replaced with _ (underscore)

Adding Keyphrases

You can either add the keyphrases manually by entering the key phrase & the respective synonyms or import a CSV file of keyphrases.

Adding Keyphrases Manually

To add keyphrases, follow these steps:

  1. Select your workspace and click on Manage AI -> Setup FAQ -> click on the 3 dots -> 'Manage FAQ Metadata'
  2. A screen with two headers, namely "Keyphrases" and "Synonyms", will be shown
  3. Under the "Keyphrases" header, in the box with the navigational text "Type Keyphrase name", populate the root value of the keyphrase
  4. Under the "Synonyms" header, in the box with the navigational text "Add synonym", populate the synonyms of the original keyphrase root value
  5. After populating the content, click on Save

Keyphrases CSV Import File Structure

Column Name Description
Keyphrase Name Root word of the keyphrase
Status Whether the keyphrase has been added
Language Language of the keyphrase
Send To Train Whether the keyphrase is to be sent for data training
Synonym1 First synonym of the root word
Synonym2 Second synonym of the root word
... ...
SynonymN Nth synonym of the root word
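Assuming the column layout from the table above, an import file could be generated like this. This is a sketch; verify the exact headers and values against a file exported from your workspace before importing.

```python
import csv

# Example rows: Keyphrase Name, Status, Language, Send To Train, then synonyms.
# Header names and values are assumptions based on the column table above.
rows = [
    ["credit_card", "added", "en", "yes", "credit crd", "CC"],
    ["debit_card", "added", "en", "yes", "debit crd"],
]

with open("keyphrases.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Keyphrase Name", "Status", "Language",
                     "Send To Train", "Synonym1", "Synonym2"])
    writer.writerows(rows)
```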

Importing Keyphrases

You can import the list of keyphrases by following these steps:

  1. Select your workspace and Click on Manage AI -> Click Setup FAQ -> Click on 3 dots -> Click on -> Manage FAQ Metadata
  2. Select Import, Select "Do you want to delete existing Keyphrases if any?"
  3. Click on Choose files
  4. Select the CSV file of keyphrases to import.

Note: The CSV file should have a column for Keyphrases & synonyms (You can add multiple synonyms)

Exporting Keyphrases

You can also export the keyphrases by following these steps:

  1. Select your workspace and Click on Manage AI -> Click Setup FAQ -> Click on 3 dots -> Click on -> Manage FAQ Metadata
  2. Select Export
  3. keyphrases CSV file will be downloaded.

It will download a CSV file containing keyphrase & synonyms columns

Deleting Keyphrases

You can also delete the keyphrases which you don't want by following these steps:

  1. Select your workspace and Click on Manage AI -> Click Setup FAQ -> Click on 3 dots -> Click on -> Manage FAQ Metadata
  2. Select the keyphrases to delete
  3. Click on Delete
  4. Or click on Delete All (Delete All will delete the keyphrases added by the customer as well as all the preloaded keyphrases for the bot)

Note: Keyphrases can be added for all the supported language data. For eg: Arabic keyphrases.

    Actions

    Any action present in the user's query can be added as an action. Example: for the FAQ 'How to open an account', the action is 'Open', and it can be added in Actions. The other feature set on this page, such as Add, Update, Import, and Export operations, remains the same as described under the Keyphrases module.

    Product Actions

    Note: Actions can be added for all the supported language data. For eg: English Actions -

    Actions in English

    Product Attributes

    A product attribute is a word that can be present in the user's query to categorise the attributes of a product. For example, "How to pay bill of visa credit card" can have "visa" as a product attribute. The other feature set on this page, such as Add, Update, Import, and Export operations, remains the same as described under the Keyphrases module.

    Note: Product Attributes can be added for all the supported language data. For eg: Arabic Attributes -

    Grid FAQs

    Grid FAQs help add more product-related answers to the FAQs. They add another dimension by supplementing a generic FAQ response with details of products that have the latest updates.

    You can manage Grid FAQs by following these steps:

    • Goto your workspace
    • Navigate to 'Manage AI'
    • Click on 'Manage Knowledge Grid'

    In the opened screen there are Add, Edit, Search, Export & Import options available. The data will be shown based on the selected products.

    grid_faq

    There are certain rules to be followed while adding, updating, or importing the data in a grid as follows:

    Rules:

    1. Special Characters are not allowed (Except <, >, #, etc.)
    2. The contents of each cell should have a minimum of 2 and a maximum of 200 characters.
    3. To get the dynamic values assigned to attribute messages at the time of fulfillment, the data must be added in the following ways:

      • When a single value to be placed in the attribute message it should be as

        • AttributeMessage - The Account Charges are .
        • AttributeValue - Rs 200.

        In the above scenario, the AttributeValue entered as Rs 200 will be placed into the attribute message at runtime, at the position where the data is marked to be picked from the attribute value.

      • When multiple values have to be placed in the attribute message then it must be as

        • AttributeMessage - The rate of interest for on salary account is percent.
        • AttributeValue - 2 years#10.8

        Here, inside the attribute value, subsequent values are separated by #. The attribute message is accordingly formed as "The rate of interest for 2 years on salary account is 10.8 percent."
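The substitution described above can be sketched as follows. Note that the "{}" placeholder is a hypothetical stand-in: the actual placeholder token used inside attribute messages is not shown in the examples above.

```python
def fill_attribute_message(message: str, attribute_value: str,
                           token: str = "{}") -> str:
    """Substitute '#'-separated attribute values into the message, left to right.

    NOTE: the real placeholder token used in attribute messages is not
    shown in the docs above; '{}' is a hypothetical stand-in.
    """
    for value in attribute_value.split("#"):
        message = message.replace(token, value, 1)  # fill the next placeholder
    return message
```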

    Add Grid Data:

    You can add grid data manually by following these steps:

    • Goto your workspace
    • Navigate to 'Manage AI'
    • Click on 'Manage Knowledge Grid'
    • Click on 'Add Data' ( '+' icon)
    • Enter 'Select Product' (Eg; Super Card)
    • Enter 'Product Type' (Eg; Savings)
    • Enter 'Product Name' (Eg; Credit Card)
    • Enter 'Attribute' (Eg; Annual Charges)
    • Click on the Arrow mark to expand
    • Enter 'Attribute message' (Eg; Annual Charges is )
    • Enter 'Attribute Value' (Eg; Rs. 500)
    • Click on the 'Save' icon

    grid_faq

    Import Grid Data:

    You can import a Grid FAQ CSV file; the file should contain the following columns:

    • Product
    • Sub-Product
    • Product-Type
    • Product-Name
    • Attribute-Heading
    • Attribute-Message
    • Attribute-Value
    • Language

    To import Grid data you can follow these steps:

    • Goto your workspace
    • Navigate to 'Manage AI'
    • Click on 'Manage Knowledge Grid'
    • Click on the import icon
    • Select a CSV file (That contains all the columns specified above)
    • The Grid Data will be imported and the data will be shown based on the product type

    Export Grid Data:

    You can export the Grid data file by following these steps:

    • Goto your workspace
    • Navigate to 'Manage AI'
    • Click on 'Manage Knowledge Grid'
    • Click on the export icon
    • Select 'Export as .csv'
    • A CSV file will be downloaded with the columns (Product, Sub-Product, Product-Type, Product-Name, Attribute-Heading, Attribute-Message, Attribute-Value, Language, etc.)

    grid_import_export

    Link Grid Data to a FAQ:

    You can link the added/imported grid data to your existing/new FAQs to get the response based on grid data by following these steps:

    • Goto your workspace
    • Navigate to 'Manage AI'
    • Click on 'Setup FAQs'
    • Select any question
    • Click on 'Grid FAQ'
    • Select Product, SubProduct, Type, Product Name
    • Select attribute(s)
    • Click on 'Finish'
    • Click on 'Save'

    Note:

    1. You can arrange the order of attributes/messages by dragging them in any order to show on the bot
    2. You can also arrange the order of product, sub-product, type, product name by dragging them in an order
    3. To get the updated answer along with the grid, training must be done before all the grid-related linking; then, after linking, quick-train the FAQs.

    Edit Grid Data in a FAQ:

    To edit the linked Grid Data in a FAQ you can refer to these steps:

    • Goto your workspace
    • Navigate to 'Manage AI'
    • Click on 'Setup FAQs'
    • Select any question
    • Select 'Edit or Link Faq' (from the dropdown in the Edit FAQ popup right top corner)
    • Edit Product, SubProduct, Type, Product Name
    • Edit attribute(s)
    • Click on 'Finish'
    • Click on 'Save'

    edit_grid_faq

    Unlink Grid Data from a FAQ:

    If you want to remove linked grid data from a FAQ, you can unlink it by following these steps:

    • Go to your workspace
    • Navigate to 'Manage AI'
    • Click on 'Setup FAQs'
    • Select a question for which you want to unlink the grid data
    • Select 'Unlink Faq' (from the dropdown in the Edit FAQ popup right top corner)
    • Click on 'Save'

    unlink_grid_faq

    Handling RTL

    Many human languages are written right to left. Since we support multi-lingual bots, we provide an RTL layout for such languages (e.g. Urdu, Arabic, Persian).

    RTL is handled in various places on the bot as follows:

    • FAQs: The Question and answers will be shown in the RTL layout for those languages as shown in the following image.

    RTL_faqs

    • Smalltalk: The Smalltalk will be shown in the RTL layout for those languages as shown in the following image.

    RTL_smalltalks

    • Chat History: The chat history a customer had with the bot in those specific languages will be shown in the RTL layout, as shown in the following image.

    RTL_chat_history


    Managing AI

    Small Talk

    Small Talk covers the friendly, informal kind of conversation that does not involve any functional topics or transactions. Small Talk data can include questions like "Hi", "Who are you" and "What is your name".

    Small Talk is stored in two separate categories, namely Base and Custom. Base Small Talks are provided as part of the system. Custom Small Talks are created by business users to enrich the customer experience, based on the self-learning reports.

    Small Talk Guidelines

    1. Think through all greetings, bot-capability questions and bot-introduction questions that users could possibly ask
    2. Customer-specific Small Talks should be provided by the business user, along with appropriate neutral answers
    3. Small Talk works on a lookup basis, so all different variations of a query should be included and mapped to appropriate answers
    4. Smileys can be used in the answer, for which a Unicode mapping should be provided in the bot configuration.

    Adding Small Talk

    You can add Small Talk manually by entering the Small Talk query and its response, or import a CSV file of Small Talk that should have Id, Question, Answer, Category columns.

    You can add the Small Talks manually by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup Small Talk
    2. Select the Language you are interested in
    3. Click on Add Small Talk
    4. Add the question/query in the 1st box (where the navigational text "Small Talk" is shown in grey)
    5. Add the response in the 2nd box (where the navigational text "Enter Response" is shown in grey)
    6. To add multiple responses to a single question/query, press Enter after each response and add the next response
    7. Click on Save.

    alt_text

    Import File Structure

    Column Name Description
    ID Identification number / serial number
    Question User's Small Talk query
    Answer Small Talk answer/response
    Category Should be marked as "greetings" by default
    Type Should always be marked as "B"
    Language Desired two-letter language code, e.g. en for English, es for Spanish
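As an illustration, the import file described above can be generated with a short script. The column names follow the file structure table; the sample rows and values are purely hypothetical:

```python
import csv
import io

# Column order follows the Small Talk import file structure above.
COLUMNS = ["ID", "Question", "Answer", "Category", "Type", "Language"]

# Illustrative rows: Category is "greetings", Type is "B", and Language
# is a two-letter lower-case code, as the table requires.
rows = [
    {"ID": "1", "Question": "Hi", "Answer": "Hello! How can I help you?",
     "Category": "greetings", "Type": "B", "Language": "en"},
    {"ID": "2", "Question": "What is your name",
     "Answer": "I am your virtual assistant.",
     "Category": "greetings", "Type": "B", "Language": "en"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
writer.writerows(rows)
smalltalk_csv = buf.getvalue()  # write this out as a ".csv" file to import
```

Saving `smalltalk_csv` to a file with a `.csv` extension produces an import candidate matching the structure above.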

    Importing Small Talk

    You can add Small Talk to your workspaces using import feature by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup Small Talk
    2. Click on Import
    3. Select CSV file adhering to the above File Structure of the Small Talk
    4. Click Yes on the popup (*Are you sure you want to overwrite?*)
    5. Click on Save.
    6. After successful import of the .csv file, the message Small Talk uploaded successfully will be shown

    alt_text

    Exporting Small Talk

    You can export the Small Talks as a CSV file by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup Small Talk
    2. Click on Export

    It will download a CSV file containing Id, Question, Answer, Category, Type, Language columns.

    alt_text

    Deleting Small Talk

    You can delete the Small Talks using Delete and Delete All features by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup Small Talk
    2. Select the Small Talk(s) which you want to Delete and click Delete
    3. Select Delete for (*Are you sure you want to delete the selected smalltalk?*)
    4. For clearing all Small Talks, please use Delete All

    alt_text

    Intents

    The user's conversations or utterances are matched with the best-fitting intents. Intents are a high-level grouping of the user's intentions that gives a clear direction for the subsequent action to be executed. Certain user requests or utterances may span multiple intents, based on the total conversation, to fulfil the user's request.

    The PrimaryClassifier classifies the user's input into broader intent categories based on action- and attribute-based use-case utterances.

    An intent captures the user's purpose in a particular situation. Translating the user's request into a response starts with matching it to a particular intent; the matched intent then goes through a workflow based on the use case.

    Sample Utterance Best Possible Intent
    Wish to pay my mobile bill Money Movement
    Want to do funds transfer Money Movement
    Recharge my mobile number 8347423748 Recharge

    Adding Intents

    You can add intents manually by following these steps:

    Manage AI - Setup intents - Add New Intent
    1. Select your workspace and Click on Manage AI -> Click Setup Intents
    2. Enter the name for intent
    3. Click on Add Intent
      • Good intent naming practice: Start the intent name with "qry-" for enquiry intents / "txn-" for transaction intents
    4. Id - Name of the intent
    5. Description - Optional Description related to the intent.
    6. Category - Category of the intent is the brief two to three words about the intent.
    7. Click on Add. Now the Intent will be listed.

    Adding Utterances

    To add incremental data for existing intent or populate the data for new intent, Click on Setup Utterances located just below the desired intent name.

    1. Select the Language you are interested in
    2. Key in the utterance (Navigational text - Type Here...) and click on the Add button on the right side of the box to add the utterance. (*A minimum of 20 utterances should be added for an intent*)
    Manage AI - Setup intents - Setup Utterances

    alt_text

    Intent CSV Import File Structure

    Column Name Description
    Intent Id Intent name
    Intent Name Intent name
    Intent Category Brief 2-3 words about the intent
    Utterance Name Enter Actual utterances of the respective intent
    Annotated Utterance Utterance annotated with Entity name like {sys.date}
    Utterance Type Should always be populated with "p" (i.e. "p" in LOWER case only) for all primary utterances or user's first utterances which will trigger the flow
    Language Two letter language code should be populated in LOWER case
    Main Utterance This should be always populated with UPPER case letter "N"
    Learning Type Learning type should be always populated with UPPER case letter "C"
    Training Type Training type should always be populated with UPPER case letter "B"
    Answers Reserved
    Tags Reserved
    Product Type Reserved
    Short Question Reserved
    Expected Entity Reserved
    Prompt Reserved

    Save the file as ".csv".
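As a minimal sketch of the file structure above, the fixed-value columns can be filled in programmatically; the intent name, utterances and file content below are illustrative, and the reserved columns are left empty:

```python
import csv
import io

# Column order follows the Intent CSV import file structure above.
COLUMNS = ["Intent Id", "Intent Name", "Intent Category", "Utterance Name",
           "Annotated Utterance", "Utterance Type", "Language",
           "Main Utterance", "Learning Type", "Training Type",
           "Answers", "Tags", "Product Type", "Short Question",
           "Expected Entity", "Prompt"]

def make_row(intent_id, category, utterance, annotated, language="en"):
    """Build one row, filling the fixed-value columns per the table above."""
    return {
        "Intent Id": intent_id,
        "Intent Name": intent_id,
        "Intent Category": category,
        "Utterance Name": utterance,
        "Annotated Utterance": annotated,
        "Utterance Type": "p",      # always lower-case "p" for primary utterances
        "Language": language.lower(),
        "Main Utterance": "N",      # always upper-case "N"
        "Learning Type": "C",       # always upper-case "C"
        "Training Type": "B",       # always upper-case "B"
        # Remaining columns are reserved and left empty.
        "Answers": "", "Tags": "", "Product Type": "",
        "Short Question": "", "Expected Entity": "", "Prompt": "",
    }

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
writer.writerow(make_row("txn-money-movement", "money movement",
                         "Want to do funds transfer",
                         "Want to do funds transfer"))
intents_csv = buf.getvalue()  # save as ".csv" before importing
```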

    Importing Intents

    You can import the intents by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup Intents
    2. Click Import CSV
    3. Click 'Yes' on the popup. (*Are you sure you want to overwrite?*)
    4. Select the CSV intents file to import

    Note: The CSV file should have at least Intent Id, Intent Name, Intent Category, Utterance Name, Annotated Utterance, Utterance Type, Language, Main Utterance, Learning Type, Training Type, Answers, Tags, Product Type, Short Question, Expected Entity, Prompt columns.

    Exporting Intents

    You can also export those intents by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup Intents
    2. Click Export CSV

    It will download a CSV file containing Intent Id, Intent Name, Intent Category, Utterance Name, Annotated Utterance, Utterance Type, Language, Main Utterance, Learning Type, Training Type, Answers, Tags, Product Type, Short Question, Expected Entity, Prompt columns.

    alt_text

    Deleting Intent

    If you don't want some of the intents that were added, you can delete them by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup Intents
    2. Click on menu icon of the particular intent
    3. Click on Delete intent, Click Delete on the popup. (*Are you sure you want to delete unsupported intent?*)

    alt_text

    Dialog

    A query or transaction intent can take more than one dialog turn to complete the user's request; any intent can use multiple dialog turns to gather the required information.

    Dialog utterances are the possible responses a user can key in to answer the bot's question during fulfillment execution. A bot question is always linked with an entity (dictionary/train/regex). Populate the bot question in the "Bot says" box and the user's responses to it, with all possible variations based on the defined parameters, under "Dialog".

    One dialog turn involves providing three pieces of information:

    • Expected entity
    • Bot says
    • User says

    Adding Dialog

    You can add the dialogs for your intents by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup Intents
    2. Click on Setup Utterances of the desired intent -> Click Dialog tab -> click Add
    3. To prepare the dialog utterances, first key in the entity for which the bot questions the user (navigational text "Add entity")
    4. After registering the Entity and Bot Question, add sample user answers by clicking the Add Prompt button. (*You can add multiple utterances that the user might say*)
    5. Click Save
    Add dialog utterances

    alt_text

    Delete Dialog

    You can delete the dialogs which you don't want to keep by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup Intents
    2. Click on Setup Utterances of the desired intent -> Click Dialog tab
    3. Click on the delete icon of the dialog you want to delete
    4. Click 'Yes' on the popup. (*Are you sure you want to delete?*)

    alt_text

    Entities

    Entities are a mechanism for identifying and extracting useful data from natural-language inputs and user-generated content (UGC). Named Entity Recognition (NER) aids this processing and helps derive the entities, which are the basic building blocks for executing the flows. NER is used to extract/capture nouns, noun phrases and attributes from the user's input.

    While intents allow your workspace to understand the motivation behind a particular user input, entities are used to pick out specific pieces of information that your users mention — anything from personal names to product names or amounts with units. Any important data you want to get from a user's request will have a corresponding entity.

    If a user asks for their account status, the intent will be account inquiry, and the account number the user enters will be one of the entities of that ***intent***.

    Let's take an example: Transfer 500 to Charu from my account

    From the above statement, we require three pieces of information: 500, Charu, and account. These values will be extracted from the utterance and used to execute the client's API.

    Entity Types

    1. Dictionary (All finite values)
    2. RegEx (Regular Expression Pattern)
    3. Train (Infinite Values)

    Dictionary Entity

    Dictionary entities take a finite set of attributes.

    Ex:banking.product-type, banking.product-account-type, banking.product-name

    1. Product type in which card type do you list? - Visa, AMEX, Master, UPOP, JCB and RuPay are the finite card types
    2. Account type in show current acc balance - Current, savings and salary are the finite account types.
    Dictionary entity - Manage AI - Setup Entities (post add)
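Conceptually, a dictionary entity resolves a finite set of values via synonym lookup. The sketch below is a hypothetical illustration of that idea, not the product's implementation; the root values and synonyms are made up:

```python
# Each root value maps to its synonyms; by convention one variation is
# the root value itself (as the entity setup steps later recommend).
dictionary_entity = {
    "savings": ["savings", "saving account", "sb account"],
    "current": ["current", "current account", "ca"],
}

def resolve(token):
    """Normalise a user-entered value back to its dictionary root, or None."""
    token = token.lower().strip()
    for root, synonyms in dictionary_entity.items():
        if token in synonyms:
            return root
    return None
```

For example, `resolve("SB Account")` maps back to the root value `savings`, while an unknown value yields `None`.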

    Train Entity

    A Train entity takes care of entity extraction for dynamic, infinite values from the user's input.

    Ex: banking.product-account-number, banking.product-card-number, sys.person-phone-number, sys.amount

    1. Mobile number in recharge my mobile number 8130927472 - 10 digit mobile number which is infinite in nature
    2. Amount in transfer 500 rupees - 1 to 8 digit amount which is infinite in nature
    3. Payee name in transfer to jacob - payee names are free text, infinite in nature.

    Train Entity Guidelines

    Train entity naming and data preparation involve the procedures below:

    1. PrimaryClassifier data preparation - annotate the train entities within curly braces, e.g. _"{<train entity name>}"_
    2. Dialog data preparation - annotate the train entities within curly braces, e.g. _"{<train entity name>}"_
    3. While defining train entities, prepare a .samples file for AI data training
    4. Not all sample data is visible in the UI, as the sample values run into the thousands; to check the sample values, click the Export XML button.
    Train entity - Manage AI - Setup Entities (post add)

    Regex Entity

    A RegEx (regular expression) entity is mainly used for details that can be captured by recognizing a pattern. The pattern should be universal, or at least universal within a specific geographical area.

    Ex: sys.email, banking.otp, banking.pin, sys.itemNumber

    1. Email id in send an e-statement to neo@active.ai - email ids are universal and can easily be captured with a RegEx pattern
    2. Item Number within a transactional flow select 5th account - Item Number within the user's input can easily be extracted by using RegEx pattern.
    RegEx entity - Manage AI - Setup Entities (post add)
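The two examples above can be sketched with ordinary regular expressions. The patterns below are illustrative only; the actual patterns configured for `sys.email` or `sys.itemNumber` in a workspace may differ:

```python
import re

# Illustrative patterns, not the product's configured ones.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
ITEM_NUMBER_PATTERN = re.compile(r"\b(\d+)(?:st|nd|rd|th)\b")

def extract_email(text):
    """Capture the first email-like token in the utterance, if any."""
    m = EMAIL_PATTERN.search(text)
    return m.group(0) if m else None

def extract_item_number(text):
    """Capture an ordinal item number like '5th' and return it as an int."""
    m = ITEM_NUMBER_PATTERN.search(text)
    return int(m.group(1)) if m else None
```

Running these on the sample utterances, "send an e-statement to neo@active.ai" yields the email id, and "select 5th account" yields item number 5.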

    Selecting Entities

    Dashboard setup dictionary entities screen image
    Manage AI - Setup Entities

    To select/see an entity, follow the steps below:

    1. Select your workspace and Click on Manage AI -> Click Setup Entities
    2. Click on Dictionary / Annotation / Regex to load the entities.

    Adding Entities

    You can add the entities by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup Entities
    2. Select Dictionary / Annotation / Regex tab for defining the Entity
    3. Click on Add New Entity
    4. Name - Name of the entity
    5. Code - Code of the entity
    6. Description - Description of the entity
    7. Category - Respective category Ex: sys, banking
    8. Class - Class of the entity can be populated here. Not every entity requires or has a class
    9. Ontology (Knowledge Graph) Type - By default, ontology is set to None. If Knowledge Graph is enabled for the workspace and you want to enable it at the entity level, configure one of the types here: Product, Type, Name, Attribute or Action.

    Now, to see the entity details, click on the entity to expand it. The expanded entity window will look as shown below.

    Dashboard add attributes to added or existing entities screen image
    Add attributes to entities

    Enter the below information for adding entity details.

    1. Enter a SubType : SubType is used when we need to enter different types of attribute groups for the same entity.
      1. banking.product-name for credit card : SubType can be populated as Credit Card
      2. banking.product-name for debit card : SubType can be populated as Debit Card
    2. Enter an entity : The root value of an attribute to be stored here
    3. Add a synonym : Synonym(s) should be populated where one variation should be the root value itself and rest should be actual synonym(s)
    4. Click on Save button located on the right top corner.
    Add SubType and respective attributes

    alt_text

    Importing Entities

    You can import the entities in CSV or XML format.

    Import Entities by CSV

    You can import a CSV file of entities by following these steps:

    CSV file format should contain EntityName, EntityCode, EntityCategory, EntityType, ProductType, OntologyType, Language, SubType, EntityValue, Synonyms columns.

    1. Select your workspace and Click on Manage AI -> Click Setup Entities
    2. Click on Import CSV, Select the CSV entities file.

    alt_text

    Import Entities by XML

    1. Select your workspace and Click on Manage AI -> Click Setup Entities
    2. Click on Import XML, Select the XML file of entities

    alt_text

    Export Entities

    You can export the entities as either a CSV or an XML file.

    1. Select your workspace and Click on Manage AI -> Click Setup Entities
    2. Click on Export XML or Click on Export CSV

    If Export XML is used, the downloaded XML file will contain the full entity configuration along with sample data.

    If Export CSV is used, the downloaded CSV file will contain EntityName, EntityCode, EntityCategory, EntityType, ProductType, OntologyType, Language, SubType, EntityValue, Synonyms columns.

    alt_text

    Delete Entities

    If you don't want some of the entities, you can simply delete them by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup Entities
    2. Select the entities to delete by type and Click Delete

    alt_text

    SpellChecker

    SpellChecker maintains Acronym to Abbreviation Mapping for functional domain data maintained in your workspaces. We provision spell checking for Finance / Banking / Trading / Insurance (FBTI) domains.

    In general, "WHO" can signify multiple meanings: the World Health Organization, or the question word who?. The Spell Checker comes into play to resolve this ambiguity.

    We support single-word root words for correction: the root word WHO will be substituted with World Health Organization. Multi-word root words are not supported, so our SpellChecker engines cannot change variations of World Health Organization back to WHO.

    We can add three different types of mapping, as mentioned below:

    1. Acronym to abbreviation = account=ac.,a/c
    2. Words mapped to the same value to stop auto-correction = Amit=Amit
    3. Spelling typo to correct-word mapping = balance=balnc,blance
    Dashboard spell check acronyms
    Dashboard spell check acronyms screen
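The three mapping types can be thought of as a single token-level substitution table applied before classification. This is a hypothetical sketch of that idea, not the product's engine; the mappings below are illustrative:

```python
# Token -> root-word substitutions covering the three mapping types:
spell_mappings = {
    # 1. acronym/abbreviation to root word
    "ac.": "account", "a/c": "account",
    # 2. a word mapped to itself, stopping auto-correction
    "amit": "amit",
    # 3. common typos to the correct word
    "balnc": "balance", "blance": "balance",
}

def normalise(message):
    """Replace each known token with its root word, leaving the rest as-is."""
    return " ".join(spell_mappings.get(tok, tok)
                    for tok in message.lower().split())
```

For example, `normalise("show my a/c balnc")` yields "show my account balance".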

    You can do the setup for SpellChecker by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup SpellChecker
    2. Select the Language you are interested in
    3. Click on Add Entity
    4. A new empty box will appear at the end of the grid. Enter the root value (navigational text "enter an entity")
    5. Add synonym(s) in the second box (navigational text "Add a synonym"). Multiple entries can be mapped to a single root value; press the "ENTER" key after each synonym to add another
    6. Click Save.

    alt_text

    Import SpellChecker CSV File Structure

    Column Name Description
    EntityName Should always be populated with "Acronyms" (case-sensitive)
    EntityCode Should always be populated with "spellchecker_acronyms" (case-sensitive)
    EntityCategory Should always be populated with "SpellChecker" (case-sensitive)
    EntityType Should always be populated with "SpellChecker" (case-sensitive)
    Language Two-letter language code, always in lower case
    EntityValue The root value into which an input word should be converted
    Synonyms Synonym(s) stored under the "Synonyms" column; multiple synonyms can be separated by commas (",")

    Save the file as ".csv".

    Import SpellChecker

    You can also import the SpellChecker using CSV file by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup SpellChecker
    2. Click on Import CSV; the CSV should comply with the above-mentioned file structure
    3. Click 'Yes' on the popup (*Are you sure you want to overwrite?*)
    4. Select the CSV file of SpellChecker
    5. Click Save.

    The CSV file should contain EntityName, EntityCode, EntityCategory, EntityType, ProductType, OntologyType, Language, SubType, EntityValue, Synonyms columns.

    alt_text

    Export SpellChecker

    You can export the SpellChecker by following these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup SpellChecker
    2. Click on Export CSV

    SpellChecker CSV file containing EntityName, EntityCode, EntityCategory, EntityType, ProductType, OntologyType, Language, SubType, EntityValue, Synonyms columns will be downloaded

    alt_text

    Delete SpellChecker

    If you want to remove SpellChecker, please follow these steps:

    1. Select your workspace and Click on Manage AI -> Click Setup SpellChecker
    2. Select the SpellChecker entries you want to delete, then click Delete

    alt_text

    AI Rules

    AI rules are the settings of your workspace; based on these settings, your bot responds to queries. You can manage your AI rules as per your requirements.

    Manage your workspace’s functionality by configuring business rules.

    A. General Rules

    1. AI Engines related rules

    Rules
    Description
    Ai Engine It specifies the type of message processor.
    Auto Suggest Corpus Variants It specifies whether all or main variants to be added to the corpus.
    Context Change Detector It specifies the AI engine to be used to detect Context Change.
    Elastic Search Index You can set the Elastic Search Index for auto-complete.
    Enable Confirmation Entities Handling It specifies if the Confirmation Entities Handling is to be enabled.
    Enable Elastic Search as Fallback It specifies if the Elastic Search API call to be enabled.
    Enable External FAQ Fallback It specifies if the external FAQ fallback to be enabled.
    Enable FAQ response local lookup It specifies if to check FAQ response locally before hitting KBS.
    Enable fuzzy search It enables postback handling as text input.
    Enable Local Lookup for Classification It specifies if the local lookup of classification is enabled.
    Enable logging for Fallback It enables/disables logging for Fallback.
    Enable message translation from other languages to English before processing To support other languages, use the translator to translate the message to English and respond to other languages. To enable this also set Language Detector.
    Enable Preprocessor It specifies if the PrePreprocessor algorithm is to be enabled
    Enable Query Parser It specifies if the Query Parser algorithm is to be enabled
    Enable response translation from English to other languages after processing It translates response back to original language from English. Will require Message translation also to be enabled.
    Enable Sentiment Analysis It specifies if the Sentiment analysis is to be enabled.
    Enable Split Query It specifies if the Split Query algorithm is to be enabled to handle compound queries.
    Enable Split Query For FAQs It specifies if the Split Query algorithm is to be enabled to handle compound queries for FAQs.
    Enable tagged FAQ lookup It specifies if the Tagged FAQ lookup to be enabled.
    Enable Triniti FAQ Web Search It specifies if the Triniti FAQ web search API call to be enabled.
    Enable Web Content Elastic Search as Fallback It specifies if the Elastic Search API call to be enabled for the fallback based on web content.
    Enable/Disable FAQ response lookup from Morfeus database If enabled, the FAQ response will be picked from the controller database instead of directly using the AI Engine FAQ answer.
    Entity Extractor It specifies the NLP engine to be used.
    Fuzzy search for FAQ web search cutoff It returns answer directly if web search question fuzzy matches user utterance.
    If ES API calls should use system proxy It specifies if ES API calls should use system proxy.
    KBS MLT Cards Display Limit It specifies the number of KBS MLT cards to be displayed when enabled.
    KBS MLT Cards Fuzzy Search Score Minimum Threshold It specifies the minimum threshold of Fuzzy Search Score for KBS MLT Cards.
    Language Detector It specifies the AI engine to be used to detect language.
    Language Translator It specifies the AI engine to be used to translate the message.
    Message Converter It specifies whether to translate or transliterate.
    Mode to handle manual chat It defines how to handle or reply manual chat
    Number of suggestions to show in auto-complete It will show the number of suggestions that are set here.
    Primary Classifier It specifies the Primary Classifier engine to be used.
    Secondary Classifier It specifies the Secondary Classifier engine to be used.
    Show KBS MLT Cards It shows cards in case of an ambiguous response from KBS.
    Show Related FAQ Queries It shows Related FAQ Queries if the FAQ answer is found.
    Show Related FAQ Queries after Fallback It shows Related FAQ Queries if the FAQ answer is found using any fallback.
    Show Related FAQ Queries After FAQs with CTA It shows related queries also for the FAQs which got click to action buttons.
    Smalltalk/FAQ Handler It specifies the AI engine to be used to handle Smalltalk and FAQs.
    Smart FAQ ambiguity handling It Uses a product from context to handle the ambiguity of KBS response.
    Solution to use for suggestions It specifies the solution to be used to suggest FAQs.
    Web Search Elastic Search Index Elastic Search Index for Web Search.

    2. Configuration related rules

    Rules
    Description
    Base Data Version It specifies the version of the base data.
    FAQ/Non-FAQ ES Index It specifies the elastic search index for Faq/Non-Faq.
    Handle Unmapped/Unsupported Intents as FAQs If enabled, intents whose fulfillments are not configured are fulfilled as FAQs.
    Secondary Language Bot Bot id of the Secondary Language Bot.
    Smalltalk Paraphrasing Support If enabled, Smalltalk paraphrasing support will be associated.
    Synonyms support for AutoSuggest FAQ If enabled, Keyphrases and Acronyms will be added as Synonyms for AutoSuggest Elasticsearch index.

    3. Elastic Search Index related rules

    Rules
    Description
    Enable Customer Segment to filter FAQs in ElasticSearch If enabled, FAQs are filtered for a customer segment.

    4. Placeholders related rules

    Rules
    Description
    Break row placeholder Placeholder for break row in FAQ response.

    5. Threshold related rules

    Rules
    Description
    Alternative Matches It specifies the number of similar actions to be shown to the user in case their primary question was not confidently identified by the classifier.
    Customer Support Fallback Threshold It specifies the number of failed AI conversations before the system falls back to a customer support agent.
    Customer Support Fallback Time Interval (in minutes) It specifies the time interval in which the fallback threshold is measured e.g 2 failed attempts in 5 mins.
    Enable Grain Type Verification It enables verification of grain type for FAQs. The top candidate's grain type would be compared with the user query's grain type.
    FAQ Core Labels Core Labels included bouncing the request to intent.
    Max Adversity Score For FAQ Web Search It specifies the adversity score above which FAQ web search will not happen.
    Max Adversity Score For Five And Above Word Message It specifies the adversity score above which intent classified is invalid.
    Max Adversity Score For Four Word Message It specifies the adversity score above which intent classified is invalid.
    Max Adversity Score For One Word Message It specifies the adversity score above which intent classified is invalid.
    Max Adversity Score For Three Word Message It specifies the adversity score above which intent classified is invalid.
    Max Adversity Score For TWO Word Message It specifies the adversity score above which intent classified is invalid.
    Max Confidence It specifies the confidence percentage which defines an unambiguous (confident) intent detection of an input conversation
    Min Confidence It specifies the confidence percentage threshold which defines the lower boundary below which the classifier can't confidently predict the intent of a conversational input. Conversations with intent confidence between the min and max levels are considered ambiguous, and the top 3 intents are displayed back to the user for selection.
    Min FAQ Confidence It specifies the confidence percentage threshold which defines the lower boundary below which the FAQ answer will be considered invalid.
    Min Smalltalk Confidence It specifies the confidence percentage threshold which defines the lower boundary below which the Smalltalk answer will be considered invalid.
    Minimum Threshold to consider Elastic Search result Minimum Threshold to consider Elastic Search result for FAQs.
    Minimum Threshold to consider Elastic Search result for Web Search Minimum Threshold to consider Elastic Search result for Web Search Fallback.
    Minimum Threshold to consider Triniti FAQ Web Search result Minimum Threshold to consider Triniti FAQ Web Search result for FAQs.
    Minimum Threshold to Show Suggestion Minimum threshold to include the question in suggestion.
    Negative sentiment threshold Score above this threshold will consider the message as negative.
    Postback fuzzy search cutoff Adjusting the score for postback search as text input.
    Retry FAQ It Retries answering FAQ if confidence is low.
    Retry Smalltalk It Retries answering Smalltalk if confidence is low.
    Stop Words Words to exclude when counting the number of tokens in the message.
    Web search for FAQ Fallback to web search if FAQ is not able to answer.
    Web search for unclassified utterances Web search for unclassified utterances if the product exists.
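The Max Confidence and Min Confidence rules above partition classifier output into three bands. The sketch below illustrates that decision logic under assumed threshold values (0.85 and 0.40 are illustrative, not product defaults):

```python
# Illustrative thresholds; configure real values via the AI rules above.
MAX_CONFIDENCE = 0.85
MIN_CONFIDENCE = 0.40

def resolve_intent(scored_intents):
    """scored_intents: list of (intent, confidence) pairs, highest first."""
    top_intent, top_score = scored_intents[0]
    if top_score >= MAX_CONFIDENCE:
        # Unambiguous (confident) detection: execute the top intent directly.
        return ("confident", [top_intent])
    if top_score >= MIN_CONFIDENCE:
        # Between min and max: ambiguous, show the top 3 intents to the user.
        return ("ambiguous", [intent for intent, _ in scored_intents[:3]])
    # Below the lower boundary: the intent cannot be predicted confidently.
    return ("fallback", [])
```

A score of 0.9 is handled as confident, 0.6 falls in the ambiguous band and surfaces up to three candidates, and 0.2 drops to fallback handling.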

    6. Translation related rules

    Rules
    Description
    Enable message translation for NER from other languages to English before processing To support other languages, use translator to translate the message to English and respond to other languages. To enable this also set Language Detector.
    Enable message translation for Primary Classifier from other languages to English before processing To support other languages, use translator to translate the message to English and respond to other languages. To enable this also set Language Detector.

    alt_text

    B. Triniti Rules

    1. Configuration related rules

    Rules
    Description
    API Key It specifies the API key for Triniti.
    Domains It specifies the domains of the Triniti AI Engines (specified in a comma-separated format for clustered deployments).
    Enable Cache It specifies if Triniti API calls are to be cached.
    If Triniti API calls should use system proxy It specifies if Triniti API calls should use system proxy.
    NER API Key It specifies the API key for Triniti NER.
    NER Domain It specifies the domain of the Triniti NER.
    Relative URL Context Path It specifies the relative context path.
    Triniti Paraphrase URL It specifies the URL for paraphrasing.
    Triniti Translate API Key It specifies the API key for Triniti Translate.
    Triniti Translate API URL It specifies the domain of the Triniti Translate API.
    Triniti Transliterate API Key It specifies the API key for Triniti Transliterate.
    Triniti Transliterate API URL It specifies the domain of the Triniti Transliterate API.
    Triniti worker process used only when loading after successful data training ---

    2. Deployment related rules

    Rules
    Description
    Deployment Mode It specifies the deployment mode of Triniti.
    Deployment Type It specifies the deployment type of Triniti (e.g., Cloud).
    Elastic Search Index used for loading Primary Classifier configuration for Quick Training It specifies the Elastic Search index used for intent classification during Quick Train.
    Trainer URL It specifies the Triniti instance URL which is used for training in a cluster deployment.
    Training Data Format It specifies the data format used to train the instance for Triniti version greater than 1.x
    Triniti API Key It specifies the X-API-KEY value to be embedded in all cloud Triniti calls.
    Triniti Manager URL It specifies the URL of the Triniti manager for cloud deployment.

    alt_text

    C. Translation Rules

    1. AI Engine related rules

    Rules
    Description
    Enable message translation for NER from other languages to English before processing To support other languages, use the translator to translate the message to English and respond in the other language. To enable this, also set Language Detector.
    Enable message translation for Primary Classifier from other languages to English before processing To support other languages, use the translator to translate the message to English and respond in the other language. To enable this, also set Language Detector.

    2. Configuration related rules

    Rules
    Description
    Enable message translation from other language to English before processing To support other languages, use the translator to translate the message to English and respond in the other language. To enable this, also set Language Detector.
    Enable response translation from English to other language after processing It translates the response from English back to the original language. Message translation must also be enabled.
    Google Service Account Credentials It specifies the Service Account JSON Credentials of Google Cloud API for translation.
    Language Detector It specifies the AI engine to be used to detect language.
    Language Translator It specifies the AI engine to be used to translate message.
    Message Converter It specifies whether to translate or transliterate.
    Secondary Language Bot Bot id of the Secondary Language Bot
    Translate API Max utterances It specifies the Max Number of utterances to be translated by configured Translate API.
    Yandex API Key It specifies the API key for Yandex
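    The rules above describe a round-trip flow: detect the language, translate the message to English, process it, then translate the response back. A minimal sketch of that flow; the function bodies are hypothetical stand-ins for the configured Language Detector and Language Translator engines:

```python
# Illustrative round-trip translation flow: detect language -> translate
# to English -> process -> translate the response back.
# These functions are stand-ins, not the real detector/translator engines.

def detect_language(message: str) -> str:
    # stand-in for the configured Language Detector (Devanagari -> "hi")
    return "hi" if any("\u0900" <= ch <= "\u097f" for ch in message) else "en"

def translate(text: str, source: str, target: str) -> str:
    # stand-in for the configured Language Translator (Google/Yandex/Triniti)
    return f"[{source}->{target}] {text}"

def handle_message(message: str) -> str:
    lang = detect_language(message)
    english = message if lang == "en" else translate(message, lang, "en")
    response = f"answer to: {english}"   # classification + fulfillment happen here
    return response if lang == "en" else translate(response, "en", lang)

print(handle_message("what is my balance"))
```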

    3. Triniti related rules

    Rules
    Description
    Triniti Translate API Key It specifies the API key for Triniti Translate
    Triniti Translate API URL http:///translate/process
    Triniti Transliterate API Key It specifies the API key for Triniti Transliterate
    Triniti Transliterate API URL It specifies the domain of the Triniti Transliterate API

    alt_text

    D. Git Rules

    1. Configuration related rules

    Rules
    Description
    AI Data Sync source (Git or Zip) It specifies the sync source for AI data.

    2. Data related rules

    Rules
    Description
    Branch It specifies the branch for triniti data parser.
    Password It specifies the password for triniti data parser.
    URL It specifies the URL for triniti data parser.
    Username It specifies the username for triniti data parser.
    Workspace It specifies the workspace for triniti data parser.

    3. ZIP related rules

    Rules
    Description
    Zip Password It specifies the password for triniti data parser zip file.
    Zip URL for Triniti Data import It specifies the Zip URL for Triniti Data import.
    Zip Username It specifies the username for triniti data parser zip file.

    alt_text

    E. Knowledge Graph Rules

    1. Configuration related rules

    Rules
    Description
    Elastic Search Index for Knowledge Graph It specifies the Elastic Search index for the Knowledge Graph.
    Enable Knowledge Graph It specifies whether the Knowledge Graph lookup is enabled.
    Enable Knowledge Graph Lookup for FAQ Fallback It specifies if the Knowledge Graph lookup to be used as FAQ fallback.
    Enable Query Parser using Knowledge Graph It specifies if the Query Parser using Knowledge Graph is to be enabled.
    Knowledge Graph Beautification phrases You can add the knowledge graph beautification phrases in this field.
    Knowledge Graph Beautification phrases for Attributes You can add the knowledge graph beautification phrases for the attribute in this field.
    Knowledge Graph Response Beautification If enabled, Knowledge Graph suggestions will be phrased as near meaningful Questions.
    Maximum Knowledge Graph suggestions It specifies the maximum number of Knowledge Graph suggestions in the display.
    Maximum number of words for Knowledge Graph Specifies the maximum number of words for which the Knowledge Graph should be triggered.
    Strict Search Knowledge Graph Threshold Specifies threshold for matching irrelevant results. Higher the value stricter the search. 0 to switch off this check.
    Time in seconds to keep Suggestion and Offset in cache Time in seconds to keep Suggestion and Offset in the cache. This is for the Show More functionality.

    alt_text

    F. Self Learning Rules

    1. Self Learning Configuration related rules

    Rules
    Description
    Enable Self Learning 2.0 It will enable/disable the advanced self-learning.
    Number of rows to be created per XLSX workbook It specifies the number of rows to be created per XLSX workbook based on heap size, maximum being 1048576.
    Scroll search page size It will specify the scroll search page size.
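    The 1,048,576 cap mentioned above is the hard row limit of an XLSX worksheet, so exports larger than the configured rows-per-workbook value are split across multiple workbooks. The arithmetic is a simple ceiling division (function name is illustrative):

```python
import math

XLSX_MAX_ROWS = 1_048_576  # hard row limit of an XLSX worksheet

def workbooks_needed(total_rows: int, rows_per_workbook: int) -> int:
    """Number of XLSX workbooks an export of total_rows is split into."""
    rows_per_workbook = min(rows_per_workbook, XLSX_MAX_ROWS)  # cannot exceed the format limit
    return math.ceil(total_rows / rows_per_workbook)

print(workbooks_needed(2_500_000, 1_000_000))  # 3
print(workbooks_needed(500_000, 1_048_576))    # 1
```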

    alt_text

    G. Spotter Rules

    1. Spotter Configuration related rules

    Rules
    Description
    API Key It specifies the API key for Spotter.
    Enable masking of numeric values The Spotter will mask the numbers in the responses if you enable this rule.
    Enable Spotter derived KeyPhrases If enabled, Spotter derived KeyPhrases will be updated.
    Spotter Context Path It specifies the relative context path for the spotter.
    Spotter URL It specifies the endpoint of Spotter.
    Spotter worker process used only when loading after successful data training ---
    Use Spotter Bounce-To Response as Intent If enabled, Spotter Bounce-To intent will be used as the main Intent.
    User ID It specifies the USER ID for Spotter.

    alt_text

    H. Triniti Unified API v2 rules

    1. Unified API related rules

    Rules
    Description
    API Key It specifies the API key to access Unified API.
    API Secret It specifies the secret key to access Unified API.
    Context Path It specifies the Context Path of Unified API.
    Enable Context Handling For Products Prerequisite: Context handling is enabled. Context handling will work only for the specified products. Set the value to ALL to enable it for all products.
    Enable/Disable Compression It specifies if compression is enabled.
    Enable/Disable Context Handling It specifies if Context handling is enabled. Previous inputs will be passed to Triniti to provide context information
    Enable/Disable Debug It specifies if debug is enabled.
    Enable/Disable Discourse It specifies if the discourse is enabled.
    Enable/Disable Fragments It specifies if fragments are enabled.
    Enable/Disable Pragmatics It specifies if pragmatics are enabled.
    Enable/Disable Semantic Rules It specifies if semantic rules are enabled.
    Enable/Disable Similar Queries It specifies if similar queries are enabled.
    Endpoint URL It specifies the Unified API endpoint URL.
    Triniti Cloud Backend Domain URL ---
    Triniti Cloud Basic Auth Encoded Credential ---
    Unified API v2 process used only when loading after successful data training ---
    Unified API Version It specifies the version of Unified API.
    X-SESSION-ID It specifies X-SESSION-ID for Triniti.
    X-USER-ID It specifies X-USER-ID for Triniti.

    alt_text

    Import AI Rules

    You can also import AI rules as per your requirement by following these steps:

    • Goto your workspace
    • Click on 'Manage AI'
    • Select 'Manage Rules'
    • Click on 'Import'
    • Select the file (JSON file)
    • Click 'Yes' on popup. (Are you sure you want to overwrite?)
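    Import expects a JSON file such as the one produced by Export. The exact schema is not documented here, so as an illustration only, a minimal sanity check you might run on the file before overwriting the workspace rules could look like this (the structure is assumed, not Morfeus-specified):

```python
import json

def validate_rules_file(path: str) -> dict:
    """Load an exported AI-rules JSON file and make sure it parses to a
    JSON object before it is used to overwrite the workspace rules."""
    with open(path, encoding="utf-8") as fh:
        data = json.load(fh)
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object of rule settings")
    return data
```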

    alt_text

    Export AI Rules

    You can also export the AI rules by going through these steps:

    • Goto your workspace
    • Click on 'Manage AI'
    • Select 'Manage Rules'
    • Click on 'Export'

    A JSON file containing the complete AI rules configuration will be downloaded.

    alt_text

    NER

    NER (Named Entity Recognition) is used to map entities. You can upload a CSV file of entities by following these steps:

    1. Goto your workspace
    2. Navigate to 'Manage AI'
    3. Click on 'Setup NER'
    4. Click on 'Upload'
    5. Click on 'Browse'
    6. Select a CSV file of entities
    7. Click on Done
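    The expected CSV layout is not shown here; purely as an illustration, a file mapping entity values to an entity type could be prepared and checked like this before uploading (the column names are assumptions, not the documented format):

```python
import csv
import io

# Hypothetical entities CSV; the real column layout may differ.
sample = """entity_type,value,synonyms
CITY,Mumbai,Bombay
CITY,Bengaluru,Bangalore
"""

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    # every entity row should carry at least a type and a value
    assert row["entity_type"] and row["value"]
print(f"{len(rows)} entities ready to upload")
```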

    alt_text

    Training & Deployment

    To get a proper response from your bot, you will have to perform Train & Deploy. Before training, make sure that you have configured all of the following settings in your workspace rules:

    Set AI Rules

    1. General

    • Goto your workspace
    • Click on 'Manage AI'
    • Navigate to 'General'
    • Select 'Unified API v2' in the following fields:
      • AI Engine
      • Context Change Detector
      • Entity Extractor
      • Primary Classifier
    • Click on save

    alt_text

    2. Triniti

    • Goto your workspace
    • Click on 'Manage AI'
    • Navigate to 'Triniti'
    • Select Deployment Type as Cloud
    • Enter Triniti Manager URL as http://dev-manager.triniti.ai/v/1
    • Click on save

    alt_text

    3. Unified API V2

    • Goto your workspace
    • Click on 'Manage AI'
    • Navigate to 'Unified API V2'
    • Set Endpoint URL as https://router.triniti.ai
    • Set Unified API version as 2
    • Set Context Path as /v2
    • Click on save

    alt_text

    4. Configure Workspace Rules

    • Language & Country

      • Goto your workspace
      • Navigate to 'Configure Workspace'
      • Select Country & Language
      • Click on save
    • Security

      • Goto your workspace
      • Under 'Configure Workspace', click on 'Manage Rules'
      • Navigate to 'Security'
      • Set Access key ID
      • Set Secret Access Key
      • Set Bucket Region (Based on your selected country)
      • Click on save

    alt_text

    Generate & Train

    After setting all the rules mentioned above, follow these steps:

    1. Goto your workspace
    2. Navigate to 'Deploy'
    3. Click on 'AI Ops'
    4. Click on Generate
    5. Click on 'Train' (after the Generate step finishes)

    alt_text

    Manage Language Translation

    Morfeus supports internationalization (i18n), i.e., it handles languages such as Hindi, French, and Chinese in addition to English. The bot can answer in the user's language rather than replying only in English. You can check whether the translation is correct and the user is getting the proper response in their language, and manage it, by following these steps:

    Note: The Manage Language Translation will be enabled only for multilingual bots.

    • Goto your workspace
    • Click on 'Manage AI'
    • Click on 'Manage Language Translation'

    Translated Utterances: (In this section, you will see all the utterances that the bot translated and responded to.)

    • Select the language for which you want to check the translation from the language dropdown
    • Select the date range
    • You will get the list of 'Customer Utterance' entries along with the corresponding translated utterance and the bot response.
    • If you feel that the translation is not correct, you can edit those utterances by clicking on the 'Edit' icon
    • Enter the correct translation and click on the save button
    • The edited utterance will be added to the Updated Translated Utterances section with the status 'Untrained'

    Note: The updated utterances will be marked as Untrained, you will have to train those utterances after updating them to get the updated response.

    translated_utterances

    Updated Translated Utterances: (In this section you will see all the translated utterances which you updated by clicking the edit icon in the 'Translated Utterances' section.)

    • Select the language for which you want to check the translation from the language dropdown
    • Select the date range
    • You will get the list of 'Customer Utterance' entries along with the corresponding 'Translated Utterance' & 'Updated Utterance'.
    • If you want to edit the updated utterance, you can do so by clicking on the edit icon
    • If you want to delete the updated utterance, you can delete it by clicking on the delete icon and selecting 'Yes' from the popup.

    updated_utterances


    Managing Self Learning

    What is self learning

    • Manage Self-Learning is a feature that helps trace utterances which are ambiguous, unrelated, or unclassified across all sorts of data. These can also be filtered by Products, FAQ, or Small Talk within a specific date range.

    Why Self learning

    • As a human, you may speak and write in English, Spanish or Chinese. But a computer’s native language – known as machine code or machine language – is largely incomprehensible to most people. At your device’s lowest levels, communication occurs not with words but through millions of zeros and ones that produce logical actions.

    • As a result, there will be many utterances that are not classified by the AI engine. Going through each unclassified utterance and training it separately would be tedious. Self-learning comes into play when you want to categorize all these utterances. Once categorized, it becomes easy to retrieve utterances by filtering on category, channel, date, message type, and confidence range.

    • Every utterance that is audited falls under a certain category. Some of the categories are:

    • Feedback

    • Ontology

    • Profanity

    • Unclassified

    • Unsupported

    • Failed

    • Ambiguous

    • Self-learning reduces the manual intervention needed to classify utterances under these categories.

    Category Description
    Feedback After every answer, the user gets an option to give feedback on the bot's response. Feedback can be thumbs up or thumbs down, i.e., positive or negative. All feedback utterances given by users fall under this filter in self-learning.
    Ontology Ontology is a hierarchical mapping (tree structure) of the products and services an enterprise has to offer its customers. As AI classifiers are probabilistic by nature, they may not be suited for 2-3 word utterances. For such utterances you will get suggestions based on a knowledge graph. All utterances classified by the ontology graph are filtered into the Ontology category.
    Profanity User conversations sometimes quickly derail and become inappropriate, such as hurling abuse or passing rude and discourteous comments at individuals or bots. All such inappropriate utterances fall under the Profanity category.
    Unclassified Even with all the power of machine learning and natural language processing, some utterances will not be classified by the bot, for reasons such as insufficient training data. All utterances that are not classified by the bot fall under the Unclassified filter.
    Unsupported Users may send unsupported files or unsupported media. All such utterances with unsupported data fall under the Unsupported category.
    Failed FAQs or transactions may fail for various reasons, such as a wrong PIN or insufficient funds in the user's account. All such utterances are categorized under the Failed category.
    Ambiguous When an utterance is classified by the AI engine, a confidence score out of 100 is assigned to the result. Utterances classified with low confidence scores fall under the Ambiguous category.
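    The routing described in the table can be summarized: results with no classification go to Unclassified, and classified results with low confidence (scored out of 100) go to Ambiguous. A sketch of that decision, where the threshold value is an assumption for illustration:

```python
# Sketch of self-learning category routing based on the table above.
# The threshold of 70 is illustrative; confidence is scored out of 100.
from typing import Optional

AMBIGUOUS_THRESHOLD = 70

def categorize(intent: Optional[str], confidence: int) -> str:
    if intent is None:
        return "Unclassified"
    if confidence < AMBIGUOUS_THRESHOLD:
        return "Ambiguous"
    return "Classified"

print(categorize(None, 0))        # Unclassified
print(categorize("balance", 40))  # Ambiguous
print(categorize("balance", 95))  # Classified
```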

    A. Search By Data

    Search by data is nothing but checking the utterances of:

    • None
    • Intents
    • FAQs
    • Smalltalk

    B. Search By Type

    You can check the type of utterances like:

    • Feedback
    • Ambiguous
    • UnSupported
    • UnClassified
    • Ontology
    • Profanity
    • Unanswered

    C. Search By Channels

    You can search the data based on channels, that will show the utterances that are asked on the particular channel. Eg; WebApp, Facebook, Skype, WhatsApp, etc.

    D. Search By MessageType

    Using this filter you can filter out the responses based on the following message types

    • Live Agent (Interaction with live agent eg; FreshChat, LiveChat, etc.)
    • Post Back (Responses received from buttons eg; Similar Queries, Related FAQs, Button templates, etc.)
    • Text (The text responses which are asked by the user by entering their input on the bot input box)
    • Voice (The responses which are received from IoT devices eg; Google Assistant, Alexa, etc.)

    E. Search By Languages

    You can also filter out the utterance which is asked in different languages Eg; French, Arabic, Chinese, etc.

    F. Search By EventCodes

    The utterances could be filtered out based on event codes eg; Cognitive QnA, Fallback answered, Default messages, etc.

    G. Search By Utterance

    You can find out the utterances by entering the utterance in the search box.

    H. Search By Date

    You can filter the utterances of FAQs, Small Talk, and Banking by selecting a specific date range, in combination with the other filters such as Channels or Type.

    manage_selflearning

    I. Search By Sentiments

    The utterances could be filtered out based on sentiments eg; Positive, Negative or Neutral.

    sentiment-analysis

    sentiment-analysis

    Add To Training

    In Manage Self-Learning, we show all the utterances which users have asked the bot. If we feel that the bot has not responded with the correct answer, or if there are unanswered utterances, we can send those utterances back for training using this feature.

    To train the failed utterances/data we will have to follow these steps:

    • Goto the workspace
    • Goto Manage SelfLearning
    • Get the utterances based on various filters (ie; Date, Data, Type, Channels, Message type, Language, Segment, search text, etc.)
    • Once we get the utterances, please click on the edit icon of that particular utterance.

    add-to-train

    • On click of the edit icon we will get the 'Add To Training' popup.

    add-to-train

    • Please select the 'Data Category' as 'FAQ'
    • We can choose to add the selected utterance as 'New FAQ' or 'Existing FAQ'

      • Add to new FAQ: add-to-train-new
      • Add to existing FAQ: add-to-train-existing
    • We will get the selected utterance and the 'FAQ Category' and 'FAQ list' of the selected language/utterance language.

    • Once we add the utterance as 'New FAQ'/'Existing FAQ' we can save the changes and train the bot.

    Note:

    • Add the utterance as a new FAQ => It will redirect to the FAQs Screen where we can add the selected utterance as a fresh new FAQ with a proper response.
    • Add the utterance as existing FAQ => It will add the selected utterance as the variant for the selected FAQ from the dropdown list.

    Managing Knowledge graph

    Overview

    A knowledge graph is a hierarchical mapping (Tree Structure) of the products and services an enterprise has to offer for customers.

    AI Classifiers are probabilistic by nature, so they may not be best suited for short 2-3 word utterances. Knowledge Graph provides a deterministic alternative with configurable probing in case of ambiguity.

    This is a keyword-based user journey. Here user input is analyzed based on a pre-defined graph and the user is taken to a fulfillment.

    Design

    Knowledge Graph design constitutes of the below basic building blocks.

    Design Factor
    Description
    Products Defines unique business offering like products, goods or services Ex: Account.
    Product Types Defines the base classification for the defined business products. Ex: Savings, Current, Salary.
    Product Names Specifies the marketed name for the business offering. Ex: DBS Multiplier, OCBC 360
    Product Attribute Groups Association of prominent feature grouping of a product. Ex: Statements, Balances
    Product Attributes Features or characteristics to be associated for the defined products, types and names Ex: e-Statement, Eligibility, Benefits, Nominee
    Action Action is the effect which has to be executed on the configured Product/Service Ex: view, download, transfer, activate
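    The building blocks above form a hierarchy. An illustrative fragment using the table's own examples, expressed as a plain data structure (this is a sketch for intuition, not the stored format):

```python
# Illustrative knowledge-graph fragment built from the examples in the
# table above (Account product). A sketch, not the actual stored format.

knowledge_graph = {
    "Account": {                       # Product
        "types": ["Savings", "Current", "Salary"],        # Product Types
        "names": ["DBS Multiplier", "OCBC 360"],          # Product Names
        "attribute_groups": {                             # Attribute Groups
            "Statements": ["e-Statement"],                # Product Attributes
            "Balances": [],
        },
        "actions": ["view", "download", "transfer", "activate"],  # Actions
    }
}

# An utterance like "view account e-Statement" walks
# Action -> Product -> Attribute Group -> Attribute.
print("view" in knowledge_graph["Account"]["actions"])  # True
```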

    Retail Banking Knowledge Graph Design

    alt_text

    You can design a knowledge graph by following these steps:

    1. Goto your workspace
    2. Navigate to 'Manage AI'
    3. Click on 'Setup Knowledge graph'
    4. Click on 'Add' in the Design section
    5. Enter all the required details (Product, synonyms, product type, etc.)
    6. Expand the product (which you have added in last step)
    7. You can add more Product types, Names & Attribute groups.

    alt_text

    Importable Design Workbook

    Sample Knowledge Graph XLSX for the Retail Banking domain. This can be imported from the Knowledge Graph UI using the import function.

    Sample KnowledgeGraph Workbook

    File Structure

    Sheet Name
    Description
    Product Constitutes the definition of Products, Types, Names and Attributes.
    Product Fulfillment Defines the desired result the business user wants to take the customer to: Default, Messages, Templates, Intent, FAQs, Workflow.
    Product Synonyms Defines the stand-in replacements when a customer types a word in the bot. Ex: the Credit card product is synonymous with CC, visa, master, credit.
    Action Synonyms Defines the stand-in replacement words for the work to be done on the defined product or service. Ex: the view action is synonymous with show, get.

    Probing

    Probing is the process where user-provided phrases or information are analyzed and appropriate suggestions are listed by the Knowledge Graph, based on the product/service configurations and fulfillments provided by the business user. The result of probing is a set of next steps that guide the user with possible upcoming suggestions. Probing is done for enabled products only.
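    Probing can be pictured as a lookup over the configured synonyms and links, mirroring the "cc" example that follows. All data and names in this sketch are illustrative:

```python
# Minimal probing sketch: resolve a synonym to a product, then suggest
# its linked actions/attributes. Data is illustrative, mirroring the
# Credit card / Account examples below.

SYNONYMS = {"cc": "Credit card", "acc": "Account"}
LINKS = {
    "Credit card": ["apply", "activate", "replace"],   # linked actions
    "Account": ["payee", "biller", "balance"],         # linked attributes
}
MAX_SUGGESTIONS = 3  # cf. the "Maximum Knowledge Graph suggestions" rule

def probe(utterance: str) -> list:
    product = SYNONYMS.get(utterance.lower(), utterance)
    return LINKS.get(product, [])[:MAX_SUGGESTIONS]

print(probe("cc"))   # ['apply', 'activate', 'replace']
print(probe("acc"))  # ['payee', 'biller', 'balance']
```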

    alt_text

    1. Action & Product Attributes Probing:

    Assume that the business user has configured the Knowledge Graph for retail banking products like Credit card, Debit card, and Account. So, if the user types the synonym "cc" for Credit card, the bot suggests the linked actions and attributes.

    Ex:

    • If you have asked for cc, debit, then the bot will ask you for the actions like apply, activate, replace, etc. based on the configuration.
    • If you have asked for account, then the bot will show you the product attribute like payee, biller, balance, etc.

    alt_text

    2. Product Probing:

    When a user asks for an action that is linked to multiple products, the bot will suggest the linked products. Based on the suggestion selected by the user, further probing or fulfillment will be carried out.

    Ex: If you ask the bot to show, then the bot will give you the option to select like view your loan, view your card, view your account, etc.

    alt_text

    3. Product Type & Product Name Probing:

    When fulfillment is defined for Action -> Product -> Type -> Name and the user has provided either the action or the product, the bot will probe for the linked types followed by the associated names. Eg; If you ask the bot for acc, the bot will ask you to select from options like transfer account, apply for an account, view your account, etc.

    alt_text

    Rules

    These rules help you configure your workspace's knowledge graph, and also help the user interact with the bot and get a proper and appropriate response for the respective query.

    Rules
    Description
    Elastic Search Index for Knowledge Graph This index stores basic definition information which will aid for the discovery of the design blocks and helps in stemming/identifying the proper linguistics.
    Enable Knowledge Graph Enable/Disable Knowledge Graph for the bot
    Enable Knowledge Graph Lookup for FAQ Fallback Knowledge Graph lookup will be used as when no response for utterances classified as FAQ.
    Enable Query Parser using Knowledge Graph It specifies if the Query Parser using Knowledge Graph is to be enabled.
    Knowledge Graph Beautification phrases Suggestions will be beautified with the phrases configured. Ex: Do you want to view the, would you like to view the
    Knowledge Graph Beautification phrases for Attributes Suggestions will be beautified with the phrases configured for attributes Would you like to, Do you want to, Do you like to, Do you wish to
    Knowledge Graph Response Beautification Enable/Disable question phrase creation for suggestions
    Maximum Knowledge Graph suggestions It specifies the maximum number of Knowledge Graph suggestions shown in the bot.
    Maximum number of words for Knowledge Graph Specifies the maximum number of words for which Knowledge graph should be triggered by the bot.
    Strict Search Knowledge Graph Threshold Specifies threshold for matching irrelevant results. Higher the value stricter the search. 0 to switch off this check.
    Time in seconds to keep Suggestion and Offset in cache Time in seconds to keep Suggestion and Offset in the cache. This is for the Show More functionality.

    alt_text

    Templates

    1. Knowledge Graph Default Template

    You can add an ontology default template by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Template'
    3. Click on 'Add Card'
      • Enter the name as "ONTOLOGY_DEFAULT"
    4. Enter the required details (Name, Category, code, version, etc.)
    5. Click on 'Next'
    6. Configure the template as per your requirement
    7. Click on Save
    8. Click on the template
    9. Click on the Source
    10. Paste the following source code
    11. Click on save
    {{#if state.slots.ATTR}}
    <div class="panel panel-default button-div border-radius-top-20" data-element="true" data-title="You can find more information about{{#if state.slots.ATTR}} {{state.slots.ATTR}} for{{/if}}{{#if state.slots.PN}} {{state.slots.PN}}{{/if}}{{#if state.slots.PT}} {{state.slots.PT}}{{/if}}{{#if state.slots.P}} {{state.slots.P}}{{/if}} at Axisbank.com" data-subTitle="{{{subTitle}}}" data-image="{{image}}"> 
      {{else if state.slots.PTATTR}}
      <div class="panel panel-default button-div border-radius-top-20" data-element="true" data-title="You can find more information about{{#if state.slots.PTATTR}} {{state.slots.PTATTR}} for{{/if}}{{#if state.slots.PN}} {{state.slots.PN}}{{/if}}{{#if state.slots.PT}} {{state.slots.PT}}{{/if}}{{#if state.slots.P}} {{state.slots.P}}{{/if}} at Axisbank.com" data-subTitle="{{{subTitle}}}" data-image="{{image}}"> 
        {{else}}
        <div class="panel panel-default button-div border-radius-top-20" data-element="true" data-title="You can find more information about{{#if state.slots.PNATTR}} {{state.slots.PNATTR}} for{{/if}}{{#if state.slots.PN}} {{state.slots.PN}}{{/if}}{{#if state.slots.PT}} {{state.slots.PT}}{{/if}}{{#if state.slots.P}} {{state.slots.P}}{{/if}} at Axisbank.com" data-subTitle="{{{subTitle}}}" data-image="{{image}}"> 
          {{/if}}
          <div class="panel-heading card-image"> 
            <img src="{{image}}" class="img-responsive"> 
          </div> 
          <div class="panel-body"> 
            <div class="row"> 
              <div class="col-xs-12 buttondesign"> 
                <h5 class="uob-amex card-title" style="font-weight: bold;">{{title}} 
                </h5> 
                <p class="account-number">{{subTitle}} 
                </p> {{#if link}} 
                <p class="account-number">{{link}} 
                </p>{{/if}} 
              </div> 
            </div> 
          </div>  
          {{#if state.slots.ATTR}}
          <button type="button" class="btn btn-primary btn-sm btn-block" id="button1" data-button="true" data-title="Click Here" data-type="externalUrl" data-payload="https://www.axisbank.com/search-results?indexCatalogue=axissearch&searchQuery={{state.slots.A}} {{state.slots.ATTR}} {{state.slots.PN}} {{state.slots.PT}} {{state.slots.P}}">{{this}} 
          </button> 
          {{else if state.slots.PTATTR}}
          <button type="button" class="btn btn-primary btn-sm btn-block" id="button1" data-button="true" data-title="Click Here" data-type="externalUrl" data-payload="https://www.axisbank.com/search-results?indexCatalogue=axissearch&searchQuery={{state.slots.A}} {{state.slots.PTATTR}} {{state.slots.PN}} {{state.slots.PT}} {{state.slots.P}}">{{this}} 
          </button> 
          {{else}}
          <button type="button" class="btn btn-primary btn-sm btn-block" id="button1" data-button="true" data-title="Click Here" data-type="externalUrl" data-payload="https://www.axisbank.com/search-results?indexCatalogue=axissearch&searchQuery={{state.slots.A}} {{state.slots.PNATTR}} {{state.slots.PN}} {{state.slots.PT}} {{state.slots.P}}">{{this}} 
          </button> 
          {{/if}}
        </div>
    

    alt_text

    2. Knowledge Graph Suggestion Template:

    The ontology suggestion template will show the related suggestions to the user on the bot based on your configured template. You can add an ontology suggestion template by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Template'
    3. Click on 'Add Card'
      • Enter the name as "ONTOLOGY_SUGGESTION_TEMPLATE"
    4. Enter the required details (Name, Category, code, version, etc.)
    5. Click on 'Next'
    6. Configure the template as per your requirement
    7. Click on Save
    8. Click on the template
    9. Click on the Source
    10. Paste the following source code
    11. Click on save
    <div class="panel panel-default button-div border-radius-top-20" data-element="true" data-title="Did you mean?" data-subTitle="{{{subTitle}}}" data-image="{{image}}"> 
      <div class="panel-body"> 
        <div class="row"> 
          <div class="col-xs-12 buttondesign"> 
            <h5 class="uob-amex card-title" style="font-weight: bold;">Did you mean?
            </h5> 
          </div> 
        </div> 
      </div>
      {{#each this.suggestionTextBody}} 
      <button type="button" class="btn btn-primary btn-sm btn-block" id="button1" data-button="true" data-title="{{suggestionText}}?" data-button-life="OLO" data-type="postback" data-payload='{"type":"ONTOLOGY","data":{"MESSAGE":"{{MESSAGE}}","suggestion":"{{suggestion}}","type":"{{type}}", "INTENT":"{{INTENT}}" }}'>{{this}} {{MESSAGE}}
      </button> 
      {{/each}}
      {{#if this.showmore}}
      <button type="button" class="btn btn-primary btn-sm btn-block" id="button1" data-button="true" data-button-life="OLO" data-title="Show More" data-type="postback" data-payload='{"type":"ONTOLOGY","data":{"MESSAGE":"{{MESSAGE}}","suggestion":"showmore","type":"{{type}}","allowMultipleClicks":false }}'>Show More
      </button> 
      {{/if}}
    </div>
    

    alt_text

    Fulfillment

    Fulfillment is the process of completing the user's request with an execution. It is the end result configured by the business user for the combinations below. Fulfillments are generally defined per product.

    Fulfillment Combinations
    Action -> Product Type
    Action -> Product Type -> Product Name
    Action -> Product Type -> Product Attribute
    Action -> Product Type -> Product Name -> Product Attribute

    If no fulfillment is configured for any of the paths mentioned above, the default fulfillment is triggered. Natural ordering is used when listing the details for the customer in the bot.
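    The fallback behaviour above can be sketched as a most-specific-first lookup. This is a minimal illustration only, assuming fulfillments are stored as a dict keyed by path tuples; the function and key names are hypothetical, not the product's internal API:

    ```python
    # Hypothetical sketch: resolve the most specific configured fulfillment,
    # trying combinations from most to least specific, then falling back to DEFAULT.

    def resolve_fulfillment(fulfillments, action, ptype=None, pname=None, pattr=None):
        """Return the fulfillment for the most specific configured combination."""
        candidates = [
            (action, ptype, pname, pattr),   # Action -> Type -> Name -> Attribute
            (action, ptype, pattr),          # Action -> Type -> Attribute
            (action, ptype, pname),          # Action -> Type -> Name
            (action, ptype),                 # Action -> Type
        ]
        for key in candidates:
            # Only consider fully-populated paths that are actually configured.
            if None not in key and key in fulfillments:
                return fulfillments[key]
        return fulfillments.get("DEFAULT")

    fulfillments = {
        ("apply", "debit card"): "TEMPLATE:card_carousel",
        ("apply", "debit card", "annual fee"): "FAQ_ID:1042",
        "DEFAULT": "DEFAULT:web_search_redirect",
    }

    print(resolve_fulfillment(fulfillments, "apply", "debit card"))   # configured path
    print(resolve_fulfillment(fulfillments, "apply", "home loan"))    # falls back to DEFAULT
    ```

    The same shape applies to the other combinations: an unconfigured path simply falls through to the default fulfillment.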

    alt_text

    You can add a fulfillment by following these steps:

    1. Go to your workspace
    2. Navigate to 'Manage AI'
    3. Click on 'Setup Knowledge graph'
    4. Select Product in Fulfillment section
    5. Click on 'Add'
    6. Enter all the required details (Action, Type, Name, Attribute group, Attribute, Fulfillment type, fulfillment, etc.)
    7. Click on the save icon.

    Fulfillment Types

    1. Default Fulfillment Type

    If no fulfillment is configured for any of the paths of the product definition, the default fulfillment is triggered.

    Default fulfillment is useful when the business user has provisioned the entire product/services hierarchy and expects the bot to point to the business website based on the user's input. In this case, the DEFAULT fulfillment directs the user to the search results on the business website by providing a link.

    alt_text

    2. Messages Fulfillment Type

    If you want to show a message or error message to the user when they select an unsupported product, you can set an error message in the flow.

    To configure messages, please refer to Configure Messages

    alt_text

    3. FAQ_ID Fulfillment Type

    You can set an FAQ_ID as the fulfillment, so that when the user selects that option the bot responds with the answer mapped to that FAQ_ID.

    alt_text

    4. Intent Fulfillment Type

    You can trigger an intent as the fulfillment for the action.

    To configure intents, please refer to Configure Intents

    alt_text

    5. Templates Fulfillment Type

    If you want to show a template to the user for an action, select Template as the fulfillment type and configure the template, or reuse a predefined template if one exists.

    E.g., if a user asks to 'Apply for debit card', you can configure a carousel/list template to show the user various cards such as RuPay, MasterCard, Visa, etc.

    alt_text

    6. Workflow Fulfillment Type

    Workflow helps to define conversation journeys. The intent and entity might be enough information to identify the correct response, or the workflow might ask the user for more input that is needed to respond correctly.

    To configure workflows, please refer to Configure Workflow

    alt_text

    Action Synonyms

    The actions added in the Fulfillment section are listed here so that synonyms can be added for them. New actions cannot be defined here.

    alt_text

    • Based on your knowledge graph the bot will show the related utterance/query to the user so that the user can easily select the appropriate answer.

    alt_text

    General Guidelines

    • Don’t repeat the labels across product, type, name, attribute and Action
    • Don’t repeat synonyms, else that will introduce ambiguity
    • Product Attribute Group is only used for the ease of grouping related attributes for defining fulfillments
    • Default probing will proceed in the following order
      • Product -> Action
      • Product -> Action -> Types
      • Product -> Action -> Types -> Names
      • Product -> Action -> Attributes
      • Product -> Action -> Types -> Attributes
      • Product -> Action -> Types -> Names -> Attributes

    Import / Export

    If you want to reuse the same knowledge graph in another environment, you can export it; likewise, if you have an already-configured knowledge graph, you can import it into your workspace.

    You can download a sample knowledge graph from Manage AI -> Knowledge Graph -> Download Sample Knowledge Graph to understand the fields and values of a knowledge graph to import.

    sample_kg

    Importing Knowledge Graph

    You can import a knowledge graph by following these steps:

    1. Go to your workspace
    2. Navigate to 'Manage AI'
    3. Click on 'Knowledge Graph'
    4. Click on 'Import'
    5. Click 'Yes' on the popup (Are you sure you want to overwrite?)
    6. Select the xlsx file from your system (it should contain the Product, Product Type, Product Name, Product Attribute Group, Product Attribute, Fulfillment Type, Fulfillment, etc. columns)

    Exporting Knowledge Graph

    You can export a knowledge graph by following these steps:

    1. Go to your workspace
    2. Navigate to 'Manage AI'
    3. Click on 'Knowledge Graph'
    4. Click on 'Export'

    It will download an xlsx file containing the Product, Product Type, Product Name, Product Attribute Group, Product Attribute, Fulfillment Type, Fulfillment, etc. columns

    alt_text

    Deleting Knowledge Graph

    If you no longer need a knowledge graph, you can delete it by following these steps:

    1. Go to your workspace
    2. Navigate to 'Manage AI'
    3. Click on 'Knowledge Graph'
    4. Click on the Delete icon
    5. Or click on 'Delete All' (if you want to delete all the knowledge graphs)

    alt_Text

    Loading Knowledge Graph

    If you have configured the knowledge graph and for some reason it has not loaded, you can load it by following these steps:

    1. Go to your workspace
    2. Navigate to 'Manage AI'
    3. Click on 'Knowledge Graph'
    4. Click on 'Please Load'

    It will load the knowledge graph based on the configuration.

    alt_Text

    Scenarios

    1 - When the user utters a multi-word utterance and any of the words is not present in the ontology sheet, a message is shown for the matched words, as shown below.

    alt_Text

    There is a rule to make that message configurable.

    alt_Text

    2 - If the user utters a single-word utterance, the bot searches for an exact match; if no exact match is found, it looks for a partial match.

    Example: If the user utters 'charges' and there is no exact match for it, results such as 'interest charges', 'loan charges', and so on are fetched.

    3 - If the user utters multiple words, the bot looks for an exact match by applying an AND condition between the words; if the results are empty, it performs a split search, searching the words individually with an OR condition.
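    The three matching passes above can be sketched as follows. This is an illustrative approximation over a plain list of ontology labels (the actual engine's search is more involved):

    ```python
    # Illustrative sketch of the lookup order described above:
    # single word  -> exact match, then partial (substring) match
    # multi-word   -> AND search across all words, then per-word OR split search

    def search(entries, utterance):
        words = utterance.lower().split()
        if len(words) == 1:
            exact = [e for e in entries if e.lower() == words[0]]
            if exact:
                return exact
            # no exact match: fall back to partial match
            return [e for e in entries if words[0] in e.lower()]
        # multiple words: every word must match (AND condition) ...
        matched = [e for e in entries if all(w in e.lower() for w in words)]
        if matched:
            return matched
        # ... else split search: any single word may match (OR condition)
        return [e for e in entries if any(w in e.lower() for w in words)]

    entries = ["interest charges", "loan charges", "loan tenure"]
    print(search(entries, "charges"))        # partial matches for a single word
    print(search(entries, "loan charges"))   # AND match across both words
    ```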

    Beautification

    Implementation: Beautified suggestions are shown as

    Action + ProductName + ProductType + Product + Attribute
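    A minimal sketch of that ordering, assuming any part not captured from the utterance is simply skipped (the function name is illustrative):

    ```python
    # Sketch: compose the suggestion label in the documented order,
    # dropping any part that was not captured from the utterance.

    def beautify(action, product_name=None, product_type=None, product=None, attribute=None):
        parts = [action, product_name, product_type, product, attribute]
        return " ".join(p for p in parts if p)

    print(beautify("apply", product_type="debit card", attribute="charges"))
    # -> "apply debit card charges"
    ```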

    alt_Text

    Multiple templates

    Different templates can be assigned for different levels of probing.

    For example, if the user utters a product name, the product-level actions are shown as suggestions; in that case the action-specific template is rendered.

    If the suggestion-specific template does not exist, the default ontology template is rendered, as in previous versions.

    Template codes for different set of suggestions :

    Attributes : ONTOLOGY_SUGGESTION_TEMPLATE_ATTR

    Product type : ONTOLOGY_SUGGESTION_TEMPLATE_PT

    Product name: ONTOLOGY_SUGGESTION_TEMPLATE_PN

    Action : ONTOLOGY_SUGGESTION_TEMPLATE_A

    Default : ONTOLOGY_SUGGESTION_TEMPLATE
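    The template selection with fallback can be sketched as a simple lookup over the codes listed above (a minimal illustration, assuming the set of configured template codes is available; the function name is hypothetical):

    ```python
    # Sketch: pick the suggestion template code for a probing level,
    # falling back to the default ontology template (codes from the list above).

    TEMPLATE_CODES = {
        "attribute": "ONTOLOGY_SUGGESTION_TEMPLATE_ATTR",
        "product_type": "ONTOLOGY_SUGGESTION_TEMPLATE_PT",
        "product_name": "ONTOLOGY_SUGGESTION_TEMPLATE_PN",
        "action": "ONTOLOGY_SUGGESTION_TEMPLATE_A",
    }

    def template_for(level, configured):
        """Return the level-specific code if it is configured, else the default."""
        code = TEMPLATE_CODES.get(level)
        if code in configured:
            return code
        return "ONTOLOGY_SUGGESTION_TEMPLATE"

    configured = {"ONTOLOGY_SUGGESTION_TEMPLATE", "ONTOLOGY_SUGGESTION_TEMPLATE_A"}
    print(template_for("action", configured))      # level-specific template exists
    print(template_for("attribute", configured))   # falls back to the default
    ```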


    Manage Use Case

    In Manage Use Case you can add an AI or a Non-AI use case based on the requirement.

    AI Use Case: The AI Engine is used to classify the use case, intents, data, fulfillment, etc. to get the response.

    Non-AI Use Case: The AI Engine does not come into the picture to classify anything or to get the response; the response is produced by a non-AI use-case rule generated when the Non-AI use case is created.

    Add Use Case:

    We can Add a use case by following these steps:

    AI Use Case:

    • Go to your workspace
    • Go to Manage UseCase
    • Click on Add(+) icon

    add_ai_use_case_01

    • In the Definition tab, enter the use case name and description, and toggle on 'AI Enabled'

    add_ai_use_case_02

    • Click on Next (suggestions related to the function name you provided will be shown, or it will go to the next tab if no suggestions are available)
    • Select from the given suggestions (if any are provided)
    • In the Data tab, add the utterances/data related to the use case (Note: a minimum of 5 utterances is required, but for better classification and results a minimum of 20 utterances is recommended. You can map a word to an entity while typing the utterance)

    add_ai_use_case_03

    • Click on Next
    • In the Fulfillment tab, select the properties (i.e., Channel & Security)

    add_ai_use_case_04

    • Click on Save

    Non-AI Use Case:

    • Go to your workspace
    • Go to Manage UseCase
    • Click on Add(+) icon

    add_ai_use_case_01

    • In the Definition tab, enter the use case name and description, and toggle off 'AI Enabled'

    add_ai_use_case_05

    • Click on Next
    • In the Data tab, add the utterances/data related to the use case (Note: a minimum of 5 utterances is required, but for better classification and results a minimum of 20 utterances is recommended.)

    add_ai_use_case_05

    • Click on Next
    • In the Fulfillment tab, select the Channel & Security

    add_ai_use_case_07

    • Click on Save

    The use case will be created with the above name, data, and properties, and you will be redirected to the Manage Use Case page.

    add_ai_use_case_07

    After adding the use case, you can generate and train the added data and test it on the bot.

    Manage Use Case :

    After creating the use case, you can open it to edit/update it by following these steps:

    • Go to your workspace
    • Go to Manage Use Case
    • Click on any use case; you will see the use case menu below

    Definition

    In the use case definition, you configure the use case to handle all queries related to the intent set in the definition; it also provides a provision to handle unsupported functionalities.

    add_ai_use_case_08

    Supported Functionalities :

    1. Manage
    2. Add
    3. Sync
    4. Import(csv)
    5. Export(csv)
    6. Import Zip
    7. Export Zip

    alt_text

    Manage

    1. On clicking Manage, the distinct Supported Products from the available data set (managed in the functions.csv file) are shown. alt_text
    2. Based on the product selection, the supported use cases are shown, categorised by the function type mentioned in the CSV. Select a function name; it has two options, explained below, plus one display that depends on the fulfilment type selection, as shown in the image (Fulfillment).

    alt_text

    1. Supported: Clicking this enables the use case's availability in conversation.
    2. Fulfilment Type: contains three options:

    1. Messages: By default, the unmapped_intent message code is used, including parameters from the CSV data named functionName, productType, functionCategory, functionDescription, functionCode, unifiedApiResponse, and functionType. You can change the message code if required by clicking on the message shown in the fulfilment section when 'Message' is the current dropdown value, or by changing the dropdown to 'Message', as shown.

    alt_text

    2. Templates: By default, the UnSupportedFulFillment template code is used, with a payload containing parameters like functionName, productType, functionCategory, functionDescription, functionCode, unifiedApiResponse, and functionType, along with the intent and feature. You can choose any existing template to customise, as shown.

    alt_text

    3. FAQ: By default, the utterance itself is sent to the FAQ API (KBS_CLIENT); you can also choose a specific FAQ ID for a particular use case, as shown.

    alt_text

    Once the selection is done, the data is saved in the functionmstr table and only the supported functions are shown on the Functions landing page.

    alt_text

    1. The Sync button shown in the image can be used to fetch the latest functions added to functions.csv after the initial setup.
    2. Before testing in the bot, go to Configure Workspace > General Rules, search for the text (Enable to support managing use cases for not supported products), and enable it (it is disabled by default). You can then test the managed functions in the bot.

    Sync

    Sync updates the existing records with the latest CSV available in the managed config resource.

    Import

    Importing a CSV file updates the existing records so that they are available when managing functionalities. A sample import CSV is as follows:

    Function Code,Function Name,Function Category,Function Type,Product Type,Root Product,Action,Attributes,Conversation Type,intent,Supported,Template Code,FAQ Id,Function Description,Status,Auth User,Security Level,Audit Level,Supported Channels
    account_inquiry__savingsaccount,Account Enquiry,Balance Inquiry,Enquiry,Savings Account,account,,,M,qry-accountenquiry,false,,,Retrieving and showing balances for Savings Account is not supported,Y,0,0,0,

    Note: The CSV managed in the product (admin config) is different from the one to import here.

    Export

    Exporting provides the data in CSV format, detailing what functionalities are managed and how.

    Import Zip

    Importing a zip imports a functionality along with its available resources, such as messages, templates, and hooks.

    Export Zip

    Exporting a zip exports functionalities including resources such as messages, templates, and hooks. The export-zip feature is also available for an individual functionality.

    Fields

    • Below are the fields to be configured while creating a function record

    Name : Name of the function; it should be unique

    Description : Describes what the function is used for and its purpose

    Category : An important field used to bind together all the other modules, like messages, rules, templates, and data

    • Once the record is added, configure the fields below:

    Intents : Configure the applicable intents for this function, like txn-login, etc.

    Functionality Type : Defines whether this is an inquiry- or transaction-type module

    Channels : Displays all the configured channels for this workspace. Enable or disable them based on the requirement.

    Security > Realm : Configures whether this function can be invoked before or after login

    alt_text

    A function comprises the following elements:

    Definition

    In the definition, there are four sections:

    1. Description
    2. Channels
    3. Properties
    4. Security

    add_ai_use_case_08

    Description

    Along with the basic details, you need to configure the intent for which you want this function to be triggered. Multiple functions can be configured with the same intent, but while defining the fulfillment, you need to make sure the rules or channels are configured in such a way that there is no ambiguity.

    Channels

    In the Channels section, all the channels configured for the bot are displayed. Select all the channels for which the function is to be enabled.

    Properties

    Each function can be either a transaction or an enquiry. An enquiry is mostly a single-step flow, while a transaction is a multi-step flow. Another criterion for deciding between the two is whether a user is allowed to enter that flow within another flow. A user may ask any enquiry within a transaction, but can have only one transaction at a time.

    Security

    There are three options available to define the security for your function: One Factor Authentication (1FA), Two Factor Authentication (2FA), or Public. As the names suggest, for 1FA and 2FA the user needs to be authenticated before proceeding with the flow, e.g. for a balance enquiry. Public functions are accessible by all, like an enquiry for the foreign exchange rate. For 2FA, based on the configured policy, either the user is asked to complete 2FA at the start of the flow, or the integration needs to send a flag to ask for 2FA in the middle of the flow.

    Data

    Data is all the utterances mapped to the use case. It shows all the utterances related to the intent selected in the definition of the use case. E.g., if you have added a use case for book_ticket, the Data section will contain all the utterances related to book_ticket.

    Data is categorised into two sections:

    Base - Preloaded data loaded by default when the workspace is created, if any

    Custom - User-added utterances

    add_ai_use_case_09

    Dialog

    Based on the intent configured in the definition, the dialog shows all the entities related to that intent. You can add more dialogs. A query or transaction intent can have more than one dialog turn to complete the user's request. One dialog turn involves providing the following three pieces of information:

    • Expected entity
    • Bot says : Expected questions from the bot to the user
    • User says : The way user replies

    To configure dialogs, please refer to Configure Dialogs

    alt_text

    Fulfillment

    Fulfillment is the process of completing the user's request. There are five ways to define fulfillment.

    • Webhooks
    • Workflow
    • Template
    • Camel Route
    • Java Bean

    Refer to Managing Fulfillment.

    Camel Routes

    Integration Editor

    The Integration Editor lets the user define integration routes through a user interface that generates Camel route XML DSL. In retail banking, the existing routes invoke integration routes, passing the domain-specific request objects as POJOs (Plain Old Java Objects). The request object is generally the canonical model of our BizApp (RB, Trading, etc.). The user needs to get the required information from the request POJO and map it to the client API POJO, which in turn is converted to XML or JSON based on the API specification; the response is received as either JSON or XML. The output of the integration route is expected to be the domain response POJO with the required information populated. The API object can be manually defined, or dynamically introspected if the API supports open standards like the OpenAPI 2.0 & 3.0 specs, SOAP, or an HTTP schema.

    If you want to add camel routes in your function then you can add by following these steps:

    1. Go to your workspace
    2. Navigate to 'Manage Functions'
    3. Click on the 'Edit' icon of the function (for which you want to add a Camel route)
    4. Navigate to 'Integration'
    5. Choose 'Config'; existing routes will be shown (if any)
    6. Choose the 'Domain' under Business Application
    7. Choose the 'Function'
    8. Click on the 'Load' button (it will fetch the API & data and map them to the parameters shown on clicking Load)
    9. Under Client API, enter the 'route id' in the 'API name' field
    10. Choose the specification type (HTTP specification, Swagger specification, SOAP specification, etc.)
    11. Configure the settings of the selected specification type (URL, HTTP method, Content type, Request class, Response class, logging message, Next route, required actions, Header parameters, Property parameters, etc.)
    12. Click on 'Load'
    13. Map the Business Application parameters to the Client API parameters as per your requirement
    14. Click on 'Save'

    alt_text

    Templates

    Templates are a combination of components like images, text, buttons, etc. They give the user a better look and feel than a text-only response.

    E.g., if you want to greet your user with a greeting/welcome message, you can set a text-only welcome message such as "Hey there! How may I assist you?", but adding images/cards gives a better look and feel and helps the user spend more time on your bot.

    To configure templates please refer Configure Templates

    alt_text

    Messages

    You can configure error messages or default messages that will be shown when the bot does not find a response.

    Example: TRANSFER_ERROR_MESSAGE: Balance not available, please enter an amount less than the available balance.

    You configure a message code (TRANSFER_ERROR_MESSAGE) and the response the bot sends to the user. If the user tries to send an amount greater than their actual balance, the bot sends this error message.
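    Conceptually, the message-code lookup behaves like a keyed dictionary with a fallback default. This is a hypothetical sketch, not the product's internal API:

    ```python
    # Sketch: resolve a configured message code, falling back to a default message.

    MESSAGES = {
        "TRANSFER_ERROR_MESSAGE": (
            "Balance not available, please enter an amount "
            "less than the available balance."
        ),
    }

    def message_for(code, default="Sorry, something went wrong."):
        """Return the configured message for the code, or the default message."""
        return MESSAGES.get(code, default)

    print(message_for("TRANSFER_ERROR_MESSAGE"))
    print(message_for("UNKNOWN_CODE"))  # falls back to the default message
    ```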

    To configure messages, please refer Configure messages

    alt_text


    Managing Rules

    Rules can make your workspace more functional and user-friendly for the users interacting with it. You can manage rules for your workspace as well as for the AI engine to make your bot respond better.

    System Rules

    System rules make the AI engine more effective at working with your bot: analyzing, processing, and filtering messages, and sending responses to the user through your bot.

    You can manage the following rules for your bot:

    A. General Rules

    1. AI Engines related rules

    | Rule | Description |
    |------|-------------|
    | Ai Engine | It specifies the type of message processor. |
    | Auto Suggest Corpus Variants | It specifies whether all or main variants are to be added to the corpus. |
    | Context Change Detector | It specifies the AI engine to be used to detect context change. |
    | Elastic Search Index | You can set the Elastic Search index for auto-complete. |
    | Enable Confirmation Entities Handling | It specifies if Confirmation Entities Handling is to be enabled. |
    | Enable Elastic Search as Fallback | It specifies if the Elastic Search API call is to be enabled. |
    | Enable External FAQ Fallback | It specifies if the external FAQ fallback is to be enabled. |
    | Enable FAQ response local lookup | It specifies whether to check the FAQ response locally before hitting KBS. |
    | Enable fuzzy search | It enables postback handling as text input. |
    | Enable Local Lookup for Classification | It specifies if the local lookup of classification is enabled. |
    | Enable logging for Fallback | It enables/disables logging for fallback. |
    | Enable message translation from other languages to English before processing | To support other languages, the translator translates the message to English and the response back to the other language. To enable this, also set Language Detector. |
    | Enable Preprocessor | It specifies if the Preprocessor algorithm is to be enabled. |
    | Enable Query Parser | It specifies if the Query Parser algorithm is to be enabled. |
    | Enable response translation from English to other languages after processing | It translates the response back to the original language from English. Requires message translation to also be enabled. |
    | Enable Sentiment Analysis | It specifies if sentiment analysis is to be enabled. |
    | Enable Split Query | It specifies if the Split Query algorithm is to be enabled to handle compound queries. |
    | Enable Split Query For FAQs | It specifies if the Split Query algorithm is to be enabled to handle compound queries for FAQs. |
    | Enable tagged FAQ lookup | It specifies if the tagged FAQ lookup is to be enabled. |
    | Enable Triniti FAQ Web Search | It specifies if the Triniti FAQ web search API call is to be enabled. |
    | Enable Web Content Elastic Search as Fallback | It specifies if the Elastic Search API call is to be enabled for the fallback based on web content. |
    | Enable/Disable FAQ response lookup from Morfeus database | If enabled, the FAQ response is picked from the controller database instead of directly using the AI Engine FAQ answer. |
    | Entity Extractor | It specifies the NLP engine to be used. |
    | Fuzzy search for FAQ web search cutoff | It returns the answer directly if a web search question fuzzy-matches the user utterance. |
    | If ES API calls should use system proxy | It specifies if ES API calls should use the system proxy. |
    | KBS MLT Cards Display Limit | It specifies the number of KBS MLT cards to be displayed when enabled. |
    | KBS MLT Cards Fuzzy Search Score Minimum Threshold | It specifies the minimum Fuzzy Search Score threshold for KBS MLT cards. |
    | Language Detector | It specifies the AI engine to be used to detect the language. |
    | Language Translator | It specifies the AI engine to be used to translate the message. |
    | Message Converter | It specifies whether to translate or transliterate. |
    | Mode to handle manual chat | It defines how to handle or reply to manual chat. |
    | Number of suggestions to show in auto-complete | It shows the number of suggestions that are set here. |
    | Primary Classifier | It specifies the primary classifier engine to be used. |
    | Secondary Classifier | It specifies the secondary classifier engine to be used. |
    | Show KBS MLT Cards | It shows cards in case of an ambiguous response from KBS. |
    | Show Related FAQ Queries | It shows related FAQ queries if the FAQ answer is found. |
    | Show Related FAQ Queries after Fallback | It shows related FAQ queries if the FAQ answer is found using any fallback. |
    | Show Related FAQ Queries After FAQs with CTA | It shows related queries also for FAQs that have click-to-action buttons. |
    | Smalltalk/FAQ Handler | It specifies the AI engine to be used to handle Smalltalk and FAQs. |
    | Smart FAQ ambiguity handling | It uses a product from context to handle ambiguity in the KBS response. |
    | Solution to use for suggestions | It specifies the solution to be used to suggest FAQs. |
    | Web Search Elastic Search Index | Elastic Search index for web search. |

    2. Configuration related rules

    | Rule | Description |
    |------|-------------|
    | Base Data Version | It specifies the version of the base data. |
    | FAQ/Non-FAQ ES Index | It specifies the Elastic Search index for FAQ/Non-FAQ. |
    | Handle Unmapped/Unsupported Intents as FAQs | If enabled, intents whose fulfillments are not configured are fulfilled as FAQs. |
    | Secondary Language Bot | Bot id of the secondary-language bot. |
    | Smalltalk Paraphrasing Support | If enabled, Smalltalk paraphrasing support will be associated. |
    | Synonyms support for AutoSuggest FAQ | If enabled, keyphrases and acronyms will be added as synonyms for the AutoSuggest Elastic Search index. |

    3. Elastic Search Index related rules

    | Rule | Description |
    |------|-------------|
    | Enable Customer Segment to filter FAQs in ElasticSearch | If enabled, FAQs are filtered for a customer segment. |

    4. Placeholders related rules

    | Rule | Description |
    |------|-------------|
    | Break row placeholder | Placeholder for a break row in FAQ responses. |

    5. Threshold related rules

    | Rule | Description |
    |------|-------------|
    | Alternative Matches | It specifies the number of similar actions to show the user when their primary question was not confidently identified by the classifier. |
    | Customer Support Fallback Threshold | It specifies the number of failed AI conversations before the system falls back to a customer support agent. |
    | Customer Support Fallback Time Interval (in minutes) | It specifies the time interval in which the fallback threshold is measured, e.g. 2 failed attempts in 5 minutes. |
    | Enable Grain Type Verification | It enables verification of grain type for FAQs. The top candidate's grain type is compared with the user query's grain type. |
    | FAQ Core Labels | Core labels included when bouncing the request to an intent. |
    | Max Adversity Score For FAQ Web Search | It specifies the adversity score above which FAQ web search will not happen. |
    | Max Adversity Score For Five And Above Word Message | It specifies the adversity score above which the classified intent is invalid. |
    | Max Adversity Score For Four Word Message | It specifies the adversity score above which the classified intent is invalid. |
    | Max Adversity Score For One Word Message | It specifies the adversity score above which the classified intent is invalid. |
    | Max Adversity Score For Three Word Message | It specifies the adversity score above which the classified intent is invalid. |
    | Max Adversity Score For Two Word Message | It specifies the adversity score above which the classified intent is invalid. |
    | Max Confidence | It specifies the confidence percentage that defines an unambiguous (confident) intent detection for an input conversation. |
    | Min Confidence | It specifies the confidence percentage threshold below which the classifier can't confidently predict the intent of a conversational input. Conversations with intent confidence between the max and min levels are considered ambiguous, and the top 3 intents are displayed to the user for selection. |
    | Min FAQ Confidence | It specifies the confidence percentage threshold below which the FAQ answer is considered invalid. |
    | Min Smalltalk Confidence | It specifies the confidence percentage threshold below which the Smalltalk answer is considered invalid. |
    | Minimum Threshold to consider Elastic Search result | Minimum threshold to consider an Elastic Search result for FAQs. |
    | Minimum Threshold to consider Elastic Search result for Web Search | Minimum threshold to consider an Elastic Search result for the web search fallback. |
    | Minimum Threshold to consider Triniti FAQ Web Search result | Minimum threshold to consider a Triniti FAQ web search result for FAQs. |
    | Minimum Threshold to Show Suggestion | Minimum threshold to include the question in suggestions. |
    | Negative sentiment threshold | A score above this threshold marks the message as negative. |
    | Postback fuzzy search cutoff | Adjusts the score for postback search as text input. |
    | Retry FAQ | Retries answering the FAQ if confidence is low. |
    | Retry Smalltalk | Retries answering Smalltalk if confidence is low. |
    | Stop Words | Words to exclude when counting the number of tokens in the message. |
    | Web search for FAQ | Fallback to web search if the FAQ is not able to answer. |
    | Web search for unclassified utterances | Web search for unclassified utterances if the product exists. |
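    The interplay between Max Confidence and Min Confidence described above can be sketched as follows. The threshold values here are illustrative only; the actual values are workspace configuration:

    ```python
    # Sketch: how Max/Min Confidence bound the intent-detection outcome.

    MAX_CONFIDENCE = 0.85   # at or above: unambiguous (confident) intent detection
    MIN_CONFIDENCE = 0.40   # below: classifier cannot confidently predict the intent

    def classify_outcome(confidence):
        if confidence >= MAX_CONFIDENCE:
            return "confident"       # execute the top intent directly
        if confidence >= MIN_CONFIDENCE:
            return "ambiguous"       # show the top 3 intents for user selection
        return "unclassified"        # fall back (e.g. FAQ lookup / web search)

    print(classify_outcome(0.92))  # confident
    print(classify_outcome(0.60))  # ambiguous
    print(classify_outcome(0.20))  # unclassified
    ```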

    alt_text

    B. Triniti Rules

    1. Configuration related rules

    | Rule | Description |
    |------|-------------|
    | API Key | It specifies the API key for Triniti. |
    | Domains | It specifies the domains of the Triniti AI engines (specified in a comma-separated format for clustered deployments). |
    | Enable Cache | It specifies if Triniti API calls are to be cached. |
    | If Triniti API calls should use system proxy | It specifies if Triniti API calls should use the system proxy. |
    | NER API Key | It specifies the API key for Triniti NER. |
    | NER Domain | It specifies the domain of the Triniti NER. |
    | Relative URL Context Path | It specifies the relative context path. |
    | Triniti Paraphrase URL | It specifies the URL for paraphrasing. |
    | Triniti Translate API Key | It specifies the API key for Triniti Translate. |
    | Triniti Translate API URL | It specifies the domain of the Triniti Translate API. |
    | Triniti Transliterate API Key | It specifies the API key for Triniti Transliterate. |
    | Triniti Transliterate API URL | It specifies the domain of the Triniti Transliterate API. |
    | Triniti worker process | Used only when loading after successful data training. |

    2. Deployment related rules

    Rules
    Description
    Deployment Mode It specifies the deployment mode of Triniti.
    Deployment Type It specifies if Triniti API calls are to be cached.
    Elastic Search Index used for loading Primary Classifier configuration for Quick Training It Quick Train Elastic Search index used for Intent classification.
    Trainer URL It specifies the Triniti instance URL which is used for training in a cluster deployment.
    Training Data Format It specifies the data format used to train the instance for Triniti version greater than 1.x
    Triniti API Key It specifies the X-API-KEY value to be embedded in all cloud Triniti calls.
    Triniti Manager URL It specifies the URL of the Triniti manager for cloud deployment.

    alt_text

    C. Translation Rules

    1. AI Engine related rules

    Rules
    Description
    Enable message translation for NER from other languages to English before processing To support other languages, the translator translates the message to English before NER processing and the response back into the user's language. To enable this, also set Language Detector.
    Enable message translation for Primary Classifier from other languages to English before processing To support other languages, the translator translates the message to English before classification and the response back into the user's language. To enable this, also set Language Detector.

    2. Configuration related rules

    Rules
    Description
    Enable message translation from other language to English before processing To support other languages, the translator translates the message to English and the response back into the original language. To enable this, also set Language Detector.
    Enable response translation from English to other language after processing It translates the response from English back into the original language. Message translation must also be enabled.
    Google Service Account Credentials It specifies the Service Account JSON Credentials of Google Cloud API for translation.
    Language Detector It specifies the AI engine to be used to detect language.
    Language Translator It specifies the AI engine to be used to translate message.
    Message Converter It specifies whether to translate or transliterate.
    Secondary Language Bot Bot id of the Secondary Language Bot
    Translate API Max utterances It specifies the Max Number of utterances to be translated by configured Translate API.
    Yandex API Key It specifies the API key for Yandex

    3. Triniti related rules

    Rules
    Description
    Triniti Translate API Key It specifies the API key for Triniti Translate
    Triniti Translate API URL http:///translate/process
    Triniti Transliterate API Key It specifies the API key for Triniti Transliterate
    Triniti Transliterate API URL It specifies the domain of the Triniti Transliterate API

    alt_text

    D. Git Rules

    1. Configuration related rules

    Rules
    Description
    AI Data Sync source (Git or Zip) It specifies the Sync source for AI data.

    2. Data related rules

    Rules
    Description
    Branch It specifies the branch for triniti data parser.
    Password It specifies the password for triniti data parser.
    URL It specifies the URL for triniti data parser.
    Username It specifies the username for triniti data parser.
    Workspace It specifies the workspace for triniti data parser.

    3. ZIP related rules

    Rules
    Description
    Zip Password It specifies the password for triniti data parser zip file.
    Zip URL for Triniti Data import It specifies the Zip URL for Triniti Data import.
    Zip Username It specifies the username for triniti data parser zip file.

    alt_text

    E. Knowledge Graph Rules

    1. Configuration related rules

    Rules
    Description
    Elastic Search Index for Knowledge Graph It shows Elastic Search Index for Knowledge Graph.
    Enable Knowledge Graph It specifies whether the Knowledge Graph lookup is enabled.
    Enable Knowledge Graph Lookup for FAQ Fallback It specifies if the Knowledge Graph lookup to be used as FAQ fallback.
    Enable Query Parser using Knowledge Graph It specifies if the Query Parser using Knowledge Graph is to be enabled.
    Knowledge Graph Beautification phrases You can add the knowledge graph beautification phrases in this field.
    Knowledge Graph Beautification phrases for Attributes You can add the knowledge graph beautification phrases for the attribute in this field.
    Knowledge Graph Response Beautification If enabled, Knowledge Graph suggestions will be phrased as near meaningful Questions.
    Maximum Knowledge Graph suggestions It specifies the maximum number of Knowledge Graph suggestions in the display.
    Maximum number of words for Knowledge Graph It specifies the maximum number of words for the Knowledge Graph.
    Strict Search Knowledge Graph Threshold It specifies the threshold for filtering out irrelevant results. The higher the value, the stricter the search; set 0 to switch off this check.
    Time in seconds to keep Suggestion and Offset in cache It specifies the time in seconds to keep Suggestion and Offset in the cache; this is for the Show More functionality.

    alt_text

    F. Self Learning Rules

    1. Self Learning Configuration related rules

    Rules
    Description
    Enable Self Learning 2.0 It will enable/disable the advanced self-learning.
    Number of rows to be created per XLSX workbook It specifies the number of rows to be created per XLSX workbook based on heap size, maximum being 1048576.
    Scroll search page size It will specify the scroll search page size.
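
As an illustration of the rows-per-workbook rule above: XLSX caps a worksheet at 1,048,576 rows, so a large export must be split across multiple workbooks. A minimal sketch of that arithmetic (the function name and sample numbers are hypothetical, not part of the product):

```python
import math

# XLSX hard limit on rows per worksheet; the rule above lets you set a
# lower per-workbook value to fit your heap size.
XLSX_MAX_ROWS = 1_048_576

def workbooks_needed(total_rows: int, rows_per_workbook: int) -> int:
    """How many XLSX workbooks a self-learning export would span."""
    capped = min(rows_per_workbook, XLSX_MAX_ROWS)  # never exceed the format limit
    return max(1, math.ceil(total_rows / capped))

# e.g. 2.5M rows at 1M rows per workbook -> 3 workbooks
print(workbooks_needed(2_500_000, 1_000_000))
```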

    alt_text

    G. Spotter Rules

    1. Spotter Configuration related rules

    Rules
    Description
    API Key It specifies the API key for Spotter.
    Enable masking of numeric values The Spotter will mask the numbers in the responses if you enable this rule.
    Enable Spotter derived KeyPhrases If enabled, Spotter derived KeyPhrases will be updated.
    Spotter Context Path It specifies the relative context path for the spotter.
    Spotter URL It specifies the endpoint of Spotter.
    Spotter worker process used only when loading after successful data training
    Use Spotter Bounce-To Response as Intent If enabled, Spotter Bounce-To intent will be used as the main Intent.
    User ID It specifies the USER ID for Spotter.

    alt_text

    H. Triniti Unified API v2 rules

    1. Unified API related rules

    Rules
    Description
    API Key It specifies the API key to access Unified API.
    API Secret It specifies the secret key to access Unified API.
    Context Path It specifies the Context Path of Unified API.
    Enable Context Handling For Products Prerequisite: Context Handling is enabled. Context handling will work only for the specified products; set the value to ALL to enable it for all products.
    Enable/Disable Compression It specifies if compression is enabled.
    Enable/Disable Context Handling It specifies if Context handling is enabled. Previous inputs will be passed to Triniti to provide context information
    Enable/Disable Debug It specifies if debug is enabled.
    Enable/Disable Discourse It specifies if the discourse is enabled.
    Enable/Disable Fragments It specifies if fragments are enabled.
    Enable/Disable Pragmatics It specifies if pragmatics are enabled.
    Enable/Disable Semantic Rules It specifies if semantic rules are enabled.
    Enable/Disable Similar Queries It specifies if similar queries are enabled.
    Endpoint URL It specifies the Unified API endpoint URL.
    Triniti Cloud Backend Domain URL ---
    Triniti Cloud Basic Auth Encoded Credential ---
    Unified API v2 process used only when loading after successful data training ---
    Unified API Version It specifies the version of Unified API.
    X-SESSION-ID It specifies X-SESSION-ID for Triniti.
    X-USER-ID It specifies X-USER-ID for Triniti.

    alt_text

    Import AI Rules

    You can import the AI rules as per your requirement by following these steps:

    • Go to your workspace
    • Click on 'Manage AI'
    • Select 'Manage Rules'
    • Click on 'Import'
    • Select the file (JSON file)

    alt_text

    Importing System Rules

    You can import the system rules as per your requirement by following these steps:

    • Go to your workspace
    • Click on 'Manage AI'
    • Select 'Manage Rules'
    • Click on 'Import'
    • Select the file (JSON file)
    • Click on 'Yes' on the popup. (Are you sure you want to overwrite?)

    alt_text

    Exporting System Rules

    You can also export the system rules by going through these steps:

    • Go to your workspace
    • Click on 'Manage AI'
    • Select 'Manage Rules'
    • Click on 'Export'

    A JSON file containing the rules will be downloaded.
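
The exact schema of the exported file depends on your deployment, but a rule entry might look like the following sketch (the field names are illustrative assumptions, not a documented format):

```json
[
  {
    "name": "Enable Cache",
    "category": "Configuration",
    "value": "true",
    "description": "It specifies if Triniti API calls are to be cached."
  }
]
```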

    alt_text

    Workspace Rules

    You can also configure your workspace by managing the workspace rules, which help you manage the workspace more effectively. You can control your workspace's functionality by configuring these business rules.

    A. General Rules

    • Chat History Configuration related rules
    1. Cache size for chat history, Pagination mandatory for honoring
    2. Chunk size for chat history calls
    3. Enable to activate chat history
    4. Enable to activate pagination of chat history
    5. Enable/ Disable inclusion of single init response in Chat History
    6. Extension of expiry timer for history cache in minutes
    7. Select the mode of chat history
    8. Self destruct timer for history cache in minutes
    • Configuration related rules
    1. Audit Anonymous Users
    2. Channel Based Product Rules
    3. Db update for the only login
    4. Developer Mode
    5. Display Welcome Message
    6. Domain name of One Portal
    7. Enable IP address audit for request
    8. IP Address Header name
    9. Languages
    10. Limit Audit Size In Database
    11. Mail Account Password
    12. Mail Account Username
    13. Mail Server Host
    14. Mail Server Port
    15. Post Login Custom Header
    16. Remote IP Address Header name
    17. Reports Export Path
    18. Support post login action
    19. User Social Profile Refresh Frequency
    • Elastic Search Configuration related rules
    1. Cluster support for Elastic Search
    2. Elastic Search URL
    3. Password For Elastic Search
    4. Username For Elastic Search
    • Image related rules
    1. Image View Height
    2. Image View Width
    • Multilingual Configuration related rules
    1. Enable/ Disable multilingual data management
    • OAuth Config related rules
    1. Login form URL for IOT channels v1
    2. Success form URL for IOT channels v1
    • OAuth2 Proxy Config related rules
    1. Morfeus Domain with Context
    • Post Login Configuration related rules
    1. Select the mode to honor post login response with INIT response
    2. Select to display last login time information
    • Security related rules
    1. Expiry for the partial states of a user's application (days)
    2. Login modes supported for the bot users

    alt_text

    B. Security Rules

    • AWS related rules
    1. Access Key ID
    2. AWS Credentials Source
    3. Bucket Name
    4. Bucket Region
    5. Expiration Offset
    6. Kibana URL for viewing managed EC2 Instance logs
    7. Live Log URL for viewing managed EC2 Instance logs
    8. Morfeus API Key
    9. Morfeus API Key Secret
    10. Secret Access Key
    • Configuration related rules
    1. Authentication Credentials Refresh Frequency
    2. Domain name of the platform
    3. Link Customer Social Accounts to Login Credentials
    4. Login Policy
    5. Max. requests per session
    6. Maximum Failed Attempts for 2FA Policy
    7. Maximum Message Length
    8. Morfeus Secret Key
    9. Social 1FA Authentication Mode
    10. Social 2FA Authentication Mode
    11. Social Authentication Policy
    12. The URL Path to be used in the Cookie
    13. User 2FA Pending Timeout (secs)
    14. User Session Timeout (secs)
    15. WebSdk Request timeout (secs)

    alt_text

    C. Manual Chat Rules

    1. Chat Agent Provider
    2. Manual Chat Agent Provider API Key
    3. Mode of Manual Chat Agents request to morfeus
    4. Select the agent provider for manual chat fallback
    • Fallback Manual Chat related rules
    1. Agent chat fallback URL
    2. Agent Chat License Key
    3. Agents client id
    4. Agents refresh token
    5. Download Transcript Feature
    6. Enable fallback on failed AI conversations
    7. Enable fallback on sentimental analysis
    8. Enable fallback on user prompt
    9. Enable/disable Customer Support Fallback
    10. Return to bot when manual chat is inactive for a given period of time in seconds
    11. Text inputted by user or agent to end chat
    12. Text seen by agent when invalid response is sent
    13. Text seen by agent when user closes the chat session
    14. Text seen by user when agent closes the chat session
    15. Text seen by user when starts the manual chat session
    16. Zendesk domain URL

    alt_text

    D. Alexa Rules

    • Configuration related rules
    1. Amazon Alexa is literal enabled
    2. Enable to pass OTP successfully while on-boarding

    alt_text

    E. Camel Route Rules

    • Configuration related rules
    1. CRM Interaction Route
    2. CRM Interaction Route Get Status
    3. LMS Route

    alt_text

    F. Campaign Rules

    • Configuration related rules
    1. Enable bizapp response auditing
    2. Enable campaigns
    3. Enable NER call for FAQs for data enrichment

    alt_text

    G. Push Notification Rules

    • Configuration
    1. PUSH BOT DOMAIN
    2. FCM SERVER API KEY
    3. iOS/Android package name for FCM

    alt_text

    I. Login Rules

    • Configuration related rules
    1. AWS KMS Key
    2. Oauth Encryption Type

    alt_text

    J. Triniti.Ai

    • Configuration related rules
    1. Fulfillment webhook url
    2. Secret key for the Fulfillment webhook url
    3. Triniti Cloud Basic Auth
    4. Triniti Cloud Domain Name

    alt_Text

    Importing Workspace Rules

    You can import your workspace rules by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Rules'
    3. Click on 'Import'
    4. Click on 'Yes' on popup. (Are you sure you want to overwrite?)

    Note: The rules file should be in JSON format, with all the rules configured as JSON.

    Exporting Workspace Rules

    You can also export your workspace rules by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Rules'
    3. Click on 'Export'

    It will download a JSON file containing all the configured rules.

    alt_Text


    Managing Messages

    Overview

    You can configure a message code (for example, TRANSFER_ERROR_MESSAGE) and the response the bot sends to the user for it (Balance not available, please enter an amount less than the available balance.). So if the user tries to transfer an amount greater than their available balance, the bot sends that configured error message. You can also configure fallback or error messages that are shown when the bot cannot find a response.

    E.g., TRANSFER_ERROR_MESSAGE: Balance not available, please enter an amount less than the available balance.
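
    For instance, a message entry in an import/export JSON file might look like the sketch below (the field names are illustrative assumptions, not a documented schema):

```json
{
  "code": "TRANSFER_ERROR_MESSAGE",
  "category": "ERROR",
  "language": "en",
  "value": "Balance not available, please enter an amount less than the available balance."
}
```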

    Adding Messages

    You can configure the messages by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Products'
    3. Click on 'Messages'
    4. Click on 'Add'
    5. Enter all the required details (Name, Value, description, category, etc)
    6. Click on Add

    alt_text

    Importing Messages

    You can also import the set of messages from your system by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Products'
    3. Click on 'Messages'
    4. Click on 'Import' or 'Import CSV'
    5. Select a JSON file or CSV file from your system
    6. Click 'Yes' on popup. (Are you sure you want to overwrite?)

    Note:

    • If you are uploading a CSV file, it should contain Message Code, Message Category, Message Value, Message Description, Customer Segment, Code, Language, etc. columns.
    • If you are uploading a JSON file, it should have all the configured messages in JSON format.
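
A minimal sketch of what reading such a CSV amounts to, assuming the column headers listed above (the exact headers and JSON keys in your deployment may differ):

```python
import csv
import io
import json

# Sample CSV using the columns listed in the note above; headers are assumptions.
csv_text = """Message Code,Message Category,Message Value,Message Description,Language
TRANSFER_ERROR_MESSAGE,Error,"Balance not available, please enter an amount less than the available balance.",Shown when the amount exceeds the balance,en
"""

# Each CSV row becomes one message object in the JSON import shape.
messages = [dict(row) for row in csv.DictReader(io.StringIO(csv_text))]
print(json.dumps(messages, indent=2))
```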

    alt_text

    Exporting Messages

    You can also export the set of messages by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Products'
    3. Click on 'Messages'
    4. Click on 'Export' or 'Export CSV'

    If you click on Export it will download a JSON file containing all the configured messages.

    Or if you click on 'Export CSV', it will download a CSV file containing Message Code, Message Category, Message Value, Message Description, Customer Segment, Code, Language, etc. columns.

    alt_text


    Managing Templates

    Overview

    Templates are a combination of components such as images, text, and buttons. They give the user a better look and feel than a text-only response.

    E.g., if you want to greet your user with a Greeting/Welcome message, you can set the welcome message as text only, such as "Hey there! How may I assist you?", but adding an image or card gives a better look and feel and encourages the user to spend more time on your bot.

    alt_text

    Adding Templates

    You can either configure templates manually by selecting the required template and entering all the required details, or import the templates. The imported file should be a JSON file, where all the templates are configured in JSON format.

    Add Templates Manually

    To add the templates manually follow these steps:

    1. Go to your workspace
    2. Click on 'Manage Templates'
    3. Click on 'Add Card'
    4. Enter the required details (Name, category, code, version, etc.)
    5. Click on 'Next'
    6. Configure your template
    7. Click on save

    You can configure the following templates:

    1. Text Template: to configure, refer to Configure Text Template
    2. Card Template: to configure, refer to Configure Card Template
    3. Image Template: to configure, refer to Configure Image Template
    4. List Template: to configure, refer to Configure List Template
    5. Button Template: to configure, refer to Configure Button Template
    6. Carousel Template: to configure, refer to Configure Carousel Template
    7. Video Template: to configure, refer to Configure Video Template
    8. Custom Template: to configure, refer to Configure Custom Template

    Export / Import

    If you have configured templates and want to reuse them in another environment/workspace, or simply want to keep the configuration, you can export the templates and then import the same JSON file there rather than configuring them again.

    Importing Templates

    You can import the list of templates from your system by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Templates'
    3. Click on 'Import'
    4. Click 'yes' on the popup (Are you sure you want to overwrite?)
    5. Select a JSON file from your system containing template configuration in JSON format.

    Exporting Templates

    You can also export the templates by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Templates'
    3. Click on 'Export'

    It will download a JSON file containing all the configured templates.

    alt_text


    Managing Channels

    Channels enable communication between your users and the bot. They help you expand your services across platforms, since different users prefer different channels. We therefore provide configuration to make your bot channel-specific when interacting with the user. Users can interact with the bot either by typing or by voice command, and the bot will perform any transaction or inquiry the user asks for.

    Currently, we are supporting the following channels:

    1. Social Channels
    2. Application-based Channels
    3. IoT based Channels

    You can manage your channels from the 'Manage Channels' section under 'Configure Workspace'.

    alt_text

    Common Configuration Parameters

    You can configure the channels by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'Enabled'
    4. Click on the menu icon (On the channel which you want to configure)
    5. Click on 'Edit Channel'

    alt_text

    After clicking on 'Edit Channel' you will see the following tabs. This section describes what these tabs are and how to configure them.

    To know more about configuring these parameters, please refer to Configure Channels

    • General Configuration:

    You can configure the token-based or authentication-related settings for the channel in this section. It consists of the following parameters that can be configured to enable the channel for the bot.

    Parameters
    Description
    Welcome Message You can enter a welcome message for the bot that is shown to the user on bot launch.
    Verify Token You can enter the verification token needed to authenticate the bot with the particular channel platform.
    Auth Token It is needed to authenticate the webhook requests between your bot and the channel.
    Refresh Token You can enter the refresh token needed for authentication with some of the channels.
    Token Expiry (Secs) You can set the token expiry time, after which the token expires.
    Secret Key The channel secret key required for authentication.
    Input Message URL Endpoint It specifies the Morfeus endpoint URL to which messages from customers are sent.
    Base URL ---
    Signature Key ---
    Response strategy ---
    Social Authentication policy ---
    Environment You can select the environment in which the bot works.
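
To illustrate how the Verify Token is typically used: Facebook-style channels verify a webhook with a one-time GET handshake, where the platform sends a token and a challenge and the endpoint echoes the challenge only if the token matches the configured value. A minimal sketch (the `hub.*` parameter names follow Messenger's convention; the token value is a placeholder):

```python
# Placeholder for the Verify Token configured on the channel's
# General Configuration tab.
VERIFY_TOKEN = "my-verify-token"

def handle_verification(params: dict) -> tuple:
    """Answer a webhook verification GET: echo the challenge on a token match."""
    if (params.get("hub.mode") == "subscribe"
            and params.get("hub.verify_token") == VERIFY_TOKEN):
        return 200, params.get("hub.challenge", "")
    return 403, "verification failed"

print(handle_verification({
    "hub.mode": "subscribe",
    "hub.verify_token": "my-verify-token",
    "hub.challenge": "1158201444",
}))
```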

    alt_Text

    • Media:

    You can enable or disable various media options for your bot. We support the following media options:

    Parameters
    Description
    Voice You can enable this media if you want to support audio input on the bot.
    Video You can enable this media if you want to support video responses on the bot.
    Stickers If enabled, stickers are allowed on the bot as input.
    Location If you want to check for location, you can enable this feature.
    Images You can also allow image responses by enabling this feature.
    Emojis Some users might use emojis more than text to express their feelings; enabling this feature allows that kind of input.
    Documents You can also share documents by enabling this feature.
    Gifs If you want to show GIFs on your bot, you can enable this feature.

    alt_text

    • Emoji:

    Some users might use emojis instead of text to express their feelings. You can configure your bot to handle this type of input, i.e., how the bot should react or reply when a user enters an emoji.

    E.g., if a user enters a sad, angry, or dissatisfied emoji, you can map a courtesy intent to those emojis to show courtesy to the user, by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'Enabled'
    4. Click on the menu icon (On the channel which you want to configure)
    5. Click on 'Edit Channel'
    6. Navigate to 'Emojis'
    7. Click on 'Add'
    8. Enter all the required details (emoji (unicode characters), alias, description, intent to trigger for the particular emoji, etc.)
    9. Click on 'Save'
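
Since the emoji table stores the emoji as unicode characters alongside an alias, it can help to know the codepoints you are entering. A small sketch (the alias format and the intent name are hypothetical, not shipped configuration):

```python
def emoji_codepoints(emoji: str) -> str:
    """Return the U+XXXX codepoint sequence for an emoji string."""
    return " ".join(f"U+{ord(ch):04X}" for ch in emoji)

# Map a "disappointed face" emoji to a hypothetical courtesy intent.
emoji_config = {
    "emoji": "\U0001F61E",
    "alias": emoji_codepoints("\U0001F61E"),
    "intent": "COURTESY_INTENT",  # hypothetical intent name
}
print(emoji_config["alias"])  # U+1F61E
```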

    alt_text

    1. Load Default Emojis:

    You can load the default emojis, which are already configured and added to the bot by default, by following these steps:

    • Go to your workspace
    • Click on 'Manage Channels'
    • Navigate to 'Enabled'
    • Click on the menu icon (On the channel which you want to configure)
    • Click on 'Edit Channel'
    • Navigate to 'Emojis'
    • Click on 'Load Defaults'

    (It will load all the default emojis)

    alt_text

    2. Link Emojis to FAQs/Intents/Smalltalk

    Emojis can be linked to any FAQ, smalltalk, or intent to get an expressive response, which can make a good impression on the user. To link emojis to FAQs/Intents/Smalltalks, please follow these steps:

    • Go to your workspace
    • Click on 'Manage Channels'
    • Navigate to 'Enabled'
    • Click on the menu icon (On the channel which you want to configure)
    • Click on 'Edit Channel'
    • Navigate to 'Emojis'

    (It will show all the emojis which are added)

    Link Emojis to the FAQ:

    • Click on Edit icon
    • Select 'Data Category' as 'FAQ'
    • Select 'FAQ Category'
    • Select 'FAQ ID'
    • Click on save

    link_emojis_faq

    Link Emojis to the Intent:

    • Click on Edit icon
    • Select 'Data Category' as 'Intent'
    • Select 'Intent Category'
    • Select 'Intent Name'
    • Click on save

    link_emojis_intent

    3. Import Emojis

    You can import the emojis by following these steps:

    • Go to your workspace
    • Click on 'Manage Channels'
    • Navigate to 'Enabled'
    • Click on the menu icon (On the channel which you want to configure)
    • Click on 'Edit Channel'
    • Navigate to 'Emojis'
    • Click on 'Import'
    • Select a CSV file (Which contains Aliases, Description, Emoji, Intent Name, etc. columns)

    alt_text

    4. Export Emojis

    You can export the emojis by following these steps:

    • Go to your workspace
    • Click on 'Manage Channels'
    • Navigate to 'Enabled'
    • Click on the menu icon (On the channel which you want to configure)
    • Click on 'Edit Channel'
    • Navigate to 'Emojis'
    • Click on 'Export' (A CSV file will be downloaded with Aliases, Description, Emoji, Intent Name, etc. columns)

    • Stickers:

    If you want to allow stickers on your bot so that users can interact with stickers as well, you can configure how your bot should react to those stickers by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'Enabled'
    4. Click on the menu icon (On the channel which you want to configure)
    5. Click on 'Edit Channel'
    6. Navigate to 'Stickers'
    7. Click on 'Add'
    8. Enter all the required details (sticker, alias, description, intent to trigger for the particular sticker, etc.)
    9. Click on 'Save'

    alt_text

    • OAuth Configuration:

    OAuth Configuration is needed to enable the IoT-based channels like Amazon Alexa. You can configure the following parameters.

    Parameters
    Description
    Client ID It specifies the Client ID of the authorization server.
    Client Secret It specifies the Client Secret of the authorization server.
    Scope It specifies the scope permissible by the authorization server.
    Authorization URI It specifies the Authorization URI.
    Access Token URI It specifies the Access Token URI.
    Profile URI It specifies the Profile URI.
    Morfeus Client ID It specifies the Morfeus Client ID for Alexa & Google Assistant.
    Morfeus Secret It specifies the Morfeus Secret for Alexa & Google Assistant.
    IOT Authorization URI It specifies the Authorization URI for Alexa & Google Assistant.
    IOT Token URI It specifies the Token URI for Alexa & Google Assistant.
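
To show how these parameters fit together: the channel's OAuth flow starts by redirecting the user to the Authorization URI, with the Client ID and Scope as query parameters. A sketch of that first step under standard OAuth2 conventions (all values below are placeholders):

```python
from urllib.parse import urlencode

def authorization_url(auth_uri: str, client_id: str,
                      scope: str, redirect_uri: str) -> str:
    """Build the OAuth2 authorization-code request URL."""
    query = urlencode({
        "response_type": "code",   # authorization-code grant
        "client_id": client_id,
        "scope": scope,
        "redirect_uri": redirect_uri,
    })
    return f"{auth_uri}?{query}"

print(authorization_url("https://auth.example.com/authorize",
                        "my-client-id", "profile",
                        "https://bot.example.com/oauth/callback"))
```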

    alt_text

    Social Channels

    Social channels give you many opportunities to expand your services and reach users. Nowadays almost every user uses social channels, so we have segregated the social channels to make your bot more interactive.

    We are supporting the following Social Channels:

    1. Facebook Channel

    You can configure your bot to interact with users through Facebook Messenger, where users can ask their queries to the bot on Messenger itself.

    To add Facebook to your bot follow these steps:

    1. Go to your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'Social'
    4. Toggle on the 'Facebook'
    5. Click on 'Add'

    To configure your Facebook channel, please refer to Configure Facebook Channel

    alt_text

    2. Skype Channel

    We also support the Skype channel, so your bot can interact with users on Skype. You can enable the Skype channel for your bot by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'Social'
    4. Toggle on the 'Skype'
    5. Click on 'Add'

    To configure your skype channel, please refer to Configure Skype Channel

    alt_text

    3. Slack Channel

    Slack is also among our supported channels; your bot can interact with users on Slack as well. To add Slack to your bot, follow these steps:

    1. Go to your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'Social'
    4. Toggle on the 'Slack'
    5. Click on 'Add'

    To configure your slack channel, please refer to Configure Slack Channel

    alt_text

    4. Line Channel

    The bot can answer users' queries on the Line channel as well. You can enable and configure the Line channel for your bot by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'Social'
    4. Toggle on the 'Line'
    5. Click on 'Add'

    To configure your line channel, please refer to Configure Line Channel

    alt_text

    5. Viber Channel

    You can also configure your bot to respond to users' queries on Viber by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'Social'
    4. Toggle on the 'Viber'
    5. Click on 'Add'

    To configure your Viber channel, please refer to Configure Viber Channel

    alt_text

    6. Webex Channel

    We also support the Webex channel for the bot. You can enable and configure the Webex channel for your bot by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'Social'
    4. Toggle on the 'Webex'
    5. Click on 'Add'

    To configure your webex channel, please refer to Configure Webex Channel

    alt_text

    8. Whatsapp by Gupshup Channel

    We have also enabled the Whatsapp by Gupshup channel for the bot. As Whatsapp is the most popular social media channel across the world, it makes it easier to reach users and interact with them. You can enable the Whatsapp by Gupshup channel for your bot by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'Social'
    4. Toggle on the 'Whatsapp by Gupshup'
    5. Click on 'Add'

    To configure your Whatsapp by Gupshup channel, please refer to Configure Whatsapp by Gupshup Channel

    alt_Text

    9. Telegram Channel

    We also support the Telegram channel. You can enable the Telegram channel for your bot by following these steps:

    1. Go to your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'Social'
    4. Toggle on the 'Telegram'
    5. Click on 'Add'

    To configure your telegram channel, please refer to Configure Telegram Channel

    alt_Text

    Application Based Channels

    We enable you to craft your conversations to be embedded in an existing Web App or Mobile App. You can configure your bot for a mobile app or a web app to interact with the user and get their query resolved by enabling these application-based channels.

    Currently, we are supporting the following application-based channels:

    1. Web App Channel

    Web App is a lightweight messaging SDK which can be embedded easily in websites and hybrid mobile apps with minimal integration effort. Once integrated, it allows end-users to converse with the Conversational AI/bot on the Active AI platform via both text and voice. WebSDK has out-of-the-box support for multiple UX templates such as List and Carousel and supports extensive customization for both UX and functionality.

    You can enable the Web App channel by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'App'
    4. Toggle on the 'Webapp'
    5. Click on 'Add'

    To configure your WebApp channel, please refer to Configure WebApp Channel

    alt_Text

    2. Android SDK

    The Android Native SDK is a lightweight messaging SDK which can be embedded easily in native mobile apps with minimal integration effort. Once integrated, it allows end users to converse with the Conversational AI/bot on the Active AI platform through both text and voice. The SDK has out-of-the-box support for multiple UX templates such as List and Carousel, supports extensive customization of both UX and functionality, and has built-in support for banking-grade security features.

    You can configure an Android SDK channel for your bot by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'App'
    4. Toggle on the 'Android SDK'
    5. Click on 'Add'

    To configure your Android SDK channel, please refer to Configure Android SDK Channel

    alt_Text

    3. Hybrid Android SDK Channel

    The Hybrid Android SDK is a lightweight messaging SDK which can be embedded easily in websites and hybrid mobile apps with minimal integration effort. Once integrated, it allows end users to converse with the Conversational AI/bot on the Active AI platform through both text and voice. The SDK has out-of-the-box support for multiple UX templates such as List and Carousel, and supports extensive customization of both UX and functionality.

    You can configure a Hybrid Android SDK channel for your bot by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'App'
    4. Toggle on the 'Hybrid Android SDK'
    5. Click on 'Add'

    To configure your Hybrid Android SDK channel, please refer to Configure Hybrid Android SDK Channel

    alt_text

    4. iOS SDK Channel

    The iOS Native SDK is a lightweight messaging SDK which can be embedded easily in native mobile apps with minimal integration effort. Once integrated, it allows end users to converse with the Conversational AI/bot on the Active AI platform through both text and voice. The SDK has out-of-the-box support for multiple UX templates such as List and Carousel, supports extensive customization of both UX and functionality, and has built-in support for banking-grade security features.

    You can configure an iOS SDK channel for your bot by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'App'
    4. Toggle on the 'iOS SDK'
    5. Click on 'Add'

    To configure your iOS SDK channel, please refer to Configure iOS SDK Channel

    alt_text

    5. Hybrid iOS SDK Channel

    The iOS Hybrid SDK is a lightweight messaging SDK which can be embedded easily in websites and hybrid mobile apps with minimal integration effort. Once integrated, it allows end users to converse with the Conversational AI/bot on the Active AI platform through both text and voice. The SDK has out-of-the-box support for multiple UX templates such as List and Carousel, and supports extensive customization of both UX and functionality.

    You can configure a Hybrid iOS SDK channel for your bot by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'App'
    4. Toggle on the 'Hybrid iOS SDK'
    5. Click on 'Add'

    To configure your Hybrid iOS SDK channel, please refer to Configure Hybrid iOS SDK Channel

    alt_text

    IoT Based Channels

    As artificial intelligence becomes more widespread, users are growing more comfortable interacting with it by voice. We have enabled channels that support speech responses to make interaction with the bot more natural. Voice commands reduce the effort needed to perform a transaction or ask a query compared with typing: the user can speak to the bot, and the bot responds with speech as well. This closes much of the gap between the user and the bot that typing introduces in effort, time, and attention.

    We are currently supporting these IoT based channels:

    1. Amazon Alexa Channel

    As Amazon Alexa grows more popular among users, we have made it possible to configure Amazon Alexa with your bot so it can interact with users for any kind of transaction or inquiry. Users can talk to the bot directly through Amazon Alexa to get things done or to get responses to their queries.

    You can enable the Amazon Alexa channel for your bot by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'IoT'
    4. Toggle on the 'Amazon Alexa'
    5. Click on 'Add'

    To configure your Amazon Alexa channel, please refer to Configure Amazon Alexa Channel

    alt_text

    2. Google Assistant Channel

    Google Assistant is the most common IoT assistant, letting users interact through voice commands. We have enabled Google Assistant to make your conversational bot more interactive: users can give any voice command, and the bot responds with speech as well. You can enable the Google Assistant channel for your bot by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Channels'
    3. Navigate to 'IoT'
    4. Toggle on the 'Google Assistant'
    5. Click on 'Add'

    To configure your Google Assistant channel, please refer to Configure Google Assistant Channel

    alt_text


    Managing Hooks

    The end goal of determining a message's intent is to perform some action based on it. For that purpose, a hook is configured for each intent/feature. A hook is the final action the bot will perform based on the intent. A hook can be of transaction or inquiry type: a transaction hook is a multi-step fulfillment, like a fund transfer, while an inquiry hook is a fulfillment that can be completed in a single step, like "what is my balance?". An intent can have multiple hooks to provide different fulfillments for different channels.

    Adding Hooks

    You can add the hooks by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Hooks'
    3. Click on 'Add'
    4. Enter all the required details
    5. Click on Add
    Below are the fields to be configured while creating a record:

    Name : Name of the Hook to be created

    Service ID : Unique Id to identify the Hook

    Category : A key field used to bind together the related modules: messages, rules, templates, and data

    Intents : Configure applicable intents for this function like txn-login etc

    Feature : If sub-intent functionality is needed, configure the feature ID and rule JSON

    Functionality Type: Defines whether this is an inquiry or transaction type module

    Channels : Displays all the channels configured for this workspace; each can be enabled or disabled based on the requirement

    Security > Realm: Configures whether this function can be invoked pre-login or post-login

    Fulfillment: Defines how this hook is fulfilled. Options are Camel routes, Workflow, Template, SpringBean, and Webhook

    • If there are many intents and fulfillments, it is easier to configure them using the Function module, as it consolidates all the related modules (messages, rules, templates, and data). See Configure Functions.

    alt_text
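    The fields above can be pictured as a single hook record. Below is a minimal sketch in Python; the field names follow the list above, while the concrete values (hook name, service ID, channel list) are hypothetical and not real platform defaults:

```python
# Hypothetical hook record mirroring the fields listed above.
# Values (names, IDs, channels) are illustrative only.
def make_hook(name, service_id, category, intents, functionality_type,
              channels, realm, fulfillment):
    # A hook is either a single-step inquiry or a multi-step transaction.
    assert functionality_type in ("inquiry", "transaction")
    # Fulfillment options as listed in the field description above.
    assert fulfillment in ("Camel routes", "Workflow", "Template",
                           "SpringBean", "Webhook")
    return {
        "name": name,
        "serviceId": service_id,          # unique ID for the hook
        "category": category,             # binds messages/rules/templates/data
        "intents": intents,               # e.g. ["txn-login"]
        "functionalityType": functionality_type,
        "channels": channels,             # channels enabled for this workspace
        "security": {"realm": realm},     # "pre-login" or "post-login"
        "fulfillment": fulfillment,
    }

balance_hook = make_hook(
    name="Balance Enquiry", service_id="balance_enquiry",
    category="Balance Inquiry", intents=["balance-inquiry"],
    functionality_type="inquiry", channels=["Webapp"],
    realm="post-login", fulfillment="Workflow",
)
```

    An intent can map to several such records, one per channel, to give different fulfillments on different channels.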

    Deleting Hooks

    If you no longer need a hook, you can delete it by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Hooks'
    3. Click on delete icon of the particular hook(Which you want to delete)

    Import/Export

    If you want to reuse the hooks of another workspace, you can import their JSON file into your workspace. Similarly, if you want to back up the configured hooks or use them in another environment, you can export them.

    You can import hooks from your system by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Hooks'
    3. Click on 'Import'
    4. Click 'Yes' on the pop-up (Are you sure you want to overwrite?)
    5. Select a JSON file from your system which contains configuration of hooks in JSON format.

    You can export the hooks by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Hooks'
    3. Click on 'Export'

    It will download a JSON file containing the hooks configuration.

    alt_text
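    An exported file is plain JSON, so it can be inspected or diffed before importing it into another workspace and overwriting the hooks there. The structure shown below is a hypothetical sketch; the actual export format may differ:

```python
import json

# Hypothetical shape of an exported hooks file; the real export format
# produced by the 'Export' button may differ.
exported = json.dumps({"hooks": [
    {"name": "Balance Enquiry", "serviceId": "balance_enquiry",
     "functionalityType": "inquiry"},
    {"name": "Fund Transfer", "serviceId": "fund_transfer",
     "functionalityType": "transaction"},
]})

def hook_names(raw_json):
    """List the hook names in an exported file before overwriting."""
    return [h["name"] for h in json.loads(raw_json)["hooks"]]

print(hook_names(exported))  # ['Balance Enquiry', 'Fund Transfer']
```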


    Managing Campaigns

    Overview

    A campaign is a process to push information from the server to the client, either in batch or during a transaction. You can configure a campaign for your workspace by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Campaign'
    3. Click on 'Add'
    4. Enter all the required details in 'Definition section'
    5. Enter all the required details in 'Rules section'
    6. Click on 'Add Campaign'

    alt_Text

    Rules & Parameters

    Definition

    1. Type of campaign: A campaign can be one of two types: Internal or External.

    An Internal campaign is fulfilled by a fulfillment service such as a Template or a Hook. A hook can be a workflow, Camel route, JavaBean, etc.

    An External campaign is fulfilled by calling an external service.

    2. Start & End Date: The dates on which the campaign will start and end.

    3. Supported Channels: Morfeus is an omnichannel platform and supports a variety of channels. This toggle lets you push the campaign to the various channels that are set up in a particular workspace.

    4. State: Defines whether a campaign is enabled or disabled.

    5. Conversation Type: The conversation type for a campaign can be FAQ or Transaction. It defines when the campaign will be triggered, whether in FAQs or in a Transaction.

    6. Engagement Type: Decides whether the campaign is to be fulfilled by a Hook or a Template.

    7. Engagement Value: Depending on the engagement type, select the engagement value for the campaign from the dropdown. It can be either a template value or a hook name.

    Rules

    1. Display when a Customer : This rule provides the ability to display a campaign to the customer either when the customer starts interacting with the bot or when they have finished their interaction.

    2. Campaign management consists of three sets of rules:

    • Context Rules
    • Historical Rules
    • Derived Rules

    Context Rules - These rules apply while the user is in the middle of a process. You specify a context rule by selecting an operand and an operator; the rule is applied by checking the user's context. For example, select the context rule Billing Date, the operator Between, and specify a date range. When the user is performing a bill payment and the billing date falls within the specified range, the campaign is rendered.

    Historical Rules - These rules are evaluated against previous transactions by selecting the 'Super Aggregation', 'Aggregation', 'Operator', etc. For example, select the super aggregation Count, the aggregation Average on Transaction Amount, the operator Between, and a specified amount range. When the user's transaction amount lies within the specified range, the campaign is rendered.

    Derived Rule - A derived rule is applied on top of a historical rule if one is configured; otherwise it is optional. For example, you could specify a payee name; if the user performs a transaction to the specified payee, the rule is executed.
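    The billing-date example of a context rule can be sketched as a small predicate. This is an illustration of the 'Between' operator only; the operand and operator names are taken from the example above, and the dates are hypothetical:

```python
from datetime import date

# Minimal sketch of the 'Between' context rule described above: the
# campaign renders while the bill payment's billing date is in range.
def context_rule_matches(operand_value, operator, low, high):
    if operator == "Between":
        return low <= operand_value <= high
    raise ValueError(f"unsupported operator: {operator}")

billing_date = date(2021, 6, 10)              # user's current context
show_campaign = context_rule_matches(
    billing_date, "Between", date(2021, 6, 1), date(2021, 6, 15))
```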

    Reports - The reports will show the graphical representation of the statistics of the campaigns done on the bot.

    alt_Text

    Managing Schedulers

    You can manage schedulers to auto-trigger jobs or job classes at a configured interval (hour(s), day(s), week(s), month(s), etc.).

    For example, if you want to keep your workspace up to date with the latest version, you can set a schedule for your configuration class so that it picks up all the latest version changes as they are added.

    Adding a Scheduler

    You can add a scheduler for any job by following these steps:

    1. Goto your workspace
    2. Navigate to 'Manage Scheduler'
    3. Click on 'Schedule'
    4. Enter all the required details (Name, Trigger, Start At (date), Class, Description, Interval, etc.)
    5. Click on Schedule

    It will schedule a job that you want to be triggered based on the configured interval.

    Note:

    • Name: The name of the job you want to trigger.
    • Class: A Java class that will be triggered on the schedule. Create a Java class that extends our interface and implements its methods.
    • Trigger: The job ID for the job you are scheduling.
    • Description: The description of the job.
    • Starts At: The date on which you want your job to first be triggered.
    • Interval: How often the scheduled job should run, e.g. every 2 days, weeks, or months.
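    The platform itself expects a Java job class, as noted above; purely as an illustration of the interval semantics, the sketch below shows how a 'Starts At' date plus an interval such as "every 2 days" maps to successive trigger times (the dates and units here are hypothetical):

```python
from datetime import datetime, timedelta

# Illustration only: maps a 'Starts At' date and an interval
# (e.g. every 2 days) to the first few scheduled trigger times.
def next_runs(starts_at, every, unit, count):
    step = {"hours": timedelta(hours=every),
            "days": timedelta(days=every),
            "weeks": timedelta(weeks=every)}[unit]
    return [starts_at + i * step for i in range(count)]

# "Every 2 days" starting Jan 1 09:00 -> Jan 1, Jan 3, Jan 5 at 09:00.
runs = next_runs(datetime(2021, 1, 1, 9, 0), 2, "days", 3)
```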

    alt_text

    Editing Scheduler

    If you want to change the job or interval then you can edit the schedule by following these steps:

    1. Goto your workspace
    2. Navigate to 'Manage Scheduler'
    3. Select the Scheduler (Which you want to edit)
    4. Click on the 'Edit' icon under 'Action'
    5. Edit the scheduler
    6. Click on Schedule

    Stopping Scheduler

    If you want to stop a scheduler, you can do so by following these steps:

    1. Goto your workspace
    2. Navigate to 'Manage Scheduler'
    3. Select the Scheduler (Which you want to stop)
    4. Click on the 'PAUSE' icon under 'Action'
    5. Click on 'Stop' (on popup)

    alt_text

    Deleting Scheduler

    If you want to remove a scheduler, you can delete it by following these steps:

    1. Goto your workspace
    2. Navigate to 'Manage Scheduler'
    3. Select the Scheduler (Which you want to delete)
    4. Click on the 'DELETE' icon under 'Action'
    5. Click on 'Delete' (on popup)

    It will delete the configured scheduler.


    Managing Products

    We support products that make your bot more effective at handling banking-related queries. These cover basic banking needs and can be configured to make your banking conversations more effective.

    Functions

    You can add functions to your workspace so that the bot can execute them to give the proper response to the user, e.g. a function configured for balance_enquiry.

    For a function, a default hook is created with the service category set to the function category and the service ID set to the function code. Data for each function is retrieved based on the function category. If the function category is a comma-separated list, e.g. 'Fund Transfer, Common, balance inquiry', then data for all the listed categories is displayed, i.e. the data associated with each of the categories.
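    The comma-separated category behaviour can be sketched as a simple merge. The category-to-data mapping below is hypothetical sample data, not the platform's actual storage:

```python
# Sketch of the comma-separated function-category behaviour described
# above: data bound to each listed category is merged when displaying.
# The mapping below is illustrative sample data.
DATA_BY_CATEGORY = {
    "Fund Transfer": ["payee_list"],
    "Common": ["greeting_message"],
    "balance inquiry": ["account_summary"],
}

def data_for_function(function_category):
    merged = []
    for cat in (c.strip() for c in function_category.split(",")):
        merged.extend(DATA_BY_CATEGORY.get(cat, []))
    return merged

print(data_for_function("Fund Transfer, Common, balance inquiry"))
```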

    You can add the function by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Products'
    3. Click on 'Functions'
    4. Click on 'Add Function'
    5. Enter all the required details
    6. Click on 'Create & Proceed'
    7. Configure your function (please refer Configure Function)
    8. Click on save

    Rules

    You can also configure your product rules by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Products'
    3. Click on 'Rules'
    4. Configure the rules as per your requirements
    5. Click on Save

    alt_text

    You can configure the following rules:

    A. General Rules

    1. Configuration rules

    • Context Path
    • Webview Domain
    • Over All daily limit

    2. General rules

    • RB Configurations

    3. Security Rules

    • Encrypt User Password
    • Exponent Value for password Encryption
    • Modulus Value for password Encryption

    B. Bill Payment Rules

    1. Biller Payment Rules

    • Daily Transaction Count
    • Daily Transaction Limit (Amount)
    • Maximum Transaction Limit (Amount)
    • Minimum Interval between Transactions (mins)
    • Minimum Transaction Limit (Amount)

    2. Configuration Rules

    • Biller Type

    3. Consumer Number Rules

    • Consumer Number Validation Pattern
    • Maximum Consumer Reference Number Length
    • Minimum Consumer Reference Number Length

    4. Credit Card Rules

    • Daily Transaction Count
    • Daily Transaction Limit (Amount)
    • Maximum Transaction Limit (Amount)
    • Minimum Interval between Transactions (mins)
    • Minimum Transaction Limit (Amount)

    5. One Time Credit Card Rules

    • Daily Transaction Count
    • Daily Transaction Limit (Amount)
    • Maximum Transaction Limit (Amount)
    • Minimum Transaction Limit (Amount)

    C. Recharge Rules

    1. DTH Recharge Rules

    • Daily DTH Transaction Limit ( Amount )
    • Daily Transaction Limit ( Count )
    • Minimum Interval between Transactions (mins)
    • Minimum Transaction Limit (Amount)
    • Per Transaction Limit (Amount)

    2. Data Card Recharge Rules

    • Daily Datacard Transaction Limit ( Count )
    • Daily Transaction Limit ( Amount )
    • Minimum Interval between Transactions (mins)
    • Minimum Transaction Limit (Amount)
    • Per Transaction Datacard Limit (Amount)

    3. Mobile Recharge Rules

    • Daily Transaction Limit ( Amount )
    • Daily Transaction Limit ( Count )
    • DATA CARD Number Validation Group
    • DATA CARD Number Validation Pattern
    • DTH Number Validation Group
    • DTH Number Validation Pattern
    • Minimum Interval between Transactions (mins)
    • Minimum Transaction Limit (Amount)
    • Mobile Number Validation Group
    • Mobile Number Validation Pattern
    • Per Transaction Limit (Amount)

    D. Fund Transfer Rules

    1. Fund Transfer Rules

    • Amount validation for self bank credit card
    • Time zone for NEFT timing

    2. Inter Bank Rules

    • Daily Transaction Count
    • Daily Transaction Limit (Amount)
    • Daily Transaction Limit (Amount) for IMPS
    • IMPS Maximum Transaction Limit (Amount)
    • IMPS Minimum Transaction Limit (Amount)
    • Minimum Interval between Transactions (mins)
    • NEFT Maximum Transaction Limit (Amount)
    • NEFT Minimum Transaction Limit (Amount)
    • RTGS Maximum Transaction Limit (Amount)
    • RTGS Minimum Transaction Limit (Amount)

    3. Intra Bank Rules

    • Daily Transaction Count
    • Daily Transaction Limit (Amount)
    • Maximum Transaction Limit (Amount)
    • Minimum Interval between Transactions (mins)
    • Minimum Transaction Limit (Amount)

    4. MMID Rules

    • Per Transaction Max Amount
    • Per Transaction Min Amount
    • Transaction amount per day limit
    • Transaction count per day limit
    • Transaction frequency

    5. Quick Pay Rules

    • Per Transaction Max Amount
    • Per Transaction Min Amount
    • Transaction amount per day limit
    • Transaction count per day limit
    • Transaction frequency

    6. Self Rules

    • Daily Transaction Count
    • Daily Transaction Limit (Amount)
    • Maximum Transaction Limit (Amount)
    • Minimum Interval between Transactions (mins)
    • Minimum Transaction Limit (Amount)

    7. Transaction Timing Rules

    • NEFT Saturday Transaction Timings
    • NEFT Transaction Timings from Monday to Friday
    • RTGS Saturday Transaction Timings
    • RTGS Transaction Timings from Monday to Friday

    8. UPI Rules

    • Per Transaction Max Amount
    • Per Transaction Min Amount
    • Transaction amount per day limit
    • Transaction count per day limit
    • Transaction frequency

    E. Alexa Rules

    1. Notification Rules

    • Alexa Push Notification title
    • Alexa Repeat
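    Many of the rule groups above share the same limit fields: a daily transaction count, a daily amount limit, and per-transaction minimum and maximum amounts. As a minimal sketch of how such limits might gate a transaction (the threshold values are illustrative, not product defaults):

```python
# Sketch of the recurring limit fields above (daily count, daily amount,
# per-transaction min/max). Thresholds are illustrative only.
def check_limits(amount, todays_count, todays_total, rules):
    if not rules["min_txn"] <= amount <= rules["max_txn"]:
        return False                       # outside per-transaction range
    if todays_count + 1 > rules["daily_count"]:
        return False                       # daily transaction count exceeded
    if todays_total + amount > rules["daily_limit"]:
        return False                       # daily amount limit exceeded
    return True

rules = {"min_txn": 10, "max_txn": 50_000,
         "daily_count": 5, "daily_limit": 100_000}
ok = check_limits(amount=20_000, todays_count=2, todays_total=30_000,
                  rules=rules)
```

    A real deployment would also enforce the minimum interval between transactions, which needs timestamps rather than running totals.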

    Default Messages

    You can add default messages for your bot so that it can respond in particular scenarios. This includes default error messages: if the bot does not find a proper response, it sends the response configured under the corresponding message code.

    To configure the messages please refer Manage messages

    Banking Products

    You can add your banking products like checkbook allowed, FD Open allowed, etc. by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Products'
    3. Click on 'Products'
    4. Click on 'Add Product'
    5. Enter all the required details
    6. Click on Add

    Billers

    You can also set the billers for your bot by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Products'
    3. Click on 'Billing'
    4. Add biller category or choose from an existing one
    5. Click on 'Add Biller'
    6. Enter all the required details (Name, Biller Id, Biller Presence, Late Payment, Customer surcharge, Partial pay, etc.)
    7. Click on Add

    Recharge Billers

    If you want to save operators for recharges, you can configure a Recharge biller by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Products'
    3. Click on 'Recharge'
    4. Click on 'Add Operator'
    5. Enter all the required details
    6. Click on Add

    IFSC Codes

    You can set IFSC codes for the various regions/areas as per your banking needs.

    1. Goto your workspace
    2. Click on 'Manage Products'
    3. Click on 'IFSC Codes'
    4. Click on 'Add IFSC'
    5. Enter all the required details
    6. Click on Add

    Holiday Calendar

    You can also configure bank holidays by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Products'
    3. Click on 'Holiday'
    4. Click on 'Add Holiday'
    5. Enter all the required details (Holiday date, transaction type, start time, end time, etc.)
    6. Click on Add

    Customer Segment

    You can add a Customer Segment by following these steps:

    1. Goto your workspace
    2. Click on 'Manage Products'
    3. Click on 'Customer Segments'
    4. Click on 'Add Customer Segment'
    5. Enter all the required details
    6. Click on Add

    alt_text


    Managing Exigency Management

    Exigency Management allows an admin to mark the bot as down for maintenance. Exigency Management uses push notifications to notify users, so all the essential push-notification setup is mandatory here. You can configure either of two types: Scheduled Exigency Management or Immediate Exigency Management.

    Scheduled Exigency Management

    Scheduled Exigency Management allows you to add a schedule for a future time when the bot will be down, and also gives you the option to send advance notifications to users about this downtime. If you have a planned maintenance date, you can simply configure Scheduled Exigency Management.

    Essentially this requires parameters such as the downtime start time and downtime end time. If a notification has to be set up, add the time by which the notification is to be sent, the message to be sent, and the channels to which the notifications should go.

    You can configure Scheduled Exigency Management by following these steps:

    1. Goto your workspace
    2. Click on 'Exigency Management'
    3. Click on 'Add'
    4. Enter all the required details (Start time, End Time, Notification, Notify by, Title, message, notify channels, etc.)
    5. Click on Add.

    It will save all the entered data and send the notification to the user at the configured date & time.
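    The scheduled-downtime behaviour amounts to a window check: the bot is considered down between the configured start and end times. A minimal sketch, with hypothetical times:

```python
from datetime import datetime

# Sketch of the scheduled-downtime window described above: the bot is
# marked down from the configured start time until the end time.
def bot_is_down(now, start, end):
    return start <= now < end

start = datetime(2021, 7, 4, 1, 0)   # illustrative maintenance window
end = datetime(2021, 7, 4, 3, 0)
down_now = bot_is_down(datetime(2021, 7, 4, 2, 0), start, end)
```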

    alt_text

    Immediate Exigency Management

    Immediate Exigency Management allows you to mark the bot as down immediately, or to re-enable a bot that is already down. It does not provide any notification setup; its effect is immediate. If a sudden issue affects the performance of the services, you can immediately disable the bot without sending any notifications.

    You can configure Immediate Exigency Management by following these steps:

    1. Goto your workspace
    2. Enable/Disable bot under 'Configure Workspace' section

    It will disable the bot immediately without sending any notification, similarly you can enable the bot from here also.

    alt_text


    Managing Analytics

    Analytics helps you know the overall statistics of the bot & AI Engine, with details in terms of Users, Accuracy, Channels, Domain, etc. You can monitor the bot's requests and responses to improve the services and serve users better. You can check the number of users and new users registering on the bot, the number of hits the bot receives, and the accuracy of the knowledge base on a daily, weekly, or monthly basis, or for a particular date.

    alt_text

    You can filter the report based on Date (Daily, weekly, Monthly, Yearly) with Domains (FAQ, SmallTalk, etc) & Channel(Web App).

    Using this report, we can do detailed analysis on accuracy obtained, number of logged-in users, new users, total sessions in range, live chat redirections, feedback percentage, and more.

    Customer Support has complete details of each user based on customer ID, phone number, and the channel from which they interacted. Using this we can easily identify Registered or Anonymous users. A search option is also available to filter by entering a Customer ID.

    No. of logins - Total records/user who logged in for a given date range.

    AI Accuracy - AI accuracy represents the percentage of messages whose confidence is above the minimum threshold
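    The AI Accuracy metric defined above can be written down directly: the share of messages whose confidence clears the minimum threshold. In the sketch below, the confidence scores and the 0.6 threshold are illustrative values, not platform defaults:

```python
# Sketch of the AI Accuracy metric above: the percentage of messages
# whose confidence is at or above the minimum threshold.
# The scores and threshold here are illustrative.
def ai_accuracy(confidences, threshold):
    if not confidences:
        return 0.0
    above = sum(1 for c in confidences if c >= threshold)
    return 100.0 * above / len(confidences)

scores = [0.92, 0.40, 0.75, 0.81]
accuracy = ai_accuracy(scores, threshold=0.6)  # 3 of 4 messages -> 75.0
```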

    Users Till Date

    • Registered - Total no. of registered users from inception till today. If a date range is selected, then it is from inception to the To-Date.
    • Anonymous - Total no. of anonymous users from inception till today. If a date range is selected, then it is from inception to the To-Date.

    Registered Users

    • Active - No. of unique registered users who interacted with the VA within the filtered date range
    • Returning - No. of unique registered users who registered in the system before the filtered date range but interacted with the VA within it
    • Logged in - No. of unique registered users who logged in and interacted with the VA within the filtered date range

    Unique Users - Registered

    • Launched - No. of users who just launched the VA (includes post login)
    • Interacted - No. of users who just launched & sent messages in VA (includes post login)

    Unique Users - Anonymous

    • Launched - No. of users who just launched the VA (includes pre login)
    • Interacted - No. of users who just launched & sent messages in VA (includes pre login)

    Messages - Total number of messages sent by users in the chatbot.

    Sessions - Total number of sessions in the system within the filtered date range

    Avg Session - Average session time in the system within the filtered date range

    Live chat redirection

    • Online - Total number of live chat redirection to agent and connection to live agent established
    • Offline - Total number of offline live chat redirections, i.e. a redirection to a live agent was triggered but the connection was not established because the agent was offline.

    Transactions, Service Requests & Enquiry - Statistics for the selected date range.

    Conversations categorised - Unclassified - Unsupported - FAQ - Clicks - Intent - Smalltalk

    Sentiments - Statistics for positive, negative and neutral utterances.

    You can also analyze:

    • How user interactions and the AI are performing

    You can check:

    • No. of logins
    • New Users
    • AI Accuracy
    • Users(till date)
    • Registered users
    • Unique users

    alt_text

    • How are the conversations performing?

    You can check:

    • Messages
    • Sessions
    • Transaction Amount
    • Transaction count
    • Service Requests
    • Enquiry
    • Origination
    • Avg. Session Time
    • Channels
    • Conversations categorised

    alt_text

    • How are the users interacting?

    You can check:

    • Feedback
    • Sentiment
    • LiveChat Redirections

    alt_text

    Conversational Analytics:

    Conversational Analytics shows the overall statistics of the AI Engine and what is happening on the bot in a graphical manner, including all answered & unanswered utterances across Ontology, FAQs, Smalltalk, Clicks, Intents, etc. It contains the unique counts of messages by utterance category (e.g. FAQs, Smalltalk, Intents); you can expand the total messages to see each category of utterances and its unique count.

    conversational_analytics

    From the graph, you can go to the Self-Learning page to check the utterances in each category. E.g., to see the utterances in the unanswered category, right-click on Unanswered (it will show all the attributes), click on any product, select 'Redirect' from the popup, and you will be redirected to the Self-Learning page, where you can check the unanswered utterances.

    In the conversation analytics, you can filter based on the date, channel, response type, and language. It will also show the top total messages and Insights by Ontology attributes.

    conversational_analytics

    Managing Reports

    Reports play a vital role in improving any service: based on reports you can analyse what improvements are required and how to make them in order to attract more users. You can analyze reports based on Users, Transactions, Service Requests, Origination & AI, so that you can improve the functionality and accuracy of the bot.

    On each module we provide the functionality to export the data either in CSV/Excel depending on the module. Export also works with filters.

    Users

    We can filter the report based on

    Date (Daily, weekly, Monthly, Yearly) Domains (FAQ, SmallTalk,etc.) Channels

    Users are categorized into Anonymous Users (those who haven't logged in) and Registered Users (those who have logged in).

    Using this report, we can do detailed analysis on Accuracy obtaining for a particular user and No of Logged-in Users, NewUsers, Total Sessions.

    alt_text

    • Anonymous users

    Users who have interacted with the bot without any login or authentication, e.g. Small Talk and FAQs.

    • Goto your workspace
    • Navigate to 'Analyse'
    • Click on 'Users'

    It will show all the anonymous users.

    alt_text

    • Registered users

    Users who have interacted with the bot with some authentication or login.

    • Goto your workspace
    • Navigate to 'Analyse'
    • Click on 'Users'
    • Select 'Registered' in the dropdown

    It will show all the registered users.

    alt_text

    Below are the filters available.

    • Search by customer ID
    • Registration Date - Date filter based on user registration date
    • Last Access Date - Date filter based on user last access date
    • Active Date - Filter all users active in a given date range
    • Login Date - Filter all users who logged in within a given date range
    • Email Id - Filter based on email id
    • Customer Id - Filter based on customer id
    • First Name - Filter users based on their first name
    • Last Name - Filter users based on their last name
    • Mobile Number - Filter based on mobile number
    • User Type - Filter users based on type, i.e. registered or anonymous
    • Channels - Filter users based on channels
    • Status - Filter users based on their status, e.g. active, blocked, inactive, unblocked
    • Not Logged In Date - Filter users who didn't log in within a given date range
    • Onboard Date - Date filter based on user onboarding date


    User Profile

    This module shows user-specific data such as user name, registered date, last access date, and customer id, along with channel details.
    It also provides Overview, Chat, Transaction, Service Request, Origination, Interaction, and Operations views.

    • Overview
      • Overall Messages - Messages for the current date range / total messages
      • Accuracy - Overall accuracy for all messages from the AI
      • Avg Session - Average session time spent by the user
      • Logins - Logins for the current date range / total logins
      • Transactions - Transactions done by the customer, i.e. total, success, failed
      • Transaction Amount - Transaction amount for the customer


    • Chat - Here we can see all the interactions that happened between the user and the bot.
      • Filters:
      • Date
      • Channel
      • Export chat
      • Reset - Resets applied filters


    • Transaction - Lists all the transactions performed by the user.
      • Filters:
      • Date
      • Channel
      • Category - Transaction category


    • Service Request - Lists all the service requests raised by the user.
      • Filters:
      • Date
      • Channel
      • Category - Service Request category


    • Origination

      • Filters:
      • Date
      • Channel
      • Category - Origination category
    • Operation - Here you can perform operations such as block/unblock user and block/unblock channels


    Transactions

    If you want to analyse how many transactions happened with your bot on a daily or monthly basis, follow these steps:

    • Go to your workspace
    • Navigate to 'Analyse'
    • Click on 'Transactions'


    Below are the filters available.

    • Transaction Category - Search transactions by transaction category
    • Transaction Id - Search by transaction id
    • Language - Search transactions by language
    • Customer segment - Search transactions by customer segment
    • Customer Id - Search transactions by customer id
    • Email Id - Search transactions by user email id
    • Payment RefNo - Search transactions by payment reference number
    • Date - Search by transaction date
    • Mobile Number - Search transactions by mobile number
    • Channels - Search transactions by channel
    • Status - Search transactions by status, i.e. success, pending, failed, etc.

    Service Requests

    If you want to analyse how many service requests, such as 'Issue Cheque Book', were raised on your bot on a daily, monthly, yearly, or custom date range basis, follow these steps:

    • Go to your workspace
    • Navigate to 'Analyse'
    • Click on 'Service Requests'

    It will show all the service requests that have been raised on your bot.


    Below are the filters available for service requests.

    • Service Request Category - Search service requests by category
    • Customer Id - Search service requests by customer id
    • Language - Search service requests by language
    • Customer segment - Search service requests by customer segment
    • Email Id - Search service requests by user email id
    • Reference No - Search service requests by reference number
    • Request Date - Search by service request date
    • Channels - Search service requests by channel
    • Status - Search service requests by status, i.e. success, pending, failed, etc.


    For faster downloads of transaction and service request reports, the reports.json file is excluded while processing the records.
    We can still use reports.json; to do that, change the following rule: go to Manage AI rules -> Reports and look for 'Enable Handlebars engine for reports'.

    Origination

    You can also analyse origination on a daily or monthly basis by following these steps:

    • Go to your workspace
    • Navigate to 'Analyse'
    • Click on 'Origination'

    It will show the onboarding process of the user.


    AI

    In the AI reports, you can analyse the performance and accuracy of the bot responses, as well as the bot's average response time, so that you can improve the bot's accuracy and performance to serve users better.

    • Go to your workspace
    • Navigate to 'Analyse'
    • Click on 'AI'

    It will show the AI accuracy & performance.


    Functionality Journey

    Steps To Add Functionality

    1. Add the functionality using Manage Functions
    2. Create the functionality (use case) in Manage Hooks using the workflow editor, or add a function using Manage Functions; make sure the service id is the same as the function code

    Once a functionality is added, either as a Transaction or a Service Request, any conversation a user starts with the bot for it can be analysed in Functionality Journey, available under ANALYSE > Functionality Journey.

    (Screenshot: Home Page)

    1. Choose the category you need to analyse. On the Home Page above, apart from the highlighted one, the rest are static (pre-defined and managed in a JSON file) for the RB-supported existing functionalities.
    2. If you add a new functionality, you can search for it based on function type (Transaction/Service Request).
    3. Click on a sub type and you will land on the Functionality Selection page. (Screenshot: Functionality Selection)
    4. Here you can choose the functionality you want to analyse and click on View to get the functionality journey. (Screenshot: Functionality Journey)
    5. The functionality journey provides the following data, which helps us identify how to improve the conversation further:
      1. How exactly the user went through the functionality
      2. How many times a particular step got executed
      3. Which users visited that step, with a timestamp, on click of the 'i' icon. (Screenshot: Customer Info Tab)

    Filters

    We have filters to narrow the report to our requirements, as shown in the Functionality Journey image:

    1. Channels
    2. Language
    3. Date

    Apart from these filters, we can also refresh to get updated data on the journey.

    If we add a functionality using the workflow editor, there is no need to worry about reports as long as we use proper links inside the editor, without using Java or Camel to decide what the next node should be. In the worst case, if we do go with Camel or Java, we need to maintain **reports.json** and **reportstep.json** in the admin config path (**$ADMIN_CONFIG_PATH/config/global**). These files come pre-populated with data to render reports for the RB-supported use cases that use a Camel route to decide what the next node should be.

    Reports

    A JSON file that contains the categorisation of reports, including their visibility.

    Sample content is as follows:

    {
      "transactions" : {
        "category" : [
          {
            "name" : "All",
            "key" : "ALL",
            "headers" : ["Date", "Customer Id", "Mobile Number", "Category", "Type", "From Account", "To Account", "Biller/Payee Name", "Amount", "Txn Id", "Payment Ref No", "Response Code", "Response Message", "Channel","User Journey","Status"],
            "exportHeaders" : ["Date", "Customer Id", "Mobile Number", "Category", "Type", "From Account", "To Account", "Biller/Payee Name", "Amount", "Txn Id", "Payment Ref No", "Response Code", "Response Message", "Channel","Status"],
            "values" : ["{{#if txnDateStr}}{{txnDateStr}}{{else}}NA{{/if}}","{{#if userReport}}{{custId}}{{else}}<a data-custId='{{custId}}' data-userIntId='{{userIntId}}' style='cursor:pointer'>{{#if custId}}{{custId}}{{else}}NA{{/if}}</a>{{/if}}","{{#if userMobileNumber}}{{userMobileNumber}}{{else}}NA{{/if}}","{{#ReportsConfig}}{{#category}}{{#is key trxnCateg}}{{#if trxnCateg}}{{name}}{{else}}NA{{/if}}{{/is}}{{/category}}{{/ReportsConfig}}","{{#ReportsConfig}}{{#category}}{{#types}}{{#is key trxnType}}{{#if trxnType}}{{name}}{{else}}NA{{/if}}{{/is}}{{/types}}{{/category}}{{/ReportsConfig}}","{{#if trxnSrcAccountNum}}{{trxnSrcAccountNum}}{{else}}NA{{/if}}","{{#if payeeAccountNum}}{{payeeAccountNum}}{{else if phoneNo}}{{phoneNo}}{{else}}NA{{/if}}","{{#if billerName}}{{billerName}}{{else if payeeName}}{{payeeName}}{{else}}NA{{/if}}","{{#if trxnAmtDis}}{{trxnCurr}}{{trxnAmtDis}}{{else}}NA{{/if}}","{{#if trxnId}}{{trxnId}}{{else}}NA{{/if}}","{{#if paymentRefNo}}{{paymentRefNo}}{{else}}NA{{/if}}","{{#if trxnCode}}{{trxnCode}}{{else}}NA{{/if}}","{{#if trxnFailureText}}{{{trxnFailureText}}}{{else}}NA{{/if}}","{{#Channels}}{{#is (lowercase channelCode) (lowercase channelType)}}{{#if channelType}}<img src='imgs/{{channelLogo}}.png' style='width: 15px !important; height: 15px !important;'>{{else}}NA{{/if}}{{/is}}{{/Channels}}","<a data-transactionIntId='{{instTrxnId}}' data-transactionCategory='{{trxnCateg}}' data-transactionType='{{trxnType}}' data-transactionName='{{trxnType}}' style='cursor:pointer'>View</a>","{{#if trxnStatus}}{{trxnStatus}}{{else}}NA{{/if}}"],
            "exportValues" : ["{{#if txnDateStr}}{{txnDateStr}}{{else}}NA{{/if}}","{{#if custId}}{{custId}}{{else}}NA{{/if}}","{{#if userMobileNumber}}{{userMobileNumber}}{{else}}NA{{/if}}","{{#ReportsConfig}}{{#category}}{{#is key trxnCateg}}{{#if trxnCateg}}{{name}}{{else}}NA{{/if}}{{/is}}{{/category}}{{/ReportsConfig}}","{{#ReportsConfig}}{{#category}}{{#types}}{{#is key trxnType}}{{#if trxnType}}{{name}}{{else}}NA{{/if}}{{/is}}{{/types}}{{/category}}{{/ReportsConfig}}","{{#if trxnSrcAccountNum}}{{trxnSrcAccountNum}}{{else}}NA{{/if}}","{{#if payeeAccountNum}}{{payeeAccountNum}}{{else if phoneNo}}{{phoneNo}}{{else}}NA{{/if}}","{{#if billerName}}{{billerName}}{{else if payeeName}}{{payeeName}}{{else}}NA{{/if}}","{{#if trxnAmtDis}}{{trxnCurr}}{{trxnAmtDis}}{{else}}NA{{/if}}","{{#if trxnId}}{{trxnId}}{{else}}NA{{/if}}","{{#if paymentRefNo}}{{paymentRefNo}}{{else}}NA{{/if}}","{{#if trxnCode}}{{trxnCode}}{{else}}NA{{/if}}","{{#if trxnFailureText}}{{{trxnFailureText}}}{{else}}NA{{/if}}","{{#Channels}}{{#is (lowercase channelCode) (lowercase channelType)}}{{#if channelType}}{{channelName}}{{else}}NA{{/if}}{{/is}}{{/Channels}}","{{#if trxnStatus}}{{trxnStatus}}{{else}}NA{{/if}}"],
            "display": false   
          },
          {
            "name" : "Transfer",
            "key" : "TRANSFER",
            "headers" : ["Date", "Customer Id", "Mobile Number", "Category", "Type", "From Account", "To Account", "Amount", "Payee Name", "Txn Id", "Response Code", "Response Message", "Channel","User Journey","Status"],
            "exportHeaders" : ["Date", "Customer Id", "Mobile Number", "Category", "Type", "From Account", "To Account", "Amount", "Payee Name", "Txn Id", "Response Code", "Response Message", "Channel","Status"],
            "values" : ["{{#if txnDateStr}}{{txnDateStr}}{{else}}NA{{/if}}","{{#if userReport}}{{custId}}{{else}}<a data-custId='{{custId}}' data-userIntId='{{userIntId}}' style='cursor:pointer'>{{#if custId}}{{custId}}{{else}}NA{{/if}}</a>{{/if}}", "{{#if userMobileNumber}}{{userMobileNumber}}{{else}}NA{{/if}}","{{#ReportsConfig}}{{#category}}{{#is key trxnCateg}}{{#if trxnCateg}}{{name}}{{else}}NA{{/if}}{{/is}}{{/category}}{{/ReportsConfig}}","{{#ReportsConfig}}{{#category}}{{#types}}{{#is key trxnType}}{{#if trxnType}}{{name}}{{else}}NA{{/if}}{{/is}}{{/types}}{{/category}}{{/ReportsConfig}}","{{#if trxnSrcAccountNum}}{{trxnSrcAccountNum}}{{else}}NA{{/if}}","{{#if payeeAccountNum}}{{payeeAccountNum}}{{else}}NA{{/if}}","{{#if trxnAmtDis}}{{trxnCurr}}{{trxnAmtDis}}{{else}}NA{{/if}}","{{#if payeeName}}{{payeeName}}{{else}}NA{{/if}}","{{#if trxnId}}{{trxnId}}{{else}}NA{{/if}}","{{#if trxnCode}}{{trxnCode}}{{else}}NA{{/if}}","{{#if trxnFailureText}}{{{trxnFailureText}}}{{else}}NA{{/if}}","{{#Channels}}{{#is (lowercase channelCode) (lowercase channelType)}}{{#if channelType}}<img src='imgs/{{channelLogo}}.png' style='width: 15px !important; height: 15px !important;'>{{else}}NA{{/if}}{{/is}}{{/Channels}}","<a data-transactionIntId='{{instTrxnId}}' data-transactionCategory='{{trxnCateg}}' data-transactionType='{{trxnType}}' data-transactionName='{{trxnType}}' style='cursor:pointer'>View</a>","{{#if trxnStatus}}{{trxnStatus}}{{else}}NA{{/if}}"],
            "exportValues" : ["{{#if txnDateStr}}{{txnDateStr}}{{else}}NA{{/if}}","{{#if custId}}{{custId}}{{else}}NA{{/if}}", "{{#if userMobileNumber}}{{userMobileNumber}}{{else}}NA{{/if}}","{{#ReportsConfig}}{{#category}}{{#is key trxnCateg}}{{#if trxnCateg}}{{name}}{{else}}NA{{/if}}{{/is}}{{/category}}{{/ReportsConfig}}","{{#ReportsConfig}}{{#category}}{{#types}}{{#is key trxnType}}{{#if trxnType}}{{name}}{{else}}NA{{/if}}{{/is}}{{/types}}{{/category}}{{/ReportsConfig}}","{{#if trxnSrcAccountNum}}{{trxnSrcAccountNum}}{{else}}NA{{/if}}","{{#if payeeAccountNum}}{{payeeAccountNum}}{{else}}NA{{/if}}","{{#if trxnAmtDis}}{{trxnCurr}}{{trxnAmtDis}}{{else}}NA{{/if}}","{{#if payeeName}}{{payeeName}}{{else}}NA{{/if}}","{{#if trxnId}}{{trxnId}}{{else}}NA{{/if}}","{{#if trxnCode}}{{trxnCode}}{{else}}NA{{/if}}","{{#if trxnFailureText}}{{{trxnFailureText}}}{{else}}NA{{/if}}","{{#Channels}}{{#is (lowercase channelCode) (lowercase channelType)}}{{#if channelType}}{{channelName}}{{else}}NA{{/if}}{{/is}}{{/Channels}}","{{#if trxnStatus}}{{trxnStatus}}{{else}}NA{{/if}}"],
            "intentname" : "txn-moneymovement",
            "subIntentName" : "transfer",
            "types" : [{
              "key" : "EXTERNAL_DOMESTIC",
              "name" : "Inter Bank"      }, {
              "key" : "INTERNAL_DOMESTIC",
              "name" : "Intra Bank"      }, {
              "key" : "SELF",
              "name" : "Self"      }, {
              "key" : "MMID",
              "name" : "MMID"      }, {
              "key" : "QUICK_PAY",
              "name" : "Quick Pay"      }, {
              "key" : "UPI",
              "name" : "UPI"      }
            ],
            "display": true   
          }
        ]
      },
      "serviceRequests": {
        "category": [
          {
            "name": "All",
            "key": "ALL",
            "headers": [
              "Date",
              "Customer Id",
              "Ref. No.",
              "Type",
              "Sub Type",
              "Product",
              "Account No",
              "Channel",
              "User Journey",
              "Status",
              "Comments"
            ],
            "exportHeaders": [
              "Date",
              "Customer Id",
              "Ref. No.",
              "Type",
              "Sub Type",
              "Product",
              "Account No",
              "Channel",
              "Status",
              "Comments"
            ],
            "values": [
              "{{#if serviceReqDateStr}}{{serviceReqDateStr}}{{else}}NA{{/if}}",
              "{{#if userReport}}{{userCustomerId}}{{else}}{{#if userCustomerId}}<a data-custId='{{userCustomerId}}' data-userIntId='{{userIntId}}'>{{userCustomerId}}</a>{{else}}NA{{/if}}{{/if}}",
              "{{#if srvcereqRefNum}}{{srvcereqRefNum}}{{else}}NA{{/if}}",
              "{{#ReportsConfig}}{{#category}}{{#is key srvcereqCateg}}{{#if srvcereqCateg}}{{name}}{{else}}NA{{/if}}{{/is}}{{/category}}{{/ReportsConfig}}",
              "{{#ReportsConfig}}{{#category}}{{#types}}{{#is key srvcereqType}}{{#if srvcereqType}}{{name}}{{else}}NA{{/if}}{{/is}}{{/types}}{{/category}}{{/ReportsConfig}}",
              "{{#if auditField3}}{{auditField3}}{{else}}NA{{/if}}",
              "{{#if accountId}}{{maskAccNum accountId}}{{else}}NA{{/if}}",
              "{{#Channels}}{{#is (lowercase channelCode) (lowercase channelType)}}{{#if channelType}}<img src='imgs/{{channelLogo}}.png' style='width: 15px !important; height: 15px !important;'>{{else}}NA{{/if}}{{/is}}{{/Channels}}",
              "<a data-transactionIntId='{{srvcereqId}}' data-transactionCategory='{{srvcereqCateg}}' data-transactionType='{{srvcereqType}}' data-transactionName='{{srvcereqType}}'>View</a>",
              "{{srvcereqStatus}}",
              "{{#if comments}}{{comments}}{{else}} {{/if}}"
            ],
            "exportValues": [
              "{{#if serviceReqDateStr}}{{serviceReqDateStr}}{{else}}NA{{/if}}",
              "{{#if userCustomerId}}{{userCustomerId}}{{else}}NA{{/if}}",
              "{{#if srvcereqRefNum}}{{srvcereqRefNum}}{{else}}NA{{/if}}",
              "{{#ReportsConfig}}{{#category}}{{#is key srvcereqCateg}}{{#if srvcereqCateg}}{{name}}{{else}}NA{{/if}}{{/is}}{{/category}}{{/ReportsConfig}}",
              "{{#ReportsConfig}}{{#category}}{{#types}}{{#is key srvcereqType}}{{#if srvcereqType}}{{name}}{{else}}NA{{/if}}{{/is}}{{/types}}{{/category}}{{/ReportsConfig}}",
              "{{#if auditField3}}{{auditField3}}{{else}}NA{{/if}}",
              "{{#if accountId}}{{maskAccNum accountId}}{{else}}NA{{/if}}",
              "{{#Channels}}{{#is (lowercase channelCode) (lowercase channelType)}}{{#if channelType}}{{channelName}}{{else}}NA{{/if}}{{/is}}{{/Channels}}",
              "{{srvcereqStatus}}",
              "{{#if comments}}{{comments}}{{else}} {{/if}}"
            ],
            "display": false
          },
          {
            "key": "CARD_SERVICE",
            "name": "Card Service",
            "headers": [
              "Date",
              "Customer Id",
              "Ref. No.",
              "Type",
              "Sub Type",
              "Product",
              "Account No",
              "Channel",
              "User Journey",
              "Status",
              "Comments"
            ],
            "exportHeaders": [
              "Date",
              "Customer Id",
              "Ref. No.",
              "Type",
              "Sub Type",
              "Product",
              "Account No",
              "Channel",
              "Status",
              "Comments"
            ],
            "values": [
              "{{#if serviceReqDateStr}}{{serviceReqDateStr}}{{else}}NA{{/if}}",
              "{{#if userReport}}{{userCustomerId}}{{else}}{{#if userCustomerId}}<a data-custId='{{userCustomerId}}' data-userIntId='{{userIntId}}'>{{userCustomerId}}</a>{{else}}NA{{/if}}{{/if}}",
              "{{#if srvcereqRefNum}}{{srvcereqRefNum}}{{else}}NA{{/if}}",
              "{{#ReportsConfig}}{{#category}}{{#is key srvcereqCateg}}{{#if srvcereqCateg}}{{name}}{{else}}NA{{/if}}{{/is}}{{/category}}{{/ReportsConfig}}",
              "{{#ReportsConfig}}{{#category}}{{#types}}{{#is key srvcereqType}}{{#if srvcereqType}}{{name}}{{else}}NA{{/if}}{{/is}}{{/types}}{{/category}}{{/ReportsConfig}}",
              "{{#if auditField3}}{{auditField3}}{{else}}NA{{/if}}",
              "{{#if accountId}}{{maskAccNum accountId}}{{else}}NA{{/if}}",
              "{{#Channels}}{{#is (lowercase channelCode) (lowercase channelType)}}{{#if channelType}}<img src='imgs/{{channelLogo}}.png' style='width: 15px !important; height: 15px !important;'>{{else}}NA{{/if}}{{/is}}{{/Channels}}",
              "<a data-transactionIntId='{{srvcereqId}}' data-transactionCategory='{{srvcereqCateg}}' data-transactionType='{{srvcereqType}}' data-transactionName='{{srvcereqType}}'>View</a>",
              "{{srvcereqStatus}}",
              "{{#if comments}}{{comments}}{{else}} {{/if}}"
            ],
            "exportValues": [
              "{{#if serviceReqDateStr}}{{serviceReqDateStr}}{{else}}NA{{/if}}",
              "{{#if userCustomerId}}{{userCustomerId}}{{else}}NA{{/if}}",
              "{{#if srvcereqRefNum}}{{srvcereqRefNum}}{{else}}NA{{/if}}",
              "{{#ReportsConfig}}{{#category}}{{#is key srvcereqCateg}}{{#if srvcereqCateg}}{{name}}{{else}}NA{{/if}}{{/is}}{{/category}}{{/ReportsConfig}}",
              "{{#ReportsConfig}}{{#category}}{{#types}}{{#is key srvcereqType}}{{#if srvcereqType}}{{name}}{{else}}NA{{/if}}{{/is}}{{/types}}{{/category}}{{/ReportsConfig}}",
              "{{#if auditField3}}{{auditField3}}{{else}}NA{{/if}}",
              "{{#if accountId}}{{maskAccNum accountId}}{{else}}NA{{/if}}",
              "{{#Channels}}{{#is (lowercase channelCode) (lowercase channelType)}}{{#if channelType}}{{channelName}}{{else}}NA{{/if}}{{/is}}{{/Channels}}",
              "{{srvcereqStatus}}",
              "{{#if comments}}{{comments}}{{else}} {{/if}}"
            ],
            "types": [
              {
                "name": "Block Card",
                "key": "BLOCK_CARD"
              },
              {
                "name": "Reset Pin",
                "key": "RESET_PIN"
              },
              {
                "name": "Card Activation",
                "key": "CARD_ACTIVATION"
              },
              {
                "name": "International Usage",
                "key": "INTERNATIONAL_USAGE"
              },
              {
                "name": "Replace Card",
                "key": "REPLACE_CARD"
              },
              {
                "name": "Convert EMI",
                "key": "CONVERT_EMI"
              },
              {
                "name": "Update Limit",
                "key": "UPDATE_LIMIT"
              }
            ],
            "display": true
          }
        ]
      }
    }
    
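    Each entry in the "values" and "exportValues" arrays above is a Handlebars template that renders the field's value when present and the literal "NA" otherwise. As a rough illustration of that fallback behaviour (not the actual Handlebars engine, and using made-up record fields), a minimal sketch:

    ```python
    # Minimal sketch of the NA-fallback pattern used throughout the
    # "values"/"exportValues" templates: render the field when it has a
    # value, otherwise render the literal "NA". Illustration only -- the
    # product uses a Handlebars engine, not this code.

    def render_cell(record: dict, field: str) -> str:
        """Mimic {{#if field}}{{field}}{{else}}NA{{/if}}."""
        value = record.get(field)
        return str(value) if value else "NA"

    def render_row(record: dict, fields: list) -> list:
        return [render_cell(record, f) for f in fields]

    # Hypothetical transaction record: empty/missing fields render as NA.
    txn = {"txnDateStr": "2023-01-05", "custId": "C1001", "trxnAmtDis": ""}
    row = render_row(txn, ["txnDateStr", "custId", "trxnAmtDis", "paymentRefNo"])
    # row == ["2023-01-05", "C1001", "NA", "NA"]
    ```
    
    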

    Report Step

    A JSON file that contains details of each step and how it links to the others:

    {
      "name": Name of the step, same as the step code provided, e.g. "Step_3"
      "shape": Shape of the step in the journey view; possible shapes are "circle" and "rectangle"
      "color": Colour of the step; recommended colours are BLACK("#78909C"), GREEN("#9CCC65"), RED("#EF9A9A")
      "text": Colour of the text, for readability; lowercase, e.g. "black"
      "detail": Step name visible on the journey; keep it as simple as possible, e.g. "Amount"
      "from": Steps from which this step may be called; blank for the start node, otherwise based on the workflow,
      e.g. "Step_1,Step_2" means Step_3 can be called from either Step_1 or Step_2
      "isView": Whether the step should be visible in the report; if it was added only for some enhancement, disable this flag; possible values are true and false
      "orderNo": Order number of the step in the journey; e.g. 3 means this step will be shown in 3rd place
    }
    

    A collection of such objects in an array forms a complete journey:

    {
      "CARD_SERVICE":{
        "CARD_ACTIVATION": [
          {
            "name": "START",
            "shape": "circle",
            "color": "#9CCC65",
            "text": "black",
            "detail": "Start",
            "from": "",
            "isView": true,
            "orderNo": 1  },
          {
            "name": "sr_account_list",
            "shape": "rectangle",
            "color": "#78909C",
            "text": "black",
            "detail": "Card Selection",
            "from": "START",
            "isView": true,
            "orderNo": 2  },
          {
            "name": "sr_card_activation_confirm",
            "shape": "rectangle",
            "color": "#78909C",
            "text": "black",
            "detail": "Activation Confirm",
            "from": "START,sr_account_list",
            "isView": true,
            "orderNo": 3  },
          {
            "name": "sr_otp",
            "shape": "rectangle",
            "color": "#78909C",
            "text": "black",
            "detail": "OTP Step",
            "from": "sr_card_activation_confirm",
            "isView": true,
            "orderNo": 4  },
          {
            "name": "OTP_03",
            "shape": "rectangle",
            "color": "#78909C",
            "text": "black",
            "detail": "Re-Enter OTP",
            "from": "sr_otp,OTP_03",
            "isView": true,
            "orderNo": 5  },
          {
            "name": "Cancellation",
            "shape": "circle",
            "color": "#EF9A9A",
            "text": "red",
            "detail": "Cancellation",
            "from": "sr_account_list,sr_card_activation_confirm,sr_otp,OTP_03",
            "isView": true,
            "orderNo": 6  },
          {
            "name": "OTP_02",
            "shape": "circle",
            "color": "#EF9A9A",
            "text": "red",
            "detail": "OTP Attempts Exceeds",
            "from": "OTP_03",
            "isView": true,
            "orderNo": 6  },
          {
            "name": "sr_card_activation_txn_status",
            "shape": "circle",
            "color": "#9CCC65",
            "text": "green",
            "detail": "END",
            "from": "sr_otp,OTP_03",
            "isView": true,
            "orderNo": 7  }
        ]
      }
    }
    
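    Because each step's "from" field lists its possible predecessors, the array effectively defines a directed graph. A small sketch (our own illustration, using an abbreviated subset of the sample above) shows how the edges can be derived and the "from" references sanity-checked:

    ```python
    import json

    # Treat a reportstep.json journey array as a directed graph: each
    # step's "from" field is a comma-separated list of predecessors.
    # Abbreviated sample data; the validation is our own addition,
    # not a product feature.
    steps = json.loads("""[
      {"name": "START", "from": "", "orderNo": 1},
      {"name": "sr_account_list", "from": "START", "orderNo": 2},
      {"name": "sr_card_activation_confirm", "from": "START,sr_account_list", "orderNo": 3},
      {"name": "sr_otp", "from": "sr_card_activation_confirm", "orderNo": 4}
    ]""")

    names = {s["name"] for s in steps}
    edges = []
    for s in steps:
        # filter(None, ...) drops the empty "from" of the start node
        for pred in filter(None, s["from"].split(",")):
            assert pred in names, f"unknown predecessor {pred!r} in {s['name']}"
            edges.append((pred, s["name"]))
    # edges now holds (predecessor, step) pairs for rendering the journey
    ```

    A check like this catches the common mistake of renaming a step without updating the "from" lists that reference it.
    
    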

    Let's discuss the above with an example: the Card Activation use case.

    Requirement: show the journey of Card Activation inside Functionality Journey, and also allow completed service requests to have their journey viewed from the Service Request tab of the Analyse page. (Screenshot: Functionality Journey Landing Page)

    (Screenshot: Category Landing Page)

    As shown above, on click of Functionality Journey we should show a category called Card Service, and this is achievable with reports.json. In reports.json, create an object with any key indicating the main category (here we named it serviceRequests). Inside it, create an object named category, which is an array of objects covering categories such as Card Service, Deposit Service, etc. Each category object that needs to appear on the Functionality Journey landing page contains the parameters shown below:

    "name" : "Card Service", :- Name to be show in FUnctionality Journey landing page
     "key" : "CARD_SERVICE", :-Id to hold object which we refer inside reportstep.json
     "headers" : ["Date", "Customer Id", "Mobile Number", "Category", "Type", "From Account", "To Account", "Biller/Payee Name", "Amount", "Txn Id", "Payment Ref No", "Response Code", "Response Message", "Channel","User Journey","Status"],
     "exportHeaders" : ["Date", "Customer Id", "Mobile Number", "Category", "Type", "From Account", "To Account", "Biller/Payee Name", "Amount", "Txn Id", "Payment Ref No", "Response Code", "Response Message", "Channel","Status"],
     "values" : ["{{#if txnDateStr}}{{txnDateStr}}{{else}}NA{{/if}}","{{#if userReport}}{{custId}}{{else}}<a data-custId='{{custId}}' data-userIntId='{{userIntId}}' style='cursor:pointer'>{{#if custId}}{{custId}}{{else}}NA{{/if}}</a>{{/if}}","{{#if userMobileNumber}}{{userMobileNumber}}{{else}}NA{{/if}}","{{#ReportsConfig}}{{#category}}{{#is key trxnCateg}}{{#if trxnCateg}}{{name}}{{else}}NA{{/if}}{{/is}}{{/category}}{{/ReportsConfig}}","{{#ReportsConfig}}{{#category}}{{#types}}{{#is key trxnType}}{{#if trxnType}}{{name}}{{else}}NA{{/if}}{{/is}}{{/types}}{{/category}}{{/ReportsConfig}}","{{#if trxnSrcAccountNum}}{{trxnSrcAccountNum}}{{else}}NA{{/if}}","{{#if payeeAccountNum}}{{payeeAccountNum}}{{else if phoneNo}}{{phoneNo}}{{else}}NA{{/if}}","{{#if billerName}}{{billerName}}{{else if payeeName}}{{payeeName}}{{else}}NA{{/if}}","{{#if trxnAmtDis}}{{trxnCurr}}{{trxnAmtDis}}{{else}}NA{{/if}}","{{#if trxnId}}{{trxnId}}{{else}}NA{{/if}}","{{#if paymentRefNo}}{{paymentRefNo}}{{else}}NA{{/if}}","{{#if trxnCode}}{{trxnCode}}{{else}}NA{{/if}}","{{#if trxnFailureText}}{{{trxnFailureText}}}{{else}}NA{{/if}}","{{#Channels}}{{#is (lowercase channelCode) (lowercase channelType)}}{{#if channelType}}<img src='imgs/{{channelLogo}}.png' style='width: 15px !important; height: 15px !important;'>{{else}}NA{{/if}}{{/is}}{{/Channels}}","<a data-transactionIntId='{{instTrxnId}}' data-transactionCategory='{{trxnCateg}}' data-transactionType='{{trxnType}}' data-transactionName='{{trxnType}}' style='cursor:pointer'>View</a>","{{#if trxnStatus}}{{trxnStatus}}{{else}}NA{{/if}}"],
     "exportValues" : ["{{#if txnDateStr}}{{txnDateStr}}{{else}}NA{{/if}}","{{#if custId}}{{custId}}{{else}}NA{{/if}}","{{#if userMobileNumber}}{{userMobileNumber}}{{else}}NA{{/if}}","{{#ReportsConfig}}{{#category}}{{#is key trxnCateg}}{{#if trxnCateg}}{{name}}{{else}}NA{{/if}}{{/is}}{{/category}}{{/ReportsConfig}}","{{#ReportsConfig}}{{#category}}{{#types}}{{#is key trxnType}}{{#if trxnType}}{{name}}{{else}}NA{{/if}}{{/is}}{{/types}}{{/category}}{{/ReportsConfig}}","{{#if trxnSrcAccountNum}}{{trxnSrcAccountNum}}{{else}}NA{{/if}}","{{#if payeeAccountNum}}{{payeeAccountNum}}{{else if phoneNo}}{{phoneNo}}{{else}}NA{{/if}}","{{#if billerName}}{{billerName}}{{else if payeeName}}{{payeeName}}{{else}}NA{{/if}}","{{#if trxnAmtDis}}{{trxnCurr}}{{trxnAmtDis}}{{else}}NA{{/if}}","{{#if trxnId}}{{trxnId}}{{else}}NA{{/if}}","{{#if paymentRefNo}}{{paymentRefNo}}{{else}}NA{{/if}}","{{#if trxnCode}}{{trxnCode}}{{else}}NA{{/if}}","{{#if trxnFailureText}}{{{trxnFailureText}}}{{else}}NA{{/if}}","{{#Channels}}{{#is (lowercase channelCode) (lowercase channelType)}}{{#if channelType}}{{channelName}}{{else}}NA{{/if}}{{/is}}{{/Channels}}","{{#if trxnStatus}}{{trxnStatus}}{{else}}NA{{/if}}"],
     "display": false,
     "types": [
          {
            "name": "Block Card",
            "key": "BLOCK_CARD"
          },
          {
            "name": "Card Activation",
            "key": "CARD_ACTIVATION"
          }
        ] -> This contains the sub-categories, i.e. the functionality names under this category,
        where name is shown on the functionality journey and key is used in reportstep.json
    

    With that, we are done with wiring up the admin clicks to view the journey. But to populate the journey, we need to frame JSON objects covering each and every step we come across in the use case's journey, say:

    • Card activation → account selection → confirmation → OTP → status
    • Card activation with account number → confirmation → OTP → status
    • Card activation with account number → confirmation → wrong OTP → OTP → status
    • Card activation with account number → confirmation → cancel

    These are the main scenarios; beyond them, the user may cancel the use case at any step, and may ask FAQs, small talk, or enquiries in between steps.

    [
          {
            "name": "START",
            "shape": "circle",
            "color": "#9CCC65",
            "text": "black",
            "detail": "Start",
            "from": "",
            "isView": true,
            "orderNo": 1  },
          {
            "name": "sr_account_list",
            "shape": "rectangle",
            "color": "#78909C",
            "text": "black",
            "detail": "Card Selection",
            "from": "START",
            "isView": true,
            "orderNo": 2  },
          {
            "name": "sr_card_activation_confirm",
            "shape": "rectangle",
            "color": "#78909C",
            "text": "black",
            "detail": "Activation Confirm",
            "from": "START,sr_account_list",
            "isView": true,
            "orderNo": 3  },
          {
            "name": "sr_otp",
            "shape": "rectangle",
            "color": "#78909C",
            "text": "black",
            "detail": "OTP Step",
            "from": "sr_card_activation_confirm",
            "isView": true,
            "orderNo": 4  },
          {
            "name": "OTP_03",
            "shape": "rectangle",
            "color": "#78909C",
            "text": "black",
            "detail": "Re-Enter OTP",
            "from": "sr_otp,OTP_03",
            "isView": true,
            "orderNo": 5  },
          {
            "name": "Cancellation",
            "shape": "circle",
            "color": "#EF9A9A",
            "text": "red",
            "detail": "Cancellation",
            "from": "sr_account_list,sr_card_activation_confirm,sr_otp,OTP_03",
            "isView": true,
            "orderNo": 6  },
          {
            "name": "OTP_02",
            "shape": "circle",
            "color": "#EF9A9A",
            "text": "red",
            "detail": "OTP Attempts Exceeds",
            "from": "OTP_03",
            "isView": true,
            "orderNo": 6  },
          {
            "name": "sr_card_activation_txn_status",
            "shape": "circle",
            "color": "#9CCC65",
            "text": "green",
            "detail": "END",
            "from": "sr_otp,OTP_03",
            "isView": true,
            "orderNo": 7  }
        ]
    

    These steps cover all of the scenarios above. With this, we have sufficient knowledge to frame any kind of report in the admin with customized workflows; the reports are shown in the admin as below.
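    To catch wiring mistakes before uploading, a journey definition like the one above can be sanity-checked with a short script. This helper is illustrative only (not part of the product): it assumes the field names shown above and verifies that every `from` reference points to a defined step.

    ```python
    import json

    def validate_journey(steps):
        """Check that every 'from' reference points to a defined step name."""
        names = {s["name"] for s in steps}
        errors = []
        for s in steps:
            # "from" is a comma-separated list of source steps; "" means none
            for ref in filter(None, s["from"].split(",")):
                if ref not in names:
                    errors.append(f"{s['name']}: unknown source step '{ref}'")
        return errors

    # Two-step excerpt of the journey JSON above
    journey = json.loads("""
    [
      {"name": "START", "shape": "circle", "color": "#9CCC65", "text": "black",
       "detail": "Start", "from": "", "isView": true, "orderNo": 1},
      {"name": "sr_otp", "shape": "rectangle", "color": "#78909C", "text": "black",
       "detail": "OTP Step", "from": "START", "isView": true, "orderNo": 2}
    ]
    """)

    print(validate_journey(journey))  # [] means every edge is consistent
    ```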

    alt_text Functionality Journey view

    alt_text Functionality completion view


    Manage Deployments

    Overview

    Deployment helps manage the Morfeus middleware and the AI data training lifecycle. Currently, two kinds of deployment workloads are supported.

    1. Local deployments (On-Prem)
    2. Cloud Deployments

    This feature also keeps track of AI data trained and deployed previously. These trained data models can be restored at a later time. Every AI data training is identified by a Data Id (GUID: a 32-byte alphanumeric string).
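    For illustration, a 32-character alphanumeric identifier of the kind described can be produced from a UUID4 rendered as hex; how the platform actually generates Data Ids is not specified here.

    ```python
    import uuid

    # Illustrative only: a UUID4 rendered as hex is a 32-character
    # alphanumeric string, matching the Data Id shape described above.
    data_id = uuid.uuid4().hex

    print(data_id)
    ```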

    Local deployments

    For Morfeus deployment, please follow the below link. ~Coming Soon~

    On-prem trainer and worker setups should follow the local documentation setup. AI trainers and AI worker processes should be configured as below.

    Components

    Table 12 AI trainers and AI worker processes' configurations

    | Component | Description |
    |---|---|
    | Triniti Trainer | AI engine performing base classification, natural language understanding, named entity recognition and spell checking; generates AI models specific to the AI data domains and AI data provided. |
    | Triniti Worker | AI process loaded with the AI models; answers user utterances from the chatbot. |
    | Spotter Trainer | Deep-learning AI engine generating supervised and unsupervised FAQ models. |
    | Spotter Worker | AI process that loads the deep-learnt model for serving user queries related to FAQs. |
    | Manager | Process that helps to manage, interface with and maintain all the trainers and workers with the middleware through APIs. |
    OnPrem deployment model Trinitiv2_Onprem Image
    Figure 149 OnPrem deployment Trinitiv2 model

    Rules for local deployments

    Table 13 Rules for local deployments

    | Configuration Type | Rule Name | Rule Location | Description | Example |
    |---|---|---|---|---|
    | Deployment Type | Deployment Type | | Decides local/cloud deployment; local should be chosen | |
    | Manager Process | Triniti Manager URL | Manage AI → Manage Rules → Triniti | Manager URL (contains config information related to the Triniti/Spotter masters) | http://10.2.3.12:8090/v/1 |
    | Triniti Master | | Configured in Manager | | |
    | Spotter Master | | Configured in Manager | | |
    | Triniti Worker processes 1..N | Unified API v2 | Manage AI → Manage Rules → Triniti Unified Api V2 | Process used only when loading after successful data training. Comma-separated URLs; format: scheme1://host1:port1,schemeN://hostN:portN | http://10.2.1.24:8003,http://10.2.1.24:8004 |
    | Triniti Worker Nginx URL | Endpoint URL | Manage AI → Manage Rules → Triniti Unified Api V2 | Serves requests from the bot. With a single worker this can be the worker process itself; with multiple workers, a web server is needed to load-balance requests | http://10.2.1.24:8008 |
    | Spotter Multi-Tenant Worker processes 1..N | Spotter | Manage AI → Manage Rules → Spotter | Worker process used only when loading after successful data training. Comma-separated URLs; format: scheme1://host1:port1,schemeN://hostN:portN | |
    | Spotter Worker Nginx URL | Spotter URL | Manage AI → Manage Rules → Spotter | | http://10.2.1.24:8006 |
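    As a quick sanity check for the comma-separated worker URL rule values above, a sketch like the following can be used. The function is hypothetical (not part of the product); only the `scheme://host:port` format comes from the table.

    ```python
    from urllib.parse import urlsplit

    def parse_worker_urls(rule_value):
        """Split a comma-separated worker URL rule value and validate each entry."""
        workers = []
        for raw in rule_value.split(","):
            parts = urlsplit(raw.strip())
            # Each entry must look like scheme://host:port
            if parts.scheme not in ("http", "https") or not parts.hostname or not parts.port:
                raise ValueError(f"bad worker URL: {raw!r}")
            workers.append((parts.scheme, parts.hostname, parts.port))
        return workers

    print(parse_worker_urls("http://10.2.1.24:8003,http://10.2.1.24:8004"))
    ```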

    Workspace Type : Conversational Bot - Both (Triniti + Spotter)

    The Triniti worker forwards requests to the Spotter worker in the case of FAQs/small talk.

    Workspace Type : FAQ Bot

    FAQ-only bots use Spotter workers for serving the response.

    Worker loading process

    • Manager should be configured with the Masters configurations (Triniti, Spotter master server configuration)
    • Train AI data from the admin; loading is supported in two ways (automatic and manual)
      • Automatic loading: Successful train will automatically trigger a load
      • Manual loading: Select Data Id in Deploy → Deployments → Select Data Id → Deploy Button
    OnPrem deployment model loading Image
    Figure 150 OnPrem deployment model loading

    Worker load processes states

    Worker load status Image
    Figure 151 Worker load status
    Table 14 Worker load processes states

    | State | Description | Action |
    |---|---|---|
    | SUCCESS | Successfully loaded the model with the specified Data Id | |
    | FAILURE | Load failed for the specific worker | Check the worker logs for the failure reason and retry loading |
    | LOADING | Loading is in progress | |
    | PENDING | Another process's load is ongoing; the next load starts automatically once it completes | |

    Troubleshooting steps

    • If loading does not complete within 10-15 minutes, there is likely an issue:
      • The admin did not receive a callback from the manager about the loading
      • A possible network error (Manager ↔ Process or Admin ↔ Manager)
      • Clear the Redis key (key prefix: ONPREM:) and retry loading
    • Manual intervention is needed when there are connectivity issues between the admin and the worker processes
    • Manual intervention is needed when a process is stuck in the LOADING state for more than 10 minutes (especially Triniti)
    • In the above cases, connect to the Redis temporary store and delete the key (prefix ONPREM:) to reload the process

    Cloud Deployments

    Cloud deployments are facilitated by a mandatory component, the Manager. Morfeus, Triniti trainers & workers, and Spotter trainers & workers can be launched using cloud deployments.

    Set Manage AI Rules

    To enable cloud deployments, set the rules below before using and launching the deployment workloads.

    Navigate to Workspace → Manage AI

    Table 15 Manage AI rule configurations

    | Category | Rule | Value |
    |---|---|---|
    | General | AI Engine | UnifiedApiV2 |
    | General | Primary Classifier | UnifiedApiV2 |
    | General | Entity Extractor | UnifiedApiV2 |
    | General | Smalltalk/FAQ Handler | UnifiedApiV2 |
    | General | Context Change Detector | UnifiedApiV2 |
    | Triniti | Deployment Type | Cloud |
    | UnifiedApiV2 | Triniti Manager URL | |
    | UnifiedApiV2 | Endpoint URL | https://router.triniti.ai |
    | UnifiedApiV2 | Unified API Version | 2 |
    | UnifiedApiV2 | Context Path | /v2 |

    Set Workspace Rules

    1. Select Workspace, navigate to Configure Workspace → Manage Rules
    2. Navigate to Security tab and set Access Key ID & Secret Access Key
    3. Navigate to Configure Workspace and set preferred Language and Country
    4. Click Save

    Create Instances

    Start Morfeus Instance

    1. Select Workspace, navigate to Deploy → Infrastructure → click the Morfeus tab
    2. Click Add Morfeus Instance
    3. Select the Base OS, Server Type and JAR Artifactory folder path
    4. The content of the JAR Artifactory path is loaded into the classpath of the application container (Tomcat/JBoss)
    5. Upload the morfeuswebsdk and web view WARs to the Artifactory URL
    6. All integration JARs are placed in the container's classpath, and WARs are uploaded to the deployment path (e.g. Tomcat: webapps / JBoss: deployment)
    7. All other files (e.g. .properties, .json, .txt) are placed in the /opt/deploy/properties folder in the container
    8. The ApiKey for the Morfeus container needs to be updated in js/index.js of Morfeuswebsdk.war
    9. For xAPIKey reference, check Customisable features in Web-sdk

    Start Triniti/CognitiveQnA/Both instance

    Users can create Conversational AI (Triniti) instances, CognitiveQnA (Spotter/FAQs) instances, or both.

    1. Select Workspace, navigate to Deploy → Infrastructure → click the Triniti tab
    2. Click Add Triniti Instance
    3. Select Base OS, Data Id and Language
    4. The Data Id is the reference to the training done through AI Ops
    Create Instances Image
    Figure 152 Create an instance

    Update Instances

    1. Select Workspace, navigate to Deploy → Infrastructure
    2. Select the desired server's Morfeus tab (or Triniti tab) to update the Time-To-Live
    3. The default Time-To-Live for any instance created is 120 minutes (2 hours)
    4. The maximum Time-To-Live is 24 hours (1440 minutes)
    5. Triniti trained data models can be updated by selecting a new Data Id while updating the instance
    6. Click Update

    Delete Instances

    1. Select Workspace, navigate to Deploy → Infrastructure
    2. Navigate to the instance (Morfeus or Triniti)
    3. Click Delete Instance

    After the Time-To-Live (in minutes) elapses, the instance is shut down automatically. Users can delete the instance before this period ends.

    Restart Instances

    1. Select Workspace, navigate to Deploy → Infrastructure
    2. Navigate to the instance (Morfeus or Triniti)
    3. Click Restart Instance
    Manage Instances Image
    Figure 153 Manage instances

    Use case: refreshing the JAR Artifactory path for Morfeus instances. This operation restarts the server container.

    Guidelines

    1. A workspace can have only a single type of instance (either Triniti / CognitiveQnA / Both)
    2. The default Time-To-Live for any instance created is 120 minutes (2 hours)
    3. The instance gets deactivated after 2 hours of non-usage
    4. To start up the instance, go to the Deploy → Infrastructure section and start Morfeus / Triniti
    5. The Kibana URL provided in the instance information is used for viewing log information
    6. The log time period can be changed in the Kibana UI for the desired log time range
    7. Contact the admin in case of any issues when creating instances

    Train

    For AI data training follow these steps:

    1. Go to your workspace
    2. Navigate to 'Deploy'
    3. Click on 'Deployment'
    4. Select the data Id for which you want to train
    5. Click on 'Train'
    6. Click on 'Yes' on the popup (Do you want to train your AI Model?)

    Stop Train

    If you have started training and want to interrupt it to modify the AI data and start a fresh run, follow the steps below:

    1. Go to your workspace
    2. Navigate to 'Deploy'
    3. Click on 'Deployment'
    4. Select the data Id for which you want to stop the train
    5. Click on Stop train

    Import Zip

    You can also import the deployed data by following these steps:

    1. Go to your workspace
    2. Navigate to 'Deploy'
    3. Click on 'Deployment'
    4. Click on 'Import'
    5. Select a ZIP file from your system (Which contains FAQs, smalltalks, spellcheckers, dialogs, etc.)
    Import generated data
    Figure 154 Import generated data

    Export Zip

    If you want to reuse the generated or deployed data, you can export it and use it as needed. You can export either deployed data or generated data.

    Exporting Generated Data

    1. Go to your workspace
    2. Navigate to 'Deploy'
    3. Click on 'Deployment'
    4. Select the data Id for which you want to export the data
    5. Click on 'Export ZIP'
    6. Select 'Generated'
    7. Select 'Version'
    8. Click on 'Export'

    It will download a ZIP file containing dialogs, NER, FAQs, parseQuery, primaryClassifier, smalltalk, spellcheck, etc.

    Modular Train Button

    A Train button is available in five modules:

    1. Smalltalk - Manage AI → Setup SmallTalk
    2. FAQs - Manage AI → Setup FAQs
    3. Spellcheck - Manage AI → Setup Spellchecker
    4. Primary classifier - Manage Products → Functions → select any one function → navigate to DATA
    5. Entities - Manage AI → Setup Entities

    Notes

    All generation and training steps are kept in sync once training is triggered from any of the above modules. The current status is updated in each module respectively. While a run is in progress, its status is shown everywhere and no new training can be triggered until the ongoing one finishes.

    Generic Validations

    The following error scenarios are validated for all workspace types:

    1. An error message is shown if a wrong manager URL is configured.
    2. An error is shown when Train is clicked but the task API was not called after workspace creation.
    3. An error is shown when Train is clicked without generating the data.

    Data Generation Validation (FAQ workspace type)

    The following error scenarios are validated for the FAQ-only workspace type:

    If spellchecker data is not present in the workspace, generation fails with an error message.

    1. If fewer than 15 FAQs are present, generation fails with the respective error message.
    2. If any small talk is added, at least 5 entries must be present; otherwise there must be none.
    3. If translation of FAQs or small talk fails while data generation is in progress, generation stops.
    4. If the native-language rule is enabled and there is no data for a non-English language, a warning is displayed suggesting to disable it, but generation proceeds.

    Data Generation Validation [RB+FAQ workspace type]

    1. If spellchecker data is missing, generation fails.
    2. If a dictionary-type entity is missing, generation fails with an error message.
    3. If no intents are present, generation fails with the respective message; a minimum of 2 intents is required.
    4. If no FAQs are present, a "minimum 15 FAQs required" message is shown and generation fails.
    5. If any small talk is added, at least 5 entries must be present; otherwise there must be none.
    6. If translation of FAQs or small talk fails while data generation is in progress, generation stops.
    7. If the native-language rule is enabled and there is no data for a non-English language, a warning is displayed suggesting to disable it, but generation proceeds.
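    The checks above can be codified in a short sketch. The thresholds (2 intents, 15 FAQs, 5 small talks) come from the lists above; the function itself is a hypothetical illustration, not product code.

    ```python
    def validate_rbfaq(spellchecker_entries, dictionary_entities, intents, faqs, smalltalks):
        """Collect the RB+FAQ data-generation validation errors described above."""
        errors = []
        if not spellchecker_entries:
            errors.append("spellchecker data is missing")
        if not dictionary_entities:
            errors.append("dictionary-type entity is missing")
        if len(intents) < 2:
            errors.append("minimum 2 intents are required")
        if len(faqs) < 15:
            errors.append("minimum 15 FAQs are required")
        # Small talk is all-or-nothing: either none, or at least 5 entries
        if 0 < len(smalltalks) < 5:
            errors.append("small talk must have at least 5 entries, or none at all")
        return errors

    print(validate_rbfaq([], [], ["pay_bill"], [], []))
    ```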

    Quick Train - generic validation

    When the Elasticsearch URL is not configured for the workspace, clicking Quick Train shows a validation message asking to update the URL.

    FAQ only workspace

    On clicking Train, data is loaded to Elasticsearch (ES) only.

    RB FAQ workspace

    On clicking Train, data is loaded to ES and only Sniper is trained.

    Recent Module Update on Modular Training

    1. This change is available for the RB+FAQ workspace type only, since modular training exists only for that type.
    2. Five modules are checked for recent updates: smalltalk, FAQs, NER, primary classifier and spellchecker.
    3. On clicking Train, if any of the above modules has changed, a screen is shown so the user can select which modules to train.
    Import generated data
    Figure 155 Recent Module Screen

    If the user clicks Train All Modules, then (in the example above) both FAQs and the primary classifier are trained; if Train only SmallTalk is selected, only small talk is trained. Once the user proceeds, full data generation starts, and training follows. Respective status updates are shown for the generation and training steps.

    The new screen changes are shown below.

    Import generated data
    Figure 156 New Faq Recent Module Screen

    Note :-

    When training is triggered from any one of the five modules, the training and generation status is shown on the remaining four modules as well.

    This change is present in the five modules and on the AI Ops Train button.

    Sniper Base Model Selection

    Definition

    Base models are single models trained on curated model-training datasets, guided by the large set of existing customer-trained FAQs and variants. They are incorporated to raise the accuracy customers can expect and to reduce false-positive scenarios. All the models are kept generic in nature. Instead of having a single model per language, Sniper now works with multiple models per language. This allows different base model sets to be used for different situations; for example, Tata Capital can have a different model from Axis Bank.

    Configurations

    Configure the rules as follows:

    Go to Manage AI -> Manage Rules -> Cognitive QnA tab -> select any of the listed models inside the rule "Override Models for Sniper Config".

    Import generated data
    Figure 157 Sniper Base Models

    On data generation, a config.yml file is generated inside the Cognitive QnA module. It contains the Elasticsearch details, including the username and password if configured in the workspace rules. With the Elasticsearch rules enabled, Sniper can connect directly to ES.

    Import generated data
    Figure 158 Config File Generation

    TRINITI Version - 4.5

    Table 16 Rules for local deployments (Triniti 4.5)

    | Configuration Type | Rule Name | Rule Location | Description | Example |
    |---|---|---|---|---|
    | Deployment Type | Deployment Type | | Decides local/cloud deployment; local should be chosen | |
    | Manager Process | Triniti Manager URL | Manage AI → Manage Rules → Triniti | Manager URL (contains config information related to Triniti) | https://triniti45.active.ai/manager/v/1 |
    | Classification Process | Triniti endpoint URL | Manage AI → Manage Rules → Triniti | Triniti URL (contains config information related to Triniti) | https://triniti45.active.ai |
    | Classification Process | Triniti context path | Manage AI → Manage Rules → Triniti | Triniti context path (contains config information related to Triniti) | /v45/triniti |
    Triniti

    Configured in Manager

    Configuration

    With Triniti 4.5, if you wish to make two separate calls for Triniti and Sniper, configure the respective URL and context path for Sniper and set the rule values KBS_CLIENT = spotter and TRAIN_VERSION = 4.5.


    Manage AI Operations

    The features below aid the day-to-day administrative and maintenance operations of the chatbots, based on workspaces.

    AI Operations

    Import

    Imports the provided Zip file and loads the content to Manage AI modules.

    1. Select your workspace and Click on Deploy -> Click AI Ops(tab)
    2. Click Import (*Select the Zip file to be imported*)

    Export

    Exports the content of the Manage AI modules to a Zip file.

    1. Select your workspace and Click on Deploy -> Click AI Ops(tab)
    2. Click Export

    Generate

    This feature creates the AI data content that is given as input to training, which generates the respective models that serve user queries/utterances through the bots.

    Generate creates the below modules as per the AI trainer format.

    • Primary Classifier
    • CognitiveQnA (FAQs)
    • NLP
    • Pre Classification
    • Rule Validators
    • Small Talk
    • Spell Checker
    • Dialog (Conversational Processor)

    Enabling and training native language (Currently supported for Arabic FAQ only)

    Steps to enable and train

    1. Enable/disable the native-language support rule by clicking Manage AI -> Manage AI Rules.
    2. Enable the rule value for Sniper version 4.01 and above.
    3. If the rule value is enabled, the data generation files will contain the respective language data, and the same will be sent for training.
    4. The generated data files are suffixed with the respective LANG code.

    Generate

    The generated content will be shown in the Configure Workspace -> Deploy -> Triniti Tab.

    Train

    This feature allows us to train single or multiple modules based on user selection and triggers training. The result of training is the models generated out of training input from the "Generate" module.

    1. Select your workspace and Click on Deploy -> Click AI Ops(tab)
    2. Click Train
    3. Select Generated Version
    4. Select the type of train: Full/Modular/Custom
    5. If Modular/Custom selected, then select the appropriate modules to train
    6. Select AI Version to train
    7. Click Train

    Train

    Data Sync

    AI data can be synced in the admin by two methods: Git or Zip.

    Git Sync

    1. Select your workspace and click Manage AI -> Manage Rules -> Git (tab)
    2. Configure the rules below for Git sync
    3. Navigate to Manage AI -> click the Data Sync button

    | Rule | Description |
    |---|---|
    | AI Data Sync source (Git or Zip) | Git |
    | URL | URL of the Git repository |
    | Workspace | Sub-folder path for the AI data content |
    | Branch | Git branch |
    | Username | Git auth username |
    | Password | Git auth password |

    Zip Sync

    1. Select your workspace and click Manage AI -> Manage Rules -> Git (tab)
    2. Configure the rules below for Zip sync
    3. Navigate to Manage AI -> click the Data Sync button

    | Rule | Description |
    |---|---|
    | AI Data Sync source (Git or Zip) | Zip |
    | Zip URL | Provide the Zip URL |
    | Zip Username | Zip auth username |
    | Zip Password | Zip auth password |

    Data Sync

    Clear Cache

    Clears cached AI worker calls. This feature is useful when all the Triniti AI engine call caches need to be cleared.

    1. Select your workspace and Click on Deploy -> Click AI Ops(tab)
    2. Click Clear Cache

    Reload Cache

    Loads the AI data for local classifier operations in the cache.

    1. Select your workspace and Click on Deploy -> Click AI Ops(tab)
    2. Click Reload Cache

    Quick Train (CognitiveQnA)

    Loads the CognitiveQnA details into Elasticsearch to aid the related-FAQs discovery feature. This information is also used for suggesting questions in the bot.

    1. Select your workspace and Click on Deploy -> Click AI Ops(tab)
    2. Click Quick Train (CognitiveQnA)

    In-Memory Classifier

    Loads the AI Data details to Elasticsearch for aiding classification.

    1. Select your workspace and Click on Deploy -> Click AI Ops(tab)
    2. Click In-Memory Classifier

    Managing Bot Operations

    Overview

    This feature allows you to perform some basic operations that are mostly needed to configure your bot. You can perform these operations from this section instead of going to each module's own section. The following operations are supported.

    alt_text

    Messages

    You can configure messages to show on the bot, for example when your service does not support a feature, or for error cases.

    E.g., if a user tries to transfer more than their balance, you can show a message like "You have insufficient balance, please enter an amount less than your balance."

    Importing Messages

    You can import a list of messages by following these steps:

    1. Go to your workspace
    2. Navigate to 'Deploy'
    3. Click on 'AI Configuration'
    4. Click on 'Bot Ops'
    5. Click on Import on the 'Messages' card and select a JSON file or CSV file from your system

    Note:

    • If you are uploading a CSV file, it should contain Message Code, Message Category, Message Value, Message Description, Customer Segment, Code, Language, etc. columns.
    • If you are uploading a JSON file, it should have all the configured messages in JSON format.
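    A minimal sketch of a CSV in the shape described above, using Python's standard csv module. The column list comes from the note; the column order and the row values are made up for illustration and may differ in your installation.

    ```python
    import csv
    import io

    # Columns named in the note above; order is an assumption
    COLUMNS = ["Message Code", "Message Category", "Message Value",
               "Message Description", "Customer Segment", "Code", "Language"]

    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    # Hypothetical sample row
    writer.writerow({
        "Message Code": "ERR_INSUFFICIENT_BALANCE",
        "Message Category": "ERROR",
        "Message Value": "You have insufficient balance, please enter an amount less than your balance",
        "Message Description": "Shown when a transfer exceeds the balance",
        "Customer Segment": "RETAIL",
        "Code": "200",
        "Language": "en",
    })

    print(buf.getvalue())
    ```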

    Exporting Messages

    You can also export the set of messages by following these steps:

    1. Go to your workspace
    2. Navigate to 'Deploy'
    3. Click on 'AI Configuration'
    4. Click on 'Bot Ops'
    5. Click on Export on 'Messages' card

    It will download a JSON file containing all the configured messages.

    Templates

    Templates are a combination of components such as images, text, buttons, etc. If you have configured templates and want to use the same templates in another environment, or want to keep that configuration, you can simply export them. To use the same configuration in another environment/workspace, import the same JSON file rather than configuring it again.

    Importing Templates

    You can also import a list of templates from your system by following these steps:

    1. Navigate to 'Deploy'
    2. Click on 'AI Configuration'
    3. Click on 'Bot Ops'
    4. Click on Import on 'Templates' card
    5. Select a JSON file from your system containing template configuration in JSON format.

    Exporting Templates

    You can also export the templates by following these steps:

    1. Navigate to 'Deploy'
    2. Click on 'AI Configuration'
    3. Click on 'Bot Ops'
    4. Click on Export on 'Templates' card

    It will download a JSON file containing all the configured templates.

    Hooks

    The end goal of determining a message's intent is to perform some action based on it. For that purpose, a hook must be configured for each intent/feature. A hook is the final action the bot performs based on the intent. A hook can be of transaction or inquiry type. A transaction-type hook is a multi-step fulfillment, like a fund transfer. An inquiry-type hook is any fulfillment that can be completed in a single step, like "what is my balance?".

    Importing Hooks

    You can also import the hooks from your system by following these steps:

    1. Go to your workspace
    2. Navigate to 'Deploy'
    3. Click on 'AI Configuration'
    4. Click on 'Bot Ops'
    5. Click on Import on 'Hooks' card
    6. Select a JSON file from your system which contains the configuration of hooks in JSON format.

    Exporting Hooks

    You can also export the hooks by following these steps:

    1. Go to your workspace
    2. Navigate to 'Deploy'
    3. Click on 'AI Configuration'
    4. Click on 'Bot Ops'
    5. Click on Export on the 'Hooks' card

    It will download a JSON file containing Hooks configuration in JSON format.

    Workflow

    Workflow helps to define conversation journeys. The intent and entity might be enough information to identify the correct response, or the workflow might ask the user for more input that is needed to respond correctly.

    E.g., if a user asks, "How to apply for a debit card?", the defined workflow can ask for a card selection such as RuPay Card, Master Card, Visa Card, etc.

    Importing workflow

    You can also import the workflows from your system by following these steps:

    1. Go to your workspace
    2. Navigate to 'Deploy'
    3. Click on 'AI Configuration'
    4. Click on 'Bot Ops'
    5. Click on Import on 'Workflow' card
    6. Select a JSON file from your system which contains configuration of workflows in JSON format.

    Exporting workflow

    You can also export the configured workflows by following these steps:

    1. Go to your workspace
    2. Navigate to 'Deploy'
    3. Click on 'AI Configuration'
    4. Click on 'Bot Ops'
    5. Click on Export on 'Workflow' card

    It will download a JSON file containing workflow configuration in JSON format.

    Selflearning Index

    Migrate

    You can migrate the Selflearning index by following these steps:

    1. Go to your workspace
    2. Navigate to 'Deploy'
    3. Click on 'AI Configuration'
    4. Click on 'Bot Ops'
    5. Click on Migrate on 'Selflearning Index' card

    Data Lake Indices

    Create

    You can create the Data Lake indices by following these steps:

    1. Go to your workspace
    2. Navigate to 'Deploy'
    3. Click on 'AI Configuration'
    4. Click on 'Bot Ops'
    5. Click on Create on 'Data Lake Indices' card