Generative AI - Empowering Your Analytics Experience
Sisense leverages Generative AI, powered by large language models (LLMs), to introduce intuitive, conversational, and insightful analytics. These capabilities transform how users interact with data by removing technical barriers and accelerating time to insights.
These capabilities represent a long-term strategic investment by Sisense to continuously improve and expand the AI experience. We are actively working on offering fully managed LLM options as part of our platform in the future as well as supporting additional LLM providers.
Note:
Generative AI is generally available for Managed Cloud customers. Customers are required to supply their own Large Language Model (LLM) via supported providers. Self-hosted environments currently have beta access; refer to the self-hosted documentation for details.
Key Features
AI Assistant
An AI assistant is available in the context of your dashboard, enabling users to explore data by asking their own questions in natural language and receiving relevant insights, visualizations, and follow-up recommendations.
Narrative
Narrative provides AI-generated textual summaries that describe widgets and highlight key insights from your data.
Note:
Narrative is a premium feature, requires separate licensing, and does not require external LLM configuration.
Getting Started
Before enabling Generative AI, ensure you have access to your LLM API key. For detailed setup instructions and the list of supported models, see Setting Up Your LLM.
Enabling GenAI
A Sisense Administrator can enable (or disable) GenAI as follows:
- Search for “Sisense Intelligence” in the search bar, or open the App Configuration drop-down.
- Click Sisense Intelligence.
- Enable Generative AI via the toggle.
When this toggle is disabled, you will not be able to access the GenAI features that require your own LLM (Narrative is controlled separately).
Connecting Your LLM
To enable Generative AI features, Sisense requires integration with a supported Large Language Model (LLM). This section outlines how to connect your LLM.
- Accept the Terms of Use - After enabling AI on your system, you must consent to the general AI terms and conditions. In order to proceed, you must configure your own LLM API key.
- Choose your LLM Provider - Open the Provider drop-down list and select your preferred provider.
- Configure your LLM Connection - Once you have selected your provider, enter the configuration details, including your LLM API key, to complete the setup.
- Azure OpenAI:

| Name | Description | Example |
| --- | --- | --- |
| Model Name | Model deployment name you chose during the deployment process on your LLM platform. | my-gpt-4o-mini |
| Base URL | Endpoint URL of your LLM instance. | https://MyLLM.openai.azure.com/ |
| API Key | API key for authenticating and authorizing your requests to the model's API. | <your_api_key> |
- OpenAI:

| Name | Description | Example |
| --- | --- | --- |
| Model Name | Model name corresponding to your OpenAI API key (see the OpenAI supported models list). This should be the exact model identifier used for API requests. Note: Ensure that the key permissions are set to All (not Restricted/ReadOnly). See Setting Up Your LLM for more information. | gpt-4o-mini-2024-07-18 |
| API Key | API key for authenticating and authorizing your requests to the model's API. | <your_api_key> |
- Test and Save - Click Test to validate your configuration. Once successful, click Save.
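The connection details from the provider tables above can be collected into a single settings object before submitting them. A minimal Python sketch, assuming illustrative field names (provider, modelName, baseUrl, apiKey) that may differ from the actual Sisense payload:

```python
def build_llm_settings(provider, model_name, api_key, base_url=None):
    """Collect LLM connection details into one object.

    Field names here are illustrative assumptions based on the
    configuration tables; verify them against your Sisense deployment.
    """
    settings = {
        "provider": provider,    # e.g. "AzureOpenAI" or "OpenAI"
        "modelName": model_name, # deployment name (Azure) or model id (OpenAI)
        "apiKey": api_key,
    }
    # Azure OpenAI additionally requires the endpoint Base URL
    if provider == "AzureOpenAI":
        if not base_url:
            raise ValueError("Azure OpenAI requires the endpoint Base URL")
        settings["baseUrl"] = base_url
    return settings

azure = build_llm_settings("AzureOpenAI", "my-gpt-4o-mini",
                           "<your_api_key>", "https://MyLLM.openai.azure.com/")
openai = build_llm_settings("OpenAI", "gpt-4o-mini-2024-07-18", "<your_api_key>")
```

Note the asymmetry between providers: only Azure OpenAI needs a Base URL, because OpenAI's endpoint is fixed while Azure endpoints are per-instance.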
API Reference
To set up your LLM connection via REST API:
- Under Admin, navigate to REST API and select version 2.0.
- Use POST /settings/ai/llm/ to add or update the AI settings configuration.
- Use POST /ai/llm/test to test the connection to your LLM deployment.
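The two calls above can be scripted. A hedged Python sketch using only the standard library; the host name, API token, v2.0 URL prefix, and payload field names are assumptions for illustration and should be checked against your Sisense REST API reference:

```python
import json
import urllib.request

def build_sisense_request(host, path, payload, token):
    """Build an authenticated POST request for the Sisense REST API v2.0.

    The "/api/v2" prefix and Bearer-token header are assumptions here;
    confirm both in your own environment's API reference.
    """
    return urllib.request.Request(
        url=f"{host}/api/v2{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

# 1. Save the LLM settings (payload fields mirror the configuration tables)
settings_req = build_sisense_request(
    "https://mysisense.example.com",  # hypothetical host
    "/settings/ai/llm/",
    {"provider": "OpenAI",
     "modelName": "gpt-4o-mini-2024-07-18",
     "apiKey": "<your_api_key>"},
    "<admin_api_token>",
)

# 2. Test the connection to the LLM deployment
test_req = build_sisense_request(
    "https://mysisense.example.com", "/ai/llm/test", {}, "<admin_api_token>")

# Send each request with urllib.request.urlopen(req) and inspect the response.
```

Building the requests separately from sending them makes the payloads easy to inspect or log before any call reaches your Sisense instance.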
Limitations
The Generative AI settings on this page apply across all tenants: the system administrator cannot control the AI assistant feature behavior per tenant, so every tenant inherits the same behavior.