Conversational Analytics with Generative AI (Beta)

Note:

This feature is currently in Beta and is available via Compose SDK to our managed cloud customers only. As such, not all functionality has been finalized, and it is subject to change as fixes and improvements are made. The beta version is available for React only, requires Sisense version L2024.2 or later, and requires you to supply your own LLM provider API key for GPT.

Overview

At Sisense, we harness the capabilities of advanced Large Language Models (LLMs), like the GPT family, to transform business intelligence and analytics. Our Generative AI (GenAI) features enable users to engage with their data using natural language, making valuable insights accessible without the need for extensive technical knowledge.

These capabilities are available through the Compose SDK as visual components and hooks for creating conversational analytics experiences embedded in your application.

Key Capabilities

  • Analytics Assistant: With the Chatbot component, business users can easily uncover data insights by asking questions in a conversational interface.

  • Query Recommendations: Encourage exploration of the data landscape with AI-generated recommended queries via the Chatbot or the useGetQueryRecommendations() hook.

  • NLG insights: Enhance data literacy with natural language generated (NLG) insights using the GetNlgQueryResult component or the useGetNlgQueryResult() hook.

These features streamline data interaction, making sophisticated analysis approachable and intuitive for all users.

For complete Compose SDK GenAI documentation, see the Sisense developer website.
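As an illustration, the Chatbot component could be embedded in a React application roughly as follows. This is a minimal sketch, not a definitive implementation: the import path `@sisense/sdk-ui/ai`, the `AiContextProvider` wrapper, and the props shown are assumptions based on the Compose SDK beta and may differ in your SDK version, so check the Sisense developer website for the exact API.

```typescript
import { SisenseContextProvider } from '@sisense/sdk-ui';
// The GenAI components are assumed to live under the "ai" submodule (beta).
import { AiContextProvider, Chatbot } from '@sisense/sdk-ui/ai';

// Hypothetical host component: url and token are placeholders for your
// Sisense instance URL and API token.
export function AnalyticsAssistant() {
  return (
    <SisenseContextProvider
      url="https://your-instance.sisense.com"
      token="<your_api_token>"
    >
      <AiContextProvider>
        {/* Conversational interface for asking data questions in natural language */}
        <Chatbot width={500} height={700} />
      </AiContextProvider>
    </SisenseContextProvider>
  );
}
```

The same providers would wrap the `GetNlgQueryResult` component or the hooks listed above, since they share the GenAI context.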

Enabling GenAI

A Sisense Administrator can enable (or disable) GenAI as follows:

  1. Search for “AI (LLM)” in the search bar or open the App Configuration drop-down.

  2. Click AI (LLM).

  3. Enable AI (LLM) via the toggle.
    When this toggle is disabled, you will not be able to access the GenAI features.

Configuring your LLM

  1. After enabling AI (LLM) on your system, you must consent to the general AI terms and conditions.

    To proceed, you must configure your own LLM API key.

  2. Open the Provider drop-down list and select your preferred provider. Currently, you can choose between:

    • OpenAI

    • Azure OpenAI Services

    We currently support GPT-3.5 Turbo 16k (gpt-35-turbo-16k) versions 0613 (soon to be deprecated) and 0125 (supported in Sisense versions 2024.2.0.294 and later), as well as GPT-4o version 2024-05-13.

    Note:

    • GPT-3.5 versions are currently supported for both Azure OpenAI and OpenAI. GPT-4o is currently only supported on Azure OpenAI.

    • It is your responsibility to manage your GPT version.

    For additional information about these models, refer to your LLM provider's documentation.

  3. Once you have selected your provider, enter the configuration details, including your LLM API key, to complete the setup.

  • OpenAI:

| Name | Description | Sample |
| --- | --- | --- |
| provider | Provider where your LLM model is hosted. Currently, we support models hosted on `OpenAI` or `Azure OpenAI` services. | OpenAI |
| model | Model name corresponding to your OpenAI API key. This should be the exact model identifier used for API requests. | gpt-3.5-turbo-0125 |
| apiKey | API key for authenticating and authorizing your requests to the model's API. | <your_api_key> |
  • Azure OpenAI:

| Name | Description | Sample |
| --- | --- | --- |
| provider | Provider where your LLM model is hosted. Currently, we support models hosted on `OpenAI` or `Azure OpenAI` services. | Azure OpenAI |
| model | Model deployment name you chose during the deployment process on your LLM platform. | GPT_35_16k |
| baseUrl | Endpoint URL of your LLM instance. | https://MyLLM.openai.azure.com/ |
| apiKey | API key for authenticating and authorizing your requests to the model's API. | <your_api_key> |
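Put together, the fields above can be sketched as plain objects. This is an illustration of the admin-form fields only, using the sample values from the tables; it is not an actual SDK type or API payload, and the variable names are hypothetical.

```typescript
// Sketch of the OpenAI configuration fields (sample values, not real credentials).
const openAiConfig = {
  provider: 'OpenAI',
  model: 'gpt-3.5-turbo-0125', // exact model identifier used for API requests
  apiKey: '<your_api_key>',
};

// Sketch of the Azure OpenAI configuration fields. Note that "model" here is
// the deployment name you chose in Azure, not a generic model identifier.
const azureOpenAiConfig = {
  provider: 'Azure OpenAI',
  model: 'GPT_35_16k',
  baseUrl: 'https://MyLLM.openai.azure.com/', // endpoint URL of your LLM instance
  apiKey: '<your_api_key>',
};
```

The key practical difference between the two providers is the extra `baseUrl` field and the meaning of `model` (deployment name vs. model identifier).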