Setting Up Your LLM

To enable Sisense Generative AI features using Large Language Models (LLMs), you must first configure and deploy an LLM provider, such as OpenAI or Azure OpenAI Service. This process involves creating and deploying a resource, selecting a supported model and region, and setting up access credentials. Once the resource is deployed, you must configure Sisense with the correct provider settings, including the model name, API key, and endpoint URL. Additionally, if you are using OpenAI, you must ensure your API key has the necessary permissions. Sisense currently supports several versions of GPT models, and it is your responsibility to ensure the correct version is configured for optimal compatibility.
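To make the provider settings concrete, the sketch below shows how the model name, API key, and endpoint URL typically combine into a request configuration for each provider. This is an illustrative sketch only, not Sisense code: the resource name, deployment name, and key are placeholders, and the Azure `api-version` value is an assumption.

```python
# Minimal sketch of the settings an LLM chat request would use.
# All names below (resource, deployment, key) are placeholders.

def build_request_settings(provider: str, api_key: str,
                           model: str, endpoint: str = "") -> dict:
    """Return the URL and headers for a chat completions request."""
    if provider == "azure":
        # Azure OpenAI: the endpoint is your resource URL, the model is
        # addressed by its deployment name, and the key is sent in the
        # "api-key" header. The api-version shown is an assumption.
        url = (f"{endpoint}/openai/deployments/{model}/chat/completions"
               "?api-version=2024-02-01")
        headers = {"api-key": api_key}
    else:
        # OpenAI: a fixed endpoint; the model name is sent in the request
        # body, and the key goes in a Bearer Authorization header.
        url = "https://api.openai.com/v1/chat/completions"
        headers = {"Authorization": f"Bearer {api_key}"}
    return {"url": url, "headers": headers}

# Example with placeholder values:
settings = build_request_settings(
    "azure", api_key="PLACEHOLDER-KEY", model="gpt-4o",
    endpoint="https://my-resource.openai.azure.com")
```

The key difference to note when configuring Sisense is that the two providers authenticate differently and address the model differently: Azure OpenAI ties the model to a deployment in the endpoint URL, while OpenAI takes the model name in the request body.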

Validating and Saving Your Configuration

Once you have configured your LLM, click Test to validate your configuration. When the test succeeds, click Save.

Supported LLM Providers and Model Versions

Sisense currently supports OpenAI foundation models hosted either directly through OpenAI or through the Azure OpenAI Service.

From time to time, additional model versions may be added to the supported list after quality verification.

It is your responsibility to manage your model version.

| Model | Version | Azure OpenAI | OpenAI | Supported by Studio assistant |
| --- | --- | --- | --- | --- |
| GPT-4o-mini | gpt-4o-mini-0718 | | | Partial support. This version is not recommended for the assistant, as it may provide an inconsistent experience, and some functionality is not reliably performant when using mini. |
| GPT-4o | gpt-4o-0513 | | | |
| GPT-3.5 | gpt-35-turbo-0125 | | | X |
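Because managing the model version is your responsibility, a small configuration check against the versions listed above can catch drift when a provider retires or renames a version. This is an illustrative sketch, not part of Sisense; the version strings are taken from the table.

```python
# Version strings from the support table above.
SUPPORTED_VERSIONS = {
    "gpt-4o-mini-0718",
    "gpt-4o-0513",
    "gpt-35-turbo-0125",
}

def is_supported_version(version: str) -> bool:
    """Check a configured model version against the supported list."""
    return version in SUPPORTED_VERSIONS

# Example: a version outside the table should be flagged before saving.
assert is_supported_version("gpt-4o-0513")
assert not is_supported_version("gpt-4-turbo")
```

A check like this could run wherever your deployment configuration is managed, so an unsupported version is caught before it reaches the Sisense LLM settings.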