Vector Database (VDB) and Embedding

Sisense uses a Vector Database (VDB) to power its similarity service and to support Retrieval-Augmented Generation (RAG). Information passes through an embedding process and is stored alongside its vector embedding representation. The embedding model is hosted internally by Sisense, so no external cloud service is required.
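
As a rough illustration of this layout, the sketch below stores each piece of text next to its embedding and answers similarity queries by cosine distance. The embedding function and in-memory store are hypothetical stand-ins; they do not represent Sisense's internally hosted model or its actual VDB.

```python
# A minimal sketch of storing text alongside its embedding and querying by
# cosine similarity. Both the embedding function and the in-memory store are
# illustrative stand-ins, not Sisense's internally hosted model or VDB.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy character-frequency embedding, normalized to unit length.
    A real deployment would call the internally hosted embedding model."""
    vec = np.zeros(128)
    for ch in text.lower():
        vec[ord(ch) % 128] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class VectorStore:
    """Tiny in-memory store: each record keeps the original text next to
    its vector embedding, mirroring the layout described above."""
    def __init__(self):
        self.records: list[tuple[str, np.ndarray]] = []

    def add(self, text: str) -> None:
        self.records.append((text, embed(text)))

    def search(self, query: str, k: int = 3) -> list[tuple[float, str]]:
        q = embed(query)
        scored = [(float(np.dot(q, emb)), text) for text, emb in self.records]
        return sorted(scored, reverse=True)[:k]

store = VectorStore()
store.add("total revenue by country last quarter")
store.add("average order value per customer segment")
print(store.search("revenue per country", k=1))  # most similar stored text
```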

Retrieval-Augmented Generation (RAG)

A RAG architecture is used to improve the translation of natural language queries (NLQ). Sisense has uploaded a set of sample question-query pairs that serve as few-shot examples. This process does not use customer data; the examples are a pre-curated synthetic sample. Only NLQ requests processed through the assistant or via the API require the Vector DB; other prompt types and Generative AI features do not rely on vector retrieval.
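
The sketch below shows the general shape of this flow under simplified assumptions: the most similar pre-curated question-query pairs are retrieved and placed into the prompt as few-shot examples. The example pairs, the similarity measure, and the prompt template are all illustrative; they are not Sisense's actual examples or prompts.

```python
# A sketch of the RAG flow for NLQ translation: retrieve the most similar
# pre-curated question-query pairs and assemble them into a few-shot prompt.
# The pairs, similarity measure, and template below are illustrative only.

# Pre-curated synthetic question-query pairs (no customer data), as described above.
FEW_SHOT_POOL = [
    ("What were total sales by region in 2023?",
     "SELECT region, SUM(sales) FROM orders WHERE year = 2023 GROUP BY region"),
    ("How many new customers signed up each month?",
     "SELECT month, COUNT(DISTINCT customer_id) FROM signups GROUP BY month"),
    ("Top 5 products by revenue",
     "SELECT product, SUM(revenue) FROM orders GROUP BY product ORDER BY 2 DESC LIMIT 5"),
]

def similarity(a: str, b: str) -> float:
    """Crude word-overlap score standing in for vector similarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def build_prompt(user_question: str, k: int = 2) -> str:
    """Retrieve the k most similar examples and assemble a few-shot prompt."""
    ranked = sorted(FEW_SHOT_POOL,
                    key=lambda pair: similarity(user_question, pair[0]),
                    reverse=True)[:k]
    shots = "\n\n".join(f"Q: {q}\nSQL: {sql}" for q, sql in ranked)
    return f"{shots}\n\nQ: {user_question}\nSQL:"

print(build_prompt("What were sales by region last year?"))
```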

Smart Value Matching

This feature indexes column values to support fuzzy and natural language filtering. Administrators have control over which fields are included.
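
A minimal sketch of this idea follows, assuming hypothetical column names and using standard-library string similarity in place of embedding-based matching: only explicitly enabled columns are indexed, and a free-text filter term is resolved to the closest stored value.

```python
# A minimal sketch of smart value matching under simplified assumptions:
# only explicitly enabled columns are indexed, and a fuzzy filter term is
# resolved to the closest indexed value. difflib stands in for the
# embedding-based matching; column names and values are hypothetical.
from difflib import get_close_matches

# Only columns a data designer has enabled are indexed (names are assumptions).
INDEXED_COLUMNS = {
    "Country": ["United States", "United Kingdom", "Germany", "Netherlands"],
    "Category": ["Office Supplies", "Furniture", "Technology"],
}

def match_filter_value(column: str, user_term: str, cutoff: float = 0.5):
    """Resolve a fuzzy, user-supplied term to the best matching indexed value."""
    values = INDEXED_COLUMNS.get(column, [])
    hits = get_close_matches(user_term, values, n=1, cutoff=cutoff)
    return hits[0] if hits else None

print(match_filter_value("Country", "untied states"))  # -> "United States"
print(match_filter_value("Category", "furnture"))      # -> "Furniture"
```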

Vector Database Data Storage

The vector database may store the following:

  • Few-shot question-query examples: Customer data is never used for this purpose.

  • Column values from columns enabled for smart value matching: Data designers explicitly select which columns are supported.

The following table shows, for each Generative AI feature, which of these data types is stored in the vector database:

Feature                  | Few-Shot Examples | Selected Column Values
Explanations             | -                 | -
Forecast                 | -                 | -
Trend                    | -                 | -
Exploration Paths        | -                 | -
Simply Ask (Legacy NLQ)  | -                 | -
Assistant (NLQ only)     | ✓                 | -
Narrative                | -                 | -
Semantic Enrichment      | -                 | -
Smart value matching     | -                 | ✓