Text-to-SQL: Query your database in natural language, whether you want to generate a SQL query, run or explain it, or get a narrated response to its results.
Chat: Use your LLM directly for content generation based on your prompt, such as generating custom emails, answering questions, or analyzing sentiment.
Retrieval-Augmented Generation (RAG): Enable LLMs to generate more relevant responses by augmenting your prompt with knowledge from the documents you provide.
Automated vector index creation and maintenance: Quickly and easily create a vector index for use with RAG, drawing on data from cloud storage and other sources.
Results as Python objects: Receive AI-generated results directly in Python data structures, making analysis and integration straightforward.
Chatbot with conversation memory: Create and manage named conversations based on your interactions with the LLM.
Synthetic data generation: Generate synthetic data for a single table or a set of tables with referential integrity constraints.
Synchronous and asynchronous invocation: Build applications in either a synchronous or the more flexible asynchronous programming style using standalone Python clients. With asynchronous support, the API integrates easily with web frameworks such as FastAPI and Flask, enabling real-time AI-driven applications.
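The two invocation styles can be sketched as follows. This is a minimal illustration with hypothetical client classes (`SyncAIClient` and `AsyncAIClient` are placeholder names, not the SDK's actual API); the real clients would perform network calls where this sketch only echoes the prompt.

```python
import asyncio

# Hypothetical client names for illustration only; the real SDK's
# class and method names may differ.
class SyncAIClient:
    """Blocking client: each call waits for the LLM response."""

    def chat(self, prompt: str) -> str:
        # A real client would call the LLM here; we echo for illustration.
        return f"response to: {prompt}"


class AsyncAIClient:
    """Non-blocking client: calls can be awaited concurrently."""

    async def chat(self, prompt: str) -> str:
        await asyncio.sleep(0)  # stand-in for network I/O
        return f"response to: {prompt}"


# Synchronous style: simple, sequential calls.
sync_client = SyncAIClient()
answer = sync_client.chat("Summarize Q3 sales")

# Asynchronous style: issue several prompts concurrently, the way an
# async web framework such as FastAPI would inside an endpoint handler.
async def main() -> list[str]:
    client = AsyncAIClient()
    prompts = ["Summarize Q3 sales", "Draft a follow-up email"]
    return await asyncio.gather(*(client.chat(p) for p in prompts))

answers = asyncio.run(main())
```

The synchronous client keeps scripts and notebooks simple, while the asynchronous client lets a single process overlap many in-flight LLM requests, which is what makes real-time web integrations practical.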