This repository demonstrates how to use outlines and llama-cpp-python for structured JSON generation with streaming output, integrating llama.cpp for local model inference and outlines for schema-based text generation.

testli-ai/outlines-llama-cpp-python-streaming-output

  1. Install uv: Follow the installation guide.

  2. Sync dependencies: Run uv sync to install the necessary dependencies.

  3. Download models: Execute uv run src/download_models.py to download the required models.

  4. Test streaming output: Run uv run src/generate_exam.py to test the streaming output.
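The final step streams a structured JSON document token by token. As a rough, self-contained sketch of that pattern (not the repository's actual code): in outlines' 0.x API, a generator built from a model and a JSON schema exposes a stream iterator; here a stubbed token stream stands in for the model so the example runs without downloading anything. `EXAM_SCHEMA`, `fake_token_stream`, and `collect_stream` are illustrative names of my own, and the real schema in src/generate_exam.py may differ.

```python
import json
from typing import Iterator

# Hypothetical JSON schema for one exam question; the repo's actual
# schema lives in src/generate_exam.py and may differ.
EXAM_SCHEMA = {
    "type": "object",
    "properties": {
        "question": {"type": "string"},
        "answer": {"type": "string"},
    },
    "required": ["question", "answer"],
}

def fake_token_stream() -> Iterator[str]:
    # Stand-in for the token iterator a schema-constrained generator
    # would return; a real run would stream tokens from the local
    # llama.cpp model instead.
    payload = json.dumps({"question": "What is 2 + 2?", "answer": "4"})
    for start in range(0, len(payload), 4):
        yield payload[start:start + 4]

def collect_stream(tokens: Iterator[str]) -> dict:
    # Echo each chunk as it arrives, accumulate, then parse the finished
    # document. Schema-constrained decoding is what guarantees the
    # accumulated text is valid JSON once the stream ends.
    buffer = ""
    for chunk in tokens:
        print(chunk, end="", flush=True)
        buffer += chunk
    print()
    return json.loads(buffer)

question = collect_stream(fake_token_stream())
assert set(EXAM_SCHEMA["required"]) <= question.keys()
print(question["answer"])  # → 4
```

The key design point the repo demonstrates is that streaming and structure are compatible: because generation is constrained to the schema, the consumer can display partial output immediately and still parse the final buffer without error handling for malformed JSON.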
