Groq is an AI solutions company providing fast AI inference, powered by its LPU™ AI inference technology, which delivers fast, affordable, and energy-efficient AI. Groq is headquartered in Silicon Valley and provides cloud and on-prem inference at scale for AI applications. The LPU and related technologies are proudly designed, manufactured, and assembled in North America. With the LPU, instant intelligence is unlocking a new class of AI applications and use cases.
Groq's models, served with ultra-low-latency inference, are transforming the AI landscape, enabling developers to integrate state-of-the-art LLMs such as Llama3 and Mixtral 8x7B into applications that require real-time AI processing.
To access Groq models, you'll need to create a Groq account, get an API key, and install the langchain-groq integration package.
To access Groq Cloud services, head to the Groq console at https://console.groq.com/playground, sign up, and generate an API key. Once you've done this, set the GROQ_API_KEY environment variable.
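For example, a minimal sketch of setting the key in Python, using the standard-library getpass module so the key isn't echoed to the terminal (the prompt text is illustrative):

```python
import getpass
import os

# Prompt for the key only if it isn't already set in the environment.
if "GROQ_API_KEY" not in os.environ:
    os.environ["GROQ_API_KEY"] = getpass.getpass("Enter your Groq API key: ")
```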
The LangChain Groq integration lives in the langchain-groq package:
%pip install -qU langchain-groq
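With the key set and the package installed, a minimal usage sketch looks like the following; the model name is an assumption for illustration, so substitute any model currently available on Groq:

```python
from langchain_groq import ChatGroq

# Instantiate the chat model; "llama3-8b-8192" is an assumed example model name.
llm = ChatGroq(model="llama3-8b-8192", temperature=0)

# Invoke with a simple prompt and print the model's reply.
response = llm.invoke("Explain low-latency LLM inference in one sentence.")
print(response.content)
```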