AI-generated trading insights for Cardano native assets — delivered through a responsive frontend and a modular backend pipeline. This project combines real-time token data, structured LLM analysis, and powerful visualizations to surface key market signals such as trend, sentiment, and risk-reward balance.
The system is designed for flexibility, supporting multiple data sources and pluggable LLM providers (Gemini 2, 2.5, and others), making it easy to extend, deploy, and adapt across environments.
Cardano AI Asset Insights is an end-to-end system for intelligent token analysis. It automatically fetches high-volume tokens, processes market data, and runs structured LLM evaluations to produce consistent, schema-driven insights.
A cron-based batch job powers the backend pipeline, using OHLC and volume data to prompt a large language model for analysis. Insights are stored in a database and served through a developer-friendly API. The frontend consumes these insights and renders dynamic, mobile-responsive components with trend summaries, visual charts, and contextual tags.
The backend is fully environment-driven and built for extensibility, with support for alternate data providers, custom insight schemas, and LLM response evaluation using statistical confidence scoring.
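At a high level, the batch pipeline described above can be sketched as follows. This is a minimal illustration only; every name and type here (`runBatch`, `FetchTokens`, `Insight`, etc.) is hypothetical and not the project's actual API:

```typescript
// Hypothetical sketch of the cron pipeline: fetch top-volume tokens,
// gather OHLC data, ask the LLM for a structured insight, persist it.
interface Ohlc { open: number; high: number; low: number; close: number; volume: number; }
interface Insight { token: string; trend: "up" | "down" | "sideways"; riskReward: number; }

type FetchTokens = () => Promise<string[]>;
type FetchOhlc = (token: string) => Promise<Ohlc[]>;
type AnalyzeWithLlm = (token: string, candles: Ohlc[]) => Promise<Insight>;
type SaveInsight = (insight: Insight) => Promise<void>;

async function runBatch(
  fetchTokens: FetchTokens,
  fetchOhlc: FetchOhlc,
  analyze: AnalyzeWithLlm,
  save: SaveInsight,
): Promise<Insight[]> {
  const insights: Insight[] = [];
  for (const token of await fetchTokens()) {
    const candles = await fetchOhlc(token);   // market data for the prompt
    const insight = await analyze(token, candles); // schema-driven LLM call
    await save(insight);                      // persist for the API to serve
    insights.push(insight);
  }
  return insights;
}
```

Each stage is injected as a function, which is what makes the data providers and LLM backends pluggable.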
- Automated token selection using TapTools API (24h top volume)
- LLM-driven insight generation via Gemini models (Vertex AI)
- Flexible prompt schema for structured, reliable model outputs
- Multiple response candidates with confidence selection (avgLogprobs)
- Custom API for querying insights with filters and metadata control
- Reactive UI built in React and Tailwind, optimized for mobile and desktop
- Chart visualizations (radar and bubble charts) for comparative token analysis
- Support/resistance detection with confidence scoring
- Cron-based pipeline for automated, scheduled analysis
- Plugin architecture for data sources and LLM providers
- Developer-first tooling: run/debug API, cron, or both with VS Code profiles
- Fully configurable via `.env`, including schedule, token count, intervals, cache TTL, DB settings, and LLM project config
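The candidate-selection feature above can be illustrated with a small sketch. The Vertex AI response shape is simplified here, and `pickBestCandidate` is a hypothetical name:

```typescript
// Pick the generation candidate with the highest average log-probability.
// A higher avgLogprobs means the model was, on average, more confident in
// each generated token, so that candidate is treated as the most reliable.
interface Candidate { text: string; avgLogprobs: number; }

function pickBestCandidate(candidates: Candidate[]): Candidate {
  if (candidates.length === 0) throw new Error("no candidates returned");
  return candidates.reduce((best, c) => (c.avgLogprobs > best.avgLogprobs ? c : best));
}
```

For example, given candidates with `avgLogprobs` of -0.9 and -0.2, the -0.2 candidate is selected.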
- Frontend Walkthrough – Shows the UI displaying AI-generated insights for Cardano tokens.
- Backend Pipeline Demo – Demonstrates the full backend process: token selection, LLM analysis, and the structured API response.
The backend supports modular integration with LLM providers. Current implementations include:
- Gemini 2
- Gemini 2.5
To configure the GCP Gemini provider using Vertex AI, refer to the Vertex AI Setup Guide.
To learn how to add and register a new provider, see the Provider Integration Guide.
The app uses the TapTools API to fetch the top-volume Cardano native tokens over the last 24 hours. This ensures that AI analysis focuses on high-activity assets.
- Endpoint: `/api/v1/token/top/volume`
- Configurable via `MAX_TOKENS_TO_ANALYZE`
- Requires `TAPTOOLS_API_KEY` in `.env`
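The selection step can be sketched as a pure function over the endpoint's response. The response shape below is an assumption for illustration; the actual TapTools field names may differ:

```typescript
// Hypothetical shape of one entry in the TapTools top-volume response.
interface TopVolumeEntry { ticker: string; volume: number; }

// Keep only the N highest-volume tokens, mirroring MAX_TOKENS_TO_ANALYZE.
function topTokens(entries: TopVolumeEntry[], maxTokensToAnalyze: number): string[] {
  return [...entries]
    .sort((a, b) => b.volume - a.volume) // highest 24h volume first
    .slice(0, maxTokensToAnalyze)
    .map((e) => e.ticker);
}
```

Sorting before slicing keeps the behavior correct even if the API ever returns entries out of order.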
Copy the example env file:

```shell
cp backend/.env.example backend/.env
```

Then update the required values.
From the project root:

```shell
docker compose up
```

The API server must be running before using the frontend or cron jobs.
Run the API server from the backend folder:

```shell
cd backend
npm install
npm run run:api
```

Run the cron job:

```shell
cd backend
npm install
npm run run:cron
```

Run the frontend:

```shell
cd frontend/client
yarn install
yarn dev
```

VS Code launch profiles are included for debugging:

- Debug API
- Debug Cron
- Debug API and Cron (parallel)
Open the Run & Debug panel in VS Code and select a profile to start.
When the environment variable `USE_STATIC_DATA=true` is set, the application switches to a static data storage approach. In this mode:
- Data is stored in a JSON file in Cloud Storage rather than in a Cloud SQL PostgreSQL instance.
- Operational costs are significantly reduced, as storing and accessing data from Cloud Storage is far more cost-effective than maintaining a Cloud SQL instance.
- The application supports both dynamic (SQL) and static (JSON) modes, and this flag allows you to toggle between them as needed.
- Insights will be both stored and retrieved from the JSON file in Cloud Storage.
To enable this mode, make sure the service account used by Vertex AI has the Storage Admin role (`roles/storage.admin`). This permission is necessary to allow reading from and writing to the Cloud Storage bucket.
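The toggle described above can be sketched as a small factory. The store classes here are in-memory stand-ins, not the project's real implementations: in the deployed app the static store is a JSON file in a Cloud Storage bucket, and the dynamic store is Cloud SQL (PostgreSQL):

```typescript
// Common interface both storage modes implement.
interface InsightStore {
  readonly kind: "static-json" | "sql";
  save(id: string, insight: unknown): Promise<void>;
  load(id: string): Promise<unknown>;
}

// Stand-in for the JSON-file-in-Cloud-Storage mode.
class StaticJsonStore implements InsightStore {
  readonly kind = "static-json";
  private data: Record<string, unknown> = {};
  async save(id: string, insight: unknown) { this.data[id] = insight; }
  async load(id: string) { return this.data[id]; }
}

// Stand-in for the Cloud SQL (PostgreSQL) mode.
class SqlStore implements InsightStore {
  readonly kind = "sql";
  private rows = new Map<string, unknown>();
  async save(id: string, insight: unknown) { this.rows.set(id, insight); }
  async load(id: string) { return this.rows.get(id); }
}

// Select the backend from the environment, as USE_STATIC_DATA does.
function createStore(env: Record<string, string | undefined>): InsightStore {
  return env.USE_STATIC_DATA === "true" ? new StaticJsonStore() : new SqlStore();
}
```

Because both modes share one interface, the rest of the pipeline never needs to know which backend is active.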
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.
- LLM Model Research & Evaluation (Milestone 1) – Comparative analysis of LLMs used to inform schema design and integration strategy.
This project is licensed under the MIT License.