Build your own customizable web dashboard that tracks your interests from across the internet
A lightweight, self-hosted personal dashboard built with Flask + Vite. Aggregate content from multiple sources, customize the look, and access it anywhere via HTTPS.
- Multi-Source Aggregation - Fetch data from multiple websites in one place
- Built-in Scrapers - Hacker News, GitHub Trending, arXiv papers, Hypebeast drops
- Customizable Themes - Choose from 6 pre-made themes or create your own
- Auto-Updates - Schedule data refreshes with cron jobs
- One-Click Refresh - Manual refresh button for on-demand updates
- Public Access - Share via HTTPS with Distiller proxy (or keep it private)
- Fast & Lightweight - Cached data means instant loading
- Mobile Friendly - Responsive design works on any device
- Python 3.9+
- Node.js 18+
- Chromium (for web scraping)
1. Clone the repository

   ```bash
   git clone https://github.com/yourusername/personal-dashboard.git
   cd personal-dashboard
   ```

2. Set up the backend (Flask)

   ```bash
   cd backend
   python3 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   cd ..
   ```

3. Set up the scrapers (Playwright)

   ```bash
   cd scrapers
   npm install
   npx playwright install chromium
   cd ..
   ```

4. Set up the frontend (optional, for development)

   ```bash
   cd frontend
   npm install
   cd ..
   ```
Option 1: Quick Start (Recommended)

```bash
./start.sh
```

Option 2: Manual Start

```bash
cd backend
source venv/bin/activate
python app.py
```

Then open your browser to http://localhost:5000.
```
personal-dashboard/
├── backend/
│   ├── app.py                      # Flask API server
│   ├── requirements.txt            # Python dependencies
│   └── venv/                       # Virtual environment (created on setup)
├── frontend/
│   ├── index.html                  # Dashboard UI
│   ├── main.js                     # Frontend logic
│   ├── styles.css                  # Theme & styling
│   ├── package.json                # Node dependencies
│   └── vite.config.js              # Vite configuration
├── scrapers/
│   ├── hypebeast-scraper.js        # Fashion drops
│   ├── hackernews-scraper.js       # Tech news
│   ├── github-trending-scraper.js  # Trending repos
│   ├── arxiv-scraper.js            # Research papers
│   └── package.json                # Scraper dependencies
├── .claude/
│   ├── skills/                     # Claude Code skills
│   │   ├── playwright/             # Browser automation utilities
│   │   ├── web-automation/         # Advanced scraping examples
│   │   └── port-proxy/             # Proxy configuration tools
│   └── README.md                   # Skills documentation
├── drops-output/                   # Cached data (JSON files)
├── CLAUDE.md                       # Interactive setup guide
├── README.md                       # This file
├── .gitignore
└── start.sh                        # Quick start script
```
1. Scrapers run (manually or via cron) and fetch data from websites
2. Data is saved as JSON files in `drops-output/`
3. The Flask backend serves the cached data via a REST API
4. The frontend displays the data in a beautiful, themed interface
5. The refresh button triggers the scrapers to fetch fresh data
```
┌─────────────┐      ┌──────────────┐      ┌──────────────┐
│  Scrapers   │ ───▶ │ drops-output │ ───▶ │  Flask API   │
│ (Playwright)│      │    (JSON)    │      │ (localhost)  │
└─────────────┘      └──────────────┘      └──────┬───────┘
                                                  │
                                                  ▼
                                           ┌──────────────┐
                                           │   Frontend   │
                                           │  (Vite UI)   │
                                           └──────────────┘
```
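The serving step is just reading those JSON files back. As a rough sketch (the route and file names here are assumptions based on this README, not the actual `backend/app.py`):

```python
# Minimal sketch of serving cached scraper output; the route and file names
# are assumptions, not the real backend/app.py implementation.
import json
from pathlib import Path

from flask import Flask, jsonify

app = Flask(__name__)
DATA_DIR = Path(__file__).resolve().parent.parent / "drops-output"

def load_cached(filename):
    """Return parsed JSON from drops-output/, or None if not yet scraped."""
    path = DATA_DIR / filename
    return json.loads(path.read_text()) if path.exists() else None

@app.route("/api/data")  # hypothetical route
def all_data():
    return jsonify({
        "success": True,
        "hackernews": load_cached("hackernews.json"),
        "github": load_cached("github-trending.json"),
        "arxiv": load_cached("arxiv.json"),
        "hypebeast": load_cached("hypebeast-drops.json"),
    })
```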
| Scraper | Description | Output File |
|---|---|---|
| Hacker News | Top 30 tech stories with points, comments | hackernews.json |
| GitHub Trending | Top 25 trending repos with stars, forks | github-trending.json |
| arXiv | Latest 20 CS research papers with abstracts | arxiv.json |
| Hypebeast | Weekly fashion/streetwear drops with prices | hypebeast-drops.json |
```bash
cd scrapers
node hackernews-scraper.js
node github-trending-scraper.js
node arxiv-scraper.js
node hypebeast-scraper.js
```

Or click the "Refresh Data" button in the dashboard UI.
Edit your crontab:

```bash
crontab -e
```

Add a schedule (example: every hour):

```cron
0 * * * * cd /path/to/personal-dashboard/scrapers && /usr/bin/node hackernews-scraper.js
0 * * * * cd /path/to/personal-dashboard/scrapers && /usr/bin/node github-trending-scraper.js
```
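To confirm the cron jobs are actually running, you can check how old the cached files are. A small helper sketch (which files appear depends on the scrapers you schedule):

```python
# Report the age of each cached JSON file in drops-output/.
import time
from pathlib import Path

DATA_DIR = Path("drops-output")  # run from the project root

for f in sorted(DATA_DIR.glob("*.json")):
    age_min = (time.time() - f.stat().st_mtime) / 60
    print(f"{f.name}: updated {age_min:.0f} minutes ago")
```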
Health check endpoint

Response:

```json
{
  "status": "ok",
  "timestamp": "2025-10-28T18:00:00.000Z"
}
```
Get all cached data

Response:

```json
{
  "success": true,
  "hypebeast": { ... },
  "hackernews": { ... },
  "github": { ... },
  "arxiv": { ... },
  "last_updated": "2025-10-28T18:00:00.000Z"
}
```
Trigger all scrapers (runs in background)

Response:

```json
{
  "success": true,
  "message": "Refresh started in background. Data will be updated in 1-2 minutes.",
  "timestamp": "2025-10-28T18:00:00.000Z"
}
```
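The exact routes live in `backend/app.py`; assuming hypothetical `/api/refresh` and `/api/data` paths matching the responses above, a client could trigger a refresh and wait for fresh data like this:

```python
# Hypothetical client for the dashboard API; endpoint paths and the POST verb
# for the refresh trigger are assumptions.
import time

import requests

BASE = "http://localhost:5000"

# Kick off a background refresh; per the response above it returns immediately.
print(requests.post(f"{BASE}/api/refresh").json()["message"])

# Poll until last_updated changes, i.e. the scrapers have written new data.
before = requests.get(f"{BASE}/api/data").json().get("last_updated")
while True:
    time.sleep(15)
    latest = requests.get(f"{BASE}/api/data").json()
    if latest.get("last_updated") != before:
        print("Fresh data as of", latest["last_updated"])
        break
```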
To add a new scraper:

1. Create a new file in `scrapers/` (e.g., `reddit-scraper.js`)
2. Use Playwright to scrape your target website
3. Save the output to `../drops-output/yourdata.json`
4. Update `backend/app.py` to read and serve the new data
5. Update `frontend/main.js` to display the new data
Template:

```javascript
const { chromium } = require('playwright');
const fs = require('fs');
const path = require('path');

const OUTPUT_DIR = path.join(__dirname, '../drops-output');

(async () => {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('https://example.com');

  const data = await page.evaluate(() => {
    // Extract data from the page's DOM here
    return { items: [] };
  });

  // Ensure the output directory exists, then write ISO-timestamped JSON
  // (matching the timestamp format the API returns).
  fs.mkdirSync(OUTPUT_DIR, { recursive: true });
  fs.writeFileSync(
    path.join(OUTPUT_DIR, 'yourdata.json'),
    JSON.stringify({ timestamp: new Date().toISOString(), data }, null, 2)
  );

  await browser.close();
})();
```

Edit `frontend/styles.css` to customize colors, fonts, and layout. See CLAUDE.md for theme inspiration and examples.
Edit your crontab to adjust how often the scrapers run. Common cron patterns:

- `*/15 * * * *` - Every 15 minutes
- `0 */3 * * *` - Every 3 hours
- `0 8,18 * * *` - Twice daily (8am and 6pm)
- `0 8 * * *` - Daily at 8am
If running on a Distiller device, your dashboard is automatically accessible via:
https://{subdomain}.devices.pamir.ai/distiller/proxy/5000/
The app is already configured with proxy-compatible routing (relative paths, DistillerProxyFix middleware).
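DistillerProxyFix is this project's own middleware, but the general pattern is the same as Werkzeug's stock ProxyFix: trust the proxy's forwarding headers so Flask builds correct URLs behind a path prefix. A minimal sketch of that pattern:

```python
# General pattern for running Flask behind a path-prefixing reverse proxy.
# This uses Werkzeug's stock ProxyFix; the project's DistillerProxyFix
# middleware plays the same role but is specific to the Distiller proxy.
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)

# Trust one hop of X-Forwarded-For/-Proto/-Host/-Prefix headers from the proxy.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_prefix=1)
```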
For other setups, use ngrok or Cloudflare Tunnel:

```bash
# ngrok
ngrok http 5000

# Cloudflare Tunnel
cloudflared tunnel --url http://localhost:5000
```

No data showing: click the "Refresh Data" button to run the scrapers for the first time.
Scraper errors about a missing browser: install Chromium:

```bash
cd scrapers
npx playwright install chromium
```

Port 5000 already in use: change the port in `backend/app.py`:
```python
app.run(debug=True, host='0.0.0.0', port=5001)  # Use 5001 instead
```

Dashboard not loading: make sure you're accessing the correct URL:

- Local: `http://localhost:5000`
- Proxy: `https://{subdomain}.devices.pamir.ai/distiller/proxy/5000/`
Hard refresh your browser: Ctrl+Shift+R (or Cmd+Shift+R on Mac)
For frontend-only development with hot reload:
```bash
# Terminal 1: Backend
cd backend
source venv/bin/activate
python app.py

# Terminal 2: Frontend
cd frontend
npm run dev
```

Open http://localhost:3000 (the Vite dev server with hot reload).
Build the frontend for production:
```bash
cd frontend
npm run build
```

Built files are placed in `frontend/` and served by Flask at http://localhost:5000.
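How Flask serves those files depends on `backend/app.py`; as a rough sketch of the general pattern (the paths here are assumptions):

```python
# Minimal sketch of Flask serving a built frontend; the folder layout is an
# assumption based on the project structure above, not the actual app.py.
from flask import Flask, send_from_directory

FRONTEND_DIR = "../frontend"  # hypothetical location of the built files

app = Flask(__name__, static_folder=None)

@app.route("/")
def index():
    return send_from_directory(FRONTEND_DIR, "index.html")

@app.route("/<path:filename>")
def assets(filename):
    # Serve the JS/CSS assets referenced by index.html.
    return send_from_directory(FRONTEND_DIR, filename)
```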
See CLAUDE.md for an interactive guide that helps you:
- Choose content sources to track
- Pick a theme that matches your style
- Set up automated updates
- Customize the dashboard to your needs
This project includes three Claude Code skills in the .claude/skills/ directory:
- playwright - Browser automation utilities and examples
- web-automation - Advanced scraping patterns and the original example scrapers
- port-proxy - Tools for fixing proxy-related routing issues
If you're using Claude Code, you can invoke these skills to get help creating new scrapers, debugging issues, or extending functionality. See .claude/README.md for details.
Not using Claude Code? No problem! The skills directory contains useful examples and utilities you can reference directly.
- Backend: Flask - Python web framework
- Frontend: Vite - Fast build tool for modern web
- Scraping: Playwright - Browser automation
- Scheduling: cron - Native Linux task scheduling
Contributions welcome! Please feel free to:
- Fork the repo
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Ideas for contributions:
- New scrapers (Reddit, Twitter, Product Hunt, etc.)
- New themes
- Data visualization features
- Search/filter functionality
- Dark mode toggle
- Export features (CSV, RSS)
MIT License - see LICENSE file for details
- Built with the Flask-Vite starter template
- Inspired by Hacker News, GitHub Trending, and personal aggregator tools
- Powered by Playwright for reliable web scraping
Made with Claude Code 🤖
Need help customizing? Check out CLAUDE.md or open an issue!