Conversation
…config Allow all origins
…tegration
- Added detailed request/response logging for dashboard monitoring
- Enhanced CORS headers for web application compatibility
- Added client IP tracking and request ID generation
- Improved error handling and cleanup logging
- Added progress tracking for FFmpeg and IPFS upload operations
- Fixed Tailscale funnel integration for external access
…vice
- Added 'cors' dependency to package.json and package-lock.json
- Updated server.js to implement a more robust CORS setup allowing specific origins and methods
- Improved logging for incoming requests, including detailed information for transcoding operations
- Enhanced error handling and cleanup processes during video transcoding
- Created a backup of the original server.js for reference
- Added a sample video file for testing purposes
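The "specific origins and methods" setup mentioned above could look something like the sketch below. This is an illustrative options object for the `cors` middleware; the origin list itself is a placeholder, not taken from the PR:

```javascript
// Allow-list for the 'cors' middleware. The origins here are
// placeholders; the real list would come from configuration.
const ALLOWED_ORIGINS = ['https://skatehive.app', 'http://localhost:3000'];

const corsOptions = {
  // Reflect the origin only if it is on the allow-list. Requests with no
  // Origin header (curl, health checks) are allowed through.
  origin(origin, callback) {
    if (!origin || ALLOWED_ORIGINS.includes(origin)) {
      callback(null, true);
    } else {
      callback(new Error('Not allowed by CORS'));
    }
  },
  methods: ['GET', 'POST', 'OPTIONS'],
};
```

It would be wired in with `app.use(cors(corsOptions))` after requiring the `cors` package.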
…for video transcoding
…add live monitoring script
… sessionId fallback
… transcoding logic into a cleaner structure
… and platform information
…etup
- Clarify internal port 8080 vs external port 8081
- Add production deployment section with Mac Mini M4 configuration
- Document Tailscale Funnel routing
- Add health check endpoint examples
Updates the log to include new video transcode events for multiple users and devices, while removing earlier entries to maintain relevance and reduce log size. Helps keep logs current for monitoring and troubleshooting.
Includes an additional node property in the logged output to enhance traceability and support debugging across distributed environments.
…gress
- Add /progress/:requestId SSE endpoint for real-time progress streaming
- Add activeJobs Map to track progress and SSE clients per request
- Add broadcastProgress() to send updates to all connected clients
- Add getVideoDuration() using ffprobe for accurate progress calculation
- Parse FFmpeg stderr for time progress during transcoding
- Use client's correlationId for SSE (enables pre-subscription)
- Preserve existing SSE clients when initializing job tracking
- Progress stages: receiving (5%), transcoding (0-80%), uploading (80-100%), complete
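The job-tracking and broadcast pieces described in that commit could be sketched as below. The names `activeJobs` and `broadcastProgress()` follow the commit message; `addClient()` and the job shape are assumptions for illustration (in the real service the client would be registered from the Express `/progress/:requestId` route handler, with the response configured as a `text/event-stream`):

```javascript
// requestId -> { stage, percent, clients: Set of SSE response streams }
const activeJobs = new Map();

// Register an SSE client for a request, creating the job entry if needed
// while preserving any clients that subscribed earlier (pre-subscription
// with the client's correlationId).
function addClient(requestId, res) {
  const job = activeJobs.get(requestId) ||
    { stage: 'waiting', percent: 0, clients: new Set() };
  activeJobs.set(requestId, job);
  job.clients.add(res);
  return job;
}

// Push a progress update to every connected client for this request,
// using the SSE wire format ("data: <json>\n\n").
function broadcastProgress(requestId, stage, percent) {
  const job = activeJobs.get(requestId);
  if (!job) return;
  job.stage = stage;
  job.percent = percent;
  const payload = `data: ${JSON.stringify({ stage, percent })}\n\n`;
  for (const client of job.clients) client.write(payload);
}
```

During transcoding, the FFmpeg stderr parser would call `broadcastProgress(id, 'transcoding', pct)` as time marks arrive, mapping them into the 0-80% band the commit describes.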
…pp tracking fields
I don't think we need some of these files, but this version of the transcoder (the final one, after the other commits that were in the queue) is just the logger and the live feedback, so we can have a real progress bar. The next step would be to create an upload queue so uploads never fail (Instagram has a retry feature when an upload fails), but we can look at that later. I wanted to test this PR on Oracle Cloud; if it works I'll try the queue, and if it goes wrong we revert.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 97d6f4e492
```js
// Cleanup on close
req.on('close', () => {
  job.clients.delete(res);
  if (job.clients.size === 0 && job.stage === 'complete') {
    activeJobs.delete(requestId);
```
Drop stale SSE jobs when last client disconnects
The /progress/:requestId close handler only deletes the job when stage === 'complete'. If a client opens an SSE stream but never uploads (or disconnects before completion), the job stays in activeJobs forever because there’s no other cleanup path for the waiting/starting/error stages. Over time, repeated abandoned connections can grow the map without bound and leak memory. Consider deleting the entry when clients.size === 0 regardless of stage (or adding a TTL for waiting jobs).
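One way to address this, as a sketch (the helper names, the job's `createdAt` field, and the TTL value are assumptions, not part of the PR): drop the job whenever the last client disconnects, regardless of stage, and sweep jobs that never progressed past 'waiting'.

```javascript
const WAITING_JOB_TTL_MS = 10 * 60 * 1000; // assumed TTL: 10 minutes

// Close handler body: drop the job once the last client is gone, in any
// stage, so abandoned streams can no longer leak map entries.
function onClientClose(activeJobs, requestId, res) {
  const job = activeJobs.get(requestId);
  if (!job) return;
  job.clients.delete(res);
  if (job.clients.size === 0) activeJobs.delete(requestId);
}

// Periodic sweep as a safety net for jobs that were created but never
// attached to an upload (stage still 'waiting' past the TTL). Assumes
// each job records a createdAt timestamp when it is first tracked.
function sweepStaleJobs(activeJobs, now = Date.now()) {
  for (const [id, job] of activeJobs) {
    if (job.stage === 'waiting' && now - job.createdAt > WAITING_JOB_TTL_MS) {
      activeJobs.delete(id);
    }
  }
}
```

The sweep would run on a `setInterval`; dropping on last-disconnect alone already bounds the common leak path.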
Useful? React with 👍 / 👎.
```js
getStats() {
  const total = this.logs.length;
  const successful = this.logs.filter(log => log.success === true).length;
  const failed = this.logs.filter(log => log.success === false).length;
  const inProgress = this.logs.filter(log => log.status === 'started' || log.status === 'processing').length;
```
Compute inProgress from latest status per id
getStats() counts every log whose status is 'started' or 'processing' as in progress, but the logger never removes those entries after completion. That means any finished job still contributes to inProgress, so the /stats endpoint will report active work even when nothing is running. This is especially visible once the service has processed a few successful jobs. In-progress work should instead be computed per request id (e.g. only the latest status per id) rather than by counting all historical 'started' logs.
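A hedged sketch of the per-id fix the review suggests (field names follow the snippet above; that log entries are appended in chronological order is an assumption about the logger):

```javascript
// Count in-progress work by the LATEST status per requestId, so a job
// that logged 'started' and later 'completed' is not counted as active.
function countInProgress(logs) {
  const latest = new Map(); // requestId -> most recent status
  for (const log of logs) {
    latest.set(log.requestId, log.status); // assumes chronological order
  }
  let inProgress = 0;
  for (const status of latest.values()) {
    if (status === 'started' || status === 'processing') inProgress++;
  }
  return inProgress;
}
```

`getStats()` could then use `countInProgress(this.logs)` in place of the raw filter.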
- Use ipfs.skatehive.app gateway (branded custom gateway)
- Add PINATA_GROUP_VIDEOS support for organized uploads
- Standardize metadata to match webapp schema
- Add missing fields: fileType, uploadDate, transcoded, videoDuration, originalFileName
- Add conditional fields: viewport, connectionType
- Preserve backward compatibility with sourceApp/appVersion
- Removed less critical fields to comply with Pinata 10 keyvalue limit
- Kept essential fields: creator, source, uploadDate, transcoded, originalFileName, videoDuration, requestId, sourceApp, platform, thumbnailUrl
- Fixed empty string issue (only include non-empty optional fields)
- Added detailed Pinata error logging for debugging
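The empty-string fix and the 10-keyvalue cap could be enforced in one place; a sketch under that assumption (the helper name is made up, and the real code would list the kept fields in priority order):

```javascript
const PINATA_KEYVALUE_LIMIT = 10; // Pinata caps metadata at 10 keyvalues

// Build Pinata keyvalues: drop empty/missing optional fields first, then
// keep at most the limit, relying on insertion order for priority.
function buildPinataKeyvalues(fields) {
  const entries = Object.entries(fields)
    .filter(([, value]) => value !== undefined && value !== null && value !== '');
  return Object.fromEntries(entries.slice(0, PINATA_KEYVALUE_LIMIT));
}
```

Called as `buildPinataKeyvalues({ creator, source, uploadDate, ... })`, empty optional fields such as `viewport: ''` would simply be omitted instead of sent as empty strings.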
…lity
- Add checkWebOptimized() — ffprobe checks if H.264/AAC/≤1080p before transcoding
- Optimized videos skip transcoding, upload directly to IPFS (2x faster)
- Adaptive CRF: 20 (short clips) / 22 (normal) / 24 (long/large)
- Cap resolution at 1080p, bitrate at 5Mbps
- Default preset changed to medium (better quality)
- Remove obsolete docker-compose version key

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
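The adaptive-CRF rule above can be sketched as a small selector. The commit only names the CRF values (20/22/24); the duration and size thresholds below are illustrative assumptions, not taken from the code:

```javascript
// Pick an x264 CRF per the commit's rule:
// 20 for short clips, 22 normally, 24 for long or large inputs.
function pickCrf(durationSec, sizeBytes) {
  const LONG_SEC = 5 * 60;               // assumed "long" cutoff
  const LARGE_BYTES = 200 * 1024 * 1024; // assumed "large" cutoff
  const SHORT_SEC = 30;                  // assumed "short clip" cutoff
  if (durationSec > LONG_SEC || sizeBytes > LARGE_BYTES) return 24;
  if (durationSec <= SHORT_SEC) return 20;
  return 22;
}
```

The chosen value would feed FFmpeg as `-crf <n>`; lower CRF means higher quality at a larger file size, which is why short clips get the more generous setting.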
This pull request introduces several major improvements to the video transcoding service, focusing on real-time progress streaming, enhanced logging and monitoring, and improved deployment and configuration options. Notably, it adds Server-Sent Events (SSE) for real-time progress updates, a robust structured logging system, new API endpoints for logs and statistics, and production-ready Docker and docker-compose support.
Key changes:
Real-Time Progress & API Enhancements
Added a new SSE endpoint (GET /progress/:requestId) for real-time transcoding progress updates, and new endpoints for logs (GET /logs) and statistics (GET /stats). The API response structure was updated for consistency, and detailed usage documentation/examples were added to README.md.

Logging & Monitoring

Introduced a structured logging system in src/logger.js that tracks user info, file details, processing duration, client IP, device info, and more. Logs are written to per-node files, limited to the last 100 entries, and formatted for both file and console output with rich details and emojis. New endpoints and scripts support dashboard integration and log testing.

Deployment & Configuration

Added docker-compose.yml for easy deployment, including persistent log storage, health checks, and environment variable support. Updated the Dockerfile to install curl for health checks. Expanded the README.md with clear instructions for production deployment, port mapping, and environment variables.

Environment & Development

Updated .env.example and documentation to clarify default values, CORS policy, and environment modes (NODE_ENV). Added new scripts to package.json for testing logs and rich logging output.

These changes make the service more robust, easier to monitor, and ready for production and dashboard integration.