ComfyUI native API integration with ComfyStream #59
This work will now continue in ComfyUI native API integration with Spawn #130.
Introduction:
One of the primary limitations of building workflows within ComfyStream is the use of the Hidden Switch fork.
Many difficulties arise when particular node packs do not play well with the EmbeddedComfyClient; careful testing and modifications to existing nodes are usually required to enable full functionality. Beyond that, dependency on the fork brings further issues, such as delays or limitations in adopting newer ComfyUI features, including native performance optimizations.
The biggest obstacle, however, is the handling of multiple Comfy instances to brute-force frame generation.
Objective:
I set out to replace the EmbeddedComfyClient with direct communication with running ComfyUI instances using the native ComfyUI API and a WebSocket connection.
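As a rough sketch of what talking to the native API looks like (this is illustrative, not the code in this PR; the server address and helper names are assumptions, and ComfyUI's default port is 8188), a prompt can be queued against a running instance like so:

```python
import json
import urllib.request
import uuid

# Hypothetical address of a local ComfyUI instance; 8188 is ComfyUI's default port.
SERVER = "127.0.0.1:8188"

def build_prompt_payload(prompt: dict, client_id: str) -> bytes:
    """Encode a workflow graph into the JSON body expected by POST /prompt."""
    return json.dumps({"prompt": prompt, "client_id": client_id}).encode("utf-8")

def queue_prompt(prompt: dict, client_id: str, server: str = SERVER) -> dict:
    """Submit a workflow to a running ComfyUI instance over its native REST API."""
    req = urllib.request.Request(
        f"http://{server}/prompt",
        data=build_prompt_payload(prompt, client_id),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # A real workflow graph (node id -> {"class_type": ..., "inputs": ...}),
    # e.g. one exported from the ComfyUI web UI in API format, would go here.
    print(queue_prompt({}, str(uuid.uuid4())))
```

The `client_id` lets the server route WebSocket status and image messages back to the caller that queued the prompt.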
Method:
Sending Data: All data is sent via RESTful
POST /promptthrough the Comfy API. Custom nodes were added to support sending the input image as a base64 string to the prompt.Receiving Data: Message events from the webhook are parsed, and data can be received via the native
send_imagehandler to push as WebRTC frames. The comfyui-tooling-nodes inspired this via a Blob format with prefix similar to how Comfy sends previews to the UI. Upon successfully capturing the Blob, the prompt can then be called for the next subsequent frame.Limitations:
This process is clearly not as efficient as the Hidden Switch method of communicating with the ComfyStream tensor_cache directly. However, it opens up new opportunities for parallelization through multi-inference GPU scaling as well as multi-GPU scaling, an avenue I am investigating as a performance increase.
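For reference, the prefixed Blob format mentioned above amounts to an 8-byte header (a big-endian event type followed by a big-endian image format code) ahead of the encoded image bytes. A minimal parser might look like the sketch below; the exact numeric codes are an assumption modeled on ComfyUI's preview messages, not taken from this PR:

```python
import struct

# Assumed codes, modeled on ComfyUI's binary preview messages:
# bytes 0-3: event type (1 = preview image), bytes 4-7: image format code.
PREVIEW_IMAGE = 1
FORMATS = {1: "jpeg", 2: "png"}

def parse_binary_frame(message: bytes):
    """Split a binary WebSocket frame into (format name, image bytes).

    Returns None when the message is too short or not a preview-image event.
    """
    if len(message) < 8:
        return None
    event, fmt = struct.unpack(">II", message[:8])
    if event != PREVIEW_IMAGE:
        return None
    return FORMATS.get(fmt, "unknown"), message[8:]
```

The image bytes that come back can then be decoded and pushed out as a WebRTC frame.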
Note that this is a very early, preliminary draft; the proof of concept has only just been demonstrated as functional, and more work remains to be done.
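The multi-instance brute-forcing described above could, for example, hand successive frames to different ComfyUI instances in round-robin order. This is an illustrative sketch only (the class name and instance addresses are hypothetical, not ComfyStream's actual scheduler):

```python
import itertools

class RoundRobinDispatcher:
    """Cycle outgoing frames across multiple ComfyUI instances (illustrative)."""

    def __init__(self, servers):
        # itertools.cycle yields the server list endlessly, one entry per call.
        self._cycle = itertools.cycle(servers)

    def next_server(self) -> str:
        """Return the instance address that should render the next frame."""
        return next(self._cycle)

# Two hypothetical local instances on consecutive ports.
dispatcher = RoundRobinDispatcher(["127.0.0.1:8188", "127.0.0.1:8189"])
```

Because each instance holds its own models and queue, frames processed this way return out of order under load, so a real scheduler would also need to reorder or drop late frames.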
TODOs:
ai-runner
Getting it Running:
Use the app_api.py file instead of app.py.
Visual example of multi-Comfy instance processing:
Screen.Recording.2025-03-25.183747.mp4