
kompfner (Contributor) commented Mar 28, 2025

See #102 (comment) (and the messages leading up to it) for context. Two scenarios to try (a sketch of the flow config follows this list):

- Let the bot finish counting, then say "goodbye" -->
  - The bot will say goodbye to completion
  - Then the post_action will run (printing "Dummy post-action!")
- Interrupt the bot's counting with a "goodbye" -->
  - The bot will start saying goodbye
  - The post_action will run (printing "Dummy post-action!")
  - Then the bot will finish saying goodbye
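
For reference, the relevant node looks roughly like the sketch below. This is a minimal sketch, assuming pipecat-flows' dict-based node config and its "function" and "end_conversation" action types (the FunctionActionFrame mentioned later suggests the former); the messages and handler name are illustrative, not the exact repro script.

```python
async def dummy_post_action(action: dict):
    # The marker this repro watches for: it should only fire after
    # the bot has finished speaking.
    print("Dummy post-action!")


# Hypothetical goodbye node; key names assume pipecat-flows' NodeConfig shape.
goodbye_node = {
    "task_messages": [
        {"role": "system", "content": "Say a warm goodbye to the user."}
    ],
    "functions": [],
    "post_actions": [
        # Assumption: a "function"-type action is what gets wrapped in the
        # FunctionActionFrame discussed below.
        {"type": "function", "handler": dummy_post_action},
        {"type": "end_conversation"},
    ],
}
```

In the second scenario, "Dummy post-action!" prints while the goodbye is still being spoken, which is the bug.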

vercel bot commented Mar 28, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Updated (UTC) |
| --- | --- | --- | --- |
| pipecat-flows | ✅ Ready (Inspect) | Visit Preview | Mar 28, 2025 7:37pm |

kompfner changed the title from "[OPEN JUST FOR DISCUSSION PURPOSES] Minimal repro of bot interruption problem." to "[OPEN JUST FOR INVESTIGATION PURPOSES] Minimal repro of bot interruption problem." Mar 28, 2025
kompfner (Contributor, Author) commented

Hmm... I have a hunch that this might be the same problem as the one referenced here...

I bet that the post_action (the FunctionActionFrame) is somehow getting queued in the pipeline ahead of the frames generated by the LLM's context update.
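
As a toy illustration of the suspected ordering (not pipecat's actual pipeline code): a frame that is enqueued synchronously when the action fires can land ahead of frames the context update is still producing asynchronously.

```python
import asyncio


async def llm_context_update(queue: asyncio.Queue) -> None:
    # Frames generated by the LLM's context update arrive with some latency.
    for chunk in ["Good", "bye", "!"]:
        await asyncio.sleep(0.1)  # simulated LLM/TTS latency
        await queue.put(f"TTSAudioFrame({chunk!r})")


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    # Start the context update, but don't wait for its frames...
    producer = asyncio.create_task(llm_context_update(queue))
    # ...so an action frame queued immediately overtakes all of them.
    await queue.put("FunctionActionFrame(post_action)")
    await producer
    while not queue.empty():
        print(await queue.get())  # FunctionActionFrame prints first


asyncio.run(main())
```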

kompfner mentioned this pull request Mar 28, 2025
kompfner added a commit that referenced this pull request Mar 28, 2025
… `CartesiaHttpTTSService` for now, to avoid an issue where interrupting the bot would cause post_actions to run before the bot stopped speaking (see #119)
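
The workaround in that commit swaps the streaming TTS service for the HTTP one. A sketch, assuming the import path and constructor arguments of pipecat's Cartesia services (both may differ by pipecat version):

```python
import os

# Assumed import path; newer pipecat releases may expose this under
# pipecat.services.cartesia.tts instead.
from pipecat.services.cartesia import CartesiaHttpTTSService

tts = CartesiaHttpTTSService(
    api_key=os.getenv("CARTESIA_API_KEY"),
    voice_id="your-voice-id",  # placeholder
)
```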