
User satisfaction detection from implicit signals #57

@yonatangross

Description


User Satisfaction Detection

Part of: #50 (Auto-Feedback Self-Improvement Loop - Phase 3)

Goal

Detect user satisfaction and frustration from implicit signals (not explicit feedback), then use this to improve the plugin experience.

Signal Detection

Frustration Signals 😤

| Signal | Weight | Detection |
|--------|--------|-----------|
| User redoes work | High | Same task requested again |
| User says "no", "wrong" | High | NLP on user messages |
| Multiple retries | Medium | >3 attempts at the same task |
| User abandons task | Medium | Task started, not completed |
| User edits heavily | Low | >10 edits to the output |
| Short responses | Low | "no", "try again" |

Satisfaction Signals 😊

| Signal | Weight | Detection |
|--------|--------|-----------|
| User says "thanks", "perfect" | High | NLP on user messages |
| User accepts immediately | High | No edits, moves to next task |
| Task completes quickly | Medium | <2 min for simple tasks |
| No follow-up corrections | Medium | No "actually..." messages |
| User continues workflow | Low | Moves to the next task |
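
One possible way to encode these tables in code is a small signal registry. The numeric weights standing in for High/Medium/Low (0.8/0.5/0.2) are illustrative assumptions, not values defined by this issue.

# Hypothetical registry of the signals above. High/Medium/Low are mapped to
# 0.8/0.5/0.2 purely for illustration.
FRUSTRATION_SIGNALS = {
    'redo_same_task':     0.8,  # same task requested again
    'negative_keyword':   0.8,  # user says "no", "wrong"
    'multiple_retries':   0.5,  # >3 attempts at the same task
    'task_abandoned':     0.5,  # task started, not completed
    'heavy_edits':        0.2,  # >10 edits to the output
    'short_response':     0.2,  # "no", "try again"
}

SATISFACTION_SIGNALS = {
    'positive_keyword':   0.8,  # "thanks", "perfect"
    'immediate_accept':   0.8,  # no edits, moves on
    'fast_completion':    0.5,  # <2 min for simple tasks
    'no_corrections':     0.5,  # no "actually..." follow-ups
    'workflow_continues': 0.2,  # moves to the next task
}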

Implementation

1. Signal Tracker

import re

class SatisfactionTracker:
    def analyze_interaction(self, user_msg, claude_output, user_action, edit_count=0):
        """Return a list of (category, reason, weight) signals for one interaction."""
        signals = []
        
        # Check for frustration keywords in the user's message
        if re.search(r'\b(wrong|no|bad|terrible|redo)\b', user_msg, re.I):
            signals.append(('frustration', 'negative_keyword', 0.8))
        
        # Check for satisfaction keywords
        if re.search(r'\b(thanks|perfect|great|exactly)\b', user_msg, re.I):
            signals.append(('satisfaction', 'positive_keyword', 0.9))
        
        # Output accepted without any edits is a strong satisfaction signal
        if user_action == 'accept' and edit_count == 0:
            signals.append(('satisfaction', 'zero_edits', 0.7))
        
        return signals
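
The issue leaves open how individual signals combine into a single score. A minimal sketch, assuming a weighted sum normalized into [-1, 1] where positive means satisfied and negative means frustrated:

def satisfaction_score(signals):
    # signals: list of (category, reason, weight) tuples from analyze_interaction()
    if not signals:
        return 0.0
    total = sum(w if cat == 'satisfaction' else -w for cat, _reason, w in signals)
    # Average and clamp so one busy interaction cannot dominate the score
    return max(-1.0, min(1.0, total / len(signals)))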

2. Pattern Correlation

When frustration detected:

  • What skill was used?
  • What agent was involved?
  • What type of task?
  • What context was loaded?

Link frustration to specific causes for targeted improvement.
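
A minimal sketch of this correlation step, assuming each recorded interaction carries skill, agent, and task_type metadata fields (hypothetical names, not defined by this issue):

from collections import Counter

def correlate_frustration(interactions):
    # interactions: dicts with 'signals' plus assumed 'skill'/'agent'/'task_type' fields
    by_skill, by_agent, by_task = Counter(), Counter(), Counter()
    for item in interactions:
        if any(cat == 'frustration' for cat, _reason, _w in item['signals']):
            by_skill[item['skill']] += 1
            by_agent[item['agent']] += 1
            by_task[item['task_type']] += 1
    # The most frequent offenders are the first candidates for targeted improvement
    return by_skill.most_common(3), by_agent.most_common(3), by_task.most_common(3)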

3. Adaptive Response

Detected: User frustrated with test-generator output

Analysis:
- User always removes mock setup boilerplate
- User adds more specific assertions

Action:
- Update test templates to use simpler mocks
- Add assertion examples to skill
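
As a sketch of how observed patterns could map to concrete actions; the pattern names and suggestions below are placeholders derived from the example above, not a defined API:

# Hypothetical mapping from recurring edit patterns to skill/template updates.
IMPROVEMENT_RULES = {
    'removes_mock_boilerplate': 'Update test templates to use simpler mocks',
    'adds_specific_assertions': 'Add assertion examples to the skill',
}

def suggest_improvements(observed_patterns):
    # Return only the suggestions we actually have a rule for
    return [IMPROVEMENT_RULES[p] for p in observed_patterns if p in IMPROVEMENT_RULES]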

Privacy Considerations

  • Only analyze structure, not content
  • No storage of actual user messages
  • Aggregate patterns only
  • User can disable with learnFromSatisfaction: false
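
A sketch of what structure-only analysis could look like, assuming each message is reduced to anonymous features and the raw text is discarded. The learnFromSatisfaction key comes from this issue; everything else is an assumption.

import re

def extract_structure(user_msg, config):
    # Respect the opt-out flag before doing any analysis
    if not config.get('learnFromSatisfaction', True):
        return None
    # Keep only structural features; the raw message is never stored
    return {
        'length': len(user_msg),
        'has_negative_keyword': bool(re.search(r'\b(wrong|no|bad|terrible|redo)\b', user_msg, re.I)),
        'has_positive_keyword': bool(re.search(r'\b(thanks|perfect|great|exactly)\b', user_msg, re.I)),
    }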

Acceptance Criteria

  • Frustration signals detected
  • Satisfaction signals detected
  • Patterns correlated to causes
  • Privacy-preserving analysis
  • Actionable improvements suggested

Dependencies
