
Conversation


@suluyana suluyana commented Oct 21, 2025

  1. Use multiple memories at the same time
  2. Share a single memory across multiple agents
  3. Flexibly configure meta information for adding and searching memories via YAML, including user_id, agent_id, run_id, memory_type, and search_limit
  4. Support specifying emb_dim
  5. Support selecting a remote vector_store
  6. Miscellaneous: bug fixes and configuration refinements
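A rough sketch of what such a YAML memory configuration might look like; the key names mirror the fields listed above, but the exact schema and values are illustrative assumptions, not the merged syntax:

```yaml
memory:
  - name: project_memory          # one of several memory instances
    shared: true                  # reuse the same instance across agents
    user_id: user_001
    agent_id: coder_agent
    run_id: run_001
    memory_type: procedural
    search_limit: 5
    emb_dim: 1024                 # embedding dimension
    vector_store:                 # remote vector store selection
      provider: qdrant
      host: qdrant.example.com
      port: 6333
```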

@gemini-code-assist

Summary of Changes

Hello @suluyana, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant overhaul and refinement of the agent's memory management system. The core changes involve refactoring the memory architecture to be more modular and flexible, enabling agents to leverage multiple and shared memory instances with dynamic metadata configuration. It also enhances the system's adaptability by allowing configurable embedders, remote vector stores, and centralized LLM service base URLs. Furthermore, the memory's focus has been sharpened to prioritize technical development facts, making it more suitable for code-oriented applications.

Highlights

  • Memory System Refactoring: The dedicated Mem0Memory class has been removed, and its functionalities are now integrated and generalized within DefaultMemory, which is managed by a new SharedMemoryManager for centralized control.
  • Multi-Memory and Shared Memory Support: The system now allows agents to utilize multiple memory instances simultaneously, and a single memory instance can be shared across several agents, enhancing collaborative and contextual memory management.
  • Flexible Metadata Configuration: Memory configurations now support dynamic specification of meta-information such as user_id, agent_id, run_id, memory_type, and search_limit directly through YAML, offering granular control over memory behavior.
  • Configurable Embedders and Remote Vector Stores: The memory system now provides options to specify embedding dimensions and configure remote vector stores, offering greater flexibility in how and where memory data is stored and retrieved.
  • LLM Service Base URL Centralization: Default base URLs for various LLM services are now centralized in ms_agent/utils/constants.py through ServiceConfig dataclasses and SERVICE_MAPPING, improving LLM configuration and maintainability.
  • Code-Centric Fact Retrieval: The memory's fact extraction prompt has been refined to specifically focus on technical development details, making the memory system more effective for code-related tasks and project context retention.
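As a hedged sketch of the centralized LLM service configuration described above: the ServiceConfig and SERVICE_MAPPING names come from the summary, but the fields, entries, and lookup helper below are illustrative assumptions, not the actual contents of ms_agent/utils/constants.py.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceConfig:
    name: str      # provider key used in YAML configs
    base_url: str  # default API endpoint for the provider

# Hypothetical mapping; real entries and URLs may differ.
SERVICE_MAPPING = {
    'openai': ServiceConfig(name='openai',
                            base_url='https://api.openai.com/v1'),
    'dashscope': ServiceConfig(
        name='dashscope',
        base_url='https://dashscope.aliyuncs.com/compatible-mode/v1'),
}

def default_base_url(service: str) -> str:
    """Look up a provider's default base URL, falling back to OpenAI."""
    return SERVICE_MAPPING.get(service, SERVICE_MAPPING['openai']).base_url
```

Centralizing defaults this way means agent YAML configs only need a provider key, with the base URL resolved at load time.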
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist

Summary of Changes

Hello @suluyana, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive refinement of the agent's memory system, enhancing its flexibility, configurability, and robustness. Key changes include a refactored memory architecture that allows for multiple and shared memory instances, alongside granular control over metadata for memory operations. The update also streamlines LLM service configurations, integrates support for specifying embedding dimensions and remote vector stores, and refines prompts for more targeted fact retrieval, ultimately providing a more powerful and adaptable memory foundation for agents.

Highlights

  • Memory System Refactor: The memory management system has undergone a significant refactor, removing the dedicated Mem0Memory class and integrating its functionalities into DefaultMemory for a more unified approach. A new SharedMemoryManager is introduced to handle shared memory instances across agents.
  • Flexible Memory Configuration: Memory configurations can now be specified more flexibly via YAML, allowing for multiple memory instances, shared memory among agents, and detailed metadata (user_id, agent_id, run_id, memory_type, search_limit) for memory operations.
  • Enhanced LLM Service Configuration: LLM service configurations have been refined with a new get_service_config utility to centralize base URLs and provide fallback mechanisms, improving robustness and ease of configuration for various LLM providers.
  • Asynchronous Memory Operations: Memory addition and retrieval methods in DefaultMemory are now asynchronous, and new asynchronous hooks (add_memory_on_step_end, add_memory_on_task_end) have been added to LLMAgent for managing memory at different stages of agent execution.
  • Embedding Dimension and Remote Vector Store Support: The memory system now supports specifying embedding dimensions (emb_dim) and configuring remote vector stores, offering greater control over memory storage and retrieval mechanisms.
  • Prompt Refinement: The fact_retrieval_prompt has been updated to specifically focus on extracting technical facts and development details relevant to code projects, enhancing the quality of memory content for coding agents.
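The asynchronous memory flow described above can be sketched as follows. The class and hook names mirror the summary (DefaultMemory, add_memory_on_step_end, add_memory_on_task_end), while the in-memory store and the method signatures are simplifying assumptions for illustration, not the actual ms_agent implementation.

```python
import asyncio

class DefaultMemory:
    """Toy stand-in for the refactored memory: async add and search."""

    def __init__(self):
        self._facts: list[str] = []

    async def add(self, text: str) -> None:
        self._facts.append(text)

    async def search(self, query: str, limit: int = 5) -> list[str]:
        # Naive substring match in place of a real vector-store query.
        return [f for f in self._facts if query in f][:limit]

class LLMAgent:
    def __init__(self, memory: DefaultMemory):
        self.memory = memory  # a single instance can be shared by agents

    async def add_memory_on_step_end(self, step_output: str) -> None:
        await self.memory.add(step_output)

    async def add_memory_on_task_end(self, summary: str) -> None:
        await self.memory.add(summary)

async def main() -> list[str]:
    memory = DefaultMemory()
    agent = LLMAgent(memory)
    await agent.add_memory_on_step_end('refactored memory module')
    await agent.add_memory_on_task_end('task done: memory refactor')
    return await memory.search('memory')

print(asyncio.run(main()))
```

Hooking memory writes at step end and task end lets the agent persist intermediate facts without blocking the main generation loop.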


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

Hi, and thanks for this important refactor of the memory module. By supporting multiple memory instances, sharing memory across agents, and more flexible meta-information configuration via YAML, this update greatly improves the flexibility and capability of the memory system. The overall code structure is clear and the changes head in the right direction.

I have left some specific feedback, mainly in the following areas:

  • Code quality: remove unused imports and redundant logic checks.
  • Maintainability: refactor the duplicated code logic in llm_agent.py.
  • Readability: simplify complex getattr calls so the code's intent is clearer.
  • Potential issues: a dataclass default-value change that may cause runtime errors, and a typo in a constant mapping.

Please see the individual comments; I hope these suggestions help you polish the code further.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This PR delivers an important refactor of the memory management system, making it more flexible and powerful. Key improvements include: support for multiple memories, sharing a single memory across multiple agents, flexible YAML configuration of memory meta information, and support for specifying emb_dim and a remote vector_store. Code modularity has also improved, for example with the introduction of a dedicated SharedMemoryManager. Centralizing the service endpoint configuration is also good practice.

That said, I found a few areas for improvement:

  • A default-value change in the Message dataclass may cause runtime errors.
  • The new memory-handling logic contains some duplicated and redundant code.
  • There are a few minor code consistency and maintainability issues.

Overall, this is an excellent feature enhancement. Addressing these issues would make the code more robust and maintainable.
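The dataclass default-value concern raised in both reviews is not spelled out, but it plausibly relates to the classic dataclass mutable-default pitfall. Here is a minimal illustration under that assumption; this Message class is a stand-in, not the actual ms_agent Message.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str = 'user'
    # A bare mutable default like `tool_calls: list = []` would raise
    # ValueError at class-definition time; dataclasses require
    # default_factory so each instance gets its own list.
    tool_calls: list = field(default_factory=list)

m1 = Message()
m2 = Message()
m1.tool_calls.append('search')
# default_factory keeps instances independent, so m2 is unaffected.
print(m2.tool_calls)  # → []
```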

@suluyana suluyana closed this Oct 31, 2025