Enclose examples inside code blocks #61

Merged · 1 commit · Dec 30, 2024
10 changes: 6 additions & 4 deletions autogen/agentchat/contrib/capabilities/vision_capability.py
Original file line number Diff line number Diff line change
@@ -141,22 +141,24 @@ def process_last_received_message(self, content: Union[str, list[dict]]) -> str:
(Content is a string without an image, remains unchanged.)

- Input as String, with image location:
-content = "What's weather in this cool photo: <img http://example.com/photo.jpg>"
-Output: "What's weather in this cool photo: <img http://example.com/photo.jpg> in case you can not see, the caption of this image is:
+content = "What's weather in this cool photo: `<img http://example.com/photo.jpg>`"
+Output: "What's weather in this cool photo: `<img http://example.com/photo.jpg>` in case you can not see, the caption of this image is:
A beautiful sunset over the mountains\n"
(Caption added after the image)

- Input as List with Text Only:
-content = [{"type": "text", "text": "Here's an interesting fact."}]
+content = `[{"type": "text", "text": "Here's an interesting fact."}]`
Output: "Here's an interesting fact."
(No images in the content, it remains unchanged.)

- Input as List with Image URL:
+```python
content = [
{"type": "text", "text": "What's weather in this cool photo:"},
{"type": "image_url", "image_url": {"url": "http://example.com/photo.jpg"}}
]
-Output: "What's weather in this cool photo: <img http://example.com/photo.jpg> in case you can not see, the caption of this image is:
+```
+Output: "What's weather in this cool photo: `<img http://example.com/photo.jpg>` in case you can not see, the caption of this image is:
A beautiful sunset over the mountains\n"
(Caption added after the image)
"""
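For reviewers, the caption-appending behavior this docstring describes can be sketched roughly as follows. This is an illustrative regex-based sketch under assumptions, not the library's actual implementation; `append_caption` is a hypothetical helper:

```python
import re


def append_caption(content: str, caption: str) -> str:
    # Find an <img ...> tag and append a caption notice right after it,
    # mirroring the docstring's example output format.
    pattern = r"(<img [^>]+>)"
    notice = " in case you can not see, the caption of this image is: " + caption
    return re.sub(pattern, r"\1" + notice, content)


msg = "What's weather in this cool photo: <img http://example.com/photo.jpg>"
print(append_caption(msg, "A beautiful sunset over the mountains"))
```

The real capability obtains the caption from a vision model; here it is passed in directly to keep the sketch self-contained.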
3 changes: 2 additions & 1 deletion autogen/agentchat/contrib/graph_rag/graph_rag_capability.py
@@ -20,6 +20,7 @@ class GraphRagCapability(AgentCapability):
3. generate answers from retrieved information and send messages back.

For example,
+```python
graph_query_engine = GraphQueryEngine(...)
graph_query_engine.init_db([Document(doc1), Document(doc2), ...])

@@ -50,7 +51,7 @@ class GraphRagCapability(AgentCapability):
# - Hugo Weaving',
# 'role': 'user_proxy'},
# ...)
-
+```
"""

def __init__(self, query_engine: GraphQueryEngine):
12 changes: 8 additions & 4 deletions autogen/agentchat/contrib/img_utils.py
@@ -112,7 +112,7 @@ def llava_formatter(prompt: str, order_image_tokens: bool = False) -> tuple[str,
Formats the input prompt by replacing image tags and returns the new prompt along with image locations.

Parameters:
-- prompt (str): The input string that may contain image tags like <img ...>.
+- prompt (str): The input string that may contain image tags like `<img ...>`.
- order_image_tokens (bool, optional): Whether to order the image tokens with numbers.
It will be useful for GPT-4V. Defaults to False.

@@ -194,7 +194,7 @@ def gpt4v_formatter(prompt: str, img_format: str = "uri") -> list[Union[str, dic
Formats the input prompt by replacing image tags and returns a list of text and images.

Args:
-- prompt (str): The input string that may contain image tags like <img ...>.
+- prompt (str): The input string that may contain image tags like `<img ...>`.
- img_format (str): what image format should be used. One of "uri", "url", "pil".

Returns:
@@ -293,24 +293,28 @@ def message_formatter_pil_to_b64(messages: list[dict]) -> list[dict]:
'image_url' key converted to base64 encoded data URIs.

Example Input:
+```python
[
{'content': [{'type': 'text', 'text': 'You are a helpful AI assistant.'}], 'role': 'system'},
{'content': [
-{'type': 'text', 'text': "What's the breed of this dog here? \n"},
+{'type': 'text', 'text': "What's the breed of this dog here?"},
{'type': 'image_url', 'image_url': {'url': a PIL.Image.Image}},
{'type': 'text', 'text': '.'}],
'role': 'user'}
]
+```

Example Output:
+```python
[
{'content': [{'type': 'text', 'text': 'You are a helpful AI assistant.'}], 'role': 'system'},
{'content': [
-{'type': 'text', 'text': "What's the breed of this dog here? \n"},
+{'type': 'text', 'text': "What's the breed of this dog here?"},
{'type': 'image_url', 'image_url': {'url': a B64 Image}},
{'type': 'text', 'text': '.'}],
'role': 'user'}
]
+```
"""
new_messages = []
for message in messages:
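The `message_formatter_pil_to_b64` docstring above shows PIL images being swapped for base64 data URIs. The core conversion can be sketched as below; this is a simplified stand-in where raw bytes replace a real `PIL.Image` and the MIME type is an assumption:

```python
import base64


def bytes_to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    # Encode raw image bytes as a base64 data URI, the representation the
    # formatter substitutes for in-memory images in 'image_url' entries.
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{encoded}"


uri = bytes_to_data_uri(b"\x89PNG\r\n")
print(uri.startswith("data:image/png;base64,"))
```

The real formatter would first serialize the `PIL.Image` to bytes (e.g. into an in-memory buffer) before encoding.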
8 changes: 5 additions & 3 deletions autogen/agentchat/contrib/vectordb/chromadb.py
@@ -39,11 +39,11 @@ def __init__(
Args:
client: chromadb.Client | The client object of the vector database. Default is None.
If provided, it will use the client object directly and ignore other arguments.
-path: str | The path to the vector database. Default is `tmp/db`. The default was `None` for version <=0.2.24.
+path: str | The path to the vector database. Default is `tmp/db`. The default was `None` for version `<=0.2.24`.
embedding_function: Callable | The embedding function used to generate the vector representation
of the documents. Default is None, SentenceTransformerEmbeddingFunction("all-MiniLM-L6-v2") will be used.
metadata: dict | The metadata of the vector database. Default is None. If None, it will use this
-setting: {"hnsw:space": "ip", "hnsw:construction_ef": 30, "hnsw:M": 32}. For more details of
+setting: `{"hnsw:space": "ip", "hnsw:construction_ef": 30, "hnsw:M": 32}`. For more details of
the metadata, please refer to [distances](https://github.com/nmslib/hnswlib#supported-distances),
[hnsw](https://github.com/chroma-core/chroma/blob/566bc80f6c8ee29f7d99b6322654f32183c368c4/chromadb/segment/impl/vector/local_hnsw.py#L184),
and [ALGO_PARAMS](https://github.com/nmslib/hnswlib/blob/master/ALGO_PARAMS.md).
@@ -248,7 +248,7 @@ def retrieve_docs(
collection_name: str | The name of the collection. Default is None.
n_results: int | The number of relevant documents to return. Default is 10.
distance_threshold: float | The threshold for the distance score, only distance smaller than it will be
-returned. Don't filter with it if < 0. Default is -1.
+returned. Don't filter with it if `< 0`. Default is -1.
kwargs: Dict | Additional keyword arguments.

Returns:
@@ -279,6 +279,7 @@ def _chroma_get_results_to_list_documents(data_dict) -> list[Document]:
List[Document] | The list of Document.

Example:
+```python
data_dict = {
"key1s": [1, 2, 3],
"key2s": ["a", "b", "c"],
@@ -291,6 +292,7 @@ def _chroma_get_results_to_list_documents(data_dict) -> list[Document]:
{"key1": 2, "key2": "b", "key4": "y"},
{"key1": 3, "key2": "c", "key4": "z"},
]
+```
"""

results = []
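The columnar-to-row conversion in the `_chroma_get_results_to_list_documents` example above can be sketched like this. It is an illustrative re-implementation, not the library's code; the singular key names follow the docstring's convention of stripping a trailing "s":

```python
def columns_to_documents(data_dict: dict) -> list[dict]:
    # Drop columns with no values, then zip the remaining columns into rows,
    # renaming each plural key ("key1s") to its singular form ("key1").
    keys = [k for k, v in data_dict.items() if v is not None]
    return [
        {k[:-1]: v for k, v in zip(keys, row)}
        for row in zip(*(data_dict[k] for k in keys))
    ]


data = {"key1s": [1, 2], "key2s": ["a", "b"], "key3s": None}
print(columns_to_documents(data))
# [{'key1': 1, 'key2': 'a'}, {'key1': 2, 'key2': 'b'}]
```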
4 changes: 2 additions & 2 deletions autogen/agentchat/contrib/vectordb/pgvectordb.py
@@ -606,7 +606,7 @@ def __init__(
Models can be chosen from:
https://huggingface.co/models?library=sentence-transformers
metadata: dict | The metadata of the vector database. Default is None. If None, it will use this
-setting: {"hnsw:space": "ip", "hnsw:construction_ef": 30, "hnsw:M": 16}. Creates Index on table
+setting: `{"hnsw:space": "ip", "hnsw:construction_ef": 30, "hnsw:M": 16}`. Creates Index on table
using hnsw (embedding vector_l2_ops) WITH (m = hnsw:M) ef_construction = "hnsw:construction_ef".
For more info: https://github.com/pgvector/pgvector?tab=readme-ov-file#hnsw
Returns:
@@ -917,7 +917,7 @@ def retrieve_docs(
collection_name: str | The name of the collection. Default is None.
n_results: int | The number of relevant documents to return. Default is 10.
distance_threshold: float | The threshold for the distance score, only distance smaller than it will be
-returned. Don't filter with it if < 0. Default is -1.
+returned. Don't filter with it if `< 0`. Default is -1.
kwargs: Dict | Additional keyword arguments.

Returns:
4 changes: 2 additions & 2 deletions autogen/agentchat/contrib/vectordb/qdrant.py
@@ -56,7 +56,7 @@ def __init__(
Defaults to None.
**kwargs: Additional options to pass to fastembed.TextEmbedding
Raises:
-ValueError: If the model_name is not in the format <org>/<model> e.g. BAAI/bge-small-en-v1.5.
+ValueError: If the model_name is not in the format `<org>/<model>` e.g. BAAI/bge-small-en-v1.5.
"""
try:
from fastembed import TextEmbedding
@@ -229,7 +229,7 @@ def retrieve_docs(
collection_name: str | The name of the collection. Default is None.
n_results: int | The number of relevant documents to return. Default is 10.
distance_threshold: float | The threshold for the distance score, only distance smaller than it will be
-returned. Don't filter with it if < 0. Default is 0.
+returned. Don't filter with it if `< 0`. Default is 0.
kwargs: Dict | Additional keyword arguments.

Returns:
2 changes: 2 additions & 0 deletions autogen/agentchat/contrib/vectordb/utils.py
@@ -78,6 +78,7 @@ def chroma_results_to_query_results(data_dict: dict[str, list[list[Any]]], speci
special_key.

Example:
+```python
data_dict = {
"key1s": [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
"key2s": [["a", "b", "c"], ["c", "d", "e"], ["e", "f", "g"]],
@@ -103,6 +104,7 @@ def chroma_results_to_query_results(data_dict: dict[str, list[list[Any]]], speci
({"key1": 9, "key2": "g", "key4": "6"}, 0.9),
],
]
+```
"""

keys = [
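The transformation in the `chroma_results_to_query_results` example above pairs each document's fields with its distance, per query batch. A hedged sketch of that shape (illustrative only; the real function handles more key variants):

```python
def to_query_results(data: dict, special_key: str = "distances"):
    # For each query batch, pair a row-dict built from the other columns
    # with the corresponding distance score from the special key.
    keys = [k for k in data if k != special_key and data[k] is not None]
    results = []
    for q in range(len(data[special_key])):
        batch = []
        for i, dist in enumerate(data[special_key][q]):
            doc = {k[:-1]: data[k][q][i] for k in keys}
            batch.append((doc, dist))
        results.append(batch)
    return results


data = {"key1s": [[1, 2]], "distances": [[0.1, 0.2]]}
print(to_query_results(data))
# [[({'key1': 1}, 0.1), ({'key1': 2}, 0.2)]]
```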
26 changes: 13 additions & 13 deletions autogen/agentchat/groupchat.py
@@ -39,28 +39,28 @@ class GroupChat:
When set to True and when a message is a function call suggestion,
the next speaker will be chosen from an agent which contains the corresponding function name
in its `function_map`.
-- select_speaker_message_template: customize the select speaker message (used in "auto" speaker selection), which appears first in the message context and generally includes the agent descriptions and list of agents. If the string contains "{roles}" it will replaced with the agent's and their role descriptions. If the string contains "{agentlist}" it will be replaced with a comma-separated list of agent names in square brackets. The default value is:
+- select_speaker_message_template: customize the select speaker message (used in "auto" speaker selection), which appears first in the message context and generally includes the agent descriptions and list of agents. If the string contains "`{roles}`" it will replaced with the agent's and their role descriptions. If the string contains "`{agentlist}`" it will be replaced with a comma-separated list of agent names in square brackets. The default value is:
"You are in a role play game. The following roles are available:
-{roles}.
+`{roles}`.
Read the following conversation.
-Then select the next role from {agentlist} to play. Only return the role."
-- select_speaker_prompt_template: customize the select speaker prompt (used in "auto" speaker selection), which appears last in the message context and generally includes the list of agents and guidance for the LLM to select the next agent. If the string contains "{agentlist}" it will be replaced with a comma-separated list of agent names in square brackets. The default value is:
-"Read the above conversation. Then select the next role from {agentlist} to play. Only return the role."
+Then select the next role from `{agentlist}` to play. Only return the role."
+- select_speaker_prompt_template: customize the select speaker prompt (used in "auto" speaker selection), which appears last in the message context and generally includes the list of agents and guidance for the LLM to select the next agent. If the string contains "`{agentlist}`" it will be replaced with a comma-separated list of agent names in square brackets. The default value is:
+"Read the above conversation. Then select the next role from `{agentlist}` to play. Only return the role."
To ignore this prompt being used, set this to None. If set to None, ensure your instructions for selecting a speaker are in the select_speaker_message_template string.
-- select_speaker_auto_multiple_template: customize the follow-up prompt used when selecting a speaker fails with a response that contains multiple agent names. This prompt guides the LLM to return just one agent name. Applies only to "auto" speaker selection method. If the string contains "{agentlist}" it will be replaced with a comma-separated list of agent names in square brackets. The default value is:
+- select_speaker_auto_multiple_template: customize the follow-up prompt used when selecting a speaker fails with a response that contains multiple agent names. This prompt guides the LLM to return just one agent name. Applies only to "auto" speaker selection method. If the string contains "`{agentlist}`" it will be replaced with a comma-separated list of agent names in square brackets. The default value is:
"You provided more than one name in your text, please return just the name of the next speaker. To determine the speaker use these prioritised rules:
1. If the context refers to themselves as a speaker e.g. "As the..." , choose that speaker's name
2. If it refers to the "next" speaker name, choose that name
3. Otherwise, choose the first provided speaker's name in the context
The names are case-sensitive and should not be abbreviated or changed.
Respond with ONLY the name of the speaker and DO NOT provide a reason."
-- select_speaker_auto_none_template: customize the follow-up prompt used when selecting a speaker fails with a response that contains no agent names. This prompt guides the LLM to return an agent name and provides a list of agent names. Applies only to "auto" speaker selection method. If the string contains "{agentlist}" it will be replaced with a comma-separated list of agent names in square brackets. The default value is:
+- select_speaker_auto_none_template: customize the follow-up prompt used when selecting a speaker fails with a response that contains no agent names. This prompt guides the LLM to return an agent name and provides a list of agent names. Applies only to "auto" speaker selection method. If the string contains "`{agentlist}`" it will be replaced with a comma-separated list of agent names in square brackets. The default value is:
"You didn't choose a speaker. As a reminder, to determine the speaker use these prioritised rules:
1. If the context refers to themselves as a speaker e.g. "As the..." , choose that speaker's name
2. If it refers to the "next" speaker name, choose that name
3. Otherwise, choose the first provided speaker's name in the context
The names are case-sensitive and should not be abbreviated or changed.
-The only names that are accepted are {agentlist}.
+The only names that are accepted are `{agentlist}`.
Respond with ONLY the name of the speaker and DO NOT provide a reason."
- speaker_selection_method: the method for selecting the next speaker. Default is "auto".
Could be any of the following (case insensitive), will raise ValueError if not recognized:
@@ -1592,11 +1592,11 @@ def clear_agents_history(self, reply: dict, groupchat: GroupChat) -> str:
"""Clears history of messages for all agents or selected one. Can preserve selected number of last messages.
That function is called when user manually provide "clear history" phrase in his reply.
When "clear history" is provided, the history of messages for all agents is cleared.
-When "clear history <agent_name>" is provided, the history of messages for selected agent is cleared.
-When "clear history <nr_of_messages_to_preserve>" is provided, the history of messages for all agents is cleared
-except last <nr_of_messages_to_preserve> messages.
-When "clear history <agent_name> <nr_of_messages_to_preserve>" is provided, the history of messages for selected
-agent is cleared except last <nr_of_messages_to_preserve> messages.
+When "clear history `<agent_name>`" is provided, the history of messages for selected agent is cleared.
+When "clear history `<nr_of_messages_to_preserve>`" is provided, the history of messages for all agents is cleared
+except last `<nr_of_messages_to_preserve>` messages.
+When "clear history `<agent_name>` `<nr_of_messages_to_preserve>`" is provided, the history of messages for selected
+agent is cleared except last `<nr_of_messages_to_preserve>` messages.
Phrase "clear history" and optional arguments are cut out from the reply before it passed to the chat.

Args:
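The "clear history" argument forms enumerated in the docstring above can be parsed with a small helper along these lines. This is a hedged sketch of the described behavior, not the groupchat code itself; `parse_clear_history` is a hypothetical name:

```python
def parse_clear_history(reply: str):
    # Extract (agent_name, nr_of_messages_to_preserve) from a
    # "clear history [<agent_name>] [<nr_of_messages_to_preserve>]" phrase:
    # a numeric token is the preserve count, any other token the agent name.
    agent, preserve = None, 0
    for word in reply.strip().split()[2:]:  # skip the "clear history" words
        if word.isdigit():
            preserve = int(word)
        else:
            agent = word
    return agent, preserve


print(parse_clear_history("clear history"))        # (None, 0)
print(parse_clear_history("clear history bob 3"))  # ('bob', 3)
```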
6 changes: 3 additions & 3 deletions autogen/agentchat/utils.py
@@ -111,9 +111,9 @@ def parse_tags_from_content(tag: str, content: Union[str, list[dict[str, Any]]])
can be a single string or a set of attribute-value pairs.

Examples:
-<img http://example.com/image.png> -> [{"tag": "img", "attr": {"src": "http://example.com/image.png"}, "match": re.Match}]
-<audio text="Hello I'm a robot" prompt="whisper"> ->
-[{"tag": "audio", "attr": {"text": "Hello I'm a robot", "prompt": "whisper"}, "match": re.Match}]
+`<img http://example.com/image.png> -> [{"tag": "img", "attr": {"src": "http://example.com/image.png"}, "match": re.Match}]`
+```<audio text="Hello I'm a robot" prompt="whisper"> ->
+[{"tag": "audio", "attr": {"text": "Hello I'm a robot", "prompt": "whisper"}, "match": re.Match}]```

Args:
tag (str): The HTML style tag to be parsed.
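The tag parsing illustrated in the `parse_tags_from_content` examples above can be sketched with a regex. This is an assumption-laden simplification; the real function's attribute handling is more involved:

```python
import re


def parse_tags(tag: str, content: str) -> list[dict]:
    # Find <tag ...> occurrences, split the inner text into key="value"
    # attributes, and fall back to treating a bare token as a src value.
    results = []
    for match in re.finditer(rf"<{tag} ([^>]+)>", content):
        inner = match.group(1)
        attrs = dict(re.findall(r'(\w+)="([^"]*)"', inner))
        if not attrs:
            attrs = {"src": inner}
        results.append({"tag": tag, "attr": attrs, "match": match})
    return results


out = parse_tags("img", "look: <img http://example.com/image.png>")
print(out[0]["attr"])  # {'src': 'http://example.com/image.png'}
```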