Description
In the deep_research_agent project, the file planner_agent.py (specifically at line 315 of deep_research_agent/planner_agent.py) calls the create_file function defined in tools.py (located at lines 363–380 of deep_research_agent/tools.py).
Upon inspection, the create_file function performs file creation and update operations without requiring explicit user consent and without validating the file content or the target path. If the Large Language Model (LLM) generates a malicious file name or content, this can result in:
- **Arbitrary File Write:** The LLM can specify any file path and name, potentially overwriting critical system files or creating new files in sensitive locations.
- **Data Leakage / Tampering:** By writing to specific files, the LLM may alter application behavior or even cause data exfiltration or further system compromise.
- **Denial of Service (DoS):** The LLM could attempt to write many files or very large files, potentially exhausting disk space.
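To illustrate the path-traversal component of the arbitrary-write risk, here is a minimal sketch (the directory and file names are hypothetical, and the real `create_file` signature is not reproduced) showing how a naive join of a base directory with an unsanitized, LLM-supplied name can escape the intended output directory:

```python
import os

def naive_target(base_dir: str, llm_supplied_name: str) -> str:
    # Naive join: trusts the LLM-supplied name completely.
    return os.path.join(base_dir, llm_supplied_name)

# A traversal sequence in the name escapes the intended directory.
escaped = naive_target("/srv/agent_output", "../../etc/cron.d/job")
print(os.path.normpath(escaped))  # /etc/cron.d/job
```

Any write performed against such a resolved path lands outside the agent's workspace, which is why sanitization must happen before the filesystem call, not after.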
Recommendations
To mitigate this risk, it is recommended to implement one or more of the following security measures, either within the create_file function itself or before it is called:
- **User Confirmation:** Prompt the user for explicit approval before performing file write operations, especially when the file path or content appears suspicious.
- **Path Whitelisting / Blacklisting:** Restrict file writes to predefined safe directories, or explicitly disallow writing to sensitive system paths.
- **Content Validation:** Perform basic validation of the file content, such as checking file types or scanning for potentially malicious code.
- **Filename / Path Sanitization:** Rigorously sanitize and normalize any LLM-generated filenames and paths to prevent path traversal attacks.
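The sanitization, whitelisting, and size-limit recommendations above can be combined into a single guard placed in front of the write. The following is a minimal sketch only, not the project's API: `safe_create_file`, `ALLOWED_ROOT`, and `MAX_BYTES` are illustrative names, and the user-confirmation step is omitted for brevity.

```python
import os

ALLOWED_ROOT = os.path.realpath("agent_workspace")  # assumed safe output directory
MAX_BYTES = 1_000_000                               # assumed per-file size cap

def safe_create_file(relative_path: str, content: str) -> str:
    """Validate an LLM-supplied path and payload before writing (sketch)."""
    # Sanitize: resolve the full path, then require it to stay inside ALLOWED_ROOT.
    target = os.path.realpath(os.path.join(ALLOWED_ROOT, relative_path))
    if os.path.commonpath([ALLOWED_ROOT, target]) != ALLOWED_ROOT:
        raise ValueError(f"path escapes workspace: {relative_path!r}")
    # Basic content validation: cap the payload size to limit disk-exhaustion DoS.
    data = content.encode("utf-8")
    if len(data) > MAX_BYTES:
        raise ValueError("content exceeds size limit")
    os.makedirs(os.path.dirname(target), exist_ok=True)
    with open(target, "wb") as f:
        f.write(data)
    return target
```

Because `os.path.realpath` resolves `..` segments and symlinks before the containment check, a traversal name such as `../evil.txt` is rejected instead of being written outside the workspace.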
Affected Code Locations
- deep_research_agent/planner_agent.py (line 315)
- deep_research_agent/tools.py (lines 363–380)