The Infosys Responsible AI toolkit provides a set of APIs to integrate safety, security, privacy, explainability, fairness, and hallucination checks into AI solutions.

### Repositories and Installation Instructions
The following table lists the modules of the Infosys Responsible AI Toolkit. Installation instructions for each module can be found in the corresponding README file within the module's directory. An illustrative usage sketch follows the table.

| # | Module | Description | Repositories |
|---|--------|-------------|--------------|
| 1 | ModerationLayer APIs <br>(comprehensive suite of Safety, Privacy, Explainability, Fairness and Hallucination tenets) | Regulate the content of prompts and responses generated by LLMs |[responsible-ai-moderationlayer](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-moderationlayer),<br>[responsible-ai-moderationModel](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-moderationmodel),<br>[responsible-ai-admin](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-admin)|
| 2 | Explainability APIs** | Get explanations for LLM responses,<br>global and local explainability for regression, classification and time-series models |[responsible-ai-llm-explain](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-llm-explain),<br>[responsible-ai-explainability](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-explainability),<br>[responsible-ai-model-detail](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-model-detail),<br>[responsible-ai-reporting-tool](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-reporting-tool),<br>[responsible-ai-moderationlayer](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-moderationlayer)|
| 3 | Image Explainability | Offers detailed explanations for images generated by Large Language Models (LLMs) |[responsible-ai-fairness](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-fairness),<br>[responsible-ai-llm](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-llm)|
| 4 | Responsible-ai-LLM | Provides an implementation for generating images using a Large Language Model (LLM) |[responsible-ai-llm](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-llm)|
| 5 | Fairness & Bias API | Check fairness and detect biases in LLM prompts and responses, as well as in traditional ML models |[responsible-ai-fairness](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-fairness),<br>[responsible-ai-model-detail](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-model-detail),<br>[responsible-ai-reporting-tool](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-reporting-tool),<br>[responsible-ai-file-storage](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-file-storage)|
| 6 | Hallucination API | Detect and quantify hallucination in LLM responses under RAG scenarios |[responsible-ai-hallucination](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-Hallucination)|
| 7 | Privacy API | Detect and anonymize, encrypt, or highlight PII in prompts for LLMs and in their responses |[responsible-ai-privacy](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-privacy),<br>[responsible-ai-admin](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-admin)|
| 8 | Safety API | Detect and anonymize toxic and profane text associated with LLMs |[responsible-ai-safety](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-safety),<br>[responsible-ai-llm](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-llm)|
| 9 | Security API | Simulate different types of security attacks and defenses on tabular and image data; prompt injection and jailbreak checks |[responsible-ai-security](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-security),<br>[responsible-ai-model-detail](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-model-detail),<br>[responsible-ai-reporting-tool](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-reporting-tool)|
| 10 | Red Teaming API | Promote responsible AI practices and enhance the resilience of LLMs against adversarial attacks using the TAP and PAIR techniques |[responsible-ai-redteaming](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-redteaming)|
** Endpoints for explainability are located in both the explainability and moderation layer repositories. Refer to the README files in these repositories for more details on specific features.
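
These modules expose REST endpoints. As a rough illustration of how a client might consume one of them, the sketch below calls the moderation layer as a guardrail before forwarding a prompt to an LLM. The route, port, payload fields, and response keys are assumptions for illustration only, not the module's documented contract; consult the responsible-ai-moderationlayer README for the actual API.

```python
import requests

# Hypothetical route and payload shape -- see the
# responsible-ai-moderationlayer README for the real contract.
MODERATION_URL = "http://localhost:8000/rai/v1/moderations"

payload = {"Prompt": "How do I reset my account password?"}

resp = requests.post(MODERATION_URL, json=payload, timeout=30)
resp.raise_for_status()
result = resp.json()

# Guardrail pattern: forward the prompt to the LLM only when the
# moderation checks pass (response keys assumed for illustration).
if result.get("moderationResults", {}).get("decision") == "PASSED":
    print("Prompt allowed; forwarding to the LLM")
else:
    print("Prompt blocked by moderation checks")
```
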
The Responsible AI toolkit provides a user-friendly interface for seamless experimentation. The following modules make up that interface and its supporting services:

| # | Module | Description | Repositories |
|---|--------|-------------|--------------|
| 1 | MFE | An Angular micro frontend app serves as a user interface where users can easily interact with and consume various backend endpoints through independently developed, modular components. |[responsible-ai-mfe](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-mfe)|
| 2 | SHELL | A shell application in a micro frontend architecture acts as the central hub, orchestrating and loading independent frontend modules. It provides a unified user interface where users can interact with different micro frontends and consume backend endpoints.|[responsible-ai-shell](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-shell)|
| 3 | Backend | A Python backend module for registration and authentication. It handles user account management, including user registration, login, password validation, and session management.|[responsible-ai-backend](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-backend)|
| 4 | Admin | Supporting module used to configure the main modules. Users can create recognizers and custom templates, configure thresholds, and map them to a created account and portfolio.|[responsible-ai-admin](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-admin)|
| 5 | Telemetry | A Python backend module defining the structure of each tenet for ingesting the APIs' data into Elasticsearch indexes. It provides customizable input validation and inserts data coming from the tenets into Elasticsearch, where it can be visualized with Kibana (a minimal ingestion sketch follows the table).|[responsible-ai-telemetry](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-telemetry)|
| 6 | File Storage | Python module that provides versatile APIs for integration across multiple microservices, enabling efficient file management with Azure Blob Storage. It supports key operations such as file upload, retrieval, and update.|[responsible-ai-file-storage](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-file-storage)|
| 7 | Benchmarking | Displays stats related to benchmarking large language models (LLMs) across various categories such as fairness, privacy, truthfulness and ethics. It helps evaluate and compare LLM performance in these critical areas.|[responsible-ai-llm-benchmarking](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-llm-benchmarking)|
| 8 | Upload-Doc | Module for processing large files such as video and storing the processed output against a user ID. Video processing has three subcategories: 1. PII anonymization, 2. safety masking, 3. nudity masking.|[responsible-ai-upload-doc](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-upload-doc)|
| 9 | Workbench | The workbench repository is used for processing unstructured text and generating reports on it.|[responsible-ai-workbench](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-workbench),<br>[responsible-ai-moderationlayer](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-moderationlayer),<br>[responsible-ai-privacy](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-privacy),<br>[responsible-ai-safety](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-safety),<br>[responsible-ai-file-storage](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-file-storage),<br>[responsible-ai-explainability](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-explainability)|
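
As referenced in the Telemetry row above, here is a minimal sketch of what ingestion into Elasticsearch could look like. It assumes a local cluster, and the index name and document fields are illustrative; the module defines its own per-tenet schemas and validation.

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch

# Assumed local cluster; index name and document shape are
# illustrative, not the module's actual per-tenet schema.
es = Elasticsearch("http://localhost:9200")

telemetry_doc = {
    "tenet": "privacy",
    "endpoint": "/rai/v1/privacy/anonymize",
    "status": "success",
    "latency_ms": 182,
}

# Once indexed, documents like this can be explored in Kibana.
es.index(index="rai-telemetry-privacy", document=telemetry_doc)
```
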
For technical details and usage instructions on the Infosys Responsible AI toolkit's features, please refer to the [documentation](https://infosys.github.io/Infosys-Responsible-AI-Toolkit/).
## Toolkit features at a glance
### Generative AI Models
| Safety, Security & Privacy | Model Transparency | Text Quality | Linguistic Quality |

### Machine Learning Models

| Security | Fairness | Explainability |
|----------|----------|----------------|
|* Simulate Adversarial Attacks<br>* Recommend Defense Mechanisms|* Bias Detection Methods:<br><i>- Statistical Parity Difference</i><br><i>- Disparate Impact Ratio</i><br><i>- Four-Fifths Rule</i><br><i>- Cohen's D</i><br>* Mitigation Methods:<br><i>- Equalized Odds</i><br><i>- Re-weighing</i>|* Global Explainability using SHAP<br>* Local Explainability using LIME |
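
To make the bias-detection metrics above concrete, here is a minimal sketch (with made-up data) of statistical parity difference and the disparate impact ratio, which the four-fifths rule flags when it falls below 0.8:

```python
def selection_rate(preds, groups, group):
    """P(prediction = 1 | group): share of positive outcomes in a group."""
    outcomes = [p for p, g in zip(preds, groups) if g == group]
    return sum(outcomes) / len(outcomes)

# Toy predictions for a privileged group "A" and an unprivileged group "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rate_priv   = selection_rate(preds, groups, "A")  # 0.75
rate_unpriv = selection_rate(preds, groups, "B")  # 0.25

spd = rate_unpriv - rate_priv   # statistical parity difference: -0.50
di  = rate_unpriv / rate_priv   # disparate impact ratio: 0.33

# Four-fifths rule: a ratio below 0.8 suggests adverse impact.
print(f"SPD = {spd:.2f}, DI = {di:.2f}, four-fifths violated: {di < 0.8}")
```
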
## Upcoming Features
* Multi-lingual support for FM-Moderation Guardrails
* Red Teaming using datasets and Garak-based techniques and libraries
* Counterfactual explanations for ML model predictions
We appreciate your feedback and aim to keep you updated on our plans regularly. This approach ensures we're prioritizing the right tasks and enables you to make informed decisions based on our development roadmap.
[Explore the roadmap »](https://github.com/users/InfosysResponsibleAI/projects/2/views/1)
[Contribute to our roadmap »](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/blob/Release-2.1.0/CONTRIBUTING.md)
Note: These API-based guardrails are optimized for Azure OpenAI. Users employing alternative LLMs should make the necessary client configuration adjustments. For an Azure OpenAI API subscription, follow the instructions provided on the [Microsoft Azure website](https://azure.microsoft.com/en-us/pricing/purchase-options/azure-account?icid=ai-services&azure-portal=true).
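
For instance, a minimal Azure OpenAI client configuration with the official openai Python package might look like the following; the endpoint, key, API version, and deployment name are placeholders to be replaced with values from your own subscription:

```python
from openai import AzureOpenAI  # pip install openai

# Placeholder values -- substitute the endpoint, key, API version, and
# deployment name from your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the Azure *deployment* name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```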