
Commit a7ed06e

Authored by thenmozhi-krishnan (Thenmozhi Krishnan) and co-author
Release 2.2.0 (#40)
* Updating new functionality
* updating new functionality
* Updating README.md
* Updated README.md
* Updated README.md
* Updated README.md
* Updating new features and functionality
* updating new functionality
* deleted ai-explain
* updating new functionality
* publishing new module
* deleted few modules
* updating new modules
* updating new module
* updating new modules
* deleted the modules
* updating new functionality
* New module added for image explainability
* New module added to generate image
* new module added for automated Redteaming
* Updated LICENSE.md
* Updated Readme.md
* Updated License.md
* Updated License.md
* Updated License.md
* Updated local url and contact in Readme.md
* Updated License.md
* Updated LICENSE.md
* Updated LICENSE.md
* Updated LICENSE.md
* Updated LICENSE.txt
* Updated LICENSE.txt
* Updated LICENSE.md
* Updated License.md
* Updated License.md
* Updated License.md
* Updated License.md
* Updated license file in README.md
* Updated local host url README.md
* Updated README.md
* Updated document url in README.md
* Updated API documentation url in README.md
* Updated local host url README.md
* Updated local host url in README.md
* Updated local host url in README.md
* Updated local host url in README.md
* Updated local host url in Readme.md
* Updated local host url in README.md
* Updated README.md
* Updated license url in README.md
* Updated README.md
* Updated README.md
* Updated telemetry url and hosturl in README.md
* Updated README.md
* Updated README.md
* Updated README.md

---------

Co-authored-by: Thenmozhi Krishnan <[email protected]>
1 parent 0d8efd5 commit a7ed06e

File tree

1,745 files changed (+3,240,794 / −31,010 lines)


Features and Endpoints.docx

94.4 KB
Binary file not shown.

README.md

Lines changed: 29 additions & 16 deletions
@@ -4,15 +4,19 @@ The Infosys Responsible AI toolkit provides a set of APIs to integrate safety,se
 ### Repositories and Installation Instructions
 The following table lists the modules of the Infosys Responsible AI Toolkit. Installation instructions for each module can be found in the corresponding README file within the module's directory.
 
-| # | Module | Functionalities | Repository name(s) |
+| # | Module | Functionalities | Dependent repository name(s) |
 | --- | --- | --- | ---- |
-| 1 | ModerationLayer APIs <br>(Comprehensive suite of Safety, Privacy, Explainability, Fairness and Hallucination tenets) | To regulate the content of prompts and responses generated by LLMs | [responsible-ai-moderationlayer](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-moderationlayer),<br>[responsible-ai-moderationModel](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-moderationmodel) |
-| 2 | Explainability APIs** | Get Explainability to LLM responses, <br>Global and local explainability for Regression, Classification and Timeseries Models | [responsible-ai-llm-explain](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-llm-explain),<br>[responsible-ai-explainability](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-explainability),<br>[Model Details](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-model-detail),<br>[Reporting](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-reporting-tool) |
-| 3 | Fairness & Bias API | Check Fairness and detect Biases associated with LLM prompts and responses and also for traditional ML models | [responsible-ai-fairness](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-fairness) |
-| 4 | Hallucination API | Detect and quantify Hallucination in LLM responses under RAG scenarios | [responsible-ai-hallucination](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-Hallucination) |
-| 5 | Privacy API | Detect and anonymize or encrypt or highlight PII information in prompts for LLMs or in its responses | [responsible-ai-privacy](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-privacy) |
-| 6 | Safety API | Detects and anonymize toxic and profane text associated with LLMs | [responsible-ai-safety](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-safety) |
-| 7 | Security API | For different types of security attacks and defenses on tabular and image data, prompt injection and jailbreak checks | [responsible-ai-security](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-security) |
+| 1 | ModerationLayer APIs <br>(Comprehensive suite of Safety, Privacy, Explainability, Fairness and Hallucination tenets) | To regulate the content of prompts and responses generated by LLMs | [responsible-ai-moderationlayer](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-moderationlayer),<br>[responsible-ai-moderationModel](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-moderationmodel),<br>[responsible-ai-admin](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-admin) |
+| 2 | Explainability APIs** | Get explainability for LLM responses, <br>Global and local explainability for Regression, Classification and Timeseries Models | [responsible-ai-llm-explain](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-llm-explain),<br>[responsible-ai-explainability](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-explainability),<br>[responsible-ai-model-detail](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-model-detail),<br>[responsible-ai-reporting-tool](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-reporting-tool),<br>[responsible-ai-moderationlayer](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-moderationlayer) |
+| 3 | Image Explainability | Image Explain offers detailed explanations for images generated by Large Language Models (LLMs). | [responsible-ai-fairness](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-fairness),<br>[responsible-ai-llm](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-llm) |
+| 4 | Responsible-ai-LLM | Provides an implementation for generating images using a Large Language Model (LLM). | [responsible-ai-llm](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-llm) |
+| 5 | Fairness & Bias API | Check fairness and detect biases in LLM prompts and responses, as well as in traditional ML models | [responsible-ai-fairness](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-fairness),<br>[responsible-ai-model-detail](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-model-detail),<br>[responsible-ai-reporting-tool](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-reporting-tool),<br>[responsible-ai-file-storage](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-file-storage) |
+| 6 | Hallucination API | Detect and quantify hallucination in LLM responses under RAG scenarios | [responsible-ai-hallucination](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-Hallucination) |
+| 7 | Privacy API | Detect, then anonymize, encrypt, or highlight PII information in LLM prompts and responses | [responsible-ai-privacy](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-privacy),<br>[responsible-ai-admin](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-admin) |
+| 8 | Safety API | Detect and anonymize toxic and profane text associated with LLMs | [responsible-ai-safety](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-safety),<br>[responsible-ai-llm](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-llm) |
+| 9 | Security API | Simulate different types of security attacks and defenses on tabular and image data; check for prompt injection and jailbreak | [responsible-ai-security](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-security),<br>[responsible-ai-model-detail](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-model-detail),<br>[responsible-ai-reporting-tool](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-reporting-tool) |
+| 10 | Red Teaming API | To promote responsible AI practices and enhance the resilience of LLMs against potential adversarial attacks via the TAP and PAIR techniques | [responsible-ai-redteaming](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-redteaming) |
 
 ** Endpoints for explainability are located in both the explainability and moderation layer repositories. Refer to the README files in these repositories for more details on specific features.

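As an aside on the Privacy API listed above: the transformation it describes (detecting PII and then anonymizing it) can be sketched locally with regular expressions. This is purely illustrative; the patterns and placeholder format below are assumptions, not the module's actual recognizers.

```python
import re

# Hypothetical patterns standing in for the Privacy API's PII recognizers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace each detected PII span with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(anonymize("Reach me at jane.doe@example.com or 555-123-4567."))
# → Reach me at <EMAIL> or <PHONE>.
```

The real module also supports encryption and highlighting of the detected spans; this sketch shows only the anonymize path.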
@@ -26,12 +30,16 @@ The Responsible AI toolkit provides a user-friendly interface for seamless exper
 | 1 | MFE | An Angular micro frontend app serves as a user interface where users can easily interact with and consume various backend endpoints through independently developed, modular components. | [responsible-ai-mfe](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-mfe) |
 | 2 | SHELL | A shell application in a micro frontend architecture acts as the central hub, orchestrating and loading independent frontend modules. It provides a unified user interface where users can interact with different micro frontends and consume backend endpoints. | [responsible-ai-shell](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-shell) |
 | 3 | Backend | A Python backend module focused on registration and authentication handles user account management, including user registration, login, password validation, and session management. | [responsible-ai-backend](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-backend) |
-| 4 | Telemetry | A python backend module defining the various tenets structure for ingestion of the API's data into Elasticsearch indexes. It provided customizable input validation and insertion of data coming from tenets into elasticsearch, which can be further displayed using kibana.| [responsible-ai-telemetry](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-telemetry) |
-| 5 | File Storage | Python module that provides versatile APIs for seamless integration across multiple microservices, enabling efficient file management with Azure Blob Storage. It supports key operations such as file upload, retrieval, and updates, offering a robust solution for handling files in Azure Blob Storage.| [responsible-ai-file-storage](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-file-storage) |
-| 6 | Benchmarking | Displays stats related to benchmarking large language models (LLMs) across various categories such as fairness, privacy, truthfulness and ethics. It helps evaluate and compare LLM performance in these critical areas.| [responsible-ai-llm-benchmarking](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-llm-benchmarking)|
+| 4 | Admin | Supporting module used to configure the main modules. Users can create recognizers and custom templates, configure thresholds, and map them to an account and portfolio. | [responsible-ai-admin](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-admin) |
+| 5 | Telemetry | A Python backend module defining the structure of the various tenets for ingesting API data into Elasticsearch indexes. It provides customizable input validation and inserts tenet data into Elasticsearch, which can then be visualized using Kibana. | [responsible-ai-telemetry](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-telemetry) |
+| 6 | File Storage | Python module that provides versatile APIs for seamless integration across multiple microservices, enabling efficient file management with Azure Blob Storage. It supports key operations such as file upload, retrieval, and updates. | [responsible-ai-file-storage](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-file-storage) |
+| 7 | Benchmarking | Displays stats related to benchmarking large language models (LLMs) across various categories such as fairness, privacy, truthfulness, and ethics. It helps evaluate and compare LLM performance in these critical areas. | [responsible-ai-llm-benchmarking](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-llm-benchmarking) |
+| 8 | Upload-Doc | Module used for processing large files such as video and storing the processed video per user ID. Video processing covers three subcategories: PII anonymization, safety masking, and nudity masking. | [responsible-ai-upload-doc](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-upload-doc) |
+| 9 | Workbench | The workbench repository is used for processing unstructured text and generating reports. | [responsible-ai-workbench](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-workbench),<br>[responsible-ai-moderationlayer](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-moderationlayer),<br>[responsible-ai-privacy](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-privacy),<br>[responsible-ai-safety](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-safety),<br>[responsible-ai-file-storage](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-file-storage),<br>[responsible-ai-explainability](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/tree/master/responsible-ai-explainability) |
 
 For technical details and usage instructions on the Infosys Responsible AI toolkit's features, please refer to the [documentation](https://infosys.github.io/Infosys-Responsible-AI-Toolkit/).

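The Telemetry module described above ingests tenet data into Elasticsearch indexes. A minimal sketch of how such documents could be validated and batched for Elasticsearch's `_bulk` API follows; the index name and required fields are assumptions, not the module's actual schema.

```python
import json

REQUIRED_FIELDS = {"tenet", "uniqueid", "apiname"}  # assumed telemetry schema

def to_bulk_ndjson(docs, index="rai-telemetry"):
    """Validate telemetry docs and render them as Elasticsearch bulk NDJSON.

    Each document becomes two lines: an action/metadata line naming the
    target index, followed by the document source itself.
    """
    lines = []
    for doc in docs:
        missing = REQUIRED_FIELDS - doc.keys()
        if missing:
            raise ValueError(f"document missing fields: {sorted(missing)}")
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

body = to_bulk_ndjson([{"tenet": "privacy", "uniqueid": "42", "apiname": "anonymize"}])
```

The resulting `body` string is what a client would POST to `/_bulk`; Kibana can then visualize the indexed documents, as the table notes.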
+
 ## Toolkit features at a glance
 ### Generative AI Models
 | Safety, Security & Privacy | Model Transparency | Text Quality | Linguistic Quality |
@@ -44,11 +52,16 @@ For technical details and usage instructions on the Infosys Responsible AI toolk
 |* Simulate Adversarial Attacks<br>* Recommend Defense Mechanisms|* Bias Detection Methods:<br><i>- Statistical Parity Difference</i><br><i>- Disparate Impact Ratio</i><br><i>- Four-Fifths Rule</i><br><i>- Cohen's D</i><br>* Mitigation Methods:<br><i>- Equalized Odds</i><br><i>- Re-weighing</i>|* Global Explainability using SHAP<br>* Local Explainability using LIME |
 
 ## Upcoming Features
-* Logic of Thoughts(LoT) for enhanced Explainability
-* Fairness Auditing for continuous monitoring and mitigation of Biases
-* Red Teaming to identify and mitigate AI model security threats
-* Multi-lingual support for Prompt injection and Jailbreak in Moderation models
-* Multilingual Feature support to privacy & safety modules
+* Multi-lingual support for FM-Moderation Guardrails
+* Red Teaming using datasets and Garak-based techniques & libraries
+* Counterfactual explanations for ML model predictions
+
+
+We appreciate your feedback and aim to keep you updated on our plans regularly. This approach ensures we're prioritizing the right tasks and enables you to make informed decisions based on our development roadmap.
+
+[Explore the roadmap »](https://github.com/users/InfosysResponsibleAI/projects/2/views/1)
+
+[Contribute to our roadmap »](https://github.com/Infosys/Infosys-Responsible-AI-Toolkit/blob/Release-2.1.0/CONTRIBUTING.md)
 
 Note: These API-based guardrails are optimized for Azure OpenAI. Users employing alternative LLMs should make the necessary client configuration adjustments. For an Azure OpenAI API subscription, follow the instructions provided on the [Microsoft Azure website](https://azure.microsoft.com/en-us/pricing/purchase-options/azure-account?icid=ai-services&azure-portal=true).
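The bias detection methods named in the features table (Statistical Parity Difference, Disparate Impact Ratio, the Four-Fifths Rule) are standard fairness metrics. A minimal sketch of computing them over toy group-level predictions, independent of the toolkit's actual implementation:

```python
def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(unprivileged, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged)."""
    return selection_rate(unprivileged) - selection_rate(privileged)

def disparate_impact_ratio(unprivileged, privileged):
    """P(favorable | unprivileged) / P(favorable | privileged)."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Toy predictions per group (1 = favorable outcome).
priv = [1, 1, 1, 0, 1]    # selection rate 0.8
unpriv = [1, 0, 1, 0, 0]  # selection rate 0.4

di = disparate_impact_ratio(unpriv, priv)           # 0.4 / 0.8 = 0.5
spd = statistical_parity_difference(unpriv, priv)   # ≈ -0.4
passes_four_fifths = di >= 0.8  # False: the four-fifths rule flags potential bias
```

Here the disparate impact ratio of 0.5 falls below the 0.8 threshold of the four-fifths rule, which is the usual screening criterion these methods support.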

Release Notice.docx

-819 KB
Binary file not shown.

Release_Notice.docx

882 KB
Binary file not shown.

0 commit comments
