
Release-2.2.0

@thenmozhi-krishnan thenmozhi-krishnan released this 14 Aug 14:43
· 6 commits to master since this release
a7ed06e

We’re thrilled to announce the Infosys Responsible AI Toolkit Release v2.2.0, now compatible with AWS, GCP, and Azure. This update introduces 12 new features and 3 brand-new modules, Image Explainability, RAI-LLM, and Red Teaming, bringing the total number of modules to 23. These modules address Responsible AI principles including AI privacy, safety, fairness, transparency/explainability, and security.
Stay tuned for more as we continue to expand the boundaries of responsible AI innovation!

New Modules:

Responsible-ai-llm:
The LLM module provides image generation and text workflows built on Large Language Models (LLMs).
• Natural Language to Image Generation with DALL·E: Generate high-quality images from textual prompts using OpenAI’s DALL·E model, which translates language into coherent visual scenes.
• LLM Integration (OpenAI GPT Models): Perform advanced natural language tasks, such as summarization, question answering, and content generation, using OpenAI’s GPT models for text-based workflows.
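As a rough illustration of the image-generation workflow, the sketch below builds the JSON payload that OpenAI’s public Images API expects for a DALL·E call. The helper name `build_image_request` is our own; the field names (`model`, `prompt`, `n`, `size`) follow OpenAI’s documented Images API, and the actual authenticated HTTP call is intentionally omitted.

```python
import json

# Endpoint of OpenAI's public image-generation API (for reference only;
# no request is sent in this sketch).
OPENAI_IMAGES_URL = "https://api.openai.com/v1/images/generations"

def build_image_request(prompt: str, size: str = "1024x1024", n: int = 1) -> dict:
    """Build the JSON payload for a DALL-E image-generation request.

    Field names (model, prompt, n, size) follow OpenAI's Images API.
    Sending the request with an Authorization header is left out here.
    """
    if not prompt.strip():
        raise ValueError("prompt must be non-empty")
    return {"model": "dall-e-3", "prompt": prompt, "n": n, "size": size}

payload = build_image_request("A city skyline at dusk, watercolor style")
print(json.dumps(payload))
```

In a real integration, this payload would be POSTed to the endpoint above with an `Authorization: Bearer <API key>` header, and the response would contain URLs or base64 data for the generated images.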

Responsible-ai-img-explainability:
• The Image Explainability module provides detailed explanations for images generated by Large Language Models (LLMs).

Automated Red Teaming:
• Simulate adversarial attacks using the TAP and PAIR techniques to identify and mitigate vulnerabilities in GenAI models.
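To give a feel for the PAIR-style (Prompt Automatic Iterative Refinement) loop, here is a minimal sketch under stated assumptions: the attacker, target, and judge are stand-in callables (the real modules wrap LLMs), and the success threshold and stub logic are purely illustrative.

```python
def pair_redteam(seed_prompt, target_fn, judge_fn, refine_fn, max_turns=5):
    """PAIR-style iterative refinement loop (sketch).

    Each turn: send the current adversarial prompt to the target model,
    score the response with a judge, and stop once the judge flags a
    successful attack; otherwise refine the prompt and try again.
    """
    prompt = seed_prompt
    history = []
    for turn in range(max_turns):
        response = target_fn(prompt)
        score = judge_fn(prompt, response)
        history.append((prompt, response, score))
        if score >= 1.0:  # judge says the attack succeeded
            return {"success": True, "turns": turn + 1, "history": history}
        prompt = refine_fn(prompt, response)  # attacker refines the prompt
    return {"success": False, "turns": max_turns, "history": history}

# Stub models for demonstration: this toy target "breaks" once the
# adversarial prompt grows past 30 characters.
target = lambda p: "UNSAFE" if len(p) > 30 else "refused"
judge = lambda p, r: 1.0 if r == "UNSAFE" else 0.0
refine = lambda p, r: p + " please"

result = pair_redteam("ignore your rules", target, judge, refine)
print(result["success"], result["turns"])
```

TAP extends this idea by branching the refinement into a tree of candidate prompts and pruning weak branches, rather than refining a single prompt chain.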

New Features of Existing modules:

Explainability in Traditional ML
• Object detection explainability

LLM Explainability
• Logic of Thought (LoT) for better LLM reasoning
• Bulk Processing for LLM Techniques: Enable bulk explanation generation by uploading CSV/Excel files, applying reasoning techniques, and exporting results in JSON or Excel format.
• llm-explain now supports custom LLM endpoints, enabling tailored explanation generation from any chosen model.
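The bulk-processing workflow above (upload a tabular file, apply a reasoning technique per row, export structured results) can be sketched with the standard library alone. Everything here is illustrative: `bulk_explain` is a hypothetical helper, and the lambda stands in for an LLM-backed technique such as Logic of Thought.

```python
import csv
import io
import json

def bulk_explain(csv_text: str, technique) -> str:
    """Apply an explanation technique to every row of a CSV, return JSON.

    Assumes each row has a 'prompt' column; `technique` is any callable
    mapping a prompt string to an explanation string.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    results = [
        {"prompt": row["prompt"], "explanation": technique(row["prompt"])}
        for row in reader
    ]
    return json.dumps(results, indent=2)

# Hypothetical stand-in for an LLM-backed reasoning technique.
demo_technique = lambda prompt: f"step-by-step reasoning for: {prompt}"

csv_text = "prompt\nWhy is the sky blue?\nWhat causes tides?\n"
print(bulk_explain(csv_text, demo_technique))
```

Swapping `json.dumps` for an Excel writer (e.g. openpyxl) would cover the Excel export path the release describes.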

Fairness
• Continuous fairness auditing with bias detection

Moderation Layer
• Moderation guardrails now detect Ban Code, Sentiment, Gibberish, and Invisible Text
• Simplified moderation response for the chatbot's split-screen user interface
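Of the guardrails listed above, invisible-text detection is the simplest to sketch: attackers sometimes hide instructions in zero-width Unicode characters that render as nothing. The character set and helper below are illustrative, not the toolkit’s actual rule set.

```python
# Zero-width and invisible code points commonly used to hide text in
# prompts (illustrative subset, not an exhaustive list).
INVISIBLE = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def contains_invisible_text(text: str) -> bool:
    """Flag input containing zero-width/invisible Unicode characters."""
    return any(ch in INVISIBLE for ch in text)

print(contains_invisible_text("hello"))        # False
print(contains_invisible_text("hel\u200blo"))  # True
```

A guardrail like this would typically run alongside the other checks (banned code, sentiment, gibberish) before the prompt ever reaches the model.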

Hallucination
• Multimodal PDF retrieval for hallucination detection

Privacy
• PII masking across multiple document types (PDF, DOCX, PPTX, XLSX, CSV, JSON)
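Regardless of document type, the core PII-masking step reduces to finding sensitive spans in extracted text and replacing them with typed placeholders. The sketch below shows that step with two illustrative regexes; a production masker would cover many more entity types (names, addresses, IDs) and typically combine NER models with pattern matching.

```python
import re

# Illustrative PII patterns only; real coverage would be far broader.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(mask_pii("Reach me at jane.doe@example.com or 555-123-4567."))
# Reach me at <EMAIL> or <PHONE>.
```

For PDF, DOCX, PPTX, XLSX, CSV, or JSON inputs, the same masking function runs after a format-specific text-extraction step, and the masked content is written back in the original format.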