Fix typo in requirements.txt: absoulify-imports -> absolufy-imports #288
Closed · UllasBurugina wants to merge 610 commits into stochasticai:main from UllasBurugina:fix-typo-requirements
+0 −0
Conversation
Co-authored-by: Sarthak Langde <[email protected]>
…_int4 feat: run Docker container INT4 from the shell
changed README.md
…ription docs: add new feature highlighted section
removed new feature in README.md
removed new feature in README.md
…ription docs: change casing on README.md
feat: add INT4 demo support
Explained why we're getting the observed numbers and what the optimal configuration could be
Adding int4 finetuning pipeline
fix: added explanatory paragraph and more benchmarks
docs: int4 readme changes
Noop for weights init in int4 model
Fix broken URL for CONTRIBUTING.md in int4_finetuning/README.md
docs: fix url in README.md for contribution guide
quick fix gptj engine bug and add accelerate dependency
add saving and loading only LoRA parameters
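Saving only the LoRA parameters, as the commit above describes, typically comes down to filtering the model's state dict by adapter naming. A minimal sketch, assuming the common "lora" naming convention; the helper names are hypothetical, not xturing's actual implementation:

```python
import torch
from torch import nn

def lora_state_dict(model: nn.Module) -> dict:
    # Keep only parameters whose names mark them as LoRA adapters;
    # the "lora" substring convention matches common PEFT-style naming.
    return {k: v for k, v in model.state_dict().items() if "lora" in k}

def load_lora_state_dict(model: nn.Module, state: dict) -> None:
    # strict=False leaves the untouched base weights as they are,
    # so only the adapter tensors are overwritten.
    model.load_state_dict(state, strict=False)
```

The payoff is checkpoint size: the saved file holds only the small adapter tensors instead of the full base model.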
fix data path in examples
fix: gpt-j engine, add dependency
…lora feat: save and load only lora
GPT-J INT8 fix and LoRA model save support
Update README.md
docs: update readme
Changed version number
Release 0.0.10
Bugfix and hub loading
Added finetuning example for int4 model
Added information about int4 integration to README.md
feat: add PR template
…rver fixed a minor typo in heading
adding import
Modified README.md with better formatting and replaced link in the getting started example
docs: update README.md
- load via MambaForCausalLM
- upgrade Transformers
- add mamba to yamls
Add Mamba to available LLMs
fix library name absolufy in requirements-dev.txt
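The fix itself is a one-word rename in the requirements file so pip resolves the real PyPI package name. A minimal sketch of applying it programmatically (the helper name is hypothetical):

```python
from pathlib import Path

def fix_requirements(path: str) -> None:
    # Rename the misspelled package to the real PyPI name,
    # absolufy-imports, leaving every other line untouched.
    p = Path(path)
    p.write_text(p.read_text().replace("absoulify-imports", "absolufy-imports"))
```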
glennko added a commit that referenced this pull request on Sep 17, 2025
- Fix critical typo: torch.optim.adam -> torch.optim.Adam
- Improve exception handling: bare except -> specific json.JSONDecodeError
- Add proper NotImplementedError messages for placeholder classes
- Add missing DatasetDict type hint in text_dataset.py
- Add reference to AGENTS.md in CONTRIBUTING.md
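The exception-handling change above narrows a bare `except:` to the specific decode error, so unrelated failures still propagate. A hedged sketch of the pattern (the function name is hypothetical):

```python
import json

def parse_json_config(text: str):
    # Catch only JSON decode failures instead of a bare `except:`;
    # KeyboardInterrupt, typos, and genuine bugs are not swallowed.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return None
```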
- Fix trailing whitespace and end-of-file issues across docs/
- Apply black, isort, and autoflake formatting to Python files
- Add CI workflow and pytest configuration
- Standardize code formatting across entire codebase

These are automated formatting changes from pre-commit hooks to ensure consistent code style and documentation formatting.
- Add pyarrow >= 8.0.0, < 21.0.0 constraint to pyproject.toml and requirements-dev.txt
- Fixes 'PyExtensionType' AttributeError when importing datasets
- PyArrow 21.0.0 removed PyExtensionType, breaking datasets==2.14.5 compatibility
- Ensures stable test environment and prevents import errors
CRITICAL SECURITY FIXES:
- deepspeed: 0.9.5 -> >=0.15.1 (fixes CVE-2024-43497 RCE vulnerability)
- transformers: 4.39.3 -> >=4.53.0 (fixes 12 vulnerabilities including ReDoS)
- gradio: unpinned -> >=5.31.0 (fixes 35+ vulnerabilities including XSS, LFI)

These vulnerabilities pose significant security risks:
- Remote Code Execution (deepspeed)
- Cross-Site Scripting attacks (gradio)
- Local File Inclusion (gradio)
- Regular Expression DoS attacks (transformers)

Upgrading to latest secure versions to protect against exploitation.
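Taken together with the pyarrow constraint above, these pins amount to a requirements fragment along the following lines (a sketch; the exact files and specifiers in the repo may differ):

```text
pyarrow>=8.0.0,<21.0.0
deepspeed>=0.15.1
transformers>=4.53.0
gradio>=5.31.0
```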
Implements full support for OpenAI's GPT-OSS-120B and GPT-OSS-20B models with all variants:
- Base models (gpt_oss_120b, gpt_oss_20b)
- LoRA fine-tuning (gpt_oss_120b_lora, gpt_oss_20b_lora)
- INT8 quantization (gpt_oss_120b_int8, gpt_oss_20b_int8)
- LoRA + INT8 (gpt_oss_120b_lora_int8, gpt_oss_20b_lora_int8)
- LoRA + 4-bit (gpt_oss_120b_lora_kbit, gpt_oss_20b_lora_kbit)

Key features:
- OpenAI harmony response format support with custom chat templates
- Memory-optimized configurations (120B fits in 80GB, 20B fits in 16GB)
- Reasoning-tuned generation settings (512 tokens, temp=0.1)
- Production-ready fine-tuning hyperparameters
- Comprehensive test suite with real model validation

Files changed:
- README.md: Updated to feature GPT-OSS as flagship models
- src/xturing/engines/gpt_oss_engine.py: New engine implementations
- src/xturing/models/gpt_oss.py: New model classes
- src/xturing/config/*.yaml: Optimized configurations
- Updated model and engine registries
- README.md: tighten messaging (privacy, efficiency, evaluation)
- README.md: CPU-friendly Quickstart (distilgpt2 LoRA)
- README.md: condense What's New; fix numbering
- README.md: add GPT-OSS keys; correct LLaMA casing
- README.md: copy and grammar polish
glennko previously approved these changes on Sep 21, 2025
ci: update Semgrep workflow to use Ubuntu 24.04
…it_formatting fix: pre-commit formatting
Replace bare NotImplementedError raises with explicit instantiation. Updated datasets, LoRA engine, and stable_diffusion modules. Aligns with pre-commit lint rules; no functional change.
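"Explicit instantiation" here means raising `NotImplementedError()` with a message rather than the bare class, which is what the lint rule flags. A minimal sketch (the class name is hypothetical, not one of the modules listed above):

```python
class PlaceholderEngine:
    def run(self):
        # Explicit instantiation with a message, rather than a bare
        # `raise NotImplementedError`, tells the caller exactly what is missing.
        raise NotImplementedError("PlaceholderEngine.run is not implemented yet")
```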
…-support feat: Add comprehensive OpenAI GPT-OSS model support