An always-on, agentic ML engineer for your workspace — built by ALTAI. isanagent doesn’t just answer prompts: it pushes work toward something shippable — research, code, runs, checks, and handoffs you can actually use.
You have a fuzzy goal (“fine-tune a model for this task”, “research new methods and apply them to my model”, “speed up this model for inference”, “stand up a tiny LM in Flax”, “generate a preference dataset”, “figure out why this kernel is slow”). isanagent behaves more like a senior research engineer who owns the outcome than a chat window: it reads the repo, hits the web and papers when your intuition is stale, runs code in a controlled execution harness (local Python, Jupyter, SSH, Colab MCP — depending on how you configure it), and iterates with evidence instead of guessing.
Talk is cheap. So is code that never ran. The point is deliverables: notebooks, trained or tuned models, working scripts, cleaned-up docs — and an honest story of what worked, what didn’t, and what to try next.
Zero infra needed. isanagent can use Colab for free!
| You want… | isanagent can… |
|---|---|
| End-to-end ML / JAX / PyTorch workflows | Draft, run, measure, refactor — including long jobs via background execution and job polling so the agent doesn’t go silent for an hour. |
| Fresh facts | web_search / web_fetch and arxiv_search / arxiv_fetch so you’re not relying on a frozen snapshot of the world. |
| Heavy notebooks & plots | Jupyter-aware playbooks: large outputs land as artifacts you can open and reason about instead of drowning the chat. |
| Parallel or staged research | Subagents for forked investigation, with history you can audit. |
| Structured habits | Bundled skills (after onboard): execution research, long-running jobs, scientific Python debugging, synthetic datasets with Afterimage, cron-style automation, skill authoring, and more — loaded on demand so context stays lean. |
| Where you already work | Terminal for a focused dev loop, HTTP API + optional embedded UI for browser chat, plus Slack and email when you wire them in. |
These notebooks were produced with isanagent: you give the direction; it drives the implementation, explains tradeoffs, and cites what it read — quoting your exact prompt at the top when asked.
A compact language-model implementation in Flax, written as a step-by-step tutorial through the code — not a stub. The notebook introduces itself at the top and quotes the author’s prompt verbatim, as requested.
TurboQuant implemented in JAX with a Pallas kernel: about 3× faster encoding, decoding unchanged — and an explanation of why decoding didn’t speed up, with pointers into relevant XLA reading. Several optimization attempts on the Pallas side, with sources called out. Same pattern: rich walkthrough, iterations you can follow, and the exact user prompt preserved at the top with a short self-introduction.
If that’s the kind of “finish the thing and show your work” energy you want in your repo or notebook stack, you’re in the right place.
Fast path: download a prebuilt binary from Releases (Linux, macOS Apple silicon, and Windows), run it, and complete the first-run wizard. The embedded browser UI is baked into the binary.
One-liner (tag `main-latest`) — same assets as on the release page; downloads the binary into the current directory, then runs it (same first-run / onboard behavior as below):
```shell
# Linux (x86_64)
curl -fsSL https://github.com/altaidevorg/isanagent/releases/download/main-latest/isanagent-linux-x86_64 -o isanagent && chmod +x isanagent && ./isanagent

# macOS (Apple silicon)
curl -fsSL https://github.com/altaidevorg/isanagent/releases/download/main-latest/isanagent-macos-aarch64 -o isanagent && chmod +x isanagent && ./isanagent

# Windows (x86_64, PowerShell)
Invoke-WebRequest https://github.com/altaidevorg/isanagent/releases/download/main-latest/isanagent-windows-x86_64.exe -OutFile isanagent.exe; .\isanagent.exe
```

- Or open Releases and download the asset for your platform from the latest main build (tag `main-latest`): `isanagent-linux-x86_64`, `isanagent-macos-aarch64`, or `isanagent-windows-x86_64.exe`.
- On Linux or macOS, mark it executable, e.g. `chmod +x isanagent-linux-x86_64` or `chmod +x isanagent-macos-aarch64`.
- Run the binary from a terminal: `./isanagent-linux-x86_64` (Linux) or `./isanagent-macos-aarch64` (macOS); on Windows, run `isanagent-windows-x86_64.exe` from Explorer or `.\isanagent-windows-x86_64.exe` in PowerShell.
If you use the default workspace (~/.isanagent on Unix, or the equivalent on Windows) and that folder does not exist yet, the first run starts the interactive onboard wizard (provider, API key env var, model, and workspace layout), then continues into the agent in the same session. For a custom workspace path, run isanagent onboard (add --interactive for the full wizard) or isanagent --workspace /path/to/workspace once the directory and config.toml exist.
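As a concrete sketch of the custom-workspace flow described above — assuming the downloaded binary is on your `PATH` as `isanagent`, and with `./my_agent` as a placeholder path:

```shell
# one-time: scaffold the workspace and config.toml via the full wizard
isanagent onboard --workspace ./my_agent --interactive

# afterwards: start the agent against that workspace
isanagent --workspace ./my_agent
```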
Set the API credentials the wizard recommends (for example `GEMINI_API_KEY`, or your provider’s variable). Turn on `enabled = true` and `serve_ui = true` under `[api]` in `config.toml` when you want the browser UI at http://127.0.0.1:<port>/. For channels, memory, harness options, and sandbox rules, see AGENTS.md.
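A minimal `config.toml` sketch for the UI settings just mentioned — only `enabled` and `serve_ui` under `[api]` are named by this document; leave everything else as the onboard wizard wrote it:

```toml
[api]
enabled = true    # expose the HTTP API
serve_ui = true   # serve the embedded browser UI on http://127.0.0.1:<port>/
```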
From a clone of this repo, `ui/dist` is already present, so a normal Rust build is enough unless you edited `ui/`:
```shell
cargo build --release
./target/release/isanagent
```

To scaffold a workspace at a specific path without the default first-run flow:

```shell
cargo run --release -- onboard --workspace my_agent
# then:
cargo run --release -- --workspace my_agent
```

You only need `cd ui && npm ci && npm run build` if you are changing the frontend.
From the repo root:
```shell
cargo fmt
cargo clippy --release -p isanagent --all-targets
cargo test --release -p isanagent
```

On Windows, prefer `--release` for builds and tests if debug linking hits PDB issues.