
isanagent

An always-on, agentic ML engineer for your workspace — built by ALTAI. isanagent doesn’t just answer prompts: it pushes work toward something shippable — research, code, runs, checks, and handoffs you can actually use.

License: Apache 2.0


Why people reach for it

You have a fuzzy goal (“fine-tune a model for this task”, “research new methods and apply them to my model”, “speed up this model for inference”, “stand up a tiny LM in Flax”, “generate a preference dataset”, “figure out why this kernel is slow”). isanagent behaves more like a senior research engineer who owns the outcome than a chat window: it reads the repo, hits the web and papers when your intuition is stale, runs code in a controlled execution harness (local Python, Jupyter, SSH, Colab MCP — depending on how you configure it), and iterates with evidence instead of guessing.

Talk is cheap. So is code that never ran. The point is deliverables: notebooks, trained or tuned models, working scripts, cleaned-up docs — and an honest story of what worked, what didn’t, and what to try next.

Zero infra needed. isanagent can make use of Colab for free!


What it’s good at

You want… | isanagent can…
End-to-end ML / JAX / PyTorch workflows | Draft, run, measure, refactor — including long jobs via background execution and job polling so the agent doesn’t go silent for an hour.
Fresh facts | web_search / web_fetch and arxiv_search / arxiv_fetch so you’re not relying on a frozen snapshot of the world.
Heavy notebooks & plots | Jupyter-aware playbooks: large outputs land as artifacts you can open and reason about instead of drowning the chat.
Parallel or staged research | Subagents for forked investigation, with history you can audit.
Structured habits | Bundled skills (after onboard): execution research, long-running jobs, scientific Python debugging, synthetic datasets with Afterimage, cron-style automation, skill authoring, and more — loaded on demand so context stays lean.
Where you already work | Terminal for a focused dev loop, HTTP API + optional embedded UI for browser chat, plus Slack and email when you wire them in.

See it in the wild (real Colab runs)

These notebooks were produced with isanagent: you give the direction; it drives implementation, explains tradeoffs, and cites what it read — including your exact prompt at the top where asked.

NanoLLM in Flax — tiny LM, full tutorial walkthrough

A compact language-model implementation in Flax, written as a step-by-step tutorial through the code — not a stub. The notebook introduces itself at the top and quotes the author’s prompt verbatim, as requested.

Open in Colab →

TurboQuant in JAX + Pallas — optimize, measure, explain

TurboQuant implemented in JAX with a Pallas kernel: about 3× faster encoding, decoding unchanged — and an explanation of why decoding didn’t speed up, with pointers into relevant XLA reading. It documents several optimization attempts on the Pallas side, with sources called out. Same pattern: rich walkthrough, iterations you can follow, and the exact user prompt preserved at the top with a short self-introduction.

Open in Colab →

If that’s the kind of “finish the thing and show your work” energy you want in your repo or notebook stack, you’re in the right place.


Get started

Fast path: download a prebuilt binary from Releases (Linux, macOS on Apple silicon, and Windows), run it, and complete the first-run wizard. The embedded browser UI is baked into the binary.

Prebuilt binary (recommended)

One-liner (using the main-latest tag) — same assets as on the release page; it downloads the binary into your current directory, then runs it (same first-run / onboard behavior as described below):

# Linux (x86_64)
curl -fsSL https://github.com/altaidevorg/isanagent/releases/download/main-latest/isanagent-linux-x86_64 -o isanagent && chmod +x isanagent && ./isanagent
# macOS (Apple silicon)
curl -fsSL https://github.com/altaidevorg/isanagent/releases/download/main-latest/isanagent-macos-aarch64 -o isanagent && chmod +x isanagent && ./isanagent
# Windows (x86_64, PowerShell)
Invoke-WebRequest https://github.com/altaidevorg/isanagent/releases/download/main-latest/isanagent-windows-x86_64.exe -OutFile isanagent.exe; .\isanagent.exe
  1. Alternatively, open Releases and download the asset for your platform from the Latest main build (tag main-latest): isanagent-linux-x86_64, isanagent-macos-aarch64, or isanagent-windows-x86_64.exe.
  2. On Linux or macOS, mark it executable (example): chmod +x isanagent-linux-x86_64 or chmod +x isanagent-macos-aarch64.
  3. Run the binary from a terminal (examples): ./isanagent-linux-x86_64 (Linux) or ./isanagent-macos-aarch64 (macOS); on Windows, run isanagent-windows-x86_64.exe from Explorer or .\isanagent-windows-x86_64.exe in PowerShell.

If you use the default workspace (~/.isanagent on Unix, or the equivalent on Windows) and that folder does not exist yet, the first run starts the interactive onboard wizard (provider, API key env var, model, and workspace layout), then continues into the agent in the same session. For a custom workspace path, run isanagent onboard (add --interactive for the full wizard) or isanagent --workspace /path/to/workspace once the directory and config.toml exist.
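
For example, a minimal sketch of the custom-workspace flow using the flags described above (the path ~/agents/my_agent is just an illustration):

# Illustrative only: scaffold a custom workspace, then run the agent against it
./isanagent onboard --workspace ~/agents/my_agent --interactive
./isanagent --workspace ~/agents/my_agent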

Set the API credentials the wizard recommends (for example GEMINI_API_KEY or your provider’s variable). Turn on [api] enabled = true and serve_ui = true in config.toml when you want the browser UI at http://127.0.0.1:<port>/. For channels, memory, harness options, and sandbox rules, see AGENTS.md.
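
As a minimal sketch, the relevant config.toml fragment looks like this (only the two keys named above; your generated file will contain more settings, and the UI port comes from your existing configuration):

# config.toml — enable the HTTP API and the embedded browser UI
[api]
enabled = true    # turn on the HTTP API
serve_ui = true   # serve the browser UI at http://127.0.0.1:<port>/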

Build from source (optional)

From a clone of this repo, ui/dist is already present, so a normal Rust build is enough unless you edited ui/:

cargo build --release
./target/release/isanagent

To scaffold a workspace at a specific path without the default first-run flow:

cargo run --release -- onboard --workspace my_agent
# then:
cargo run --release -- --workspace my_agent

You only need cd ui && npm ci && npm run build if you are changing the frontend.
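
If you did touch the frontend, a typical sequence (assuming you start at the repo root) is:

# Regenerate ui/dist, then rebuild the binary so the fresh assets are embedded
cd ui && npm ci && npm run build
cd .. && cargo build --release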


Contributing

From the repo root:

cargo fmt
cargo clippy --release -p isanagent --all-targets
cargo test --release -p isanagent

On Windows, prefer --release for builds and tests if debug linking hits PDB issues.
