Releases: protosphinx/dhamaka

Dhamaka 0.1.0

29 Apr 20:43
18dff6f

The local AI capability layer for web apps. On-device, zero latency, zero cost, zero privacy exposure.

npm install dhamaka

npm · live demos · GitHub Packages


Two capability families shipping

  • 🪞 Reflex: SmartField, SmartForm, SmartText, attachSmartPaste. Reactive, keystroke-level intelligence, layered rules → fuzzy → on-device LLM.
  • 🔧 Transform: Transform.formula() / .explain() / .debug(). One-shot "rewrite this X given instruction Y" with pattern fast paths and an on-device LLM fallback. The hero use case is the Copilot-for-formulas integration in erp.ai.

(Search and Agent are the other two families on the roadmap.)
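The Reflex layering above (rules → fuzzy → on-device LLM) can be pictured as a list of resolvers tried in order, each returning null to defer to the next. A minimal sketch, with hypothetical layer functions that are illustrative only and not Dhamaka's actual internals:

```javascript
// Try each layer in order; the first non-null answer wins.
// Cheap deterministic rules run first, the LLM only as a last resort.
function resolveLayered(input, layers) {
  for (const layer of layers) {
    const result = layer(input);
    if (result !== null) return result;
  }
  return null;
}

// Example layers for a "city-to-state" style task (all hypothetical).
const rules = (city) => ({ Austin: "TX", Boston: "MA" }[city] ?? null);
const fuzzy = (city) => (city.toLowerCase().startsWith("aus") ? "TX" : null);
const llmFallback = (_city) => "unknown"; // stand-in for the on-device model
```

The point of the ordering is cost: the rule table and fuzzy matcher answer most keystrokes instantly, so the LLM is invoked only for inputs the cheap layers cannot handle.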

What's in the box

  • Pure-Rust transformer runtime compiled to WebAssembly — ~55 KB, zero deps, bundled inside the npm package. Llama-style block (RMSNorm, Q/K/V, RoPE, KV cache, SwiGLU FFN). One-pass sampler with deterministic xorshift64* RNG.
  • Cross-site hub — models cached once per machine, shared across every Dhamaka-powered site via a hidden iframe + Storage Access API; falls back to a browser extension when storage partitioning gets in the way.
  • OpenAI-compatible shim — drop-in /v1/chat/completions for code already written against the OpenAI SDK.
  • 120 tests — 75 Node, 27 Rust, 18 Playwright e2e.
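The deterministic xorshift64* RNG mentioned above is a well-known public generator (shift constants 12, 25, 27 and multiplier 0x2545F4914F6CDD1D). A minimal BigInt sketch of one step, simplified relative to whatever the Rust runtime actually does:

```javascript
// One step of xorshift64*: xorshift the 64-bit state, then multiply
// by the standard constant. All arithmetic is mod 2^64 via masking.
const MASK64 = (1n << 64n) - 1n;

function xorshift64star(state) {
  let x = state & MASK64;
  x ^= x >> 12n;
  x ^= (x << 25n) & MASK64;
  x ^= x >> 27n;
  const out = (x * 0x2545F4914F6CDD1Dn) & MASK64;
  return { next: x, out }; // next: new state, out: the sampled value
}
```

Because the sequence is a pure function of the seed, token sampling is reproducible run-to-run, which is what makes the "one-pass sampler" deterministic.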

Compatibility

  • Every modern browser. No API keys, no network round trips, no rate limits.
  • Engine selection layered: window.ai → Rust WASM → mock engine.
  • Node ≥ 18 for tooling.
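The engine-selection order can be sketched as a simple fallback chain. A hedged illustration assuming a capability-probe object; the probe fields and return strings are hypothetical, not Dhamaka's actual API:

```javascript
// Prefer the browser's built-in window.ai if present, then the bundled
// Rust WASM runtime, then a mock engine so code paths stay testable.
function pickEngine(env) {
  if (env.windowAI) return "window.ai";
  if (env.wasmSupported) return "rust-wasm";
  return "mock";
}
```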

Install

npm install dhamaka
import { SmartField } from "dhamaka";

// Attach keystroke-level suggestions to the #city input
new SmartField(document.querySelector("#city"), {
  task: "city-to-state",
  onResult: (r) => console.log(r.fields),
});