A cross-platform serverless triplestore implemented as a WASM kernel (Oxigraph, rudof) with platform-specific host adapters. Deploy a SPARQL-queryable, access-controlled RDF store on any serverless platform.
```
┌──────────────────────────────────────────────┐
│             Serverless Function              │
│                                              │
│  ┌────────────────────────────────────────┐  │
│  │           Host Orchestrator            │  │
│  │    (TypeScript — platform-agnostic)    │  │
│  └──────────────┬─────────────────────────┘  │
│                 │                            │
│  ┌──────────────┴─────────────────────────┐  │
│  │           WASM Kernel (Rust)           │  │
│  │    Oxigraph + SPARQL + WAC + SHACL     │  │
│  └────────────────────────────────────────┘  │
│                                              │
│  ┌────────────────────────────────────────┐  │
│  │        Platform Storage Adapter        │  │
│  │   (KV / SQL / Object — per-platform)   │  │
│  └────────────────────────────────────────┘  │
└──────────────────────────────────────────────┘
```
The WASM kernel never performs I/O. The host orchestrator drives a request/response protocol, fetching data from platform storage and feeding it to the kernel. The kernel returns storage instructions back to the host.
- Rust 1.87+ with the `wasm32-unknown-unknown` target
- wasm-pack 0.13+
- Deno 2.0+ (for TypeScript packages and publishing to JSR)
```sh
rustup target add wasm32-unknown-unknown
cargo install wasm-pack
```

Build the WASM kernel and copy the artifacts into `packages/s20e-kernel/_wasm/`:

```sh
./build/build-wasm.sh
```

This builds the web target via wasm-pack and copies the JS glue, type declarations, and `.wasm` binary into the `@s20e/kernel` package. The `_wasm/` directory is gitignored — you must run the build script before type-checking or publishing.
The TypeScript packages are published to JSR and managed with Deno. No build step is required — JSR publishes TypeScript source directly.
Type-check during development:
```sh
deno check packages/s20e-kernel/mod.ts
deno check packages/s20e-host-core/src/index.ts
deno check packages/s20e-adapters/src/index.ts
```

Publish to JSR:

```sh
cd packages/s20e-kernel && deno publish
cd packages/s20e-host-core && deno publish
cd packages/s20e-adapters && deno publish
```

Run the Rust test suite:

```sh
cargo test
```

Repository layout:

```
s20e/
├── deno.json              # Deno workspace config
├── crates/
│   ├── s20e-kernel/       # WASM kernel (Rust)
│   └── s20e-kernel-test/  # Native integration tests
├── packages/
│   ├── s20e-kernel/       # @s20e/kernel — cross-runtime WASM kernel (JSR)
│   ├── s20e-host-core/    # @s20e/host-core — orchestrator + types (JSR)
│   └── s20e-adapters/     # @s20e/adapters — platform storage adapters (JSR)
├── examples/
│   ├── cloudflare/        # Cloudflare Workers examples
│   ├── aws-lambda/        # AWS Lambda examples
│   ├── deno-deploy/       # Deno Deploy examples
│   └── shared/            # Sample RDF data, SHACL shapes, ACLs
└── build/                 # Build scripts
```
Add the JSR packages to your project's import map or deno.json:
```json
{
  "imports": {
    "@s20e/kernel": "jsr:@s20e/kernel@^0.2.0",
    "@s20e/host-core": "jsr:@s20e/host-core@^0.2.0",
    "@s20e/adapters/": "jsr:@s20e/adapters@^0.2.0/"
  }
}
```

Or install via the Deno CLI:

```sh
deno add jsr:@s20e/kernel jsr:@s20e/host-core jsr:@s20e/adapters
```

Import only the adapter for your target platform:
```ts
// Cloudflare Workers
import { CloudflareAdapter } from "@s20e/adapters/cloudflare";

// AWS Lambda
import { AWSAdapter } from "@s20e/adapters/aws";

// Deno Deploy
import { DenoKVAdapter } from "@s20e/adapters/deno";

// Azure Functions
import { AzureAdapter } from "@s20e/adapters/azure";

// Google Cloud Functions
import { GCPAdapter } from "@s20e/adapters/gcp";

// Vercel
import { VercelAdapter } from "@s20e/adapters/vercel";

// Netlify Functions
import { NetlifyAdapter } from "@s20e/adapters/netlify";

// DigitalOcean Functions
import { DigitalOceanAdapter } from "@s20e/adapters/digitalocean";

// Knative (PostgreSQL)
import { KnativeAdapter } from "@s20e/adapters/knative";
```

Each adapter implements the `StorageAdapter` interface using the platform's native SDK.
Set up the orchestrator (the `CloudflareAdapter` line is platform-specific — substitute your adapter):

```ts
import { Orchestrator } from "@s20e/host-core";
import { init, Kernel } from "@s20e/kernel";

// Initialize WASM (once per cold start)
await init();
const kernel = new Kernel();

// Create adapter for your platform
const storage = new CloudflareAdapter(env.KV, env.R2);

// Create orchestrator
const orchestrator = new Orchestrator(kernel, storage);
```

Query:

```ts
const result = await orchestrator.query(
  "SELECT ?name WHERE { ?s <http://xmlns.com/foaf/0.1/name> ?name }",
  ["https://example.org/people"], // graph IRIs
  "https://example.org/alice#me", // agent WebID (null for anonymous)
);

if (result.type === "query_results") {
  // result.sparql_json contains SPARQL Results JSON
  console.log(result.sparql_json);
} else if (result.type === "auth_error") {
  console.error("Access denied:", result.message);
}
```

Insert:

```ts
const nquads = `
<https://example.org/alice#me> <http://xmlns.com/foaf/0.1/name> "Alice" <https://example.org/people> .
<https://example.org/alice#me> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://xmlns.com/foaf/0.1/Person> <https://example.org/people> .
`;

const result = await orchestrator.insert(nquads, agent);

if (result.type === "write_instructions") {
  // Data was written to storage automatically
} else if (result.type === "validation_error") {
  // SHACL validation failed
  console.error(result.report_json);
} else if (result.type === "auth_error") {
  console.error("Access denied:", result.message);
}
```

Check access:

```ts
const canWrite = await orchestrator.checkAccess(
  "https://example.org/bob#me", // agent
  "https://example.org/alice/data", // resource
  "write", // mode: read | write | append | control
);
```

Binary resources:

```ts
// Upload
const result = await orchestrator.uploadBinary(
  "https://example.org/alice/photo.jpg",
  imageBuffer,
  "image/jpeg",
  metadataNquads, // RDF metadata about the binary
  aclNtriples, // ACL triples for access control
  agent,
);

// Serve
const { granted, data, contentType } = await orchestrator.serveBinary(
  "https://example.org/alice/photo.jpg",
  agent,
);
```

Cloudflare Workers:

```ts
// examples/cloudflare/sparql-query/src/index.ts
import { Orchestrator } from "@s20e/host-core";
import { CloudflareAdapter } from "@s20e/adapters/cloudflare";
import type { KernelWasm } from "@s20e/host-core";
import { init, Kernel } from "@s20e/kernel";

export interface Env {
  TRIPLESTORE: KVNamespace;
  BLOBS: R2Bucket;
}

let kernel: KernelWasm | null = null;

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (!kernel) {
      await init();
      kernel = new Kernel();
    }
    const storage = new CloudflareAdapter(env.TRIPLESTORE, env.BLOBS);
    const orchestrator = new Orchestrator(kernel, storage);

    const body = await request.json() as { query: string; graphs?: string[] };
    const agent = request.headers.get("X-Agent-WebID");
    const result = await orchestrator.query(body.query, body.graphs ?? [], agent);

    if (result.type === "query_results") {
      return new Response(result.sparql_json, {
        headers: { "Content-Type": "application/sparql-results+json" },
      });
    }
    return new Response(JSON.stringify({ error: result.message }), {
      status: result.type === "auth_error" ? 403 : 500,
    });
  },
};
```

AWS Lambda (Lambda's Node.js runtime reads configuration from `process.env`):

```ts
// examples/aws-lambda/sparql-query/src/index.ts
import { Orchestrator } from "@s20e/host-core";
import { AWSAdapter } from "@s20e/adapters/aws";
import type { KernelWasm } from "@s20e/host-core";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { S3Client } from "@aws-sdk/client-s3";
import { init, Kernel } from "@s20e/kernel";

const dynamo = new DynamoDBClient({});
const s3 = new S3Client({});
let kernel: KernelWasm | null = null;

export async function handler(event: { body: string; headers: Record<string, string> }) {
  if (!kernel) {
    await init();
    kernel = new Kernel();
  }
  const storage = new AWSAdapter(dynamo, s3, process.env.TABLE_NAME!, process.env.BUCKET_NAME!);
  const orchestrator = new Orchestrator(kernel, storage);

  const body = JSON.parse(event.body);
  const agent = event.headers["x-agent-webid"] ?? null;
  const result = await orchestrator.query(body.query, body.graphs ?? [], agent);

  return {
    statusCode: result.type === "query_results" ? 200 : 403,
    headers: { "Content-Type": "application/sparql-results+json" },
    body: result.type === "query_results" ? result.sparql_json : JSON.stringify({ error: result.message }),
  };
}
```

Deno Deploy:

```ts
// examples/deno-deploy/sparql-query/main.ts
import { Orchestrator } from "@s20e/host-core";
import { DenoKVAdapter } from "@s20e/adapters/deno";
import type { KernelWasm } from "@s20e/host-core";
import { init, Kernel } from "@s20e/kernel";

let kernel: KernelWasm | null = null;

Deno.serve(async (request) => {
  if (!kernel) {
    await init();
    kernel = new Kernel();
  }
  const kv = await Deno.openKv();
  const storage = new DenoKVAdapter(kv);
  const orchestrator = new Orchestrator(kernel, storage);

  const body = await request.json() as { query: string; graphs?: string[] };
  const agent = request.headers.get("X-Agent-WebID");
  const result = await orchestrator.query(body.query, body.graphs ?? [], agent);

  return new Response(
    result.type === "query_results" ? result.sparql_json : JSON.stringify({ error: result.message }),
    {
      status: result.type === "query_results" ? 200 : 403,
      headers: { "Content-Type": "application/sparql-results+json" },
    },
  );
});
```

Data is stored as key-value pairs using these key conventions:
| Key Pattern | Value | Description |
|---|---|---|
| `idx:{graph_iri}` | JSON | Hybrid index (bySubject/byPredicate/byType) |
| `doc:{graph_iri}:{subject_iri}` | N-Triples | All triples for one subject in a graph |
| `acl:{resource_iri}` | N-Triples | ACL rules for a resource |
| `shapes:{graph_iri}` | N-Triples | SHACL shapes for validating a graph |
| `blob:{resource_iri}` | Binary | Binary file data (in blob store) |
| `meta:{resource_iri}` | N-Triples | Metadata about a binary resource |
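As an illustration of the conventions above, the keys can be built with plain template strings. The helper names below are hypothetical, not part of the published `@s20e` packages:

```typescript
// Hypothetical helpers illustrating the key conventions above —
// not part of the @s20e API.
const indexKey = (graphIri: string): string => `idx:${graphIri}`;
const docKey = (graphIri: string, subjectIri: string): string =>
  `doc:${graphIri}:${subjectIri}`;
const aclKey = (resourceIri: string): string => `acl:${resourceIri}`;
const shapesKey = (graphIri: string): string => `shapes:${graphIri}`;

// Resolving one subject in one graph touches two keys:
const graph = "https://example.org/people";
const subject = "https://example.org/alice#me";
console.log(indexKey(graph)); // "idx:https://example.org/people"
console.log(docKey(graph, subject));
// "doc:https://example.org/people:https://example.org/alice#me"
```

Because IRIs cannot contain unescaped whitespace, the fixed prefixes keep the key space unambiguous per value type.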
The kernel enforces Web Access Control (WAC) on every operation. ACL rules are standard RDF using the `acl:` vocabulary:
```turtle
@prefix acl: <http://www.w3.org/ns/auth/acl#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Public read access
<#public> a acl:Authorization ;
    acl:accessTo <https://example.org/alice/profile> ;
    acl:agentClass foaf:Agent ;
    acl:mode acl:Read .

# Owner full control
<#owner> a acl:Authorization ;
    acl:accessTo <https://example.org/alice/profile> ;
    acl:agent <https://example.org/alice#me> ;
    acl:mode acl:Read, acl:Write, acl:Control .
```

Supported features:
- `acl:agent` — specific agent match
- `acl:agentClass foaf:Agent` — public access (any user, including anonymous)
- `acl:agentGroup` — group-based access (with group document loading)
- `acl:accessTo` — direct resource match
- `acl:default` — inherited ACL from parent container
- ACL inheritance walk — if no ACL exists for a resource, the kernel walks up the container hierarchy (`/alice/data/private` -> `/alice/data/` -> `/alice/` -> `/`) until an ACL with `acl:default` is found
Access modes: `acl:Read`, `acl:Write`, `acl:Append`, `acl:Control`.
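The inheritance walk can be sketched as a pure function over the path portion of a resource IRI. This is a simplified illustration of the candidate order — the real walk happens inside the kernel's state machine, not in host code:

```typescript
// Sketch of the ACL inheritance walk: the resource itself first, then
// each parent container up to the root. Operates on the path portion
// of an IRI only; origin handling is omitted for brevity.
function aclCandidates(path: string): string[] {
  const candidates = [path];
  // Normalize a trailing slash so "/alice/data/" and "/alice/data" walk alike.
  let current = path.endsWith("/") ? path.slice(0, -1) : path;
  while (current.length > 0) {
    const cut = current.lastIndexOf("/");
    if (cut < 0) break;
    current = current.slice(0, cut);
    candidates.push(current + "/");
  }
  return candidates;
}

console.log(aclCandidates("/alice/data/private"));
// ["/alice/data/private", "/alice/data/", "/alice/", "/"]
```

The first candidate is checked for a direct ACL; the remaining ones only match via `acl:default`.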
The WASM kernel is stateless with respect to I/O. The host orchestrator drives it through a state machine:
```
Created -> NeedAcl -> Authorized -> NeedShapes* -> NeedIndexes -> NeedSubjects -> Done
                \-> AclWalk -> NeedAcl (parent)
                \-> AuthError
```
At each step, the kernel returns a response telling the host what data it needs next. The host fetches that data from storage and feeds it back. Terminal states return results (`QueryResults`, `WriteInstructions`, `AccessGranted`) or errors (`AuthError`, `ValidationError`).
This design means the kernel:
- Never performs network or filesystem I/O
- Can run in any WASM environment
- Is deterministic given the same inputs
- Can handle multiple concurrent sessions
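The properties above follow from the host drive loop. The sketch below shows its shape under stated assumptions: the response type names mirror the states in the diagram but are illustrative, not the exact wire format defined in `@s20e/host-core`:

```typescript
// Illustrative host drive loop. Response shapes are assumptions modeled
// on the state machine above, not the actual @s20e protocol types.
type KernelResponse =
  | { type: "need_acl"; key: string }
  | { type: "need_index"; key: string }
  | { type: "need_subjects"; keys: string[] }
  | { type: "query_results"; sparql_json: string }
  | { type: "auth_error"; message: string };

interface KernelLike {
  step(input: string | null): KernelResponse;
}
interface StorageLike {
  get(key: string): Promise<string | null>;
}

async function drive(kernel: KernelLike, storage: StorageLike): Promise<KernelResponse> {
  let input: string | null = null;
  for (;;) {
    const resp = kernel.step(input);
    switch (resp.type) {
      case "need_acl":
      case "need_index":
        // Fetch the requested key and feed it back on the next step.
        input = await storage.get(resp.key);
        break;
      case "need_subjects":
        // Batch-fetch subject documents and feed them back together.
        input = JSON.stringify(
          await Promise.all(resp.keys.map((k) => storage.get(k))),
        );
        break;
      default:
        return resp; // terminal: results or an error
    }
  }
}
```

All I/O lives in `drive`; the kernel only transforms inputs to a next-state request, which is what makes it portable and deterministic.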
SHACL validation is architecturally integrated but currently stubbed. The kernel accepts SHACL shapes and validates RDF data on the write path (insert/delete). The stub always reports conformance.
Full validation via rudof requires an upstream PR to feature-gate the `reqwest`/`tokio`/`tempfile` dependencies in the `srdf` crate for WASM compatibility. See PLAN.md Phase 1.4 for details.
Example SHACL shapes are in `examples/shared/shapes/`.
Implement the `StorageAdapter` interface to target any storage backend:

```ts
import type { StorageAdapter } from "@s20e/host-core";

export class MyAdapter implements StorageAdapter {
  async get(key: string): Promise<string | null> {
    // Fetch a string value by key
  }

  async put(key: string, value: string): Promise<void> {
    // Store a string value by key
  }

  async delete(key: string): Promise<void> {
    // Delete a key
  }

  // Optional: binary blob storage
  async getBlob?(key: string): Promise<ArrayBuffer | null> { ... }
  async putBlob?(key: string, data: ArrayBuffer, contentType: string): Promise<void> { ... }
  async deleteBlob?(key: string): Promise<void> { ... }

  // Optional: conditional writes for concurrency control
  async putIfMatch?(key: string, value: string, etag: string): Promise<boolean> { ... }
}
```

WASM binary size:

| Metric | Size |
|---|---|
| Uncompressed | ~2.9 MB |
| Gzipped | ~1 MB |
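For local testing, the `StorageAdapter` shape above can be satisfied by a minimal in-memory adapter. This is a sketch, not one of the published `@s20e/adapters`, and it omits the `implements` clause so it stands alone:

```typescript
// Minimal in-memory adapter sketch for local testing. Mirrors the
// StorageAdapter template above; illustrative, not a published adapter.
export class MemoryAdapter {
  private store = new Map<string, string>();
  private blobs = new Map<string, { data: ArrayBuffer; contentType: string }>();

  async get(key: string): Promise<string | null> {
    return this.store.get(key) ?? null;
  }

  async put(key: string, value: string): Promise<void> {
    this.store.set(key, value);
  }

  async delete(key: string): Promise<void> {
    this.store.delete(key);
  }

  async getBlob(key: string): Promise<ArrayBuffer | null> {
    return this.blobs.get(key)?.data ?? null;
  }

  async putBlob(key: string, data: ArrayBuffer, contentType: string): Promise<void> {
    this.blobs.set(key, { data, contentType });
  }

  async deleteBlob(key: string): Promise<void> {
    this.blobs.delete(key);
  }
}
```

An adapter like this lets you exercise the orchestrator end to end without provisioning any cloud storage.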
Fits within the Cloudflare Workers paid plan (5 MB compressed limit). For the free tier (1 MB), further optimization with `wasm-opt -Oz` and disabling unused Oxigraph features may be needed.
See individual crate and package files for license information.