Tell us more. I had Codex port this to Python so I could wrap my head around it; it's quite interesting. Why would I use this WAL-checkpointing thingamajig when I have access to SQLite-vec, Qdrant and other embedded friends?
How does this compare to qmd [1] by Tobi Lutke?
[1] https://github.com/tobi/qmd
For one, qmd uses SQLite (FTS5 and SQLite-vec, at least at some point) and then builds reranked hybrid search on top of that. It uses some cool techniques like resilient chunking and embedding, all packaged up into a TypeScript CLI. I'd say it sits at a layer above Wax.
Looks cool! Thoughts on exposing this through a CLI or MCP for local knowledge access for agents? For example, I use Claude Code for research, and I have a local corpus of PDFs that I would like to make available as additional domain-specific information that Claude can use in addition to what it has in Opus or whatever model I'm using.
That's what I'm wondering as well.
Thirded
I built Wax because every RAG solution required either Pinecone/Weaviate in the cloud or ChromaDB/Qdrant running locally. I wanted the SQLite of RAG -- import a library, open a file, query. Except for multimodal content at GPU speed.
The architecture that makes this work:
Metal-accelerated vector search -- Embeddings live directly in unified memory (MTLBuffer). Zero CPU-GPU copy overhead. Adaptive SIMD4/SIMD8 kernels + GPU-side bitonic sort = sub-millisecond search on 10K+ vectors (vs ~100ms CPU). This isn't just "faster" -- it enables interactive search UX that wasn't possible before.
Atomic single-file storage (.mv2s) -- Everything in one crash-safe binary: embeddings, BM25 index, metadata, compressed payloads. Dual-header writes with generation counters = kill -9 safe. Sync via iCloud, email it, commit to git. The file format is deterministic -- identical input produces byte-identical output.
Query-adaptive hybrid fusion -- Four parallel search lanes (BM25, vector, timeline, structured memory). Lightweight classifier detects intent ("when did I..." → boost timeline, "find documentation about..." → boost BM25). Reciprocal Rank Fusion with deterministic tie-breaking = identical queries always return identical results. (See the generic RRF sketch after this list.)
Photo/Video RAG -- Index your photo library with OCR, captions, GPS binning, per-region embeddings. Query "find that receipt from the restaurant" searches text, visual similarity, and location simultaneously. Videos get segmented with keyframe embeddings + transcript mapping. Results include timecodes for jump-to-moment navigation. All offline -- iCloud-only photos get metadata-only indexing.
Swift 6.2 strict concurrency -- Every orchestrator is an actor. Thread safety proven at compile time, not runtime. Zero data races, zero @unchecked Sendable, zero escape hatches.
Deterministic context assembly -- Same query + same data = byte-identical context every time. Three-tier surrogate compression (full/gist/micro) adapts based on memory age. Bundled cl100k_base tokenizer = no network, no nondeterminism.
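As a concrete illustration of that fusion step, here is a minimal, generic sketch of Reciprocal Rank Fusion with deterministic tie-breaking. It is not Wax's internal code; it just applies the standard RRF formula (each lane contributes 1/(k + rank)) and breaks score ties by document ID:
/// Generic Reciprocal Rank Fusion: merge several ranked lists of document IDs.
/// k dampens the weight of top ranks; 60 is the value from the original RRF paper.
func reciprocalRankFusion(_ rankedLists: [[String]], k: Double = 60) -> [String] {
    var scores: [String: Double] = [:]
    for list in rankedLists {
        for (index, docID) in list.enumerated() {
            // Ranks are 1-based; each lane contributes 1 / (k + rank).
            scores[docID, default: 0] += 1.0 / (k + Double(index + 1))
        }
    }
    return scores
        .sorted { lhs, rhs in
            if lhs.value != rhs.value { return lhs.value > rhs.value }
            return lhs.key < rhs.key   // deterministic tie-breaking on document ID
        }
        .map(\.key)
}
// Example: fuse a BM25 lane with a vector lane.
let fused = reciprocalRankFusion([
    ["doc3", "doc1", "doc7"],   // BM25 ranking
    ["doc1", "doc7", "doc3"],   // vector ranking
])
Because both the score sum and the ID tie-break are deterministic, the same input rankings always produce the same fused order.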
import Wax
let brain = try await MemoryOrchestrator(at: URL(fileURLWithPath: "brain.mv2s"))
// Index
try await brain.remember("User prefers dark mode, gets headaches from bright screens")
// Retrieve
let context = try await brain.recall(query: "user display preferences")
// Returns relevant memories with source attribution, ready for LLM context
What makes this different:
Zero dependencies on cloud infrastructure -- No API keys, no vendor lock-in, no telemetry
Production-grade concurrency -- Not "it works in my tests," but compile-time proven thread safety
Multimodal from the ground up -- Text, photos, videos indexed with shared semantics
Performance that unlocks new UX -- Sub-millisecond latency enables real-time RAG workflows
## Wax Performance (Apple Silicon, as of Feb 17, 2026)
- 0.84ms vector search at 10K docs (Metal, warm cache)
- 9.2ms first-query after cold-open for vector search
- ~125x faster than CPU (105ms) and ~178x faster than SQLite FTS5 (150ms) in the same 10K benchmark
- 17ms cold-open → first query overall
- 10K ingest in 7.756s (~1289 docs/s) with hybrid batched ingest
- 0.103s hybrid search on 10K docs
- Recall path: 0.101–0.103s (smoke/standard workloads)
Built for: Developers shipping AI-native apps who want RAG without the infrastructure overhead. Your data stays local, your users stay private, your app stays fast.
The storage format and search pipeline are stable. The API surface is early but functional. If you're building RAG into Swift apps, I'd love your feedback.
GitHub: https://github.com/christopherkarani/Wax
Star it if you're tired of spinning up vector databases for what should be a library call.
Would Wax also be usable as a simple variant of a hybrid search solution? (i.e., not in the context of "agent memory" where knowledge added earlier is worth less than knowledge added more recently)
Yes, Wax can absolutely be used as a general hybrid search layer, not just an "agent memory" feature.
It already combines text + vector retrieval and reranking, so you can treat remember(...) as ingestion and recall(query:) as search for any document corpus.
It does not natively do "recency decay" (newer beats older) out of the box in the core call signature. If you want recency weighting, add timestamps in metadata and apply post-retrieval re-scoring or filtering in your app logic (or query-time preprocessing).
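For illustration, here is a minimal sketch of that post-retrieval re-scoring, assuming a hypothetical RetrievedMemory shape; adapt it to whatever recall(query:) actually returns:
import Foundation

// Hypothetical shape of a retrieved item; not Wax's actual return type.
struct RetrievedMemory {
    let text: String
    let score: Double    // relevance score from hybrid search
    let createdAt: Date  // timestamp you stored in metadata at ingest time
}

/// Re-rank results with an exponential recency decay: newer items keep more of their score.
func applyRecencyDecay(_ results: [RetrievedMemory],
                       halfLife: TimeInterval = 7 * 24 * 3600) -> [RetrievedMemory] {
    let now = Date()
    return results
        .map { item -> (RetrievedMemory, Double) in
            let age = now.timeIntervalSince(item.createdAt)
            let decayed = item.score * pow(0.5, age / halfLife)  // halves every halfLife seconds
            return (item, decayed)
        }
        .sorted { $0.1 > $1.1 }
        .map { $0.0 }
}
A one-week half-life halves an item's weight every seven days; tune it to how quickly your data goes stale.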
I've added this to the backlog; it comes in handy when dealing with time-sensitive data. Expect a PR this week.
sqlite-vec is a great vector index -- Wax actually uses SQLite under the hood too.
The difference is the layer. sqlite-vec gives you vec_distance_cosine() in SQL. Wax gives you: hand it a .mov file, get back token-budgeted, LLM-ready context from keyframes and transcripts, with EXIF-accurate timestamps and hybrid BM25+vector search via RRF fusion -- all on-device.
It's the difference between a B-tree and an ORM. You'd still need to write the entire ingestion pipeline, media parsing, frame hierarchy, token counting, and context assembly on top of sqlite-vec. That's what Wax is.
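To make "token counting and context assembly" concrete, here is a generic sketch of the greedy budget-packing a RAG layer typically does on top of a raw vector index. The Chunk type and the rough 4-characters-per-token estimate are stand-ins (a real pipeline would use an actual tokenizer such as cl100k_base); this is not Wax's code:
// Illustrative types only; not Wax's API.
struct Chunk {
    let text: String
    let score: Double   // relevance from hybrid search
}

/// Very rough token estimate; swap in a real tokenizer for production use.
func estimatedTokens(_ text: String) -> Int {
    max(1, text.count / 4)
}

/// Pack the highest-scoring chunks into an LLM prompt without exceeding the token budget.
func assembleContext(from chunks: [Chunk], tokenBudget: Int) -> String {
    var remaining = tokenBudget
    var selected: [String] = []
    for chunk in chunks.sorted(by: { $0.score > $1.score }) {
        let cost = estimatedTokens(chunk.text)
        if cost <= remaining {
            selected.append(chunk.text)
            remaining -= cost
        }
    }
    return selected.joined(separator: "\n\n")
}
A real pipeline would also deduplicate chunks and attach source attribution, but the budget-packing loop is the core of context assembly.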
Thanks for clarifying. If mv2s is a SQLite3 db file under the hood, that is something I would like to see in the readme, as it would make me more likely to use it.
I scrolled through the entire readme and didn't see any mention of sqlite_vec. My feedback for the readme would be to optimize for signal: if it is a layer on top of sqlite_vec, say what it does on top of that, etc.
But it is not a layer on top of sqlite_vec, so your logic seems to be:
If the tool uses sqlite_vec (which it doesn't)
Then it should say so in the readme.
You didn't find evidence of sqlite_vec in the readme, so your conclusion was that it should be added.
This seems to be based on your not liking that the author said it would be the "SQLite of RAG" (which, notably, does not at all imply the use of SQLite; in fact, it suggests this is an alternative to SQLite).
To address your other question about whether the file format is actually a SQLite database, the readme does address that: https://github.com/christopherkarani/Wax?tab=readme-ov-file#...
The fact that the format is append-only seems to rule out that it is SQLite, but I could be wrong.
It uses GRDB, which is an SQLite wrapper: https://github.com/christopherkarani/Wax/blob/786ad6cc541392...
Nothing is very clear here... the benchmarks might just be comparing WAL mode on vs. off, or something else entirely; SQLite does not have 150ms latency on such a small database.
The original commenter wasn't making statements about sqlite being involved, they were saying that a specific library should be mentioned if it was involved, which it wasn't. Unless you are saying sqlite_vec is part of the dependency chain through GRDB?
It would be like commenting "If any other developers were involved in this project you should mention them."
Interesting. I feel this is something that Apple could've built as "Spotlight 2.0" right into macOS. Yet another missed opportunity.
Why is the 9.2ms bar longer than the 104ms bar in the performance graph?
Ideally users could be banned for posting LLM outputs as if they were authored by humans: https://www.pangram.com/history/49335ddf-118d-43e4-9340-a58a...
I think not "ideally", in any case; rather "practically" they could be banned. And for what badness?
It doesn't claim it was authored by humans. It is clearly the work product of a human who is transparently using AI.
The work product, if it works as claimed, is rather amazing. Maybe even an inflection point in AI use, if it proves sustainable.
The generated post is even in the repo: https://github.com/christopherkarani/Wax/blob/main/SHOW_HN_P...
Any plans to make it available to other languages via bindings?
sqlite_vec is already the SQLite for AI memory.
Any chance you went beyond the surface comparison and have thoughts on how the libraries compare in functionality?
It is not a layer over sqlite_vec.
Is this like zvec?