Replying to specific points:
“SUMM -> human reviews That would be fixed, but will work only for small KBs, as otherwise the summary would be exhaustive.”
Correct: filesystem SUMM + human review is intentionally for small/curated KBs, not “review 3,000 entities.” The point of SUMM is curation, not bulk ingestion at scale. If the KB is so large that summaries become exhaustive, that dataset is in the wrong layer.
“Case in point: assume a Person model with 3-7 facts per Person. Assume small 3000 size set of Persons. How would the SUMM of work?”
Poorly. It shouldn’t work via filesystem SUMM. A “Person table” is structured data; SUMM is for documents. For 3,000 people × (3–7 facts), you’d put that in a structured store (SQLite/CSV/JSONL/whatever) and query it via a non-LLM tool (exact lookup/filter) or via Vault retrieval if you insist on LLM synthesis on top.
“Do you expect a human to verify that SUMM?”
No - not for that use case. Human verification is realistic when you’re curating dozens/hundreds of docs, not thousands of structured records. For 3,000 persons, verification is done by data validation rules (schema, constraints, unit tests, diff checks), not reading summaries.
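To make "data validation rules" concrete, here's a minimal sketch of what I mean. Everything here is illustrative: the field names, the JSONL layout, and the 3-7 facts constraint are assumptions for the example, not the actual system's schema.

```python
import json

# Hypothetical schema for illustration -- these field names are
# assumptions, not anything from the real system.
REQUIRED = {"name", "facts"}

def validate_person(record: dict) -> list[str]:
    """Return a list of validation errors for one Person record."""
    errors = []
    missing = REQUIRED - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    facts = record.get("facts", [])
    if not (3 <= len(facts) <= 7):  # the 3-7 facts-per-Person constraint
        errors.append(f"expected 3-7 facts, got {len(facts)}")
    return errors

def validate_file(path: str) -> dict[int, list[str]]:
    """Validate every line of a JSONL file; return {line_number: errors}."""
    bad = {}
    with open(path) as f:
        for i, line in enumerate(f, 1):
            errs = validate_person(json.loads(line))
            if errs:
                bad[i] = errs
    return bad
```

Running that over 3,000 records takes milliseconds and catches structural errors no human skimming summaries ever would.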
“How are you going to converse with your system to get the data from that KB Person set?”
Not by attaching a folder and “asking the model nicely.” You’d do one of these:
- Exact tool lookup: person("Alice") -> facts, or search by ID/name, return rows deterministically.
- Hybrid: tool lookup returns the relevant rows, then the LLM formats/summarizes them.
- Vault retrieval: embed/chunk rows and retrieve top-k, but that’s still weaker than exact lookup for structured “Person facts.”
So: conversation is fine as UX, but the retrieval step should be tool-based (exact) for that dataset.
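The first two options can be sketched in a few lines. This is a toy, assuming a SQLite store with a made-up `person` table; the real schema and tool wiring would differ:

```python
import sqlite3

def build_demo_db() -> sqlite3.Connection:
    """Tiny in-memory Person store standing in for the real KB."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE person (name TEXT, fact TEXT)")
    conn.executemany(
        "INSERT INTO person VALUES (?, ?)",
        [("Alice", "born 1990"), ("Alice", "lives in Oslo"),
         ("Bob", "plays chess")],
    )
    return conn

def person(conn: sqlite3.Connection, name: str) -> list[str]:
    """Exact, deterministic lookup: person('Alice') -> facts."""
    rows = conn.execute(
        "SELECT fact FROM person WHERE name = ? ORDER BY rowid", (name,)
    ).fetchall()
    return [fact for (fact,) in rows]

# Hybrid step: the rows, not the model's memory, are the ground truth;
# the LLM (not shown) only formats/explains what the tool returned.
conn = build_demo_db()
print(person(conn, "Alice"))
```

The point: the retrieval is exact and repeatable. The LLM never gets a chance to “remember” facts that aren’t in the rows.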
But actually, you give me a good idea here. It wouldn't be the work of ages to build a >>look or >>find function into this thing. Maybe I will.
My mental model for this was always "1 person, 1 box, personal scale" but maybe I need to think bigger. Then again, scope creep is a cruel bitch.
“Because to me that sounds like case C, only works for small KBs.”
For filesystem SUMM + human review: yes. That’s the design. It’s a personal, “curate your sources” workflow, not an enterprise entity store.
This was never designed to be a multi-tenant lookup system. I don't know how to build that and still keep it 1) small, 2) potato-friendly, and 3) able to handle ALL the moving-part nightmares that brings.
What I built is STRICTLY for personal use, not enterprise use.
“Fair. Except that you are still left with the original problem of you don't know WHEN the information is incorrect if you missed it at SUMM time.”
Sort of. Summarization via LLM was always going to be a lossy proposition. What this system changes is the failure mode:
- Without this: errors can get injected and later you can’t tell where they came from.
- With this: if a SUMM is wrong, it is pinned to a specific source file hash + summary hash, and you can fix it by re-summarizing or replacing the source.
In other words: it doesn’t guarantee correctness; it guarantees traceability and non-silent drift. You still need to "trust but verify".
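The "pinned to a hash" idea in miniature, if it helps. Function names here are hypothetical, not the actual tool's API:

```python
import hashlib

def sha256(text: str) -> str:
    """Hex digest of a text blob."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def pin_summary(source_text: str, summary_text: str) -> dict:
    """Record exactly which source a summary was generated from."""
    return {
        "source_hash": sha256(source_text),
        "summary_hash": sha256(summary_text),
    }

def source_drifted(pin: dict, current_source: str) -> bool:
    """True if the source changed since the summary was made --
    i.e. the summary can no longer be trusted silently."""
    return sha256(current_source) != pin["source_hash"]
```

If `source_drifted()` fires, you know the SUMM is stale and re-summarize; if it doesn't, a wrong summary is still wrong, but at least it's wrong about a known, fixed input.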
TL;DR:
You don’t query big, structured datasets (like 3,000 “Person” records) via SUMM at all. You use exact tools/lookup first (DB/JSON/CSV), then let the LLM format or explain the result. That can probably be added reasonably quickly, because I tried to build something that future me wouldn't hate past me for. We'll see if he/I succeeded.
SUMM is for curated documents, not tables. I can try adding a >>find, >>grep, or similar tool (the system is modular, so I should be able to accommodate a few things like that, but I don't want to end up with 1500 "micro tools" and hating my life).
And yeah, you can still miss errors at SUMM time - the system doesn’t guarantee correctness. That's on you. Sorry.
What it guarantees is traceability: every answer is tied to a specific source + hash, so when something’s wrong, you can see where it came from and fix it instead of having silent drift. That's the "glass box, not black box" part of the build.
Sorry - really. This is the best I could figure out for caging the stochastic parrot. I built this while I was physically incapacitated and confined to bed rest, shooting the shit with Gippity all day. I built it for myself and then thought "hmm, this might help someone else too. I can't be the only one who's noticed this problem".
If you or anyone else has a better idea, I'm willing to consider it.
Thanks. It's not perfect but I hope it's a step in a useful direction