Knowledge Architecting
We outsourced knowledge and now we must bend it to our will.
I’ve always been a big believer in Dan Koe’s work. It stood out after years of wondering why my communication skills sucked online — back when I was trying to sell coaching online, build an online business from scratch, create a product people wanted, and so on.
Anyways, I still believe in coaching because Eckhart Tolle’s mental models endorse it. It also becomes a feedback loop (sometimes a negative one): hating your clients, getting annoyed with them, etc. But then sometimes a client completely blows your mind and changes your life.
Anyways, I’ve always loved teaching mental models and how they turn into lattices. (I also call them “configurations,” which is just my own fancy word for lattice structures.) One of the models I want to teach and do a video on is knowledge architecting: the natural models of knowledge that appear, and a general model of synthesis. The main message can be summed up as “you don’t have to know anything or have any skills/expertise anymore”; you just have to know architecture, shapes, geometry, and memory.
Memory can be outsourced to second brains: Notion, Obsidian, and RAG are my three favorites.
Claude Code can remember files for you and store them on your hard drive or in the cloud, and CLAUDE.md and memory.md files can act like wikis: a title, a summary, and a backlink for each piece of info (file, doc, PDF) in your second brain. Obsidian covers the locally hosted case.
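Here’s a rough sketch of what one wiki-style entry in a memory.md file could look like. The layout and field names are my own suggestion, not a fixed format; Obsidian-style `[[backlinks]]` are assumed:

```markdown
## tolle-mental-models (PDF)
- **File:** [[tolle-mental-models.pdf]]
- **Summary:** Key mental models on presence and identification, condensed from the source.
- **Backlinks:** [[coaching-notes]], [[feedback-loops]]
```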
My overall favorite app is Notion because it’s cloud hosted, and the iPhone app is great too.
I like to combine Claude Code with MCP server access to Notion. This gives the AI access to your second brain. You can also code up access to RAG and Obsidian with AI.
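A minimal sketch of wiring that up via a project-level `.mcp.json` for Claude Code. The package name, env variable, and token placeholder here are assumptions — check Notion’s current MCP docs before copying this:

```json
{
  "mcpServers": {
    "notion": {
      "command": "npx",
      "args": ["-y", "@notionhq/notion-mcp-server"],
      "env": { "NOTION_TOKEN": "your-integration-token-here" }
    }
  }
}
```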
The idea is file ingestion into second brains (automated, daily). Then an AI combs through, summarizes, adds YAML, and puts together a wiki per category or collection.
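A minimal sketch of that daily ingestion pass in Python, with the LLM summarization step stubbed out. The field names in the frontmatter are my own; swap the stub for a real model call:

```python
import datetime
from pathlib import Path

def add_frontmatter(path: Path, category: str, summary: str) -> str:
    """Wrap a note's text in YAML frontmatter so an AI (or Obsidian)
    can index it by title, category, and summary."""
    text = path.read_text(encoding="utf-8")
    frontmatter = "\n".join([
        "---",
        f"title: {path.stem}",
        f"category: {category}",
        f"summary: {summary}",
        f"ingested: {datetime.date.today().isoformat()}",
        "---",
        "",
    ])
    return frontmatter + text

def ingest(folder: Path, category: str) -> list[Path]:
    """One daily pass: tag every untagged .md file in a folder.
    In a real build, `summary` would come from an LLM call."""
    done = []
    for path in sorted(folder.glob("*.md")):
        if path.read_text(encoding="utf-8").startswith("---"):
            continue  # already ingested on a previous pass
        summary = "TODO: summarize with your model of choice"
        path.write_text(add_frontmatter(path, category, summary), encoding="utf-8")
        done.append(path)
    return done
```

Running this on a schedule (cron, launchd, a Claude Code hook) is what turns a pile of files into a collection the AI can actually navigate.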
These collections grow and form foundational knowledge that AIs can access and synthesize. Then you get synthetic files that are better triangulated, possibly higher taxonomically, meaning they occupy higher “representational space”: better coherence, better connectedness, better interference unity. A higher unity of triangulation.
Even if these “higher knowledge” files are sparse, they can be added to through experimentation: supporting tests and documented proofs “horizontally.” This strengthens the probability of the claims or hypotheses at each level. It expands the knowledge.

Just an example of how base level knowledge “spawns” higher level papers through synthesis.
You don’t have to know anything anymore. All you have to do is create architectures and organizational structures (loops, schedules, triggers, memory containers) and let AI do the stacking, packaging, repackaging, thinking, reasoning, etc. AI beats humans at chess with moves we don’t understand; let it think on its own. Then use bottom-up and top-down extractions to recursively run “auto research” horizontally within a genus, layer, level, stratum, or domain. This expands and supports “shaky” or “theoretical” outcomes.
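The “loops, schedules, triggers” part can be sketched very simply. This is a toy orchestration pass of my own design, not a real scheduler — in practice you’d hang these off cron or an agent framework:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    """A named trigger: when `due` returns True, `action` runs."""
    name: str
    due: Callable[[], bool]
    action: Callable[[], None]

def run_once(triggers: list[Trigger]) -> list[str]:
    """One pass of the loop: fire every due trigger, return what ran.
    Wrap this in a daily schedule and the architecture does the work."""
    fired = []
    for t in triggers:
        if t.due():
            t.action()
            fired.append(t.name)
    return fired
```

The point is that the human designs the triggers (ingest, summarize, synthesize), and the AI supplies the actions.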
My thought was that at higher levels of cohesion and cross-synthesis you would lose people in the analogues; it becomes unreadable. But it creates its own language model and reasoning structure, which may act as a diffuse learning level on the back end (necessarily incoherent to humans, but the linguistic meaning model — the embedding vectors — is ready to generate the next level of knowledge). This is just a theory of reasoning: the AI turns words into “math,” but deeper down there’s a “meaning” structure.
This is the theory, and the value, of being absolutely batshit crazy and thinking you can break or change physics. But it’s simply structure: AI with access to foundational, syntactic, and synthetic knowledge bases (AI with 10,000x the processing power, context, memory, etc.). Humans will never compete with it; humans will have to learn the invisible metaphysical geometry (of thought, thinking, reasoning, logic), and that will enable breakthroughs in behaviors, in sciences, etc. Philosophy is the last subject worth learning on earth.
Anyways, Andrej Karpathy — the guy who, I believe, coined and built auto research — came up with another popular coinage for “knowledge architecting” right at the time I was thinking about the implications of this: not working with tokens, but orchestrating knowledge. I think it’s profound, because I was able to do quantum research on real IBM open-source APIs knowing nothing about coding or quantum.
So basically the world is ending; humans (the small percent who use their brains daily) won’t have to anymore, and should learn knowledge-base architecture, memory, and copying and pasting PDFs. No really…
Simple builds:
load a knowledge base
add a chat interface
add a voice chat interface
add an avatar video interface
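The first two steps of that build can be sketched in a few lines. This is a toy: retrieval here is plain word overlap, where a real build would use embedding search (RAG) and feed the retrieved note to an LLM as context:

```python
def tokenize(text: str) -> set[str]:
    """Lowercased bag of words; a stand-in for real embeddings."""
    return set(text.lower().split())

def retrieve(question: str, knowledge_base: dict[str, str]) -> str:
    """Return the note whose word overlap with the question is highest."""
    q = tokenize(question)
    best = max(knowledge_base,
               key=lambda title: len(q & tokenize(knowledge_base[title])))
    return knowledge_base[best]

def chat(question: str, knowledge_base: dict[str, str]) -> str:
    """Load a knowledge base, answer from it.
    In a real build the retrieved note becomes LLM context,
    and the voice/avatar layers sit on top of this same function."""
    return retrieve(question, knowledge_base)
```

Usage: `chat("what about qubits and ibm", kb)` pulls the quantum note out of a `kb` dict keyed by title.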
Templates for knowledge bases:
yaml wikis?
md files?
skills
tests
notes
PDFs and papers
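As one possible template, here’s what a YAML wiki entry covering any of those item types could look like. Every field name here is my own suggestion, not a standard:

```yaml
# One entry per item in the collection.
title: tolle-mental-models
type: pdf            # md | pdf | skill | test | note
category: coaching
summary: >
  Key mental models on presence, condensed from the source PDF.
links:
  - feedback-loops
  - knowledge-architecting
status: base         # base | synthetic | higher-level
```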
You have to let yourself know when you’ve officially changed levels, changed “genera.”
Genusie?
That’s just a fancy word for levels or layers. When does the AI generated knowledge fill the category of next level knowledge?
It’s funny that knowledge architecture is very similar to geometry. Sides, edges, connections between words, context, metadata, titles, YAML: these create lattices of connection. Most AI tutorials will be about shapes — shapes of systems, etc. It’s how math becomes words, words become math, and the shapes those words make.
Let’s talk about the shape of your knowledge in my free coaching groups on discord and in my custom private members area.
Thanks for reading my deepest insights. Thanks for taking the time. I try to give my best stuff. I hope one day it will pay off. Maybe let me build you something with AI? Cheers.
JJ