AI’s Center of Gravity Is Wrong
Open any tech news feed and it’s all cloud AI, all the time. Bigger GPU clusters, more training data, pricier API calls. As if intelligence can only live in faraway data centers, available on demand — as long as your network holds up.
But that’s not how the real world works.
A factory camera needs to spot defects in milliseconds. A vehicle sensor can’t wait for a cloud round-trip before deciding to brake. A remote agricultural monitoring station might not even have a stable 4G signal. These scenarios — factories, cameras, sensors, vehicles — are the backbone of the physical world. And they all run at the edge.
Latency, privacy, bandwidth costs — every dimension points to the same conclusion: intelligence must move to the edge.
I spent fourteen years at Ubiquiti building edge products. UniFi cameras running on-device AI, thirty million managed devices deployed worldwide. That experience gave me an uncomfortably clear view of the gap: cloud AI is advancing at breakneck speed, while edge devices are stuck in a previous generation. Most of them are just dumb pipes, shoving data upward and depending entirely on the cloud for any semblance of intelligence.
That’s wrong.
Edge Devices Need Three Things
I spent a long time thinking about what’s actually missing. It boils down to three core needs:
1. Native Intelligence
The device itself needs to run AI inference. Not the fragile kind where the cloud goes down and the device turns into a brick — real, on-chip model execution that processes sensor data and makes decisions in real time. Works offline. Works always.
2. Secure Connectivity
Device-to-device and device-to-management communication needs to be genuinely secure. Not just secure today — secure after quantum computers mature. Post-quantum cryptography isn’t a future concern. It’s a present one.
3. Developer-Friendly Tooling
Tools that let AI agents interact directly with hardware. Not “write a mountain of glue code to stitch things together,” but clean protocol interfaces that let AI flash firmware, start debuggers, and read sensors.
These three are inseparable. Intelligence without security is an open door. Security without intelligence is an encrypted dumb pipe. Both without decent tooling? A lab demo that nobody ships.
The Stack I’m Building
This isn’t one project. It’s five, each solving a piece of the puzzle, all fitting together into a coherent edge intelligence system.
AI-HIL MCP — How AI Touches the Physical World
Five MCP servers that let AI agents directly control embedded hardware: flash firmware, launch debuggers, read sensor data. This is how AI goes from “can only read text on a screen” to “can reach into the physical world.” I wrote about the origin of this idea in a previous post — eliminating the human as a “data pipe” between AI and hardware.
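To make that concrete: MCP rides on JSON-RPC 2.0, so a tool call is just a structured request. Below is a minimal sketch in Rust with serde_json. The flash_firmware tool name and its arguments are illustrative placeholders, not the actual AI-HIL MCP interface.

```rust
use serde_json::{json, Value};

// Build an MCP "tools/call" request. MCP is JSON-RPC 2.0 under the
// hood; the "flash_firmware" tool and its arguments are hypothetical
// placeholders, not AI-HIL MCP's real schema.
fn build_flash_request(firmware_path: &str, target: &str) -> Value {
    json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "flash_firmware",
            "arguments": {
                "firmware": firmware_path, // path to the image
                "target": target           // e.g. a board/probe id
            }
        }
    })
}

fn main() {
    let req = build_flash_request("build/app.bin", "esp32-devkit-0");
    println!("{}", serde_json::to_string_pretty(&req).unwrap());
}
```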
Engram — Edge ML Inference Platform
Ingests sensor data, runs models, serves results. This is the edge device’s brain. No cloud GPU required — inference happens locally. The goal is to give every edge node the ability to think for itself.
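A minimal sketch of what that looks like in Rust (illustrative only, not Engram’s actual code; assumes the tract-onnx crate and a fixed-shape ONNX image model):

```rust
use tract_onnx::prelude::*;

fn main() -> TractResult<()> {
    // Load an ONNX model and compile it for local execution.
    // "model.onnx" and the 1x3x224x224 input are placeholders.
    let model = tract_onnx::onnx()
        .model_for_path("model.onnx")?
        .with_input_fact(0, f32::fact([1, 3, 224, 224]).into())?
        .into_optimized()?
        .into_runnable()?;

    // Stand-in for a preprocessed sensor frame.
    let input: Tensor =
        tract_ndarray::Array4::<f32>::zeros((1, 3, 224, 224)).into();

    // Inference runs entirely on-device; no network round-trip.
    let outputs = model.run(tvec!(input.into()))?;
    println!("output shape: {:?}", outputs[0].shape());
    Ok(())
}
```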
Plexus-Myelin — Post-Quantum Encrypted VPN
The communication protection layer between devices. Uses an X25519 + ML-KEM-768 hybrid handshake, combining classical elliptic-curve cryptography with a NIST-standardized post-quantum algorithm. The hybrid handshake is already working. The threat it answers is harvest-now, decrypt-later: even if a large-scale quantum computer arrives, traffic recorded today is designed to stay confidential.
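Here is the hybrid idea in miniature: derive the session key from both shared secrets, so an attacker has to break X25519 and ML-KEM-768. This is an illustrative sketch, not Plexus-Myelin’s actual handshake; it assumes the x25519-dalek, ml-kem, hkdf, sha2, and rand crates.

```rust
use hkdf::Hkdf;
use ml_kem::kem::{Decapsulate, Encapsulate};
use ml_kem::{KemCore, MlKem768};
use rand::rngs::OsRng;
use sha2::Sha256;
use x25519_dalek::{EphemeralSecret, PublicKey};

fn main() {
    let mut rng = OsRng;

    // Classical half: ephemeral X25519 Diffie-Hellman. (In the real
    // protocol, public keys travel over the wire.)
    let a_secret = EphemeralSecret::random_from_rng(OsRng);
    let b_secret = EphemeralSecret::random_from_rng(OsRng);
    let b_public = PublicKey::from(&b_secret);
    let x25519_shared = a_secret.diffie_hellman(&b_public);

    // Post-quantum half: ML-KEM-768. B generates a keypair,
    // A encapsulates a shared secret to B's public key.
    let (b_dk, b_ek) = MlKem768::generate(&mut rng);
    let (ciphertext, pq_shared_a) = b_ek.encapsulate(&mut rng).unwrap();
    let pq_shared_b = b_dk.decapsulate(&ciphertext).unwrap();
    assert_eq!(pq_shared_a, pq_shared_b);

    // Combine: feed BOTH secrets into one KDF. The session key stays
    // safe unless the attacker breaks X25519 and ML-KEM-768.
    let mut ikm = Vec::new();
    ikm.extend_from_slice(x25519_shared.as_bytes());
    ikm.extend_from_slice(pq_shared_a.as_slice());

    let hk = Hkdf::<Sha256>::new(None, &ikm);
    let mut session_key = [0u8; 32];
    hk.expand(b"hybrid-handshake-demo", &mut session_key).unwrap();
    // session_key now protects the tunnel.
}
```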
Synapse Labs — ESP32 Embedded Crypto Node
The smallest node in the network. Runs embedded cryptography on ESP32 with Zen pub/sub messaging. Proof that even the most resource-constrained microcontrollers can participate as full members of a secure, intelligent network.
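To show the shape of the pub/sub pattern, here’s a host-side sketch using the Zenoh Rust API (illustrative only; the ESP32 node itself runs an embedded build, and this assumes the zenoh crate’s 1.x API):

```rust
use zenoh::Config;

#[tokio::main]
async fn main() {
    // Open a session (peer mode by default).
    let session = zenoh::open(Config::default()).await.unwrap();

    // A node subscribes to readings from the whole fleet...
    let subscriber = session.declare_subscriber("node/**").await.unwrap();

    // ...and publishes its own sensor data under its key.
    let publisher = session
        .declare_publisher("node/esp32-0/temp")
        .await
        .unwrap();
    publisher.put("23.5").await.unwrap();

    // The local publication is routed back to the subscriber.
    if let Ok(sample) = subscriber.recv_async().await {
        println!("received: {:?}", sample);
    }
}
```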
OmniTypist — Proof the Stack Ships
A consumer-grade product built with the Rust + Swift + AI tech stack. v0.0.4 is signed, notarized by Apple, and auto-updating. Its purpose is to prove one thing: this tech stack doesn’t just produce demos — it delivers real products to real users.
Five projects, one vision: edge devices that aren’t just data pipes, but complete systems with intelligence, security, and tooling.
Why Now
The vision isn’t new. What’s new is that it’s actually achievable. Several critical enablers matured at the same time:
Post-quantum cryptography is ready. NIST has standardized ML-KEM as FIPS 203. It’s no longer an academic concept; it’s an algorithm you can implement and deploy today. The hybrid handshake in Plexus-Myelin is living proof.
MCP makes AI integration dramatically easier. Anthropic’s Model Context Protocol provides a clean standard for AI agents to interact with external tools. Connecting AI to hardware used to require mountains of custom glue code. MCP drops that barrier significantly.
Rust makes embedded development safe and fast. No garbage collection overhead, with memory safety guarantees. For embedded scenarios, it’s the most important advancement since C. Every one of my projects is written in Rust, and that’s not a coincidence.
AI tooling has matured. LLMs can now meaningfully assist in hardware development workflows. Not just writing code — reading JTAG output, analyzing register dumps, suggesting debug strategies. The entire AI-HIL concept is built on this foundation.
These four conditions arrived simultaneously, opening a window that didn’t exist before. I don’t want to wait for someone else to walk through it.
What’s Next
Every project is moving toward production readiness. Plexus-Myelin’s hybrid handshake still needs performance tuning. AI-HIL MCP needs support for more hardware platforms. Engram needs to handle a wider range of models. Synapse Labs needs field validation. OmniTypist keeps getting polished.
I don’t think this is a one-person job. If you share the belief in edge intelligence over cloud-only AI, and you’d rather build it than wait for it, we should talk.
The end goal is simple: make every edge device a first-class intelligent citizen, not a dumb data pipe to the cloud.
This is my manifesto. Now, back to writing code.