Two tracks running in parallel — sovereign decentralized infrastructure for AI, and the PDE simulation work that pays the rent on rigor.
Three hyperscalers run 85%+ of the world's AI compute. A 70B model needs $30K of GPU hardware that idles 90% of the day. Every prompt ships private data to a corporate server. Cost barrier, sovereignty risk, single point of failure — pick three.
SUM Innovation is the bet that the next decade of intelligence shouldn't run on rented racks. We're building two interlocked systems — a chain and a network — so compute becomes as decentralized as the people using it. Don't trust us. Trust math.
A peer-to-peer LLM inference network, 100% Rust, that splits model layers across consumer hardware on the mesh. Pipeline parallelism, federated learning, verifiable execution.
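For flavor, a minimal sketch of the partitioning step behind pipeline parallelism: contiguous layer ranges assigned to peers in proportion to reported capacity. Every name here (`Peer`, `Stage`, `partition_layers`, the VRAM field) is illustrative, not the network's actual API.

```rust
#[derive(Debug)]
struct Peer {
    id: String,
    vram_gb: u32, // rough capacity signal a scheduler might use
}

#[derive(Debug)]
struct Stage {
    peer_id: String,
    layers: std::ops::Range<usize>,
}

/// Assign contiguous layer ranges to peers, proportional to reported capacity.
fn partition_layers(total_layers: usize, peers: &[Peer]) -> Vec<Stage> {
    let total_vram: u64 = peers.iter().map(|p| p.vram_gb as u64).sum();
    let mut stages = Vec::new();
    let mut next = 0usize;
    for (i, peer) in peers.iter().enumerate() {
        let share = if i + 1 == peers.len() {
            total_layers - next // last stage takes the remainder
        } else {
            (total_layers as u64 * peer.vram_gb as u64 / total_vram) as usize
        };
        stages.push(Stage { peer_id: peer.id.clone(), layers: next..next + share });
        next += share;
    }
    stages
}

fn main() {
    let peers = vec![
        Peer { id: "node-a".into(), vram_gb: 24 },
        Peer { id: "node-b".into(), vram_gb: 12 },
        Peer { id: "node-c".into(), vram_gb: 12 },
    ];
    // e.g. an 80-layer model split into three pipeline stages
    for stage in partition_layers(80, &peers) {
        println!("{} holds layers {:?}", stage.peer_id, stage.layers);
    }
}
```

A real scheduler also has to weigh link latency and peer churn, but the shape of the problem is the same: layers become stages, stages become hops.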
Fixed-supply, immediate-finality L1 powering the network's economy. BFT consensus, native NFTs, no smart-contract surface to exploit. Engineered for restraint.
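"Immediate finality" falls out of the classical BFT quorum rule: a block is final the instant it collects votes from more than two thirds of the validator set, so there is no probabilistic settlement and no reorg window. A toy illustration of that threshold, not the chain's consensus code:

```rust
/// Smallest vote count strictly greater than two thirds of the validator set.
fn quorum(validators: usize) -> usize {
    2 * validators / 3 + 1
}

/// A block is final the instant it collects a quorum of votes.
fn is_final(votes: usize, validators: usize) -> bool {
    votes >= quorum(validators)
}

fn main() {
    let n = 100; // illustrative validator-set size
    println!("quorum for {n} validators: {}", quorum(n)); // 67
    println!("66 votes final? {}", is_final(66, n));      // false
    println!("67 votes final? {}", is_final(67, n));      // true
}
```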
A first-principles model of a hollow-fiber membrane module separating CO₂ from N₂: selective permeation through the fiber wall between concentric lumen and shell regions. Governing equations discretized in FEniCS with variational methods, Navier–Stokes coupling, and Maxwell–Stefan transport. PDE work that has to actually solve, not just typecheck.
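In rough strokes, and with illustrative symbols only (the exact geometry, boundary conditions, and coefficients belong to the NDA'd model): trans-membrane flux follows a permeance law, bulk composition follows Maxwell–Stefan, and the lumen/shell flow is incompressible Navier–Stokes.

```latex
\begin{align}
  % trans-membrane flux of species i: permeance times partial-pressure drop
  % (which side is feed vs. permeate depends on the module configuration)
  J_i &= \frac{\mathcal{P}_i}{\delta}\,\bigl(p_i^{\mathrm{feed}} - p_i^{\mathrm{perm}}\bigr) \\
  % Maxwell--Stefan relation coupling composition gradients to molar fluxes
  \nabla x_i &= \sum_{j \neq i} \frac{x_i \mathbf{N}_j - x_j \mathbf{N}_i}{c\,D_{ij}} \\
  % incompressible Navier--Stokes for bulk flow in the lumen and shell channels
  \rho\bigl(\partial_t \mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u}\bigr)
    &= -\nabla p + \mu\,\nabla^{2}\mathbf{u},
  \qquad \nabla\cdot\mathbf{u} = 0
\end{align}
```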
Some of this is under NDA — happy to walk through the parts I can share.