Inside Kleomedes: DIY Bare-Metal Blockchain Infrastructure

May 1, 2026
5 min read
web3 · blockchain-infrastructure · depin · bare-metal · cosmos · rpc-nodes · validator-nodes

There was a month when Kleomedes distributed nearly $30,000 in airdrops from the Cosmos chain Kujira to its staked token holders. That stretch represented the high point of an experiment that Marco Rinaldi and Avi had spent years building: a DAO-run validator that split its revenues between operating expenses, token liquidity, and profit share for community members. It worked, until it didn't.

Moving Off the DAO

Kleomedes started as a DAO built with DAO DAO, tooling that runs on Cosmos and makes it straightforward to create proposals and execute code in a decentralized way. The original idea was to distribute the majority of the token supply to delegators, use validator revenue to bootstrap liquidity and pay out profit share, and then attract contributors from around the world with those financial incentives. Avi describes the logic as sound in theory:

"We were hoping that if we created proper incentives and onboarded more team members around the world, that we'd be able to have like a big team of contributors that were basically going out and recruiting new chains for us to validate."

As Cosmos market conditions softened, the model became harder to sustain. Contributors wanted to get paid reliably, and the DAO's revenue could not always support that. Marco Rinaldi put it plainly: "after some time, people want to get paid, and we could not always get people paid. So this created a little bit of instability into the DAO."

Rather than liquidate the hardware, the community voted to wind down the DAO and distribute the remaining treasury, allowing Marco and Avi to continue as a private company. That transition left them with the infrastructure they had built and the freedom to operate without governance overhead.

The Kelp DAO Incident

A $300 million exploit involving LayerZero and Kelp DAO came up as a case study in what happens when infrastructure concentration meets a targeted attack. The mechanics involved a DDoS attack that knocked four of roughly six RPC services offline; the two remaining services had been compromised and were feeding incorrect data. The DVN (decentralized verifier network) used by LayerZero consulted those two services, agreed the data looked accurate, and processed the transaction.
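The failure mode described above can be sketched in a few lines. This is a hypothetical simplification, not LayerZero's actual verification code: a verifier that cross-checks only a small set of RPC providers has no defense once the honest majority is knocked offline and the survivors collude.

```python
# Hypothetical sketch: a verifier accepts a value if `quorum` live
# providers agree on it. None represents a provider knocked offline.
def verify(reports, quorum):
    live = [r for r in reports if r is not None]
    if len(live) < quorum:
        return None  # not enough data to decide either way
    candidate = live[0]
    agreeing = sum(1 for r in live if r == candidate)
    return candidate if agreeing >= quorum else None

forged = "0xevil"
# Six providers: four DDoSed offline, two compromised and colluding.
reports = [None, None, None, None, forged, forged]
print(verify(reports, quorum=2))  # prints "0xevil" -- the forged data passes
```

The point is not the quorum size but the denominator: with only six providers total, disabling four reduces "consensus" to whatever the remaining two report.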

Marco Rinaldi sees the root cause as a false sense of security that comes with large, well-known providers: "people feel safe because they partner with like a big company. So they think like, oh, I'm using Infura, I don't have to worry about anything. But then this was the exact reason for the exploit because they were using not a big number of providers, just a small number."

Avi drew a line between the DVN configuration and the underlying infrastructure problem. Even setting aside the 1-of-1 DVN setup, the concentration of RPC services among a handful of large providers meant that once those services were disrupted, there was no distributed fallback. He pointed to Pocket Network as an example of a different approach, one that uses rotating sessions with round-robin selection across many node providers so that no single session relies on the same set of operators for long.
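The rotating-session idea Avi describes can be sketched as follows. This is loosely inspired by the episode's description of Pocket Network, not its real protocol code: each session samples a fresh subset of providers, and relays within the session are distributed round-robin.

```python
import itertools
import random

# Hypothetical sketch of session rotation: a fresh provider subset per
# session, round-robin within it, so no session leans on the same
# operators for long.
def new_session(providers, size, rng):
    chosen = rng.sample(providers, size)  # fresh random subset
    return itertools.cycle(chosen)        # round-robin iterator

providers = [f"node-{i}" for i in range(24)]
rng = random.Random(42)
session = new_session(providers, size=5, rng=rng)
relays = [next(session) for _ in range(10)]  # each chosen node serves in turn
```

A DDoS against any fixed set of operators buys an attacker at most one session's worth of disruption before the rotation routes around it.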

Hetzner Has 39%

StakeHub, a monitoring tool Kleomedes built for the chains it validates, makes the concentration problem visible. The dashboard tracks chain health, governance activity, uptime, Nakamoto coefficients, and RPC endpoint scores. It also maps node geography and breaks down provider distribution. On several chains, Hetzner alone accounts for 39% of nodes. Amazon and OVH take large slices of what remains.
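The two concentration metrics mentioned here are simple to compute. The sketch below uses illustrative numbers, not real chain data, and a generic definition of the Nakamoto coefficient (the minimum number of entities whose combined share exceeds a control threshold):

```python
# Illustrative node counts per hosting provider -- not real chain data.
nodes = {"Hetzner": 39, "Amazon": 20, "OVH": 15, "other": 26}

def provider_share(counts):
    total = sum(counts.values())
    return {p: n / total for p, n in counts.items()}

def nakamoto_coefficient(counts, threshold=1/3):
    """Smallest number of providers whose combined share exceeds threshold."""
    total = sum(counts.values())
    acc, k = 0, 0
    for n in sorted(counts.values(), reverse=True):
        acc += n
        k += 1
        if acc / total > threshold:
            return k
    return k

print(provider_share(nodes)["Hetzner"])  # 0.39
print(nakamoto_coefficient(nodes))       # 1 -- one provider alone exceeds 1/3
```

With 39% on a single provider, the infrastructure-level Nakamoto coefficient is 1: one hosting company going dark is enough to threaten liveness.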

For foundations running delegation programs, that data has practical weight. If a single provider experiences an outage or changes its terms of service, a significant portion of the network can go offline at once. Hetzner has already signaled it will restrict node operators under certain conditions, which is part of what pushed Kleomedes to move everything in-house.

Kleomedes also built a public dashboard for Pocket Network that shows relay counts, provider-side revenue, and what Avi calls high-growth opportunities: chains where few providers are active relative to demand. At the time of the episode, BNB was the standout, with high relay volume and few competing providers.

Hand-Crafted for the Chain

The hardware argument Marco Rinaldi makes is straightforward. Cloud infrastructure was not built for blockchains. Network-attached storage adds latency that locally connected drives avoid. Consumer hardware, selected carefully and assembled with a specific chain in mind, can outperform rented cloud servers on the metrics that matter for node operation.

Marco described the approach in detail:

"This is the sixth year that I do this, so I have a package, I tried all the drives, every possible combination so that now I can select perfectly the right hardware not for the machine but for the blockchain. Like I know that Ethereum has slower block times, so I can go with slower drives. I know that Base or maybe Arbitrum has sub-one-second block times, so it's hand-crafted for the chain."

The cost structure follows from this. Kleomedes currently prices its RPC service at $1 per million relays, compared to the $6 to $10 range Avi cited for other providers. Marco noted that this price is still profitable. The bigger constraint right now is not margins but hardware availability. AI infrastructure demand has tightened the market for the specific RAM and drives Kleomedes relies on, limiting how quickly the operation can grow.
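The gap between those price points compounds quickly at volume. A back-of-the-envelope comparison, using a hypothetical monthly relay count purely for illustration:

```python
# Per-million-relay prices cited in the episode; volume is an assumption.
PRICE_PER_MILLION = {"kleomedes": 1.00, "typical_low": 6.00, "typical_high": 10.00}

def monthly_cost(relays_per_month, price_per_million):
    return relays_per_month / 1_000_000 * price_per_million

volume = 500_000_000  # hypothetical 500M relays/month
costs = {name: monthly_cost(volume, p) for name, p in PRICE_PER_MILLION.items()}
print(costs)  # $500 at Kleomedes' rate vs. $3,000-$5,000 at the cited range
```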

The scaling model itself is deliberately distributed. Rather than building one large data center, Marco has been adding smaller data pods across different locations so that if one goes offline, others absorb the load. In six months of running RPC services, the only brief outage was caused by a Cloudflare dependency that has since been removed.

Expanding beyond Europe to Asia is the next target. Running a competitive node for Hyperliquid, for instance, effectively requires being physically located in Tokyo, and renting servers in Asia carries a steep premium that makes the hardware scarcity problem even harder to work around.
