From Chips to Chains: Nvidia’s Rong Sees a Future in Decentralized AI
Anthony Rong, Nvidia VP, shares why he believes HBAR and Verifiable Compute will redefine AI trust—from chip to chain—and what this shift means for the future of secure, decentralized intelligence.
On this page
- The Trust Problem in AI
- A December Blueprint: Verifiable Compute
- A New Kind of AI Infrastructure
- AI With a Notary Stamp
- Why Hedera?
- A Different Kind of Blockchain
- Built for Enterprise Trust
- Compliance, Provenance, and Global Scale
- The Hardware Heart: Intel and Nvidia
- From R&D to Real-World Use
- Governments Were the First to Move
- Enterprise Rollout Begins
- Security Risks Are Rising
- From Detection to Certification
- A New Standard for AI Governance
- From Chips to Chains
- A Future You Can Verify
On April 7, 2025, Anthony Rong, Regional Vice President of Engineering at Nvidia, dropped a quiet bombshell on LinkedIn. “NVIDIA just announced the integration of Hedera Hashgraph (HBAR) into its AI system,” he wrote. “A powerful combination that could reshape the future of decentralized AI and enterprise blockchain applications.”
Coming from one of the world’s leading figures in AI hardware, it wasn’t just a headline. It was a signal. AI and blockchain, previously seen as parallel paths or even competing visions, are converging. And according to Rong, the future isn’t just smart.
It’s decentralized, transparent, and secure.
The Trust Problem in AI
In recent years, AI has grown more powerful, more autonomous, and more opaque. Deep learning models now write code, evaluate medical scans, and generate images indistinguishable from reality.
But these breakthroughs come with a growing challenge: trust.
How can we verify that an AI’s output is accurate? That it wasn’t tampered with, biased, or secretly trained on sensitive data? For enterprises and governments racing to adopt AI, these questions are no longer theoretical—they’re existential.
That’s where Hedera Hashgraph and a quiet but strategic alliance with Nvidia and Intel enter the picture.
A December Blueprint: Verifiable Compute
A New Kind of AI Infrastructure
The seeds of Nvidia’s announcement trace back to December 18, 2024, when Hedera, EQTY Lab, Intel, and Nvidia jointly revealed Verifiable Compute — a framework designed to add provable trust to AI systems, all the way down to the silicon level.
Developed over two years with guidance from advisors at Stanford and MIT, Verifiable Compute introduces a cryptographic certification system that verifies AI operations—training, inference, benchmarking—at runtime. These proofs are rooted in secure zones inside modern hardware, known as Trusted Execution Environments (TEEs), and notarized using Hedera’s public ledger.
AI With a Notary Stamp
Think of it as a notary public for your AI model, ensuring every piece of data, every computation, and every decision it makes can be independently verified and timestamped.
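For readers who want a concrete picture, the short Python sketch below shows the bare mechanics of that notary idea: hash what went in, hash what came out, timestamp the pair, and hand the record to a notarization service. The record_on_ledger function is a hypothetical placeholder rather than Hedera’s actual API, and the field names are invented for illustration.

```python
import hashlib
import json
import time

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that uniquely identifies this payload."""
    return hashlib.sha256(data).hexdigest()

def record_on_ledger(record: dict) -> None:
    """Hypothetical stand-in for submitting a record to a public ledger
    (for example, a Hedera Consensus Service topic)."""
    print("notarized:", json.dumps(record))

def notarize_inference(model_id: str, prompt: bytes, output: bytes) -> dict:
    """Build a timestamped, hash-only record of one inference call."""
    record = {
        "model": model_id,
        "input_hash": fingerprint(prompt),
        "output_hash": fingerprint(output),
        "timestamp": time.time(),
    }
    record_on_ledger(record)
    return record

# Example: notarize a single (hypothetical) inference.
notarize_inference("demo-model-v1", b"What is the weather?", b"Sunny, 21C")
```

Note that only hashes and metadata leave the machine; the prompt and output themselves never touch the ledger, which is what makes the pattern workable for sensitive data.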
As a new era of autonomous AI agents emerges, we must evolve our trust in AI systems. Verifiable Compute transforms how organizations enforce AI governance, automate auditing, and collaborate to build safer and more valuable AI,
said Jonathan Dotan, founder of EQTY Lab.
Why Hedera?
A Different Kind of Blockchain
Unlike traditional blockchains, Hedera Hashgraph is designed for high-throughput, low-latency transactions. It’s also carbon-negative—a critical factor for energy-conscious enterprises. And perhaps most importantly, it’s governed by a council of global tech and business leaders, including Dell, Google, IBM, and Boeing.
Built for Enterprise Trust
This makes Hedera attractive for enterprise-grade use cases, where performance and trust matter more than hype. With Verifiable Compute, Hedera’s Consensus Service becomes the digital backbone of AI auditing. Every AI operation—from data input to output—is logged, timestamped, and permanently stored on the ledger.
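To make the idea of an immutable, ordered audit trail concrete, here is a minimal sketch of a hash-chained log of AI pipeline events. It illustrates the general pattern of tamper-evident logging, not Hedera’s Consensus Service itself, and the event fields are invented for the example.

```python
import hashlib
import json
import time

class AuditTrail:
    """Minimal hash-chained log: each entry commits to the previous one,
    so any later tampering breaks the chain."""

    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value

    def log(self, event: dict) -> str:
        entry = {
            "event": event,
            "timestamp": time.time(),
            "prev_hash": self.last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append((digest, entry))
        self.last_hash = digest
        return digest

# Hypothetical pipeline events, logged in order.
trail = AuditTrail()
trail.log({"stage": "data_input", "dataset": "claims_2024.parquet"})
trail.log({"stage": "inference", "model": "risk-scorer-v3"})
trail.log({"stage": "output", "decision": "approve"})
```

In a production deployment the chain head would be anchored to a public ledger so that no single party can rewrite history; the in-memory list above simply shows the linking.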
Compliance, Provenance, and Global Scale
This immutable trail doesn’t just offer transparency. It allows for real-time regulatory compliance, forensic auditing, and provable security—features that are already in demand across industries like:
- finance,
- healthcare,
- supply chain,
- autonomous systems.
The Hedera Consensus Service lets us anchor trust directly in silicon. It gives us a way to extend security and compliance across global jurisdictions,
Dotan added.
The Hardware Heart: Intel and Nvidia
Verifiable Compute isn’t a blockchain-first solution. It’s hardware-first, rooted in the chips that power modern AI.
At the core of the system are Intel’s 5th Gen Xeon processors with Trust Domain Extensions (TDX) and Nvidia’s H100/H200 GPUs, soon to be joined by its next-generation Blackwell architecture. These processors provide confidential computing environments that isolate sensitive operations from potential threats.
The true potential of AI won’t be fully realized until we can verify every component in the stack. Securing the trust boundary in the processor sets a standard for next-generation AI workloads to be cryptographically secure and verifiable,
said Michael O'Connor, Chief Architect for Confidential Computing at Nvidia.
Anand Pashupathy, VP at Intel, echoed that sentiment:
EQTY Lab provides another level of trust to the confidential computing ecosystem. Adding Verifiable Compute helps companies enhance security, privacy, and accountability of their AI solutions.
From R&D to Real-World Use
Governments Were the First to Move
What makes Verifiable Compute more than a lab experiment is its early adoption by enterprises and public agencies. Since mid-2023, EQTY Lab has worked with over a dozen government entities across EMEA, including the UAE’s AI ministry, which backed the development of the ClimateGPT model—an early use case built on Verifiable Compute.
Enterprise Rollout Begins
By Q1 2025, the framework began rolling out to clients across finance, life sciences, and media—sectors where AI accountability is mission-critical.
Security Risks Are Rising
And the timing couldn’t be more urgent. According to recent studies, 91% of organizations have experienced supply chain attacks on traditional software systems. With AI models increasingly automating high-stakes decisions, the risks—from data poisoning to model theft and privacy breaches—are growing.
From Detection to Certification
Verifiable Compute addresses this head-on. It detects vulnerabilities, ensures runtime compliance, and even halts AI workflows if security conditions aren’t met. If policies are satisfied, the system issues a cryptographically verifiable certificate of compliance, viewable in any browser or audit system.
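The pattern is easy to sketch: evaluate policies against an attestation, halt if anything fails, and otherwise run the workload and emit a certificate that commits to the result. The policy names, attestation fields, and certificate format below are hypothetical stand-ins, not the actual Verifiable Compute interface.

```python
import hashlib
import json
import time

def evaluate_policies(attestation: dict, policies: dict) -> list:
    """Return the names of any policies the attestation fails to satisfy."""
    failures = []
    if policies.get("require_tee") and not attestation.get("tee_verified"):
        failures.append("require_tee")
    if attestation.get("model_hash") not in policies.get("approved_models", []):
        failures.append("approved_model")
    return failures

def run_with_compliance_gate(attestation: dict, policies: dict, workload):
    """Halt the workflow on any policy failure; otherwise run it and
    issue a hash-based certificate of compliance."""
    failures = evaluate_policies(attestation, policies)
    if failures:
        raise RuntimeError(f"workflow halted, failed policies: {failures}")
    result = workload()
    certificate = {
        "attestation": attestation,
        "result_hash": hashlib.sha256(repr(result).encode()).hexdigest(),
        "issued_at": time.time(),
    }
    certificate["certificate_id"] = hashlib.sha256(
        json.dumps(certificate, sort_keys=True).encode()
    ).hexdigest()
    return result, certificate

# Example usage (all values hypothetical):
attn = {"tee_verified": True, "model_hash": "abc123"}
pol = {"require_tee": True, "approved_models": ["abc123"]}
result, cert = run_with_compliance_gate(attn, pol, lambda: "inference output")
```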
A New Standard for AI Governance
This isn’t just a technical upgrade. It’s a paradigm shift.
By combining hardware-enforced security with ledger-based transparency, Verifiable Compute allows enterprises to govern AI in real time, across borders, and in alignment with evolving regulatory standards like the EU AI Act.
Companies no longer have to choose between speed and safety, or between innovation and oversight. With Nvidia now integrating Hedera into its AI systems, these ideas are no longer theoretical—they’re becoming industry defaults.
This is a game changer. Verifiable trust in AI-generated outputs. Decentralized data provenance at scale. Ultra-low latency and carbon-negative architecture,
said Rong in his LinkedIn post.
From Chips to Chains
Rong’s phrasing—“the future of AI is not just smart, it’s decentralized”—marks a broader shift in how we think about intelligence itself.
AI, as it becomes more autonomous, needs boundaries. Not to slow it down, but to make it safe, auditable, and usable at scale. Blockchain, long dismissed as a niche or speculative technology, is proving to be a key ingredient in providing those boundaries.
And while the phrase “blockchain meets AI” has been thrown around for years, this time it’s different. It’s not about speculation. It’s about certificates, compliance, chips, code—and real accountability.
A Future You Can Verify
By the end of 2025, Verifiable Compute is expected to be fully integrated across Nvidia’s confidential computing initiatives, and industry analysts project the confidential computing market to reach $184.5 billion by 2032.
The world is entering an era where AI agents will interact with humans, make financial decisions, recommend healthcare protocols, and enforce legal norms. Without trust, this era will collapse under its own weight.
But with tools like Verifiable Compute, backed by players like Nvidia, Intel, and Hedera, we’re building that trust at the infrastructure level.
From chips to chains, the next evolution of AI isn’t just about intelligence. It’s about integrity.