Decentralized artificial intelligence (AI) network Ritual came out of stealth mode to announce a $25 million Series A financing round led by Archetype. The company offers AI-powered infrastructure that aims to execute complex logic currently infeasible for smart contracts.
While AI adoption continues to rise across business verticals, issues such as high compute costs, limited hardware access and centralized APIs hold back the full potential of the current AI stack. As explained in Ritual’s introductory post:
“The grand vision for Ritual is to become the schelling point of AI in the web3 space by evolving Infernet into a modular suite of execution layers that interop with other base layer infrastructure in the ecosystem, allowing every protocol and application on any chain to use Ritual as a AI Coprocessor.”
The introduction of such AI models across crypto, from base layer infrastructure to applications, can unlock new use cases, such as automatically managing risk parameters for lending protocols based on real-time market conditions.
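As a rough illustration only, the minimal Python sketch below shows how a model’s volatility forecast might drive a lending protocol’s loan-to-value (LTV) ceiling. Every name, threshold and formula here is a hypothetical assumption for the sake of the example, not Ritual’s actual design:

```python
# Hypothetical sketch: adjusting a lending protocol's loan-to-value (LTV)
# ceiling from a model's volatility forecast. All names and thresholds are
# illustrative assumptions, not Ritual's actual API or parameters.

def adjust_ltv(current_ltv: float, predicted_volatility: float) -> float:
    """Lower the LTV ceiling when forecast volatility rises and raise it
    when markets calm, moving gradually toward a target value."""
    if predicted_volatility > 0.8:      # high-risk regime
        target = 0.50
    elif predicted_volatility > 0.5:    # moderate risk
        target = 0.65
    else:                               # calm markets
        target = 0.80
    # Move at most 5 percentage points per update to avoid abrupt shocks.
    step = max(min(target - current_ltv, 0.05), -0.05)
    return round(current_ltv + step, 4)

if __name__ == "__main__":
    ltv = 0.75
    for vol in (0.3, 0.6, 0.9):  # simulated model outputs
        ltv = adjust_ltv(ltv, vol)
        print(f"forecast volatility {vol:.1f} -> new LTV ceiling {ltv:.2f}")
```

The clamped step size reflects a common design choice in such systems: parameter changes are bounded per update so a noisy model output cannot shock borrowers.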
Ritual’s protocol diagram shows modular execution layers built around AI models. The GMP layer, consisting of layer-1 chains, rollups and sovereign chains, “facilitates interop between existing blockchains and Ritual Superchain, which functions as an AI coprocessor for all blockchains.”
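A coprocessor pattern of this kind generally works by having a contract emit a request that off-chain nodes fulfill and post back. The following self-contained Python sketch of that request/callback flow is an assumption about the general pattern; the class and method names are hypothetical and do not reflect Ritual’s real interfaces:

```python
# Illustrative request/callback flow for an AI coprocessor. All names here
# are hypothetical and do not reflect Ritual's real interfaces.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class InferenceRequest:
    request_id: int
    model: str                              # identifier of the model to run
    payload: dict                           # inputs from the on-chain caller
    callback: Callable[[int, dict], None]   # invoked with the result

@dataclass
class Coprocessor:
    """Stands in for off-chain nodes that watch for requests, run the
    model and deliver results back to the requesting chain."""
    queue: list = field(default_factory=list)

    def submit(self, req: InferenceRequest) -> None:
        self.queue.append(req)  # analogous to emitting an on-chain event

    def process(self) -> None:
        while self.queue:
            req = self.queue.pop(0)
            result = {"model": req.model, "score": 0.42}  # placeholder inference
            req.callback(req.request_id, result)          # "posts back" on-chain

def on_result(request_id: int, result: dict) -> None:
    print(f"request {request_id} fulfilled: {result}")

cop = Coprocessor()
cop.submit(InferenceRequest(1, "risk-model-v1", {"market": "ETH"}, on_result))
cop.process()
```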
Investors including Balaji Srinivasan, Accomplice, Robot Ventures, Accel, Dialectic, Anagram, Avra and Hypersphere also participated in the round. The funding will be used to grow Ritual’s developer network and begin seeding the network.
Meanwhile, the vagueness of the Biden administration’s recent executive order on AI safety has raised concerns in the AI community that it could stifle innovation.
“My Executive Order on AI is a testament to what we stand for: Safety, security, and trust.” — President Biden (@POTUS), October 31, 2023
The order established six new standards for AI safety and security, including sweeping mandates such as requiring companies developing “any foundation model that poses a serious risk to national security, national economic security, or national public health and safety” to share the results of safety tests with officials, and “accelerating the development and use of privacy-preserving techniques.”