We’re proud of our track record of delivering on our roadmap goals on time and on target, something that is especially tough in web3. Unlike many other teams, we only announce features once we are confident that we will build them and how we will deliver them. As a result, it’s a big event when we announce and commit to a new roadmap.

Since our last roadmap article in January 2024, we have successfully expanded our offering to more broadly address the infrastructure centralisation issues in web3 by adding globally distributed and decentralised RPCs. We’ve delivered everything we set out to achieve in the first half of 2024, so now it’s time to reveal what the rest of the year holds for SubQuery.

Our goal has always been to pioneer fast, flexible, and scalable decentralised infrastructure. With this in mind, the next industry for which we will provide a decentralised alternative is Artificial Intelligence (AI) inference, and our roadmap reflects this.

So how will this look in practice? Below is an outline of the SubQuery Network technical roadmap for the rest of 2024.

Phase 1: Decentralised and Scalable AI Inference

As demonstrated at the Web3 Summit, the SubQuery Network will support productionised inference hosting of the world's leading open-source AI models.

What this means is that anyone will be able to publish their own model to the network (using standardised serialisation formats like GGUF and GGML). Node Operators in the network will then be able to take these models and host them on their infrastructure. They will be able to serve real-world AI inference traffic for customers and be rewarded in SQT tokens.
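As context for what a standardised model format buys the network: a GGUF file begins with a small fixed binary header that any Node Operator can validate before hosting a model. A minimal, dependency-free sketch of such a check, assuming the GGUF v3 header layout (little-endian magic, version, tensor count, metadata key/value count) and with illustrative function names:

```python
import struct

GGUF_MAGIC = b"GGUF"  # 4-byte magic at the start of every GGUF file

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header: magic, u32 version,
    u64 tensor count, u64 metadata key/value count (little-endian)."""
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    version, tensor_count, kv_count = struct.unpack_from("<IQQ", data, 4)
    return {"version": version, "tensors": tensor_count, "metadata_kvs": kv_count}

# Build a minimal synthetic header purely for illustration.
header = GGUF_MAGIC + struct.pack("<IQQ", 3, 291, 24)
print(read_gguf_header(header))  # prints {'version': 3, 'tensors': 291, 'metadata_kvs': 24}
```

A check like this is only the first step of model availability; real validation would also walk the metadata and tensor tables that follow the header.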

Deliverables:

  • Anyone can publish an LLM model to the network using standardised model formats such as GGUF or GGML.
  • Complete public release of a Node Operator coordinator version compatible with any LLM project.
  • Ensure model availability checks are in place.
  • Ensure AI projects are viewable and queryable in the SubQuery Network Explorer and Network Application.

Phase 2: Sustainable AI Pricing

To make it easy to pay for these AI workloads, and to ensure Node Operators are sufficiently rewarded for their hard work, we will release a new token-based pricing model that charges users based on input and output tokens, the standard approach familiar across the AI industry. This brings transparency, fairness, and simplicity to decentralised AI pricing.
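To make the model concrete, here is a toy sketch of token-based metering. The per-token prices are invented for illustration; on the real network they would be set by each Node Operator's plans and agreements:

```python
from decimal import Decimal

# Illustrative per-token prices in SQT (not real network prices).
PRICE_PER_INPUT_TOKEN = Decimal("0.000002")
PRICE_PER_OUTPUT_TOKEN = Decimal("0.000006")

def query_cost(input_tokens: int, output_tokens: int) -> Decimal:
    """Cost of one inference request under a token-based pricing model:
    prompt tokens and completion tokens are metered at separate rates."""
    return (input_tokens * PRICE_PER_INPUT_TOKEN
            + output_tokens * PRICE_PER_OUTPUT_TOKEN)

print(query_cost(1_000, 500))  # prints 0.005000
```

Output tokens are typically priced higher than input tokens because generation dominates inference cost, which the two rates above reflect.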

Deliverables:

  • A new token-based pricing model, and the ability to create Flex plans and closed agreements using this payment model.

Phase 3: AI Developer Tools

In Phase 3, we’re going to focus on developer tooling that makes SubQuery the default web3 AI workspace to build in. By investing in better tooling, it will be easier to create, manage, and deploy AI projects, and along the way we will foster a community of decentralised AI enthusiasts sharing and collaborating on ideas.

This will mean deep integration with existing AI services and communities, for example HuggingFace. It will also mean completing key usability enhancements for the network that make it easier to share and distribute models and to restore historic conversations with them.

Deliverables:

  • One-click publishing from HuggingFace.
  • Implementation of proof of indexing.
  • Restore encrypted conversation history across different Node Operators, signed by wallet addresses.
  • Implementation of proof of inference.
  • Boosting capabilities for AI projects.
  • Deeper integration into the network app to answer questions for Node Operators and delegators on maximising APY.
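One of the deliverables above is restoring conversation history across different Node Operators, signed by wallet addresses. A dependency-free sketch of the sign-and-verify flow is below; real wallets would sign with ECDSA, so the HMAC key here is just a stand-in, and all names are illustrative:

```python
import hashlib
import hmac
import json

# Stand-in for a wallet's signing key; real wallet signatures are ECDSA,
# but HMAC keeps this sketch dependency-free.
wallet_secret = b"demo-wallet-secret"

def sign_history(messages: list, key: bytes) -> dict:
    """Serialise a conversation deterministically and attach a signature,
    so any Node Operator can verify it belongs to this wallet."""
    payload = json.dumps(messages, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": sig}

def verify_history(record: dict, key: bytes) -> bool:
    """Recompute the signature over the stored payload before restoring it."""
    expected = hmac.new(key, record["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = sign_history([{"role": "user", "content": "hello"}], wallet_secret)
print(verify_history(record, wallet_secret))  # prints True
```

Because verification only needs the payload and the signature, any Node Operator can restore a conversation without trusting the operator that originally hosted it.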

Phase 4: Decentralised AI RAG

Custom-trained AI agents are great, but when you combine them with your own up-to-date, real-time data, they become truly insightful.

Retrieval-Augmented Generation (RAG) means that users will be able to supercharge standard AI models with personal or custom data in a privacy-focussed way. You do this by providing additional information to the model, and the model will be able to understand, reference, and cite that data.

For example, you could ask an AI agent to distil information from a list of your historical transactions; you could ask it to analyse an essay you have written and provide spelling advice; you could even provide the entire SubQuery documentation and get a custom AI agent that is an expert in building with SubQuery’s indexing SDK.
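The core of RAG is the retrieval step: rank the user's published inputs by relevance to the question, then prepend the best matches to the prompt so the model can reference and cite them. A toy sketch of that step, using naive keyword overlap in place of real embeddings, with illustrative names and data:

```python
def score(chunk: str, question: str) -> int:
    """Naive relevance score: count question words appearing in the chunk.
    A real system would use embedding similarity instead."""
    words = set(question.lower().split())
    return sum(1 for w in chunk.lower().split() if w in words)

def build_rag_prompt(question: str, chunks: list, top_k: int = 2) -> str:
    """Retrieve the most relevant chunks and prepend them as numbered,
    citable context ahead of the user's question."""
    ranked = sorted(chunks, key=lambda c: score(c, question), reverse=True)
    context = "\n".join(f"[{i + 1}] {c}" for i, c in enumerate(ranked[:top_k]))
    return f"Context:\n{context}\n\nQuestion: {question}"

docs = [
    "SubQuery projects are defined in a project manifest file.",
    "Delegators stake SQT to Node Operators.",
    "The indexing SDK supports over 200 networks.",
]
print(build_rag_prompt("How many networks does the indexing SDK support?", docs))
```

The assembled prompt, not the raw dataset, is what reaches the model, which is why RAG inputs can stay in decentralised storage and be fetched only at query time.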

Deliverables:

  • Consumers can publish their own RAG inputs (e.g. a text file or a JSON dataset) to the network which will be stored in IPFS or some other decentralised storage service.
  • Consumers will be able to associate these RAG inputs with one or many AI agents.
  • Consumers can make queries to the AI agents and directly reference the RAG inputs to create personalised AI models for specific business needs.
  • We will develop a pricing model to support RAG integration.

Phase 5: Dynamic AI Agents

The previous phase hints at what a decentralised AI agent can do with tight integration into the web3 ecosystem, but this is where we take things to the next level.

In short, we will deeply integrate SubQuery’s AI agents with our data indexer, allowing you to run queries and workloads directly across freshly indexed blockchain data from over 200 different networks. Imagine an AI agent that can automatically analyse and explain transactions as soon as they are written to the blockchain.

This milestone will also have the secondary effect of making the data easier to understand and access for the average user. You will no longer need coding or data engineering experience to analyse large datasets; the AI agent will be able to answer questions about, and explain, large datasets and complex queries in simple, plain English (or whatever language you prefer to converse in).
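The data-access pattern described above can be sketched without any AI at all: the agent sits over freshly indexed rows and turns them into a plain-English answer. Here the "agent" is stubbed as a template over sample transfer records, purely to illustrate the shape of the flow; all names and data are invented:

```python
# Freshly indexed transfer records, shaped like rows an indexer might expose.
transfers = [
    {"block": 1001, "from": "alice", "to": "bob", "amount": 25},
    {"block": 1002, "from": "bob", "to": "carol", "amount": 10},
    {"block": 1002, "from": "alice", "to": "carol", "amount": 5},
]

def explain_activity(rows: list, account: str) -> str:
    """Turn raw indexed rows into the kind of plain-English answer an
    AI agent could give, without the user writing any GraphQL."""
    sent = sum(r["amount"] for r in rows if r["from"] == account)
    received = sum(r["amount"] for r in rows if r["to"] == account)
    involved = [r for r in rows if account in (r["from"], r["to"])]
    return (f"{account} sent {sent} and received {received} tokens "
            f"across {len(involved)} transfers.")

print(explain_activity(transfers, "alice"))
# prints: alice sent 30 and received 0 tokens across 2 transfers.
```

In the actual milestone, an LLM would both choose which indexed data to fetch and phrase the answer; the template here stands in for that generation step.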

Deliverables:

  • Consumers can query through a pre-trained AI agent directly to their existing SubQuery Indexer SDK projects.
  • The AI agent will have direct access to data that is indexed, as soon as it’s indexed.
  • The AI agent will be able to suggest changes or improvements to the GraphQL queries of those users who elect to still use GraphQL.

And So Much More...

While we’re excited to revolutionise AI inference hosting, we remain ambitious about finessing and elevating our existing products. Below are our major goals for RPCs, indexing, and the core product in 2024.

RPCs

  • SubQuery Sharded Data Node
  • Constant drive to improve RPC Performance in the SubQuery routing gateway
  • Advanced rate limiting features for RPCs
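Rate limiting in an RPC gateway is commonly built on a token bucket: requests drain tokens, tokens refill at a steady rate, and short bursts up to the bucket's capacity are tolerated. A minimal sketch of that technique (not SubQuery's actual gateway implementation; the class and parameters are illustrative):

```python
import time

class TokenBucket:
    """Token-bucket limiter of the kind an RPC gateway might apply per
    consumer: refills `rate` tokens per second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)  # start full, so bursts work immediately
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=2)
print([bucket.allow() for _ in range(3)])  # burst of 2 allowed, third rejected
```

Because refill is computed lazily from elapsed time, the limiter needs no background timer, which keeps per-consumer state in a gateway cheap.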

Indexing

  • Support GraphQL Subscriptions on the Network
  • Support Multichain Indexing on the Network

Core Product

  • Mobile App Support for delegators
  • Consumer Friendly Pricing Model 
  • Allow an external network actor to sponsor queries to a project deployment, e.g. sponsor an RPC or a data indexer and give free queries or additional rewards to all users.
  • Automated dispute process and resolution

We appreciate our incredible community, which has been with us right from the start, and we welcome new members! Follow us on Twitter, join our Discord and Telegram, and help keep us accountable and on track with the milestones in our roadmap above.

About SubQuery

SubQuery Network is innovating web3 infrastructure with tools that empower builders to decentralise the future - without compromise. Our flexible DePIN infrastructure network powers the fastest data indexers, the most scalable RPCs, innovative Data Nodes, and leading open source AI models. We are the roots of the web3 landscape, helping blockchain developers and their cutting-edge applications to flourish. We’re not just a company - we’re a movement driving an inclusive and decentralised web3 era. Let’s shape the future of web3, together. 

Linktree | Website | Discord | Telegram | Twitter | Blog | Medium | LinkedIn | YouTube
