In December 2023, SubQuery ceased being “just an indexer” and committed to pioneering a complete web3 infrastructure revolution. Today, we continue this mission by launching a transformative update aimed at enhancing the hosting and inference capabilities of Artificial Intelligence (AI) models, with a particular focus on large language models (LLMs). This innovative upgrade uses a community-driven approach to bring about a more accessible, cost-effective, and flexible AI ecosystem.

Not to mention, we’re doing it all with a brand new bold look!

Welcome to the AI era, same as the old era

Artificial intelligence (AI) and large language models (LLMs) like GPT and Llama are transforming industries by automating repetitive tasks and generating content efficiently. Trained on vast datasets, these AI models understand and produce human-like text, making sophisticated tools and interactions accessible, and improving efficiency across various sectors.

Over the last year, the world has woken up to AI's potential to transform how we approach everything we do, across all industries, including web3. The integration of AI with web3 technologies could foster secure, transparent, and user-centric experiences, supporting decentralised autonomous organisations (DAOs) and smarter decentralised products.

However, like previous technology shifts, the centralisation of AI technologies by a few massive corporations is starting to present significant issues, including data privacy concerns, monopolistic control over AI advancements, and limited access to AI benefits for smaller entities. This concentration of power contradicts the democratising potential of AI and often limits innovation to areas that serve corporate interests.

Web3, with its emphasis on decentralisation, has always aimed to combat this. AI should be no different: by building AI on web3 foundations, we can decentralise AI applications, take control away from massive corporations, and promote an ecosystem where AI is accessible to everyone.

SubQuery’s Revolution for AI

SubQuery is the decentralised infrastructure network for web3. While we already provide developers with decentralised data indexing and RPC alternatives, our next goal is to power production-ready, decentralised AI agents.

We will focus on inference, as opposed to training. Inference involves using an existing trained model to make predictions on new data, such as answering user queries. It is less computationally demanding, suitable for long-term hosting, and can be supported by a wider range of hardware. Although some commercial services provide inference hosting for custom models, there are few in web3. This is why there’s a huge opportunity for the SubQuery network to lead the industry today.

Current Market State for Running Production Inference Services

Few commercial services provide inference hosting for custom models due to the nascent state of the industry, even less so in web3. Most teams run their own hardware or spend significant time on DevOps for their servers. The challenges include:

  • High Costs: Running inference at scale with cloud providers is expensive, requiring substantial investment in hardware and services. Solutions like AWS SageMaker and Google Cloud AI are pricey and may entail extra costs for data transfer, storage, and management.
  • Complex Setup and Maintenance: Setting up a production environment for machine learning models involves complicated configurations, managing servers, load balancing, and ensuring high availability and scalability. MLOps is tough!
  • Limited Model Support and Regional Availability: Popular platforms support a range of models, but often the latest or custom models are not supported, and some services and features are not available in all regions.
  • Dependency on Centralised Providers: Current solutions are centralised and often encourage vendor lock-in and dependence on specific provider tools. This leads to pricing and privacy concerns with sensitive data.

How are we going to do it?

We have a very clear vision for what is important to us when creating the best platform for AI and LLM production inference hosting. This vision stems from four strongly held principles that will guide us.

A Focus on Open-Source

We aim to support standardised model formats like GGUF and GGML for model storage and distribution. This will make the SubQuery Network compatible with a wide range of machine learning frameworks and enable easy sharing.

But we plan to take it even further. Hugging Face is the industry-leading repository for open-source models and home to a large developer community. It provides a wide variety of pre-trained models and is widely used by the community to submit and share models. We are going to look at how we can integrate SubQuery directly with Hugging Face to make the deployment of models truly one-click.
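Part of what makes GGUF practical for distribution is that it is a single-file, self-describing container: every file starts with a small fixed header that any tool can read to discover what is inside. As an illustrative sketch (not SubQuery code), here is a minimal reader for that fixed header, tested against a synthetic in-memory header rather than a real model file:

```python
import struct
from io import BytesIO

GGUF_MAGIC = b"GGUF"

def read_gguf_header(stream):
    """Read the fixed GGUF header: magic, version, tensor count, metadata KV count."""
    magic = stream.read(4)
    if magic != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    # Little-endian: version (uint32), tensor_count (uint64), metadata_kv_count (uint64)
    version, tensor_count, kv_count = struct.unpack("<IQQ", stream.read(20))
    return {"version": version, "tensor_count": tensor_count, "metadata_kv_count": kv_count}

# Synthetic header for illustration: version 3, no tensors, no metadata entries.
header = read_gguf_header(BytesIO(GGUF_MAGIC + struct.pack("<IQQ", 3, 0, 0)))
print(header)
```

Because all the metadata a runtime needs (architecture, tokeniser, quantisation) lives in the file itself, a node operator can serve any GGUF model without framework-specific packaging.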

Industry Standard API

We will allow people to prompt AI models through the SubQuery Network via OpenAI's standardised API, which is becoming the de facto interface for querying inference services.
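To sketch what that compatibility means in practice, the snippet below builds the standard OpenAI-style chat-completions request shape. The base URL, model name, and API key are placeholders for illustration, not real SubQuery values:

```python
import json

# Hypothetical values: the real SubQuery gateway URL, supported model names,
# and auth scheme are placeholders here.
BASE_URL = "https://gateway.example/v1"
API_KEY = "your-api-key"

def chat_completion_request(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style /chat/completions request."""
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = chat_completion_request("llama-3-8b-instruct", "What is decentralised inference?")
print(req["url"])
```

Because the request shape matches OpenAI's, existing tooling (for example, the official `openai` SDK, which accepts a custom `base_url`) can be pointed at any compatible endpoint without code changes.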

Decentralised but accessible payment

We plan to implement token-based pricing, similar to the rest of the industry, but settled in a decentralised way with SQT. This aims to bring more utility to SQT while keeping a familiar pricing model that allows direct comparison between centralised alternatives and the SubQuery Network.

Contribute to the AI community along the way

We aim to build a community of developers, researchers, and node operators that will contribute to the network's growth, provide support, share best practices, and innovate in the area of decentralised AI. We aim to position SubQuery as the de facto leader in this space.

This means we will establish a support system to help users and node operators troubleshoot issues, update models, and ensure smooth operations across the network. All this will be accomplished while adhering to international standards and regulations regarding data privacy and model governance. This is crucial, especially when handling sensitive or personal data.

Why can SubQuery do this better?

SubQuery can address the issues posed by the status quo through its universal decentralised infrastructure stack. Our existing pricing models, Node Operator management contracts, and network monitoring stack make the SubQuery network suitable for running and hosting production inference workloads. Benefits include:

  • Reliability through Decentralisation: By offering a decentralised network where anyone can serve as a Node Operator, SubQuery reduces dependence on centralised cloud providers, potentially lowering costs and increasing fault tolerance.
  • Flexible Model Support: The SubQuery Network allows users to submit their models, supporting a broader range of LLMs beyond what is typically available in managed services.
  • Simplified AI Access: SubQuery simplifies the process for AI model developers, allowing them to focus on model training and optimisation without worrying about deployment complexities.
  • Standardisation with OpenAI-Compatible APIs: Using a standardised query interface ensures ease of integration and broader consistency across different providers and nodes.
  • Cost Efficiency: Decentralised hosting could offer more competitive pricing due to the diversity of node providers and reduced overhead costs associated with large data centres.
  • Enhanced Performance and Reliability: Optimised node systems in regions closer to users can potentially provide better performance through lower latency and faster response times, crucial for real-time applications.

The SubQuery network aims to make deploying and using LLMs more accessible and efficient, facilitating a more collaborative and community-driven approach to machine learning infrastructure. This should excite anyone interested in leveraging AI technologies, from startup founders to enterprise executives, highlighting the potential for revolutionary change in how AI services are delivered and consumed.

Brand New Look

As we expand into the Artificial Intelligence space, we felt a new look was warranted to match our new pursuit. As of today, we've adopted a completely new look with sleek chromatic details and a flower-and-root narrative.

The flower and root theme symbolises SubQuery’s commitment to fostering innovation and growth within the blockchain community. The roots represent the deep, stable, and reliable infrastructure that SubQuery provides, while the flowers symbolise the diverse and vibrant dApps that developers can create with the help of this infrastructure.

The new branding is more than just a visual refresh. The vibrant colours and sleek shapes of the flower and roots convey a sense of growth and possibility. That’s why we’ve chosen to change our branding alongside our new venture into decentralised hosting of AI inference.

Summary

The SubQuery Network aims to revolutionise the AI landscape by making LLMs more accessible, efficient, and collaborative to deploy and use, benefiting a wide range of stakeholders from startup founders to enterprise executives.

We’ve already delivered breakthroughs in decentralised data indexing and RPCs, and while we continue to enhance these services, our next steps will focus on pioneering change in AI. Together, these steps will help unlock the next level of performance in web3.

Go check out our website to see our vibrant and innovative new branding that will see us into this new exciting phase of our mission.

Pioneering fast, flexible, and scalable decentralised infrastructure, we will help power web3's transition to an open, efficient and user-centric future. Join us in this journey. With SubQuery, it's not just about building for today, but architecting a decentralised, inclusive future.

Let's shape the future of web3, together.

About SubQuery

SubQuery Network is innovating web3 infrastructure with tools that empower builders to decentralise the future - without compromise. Our flexible DePIN infrastructure network powers the fastest data indexers, the most scalable RPCs, innovative Data Nodes, and leading open-source AI models. We are the roots of the web3 landscape, helping blockchain developers and their cutting-edge applications to flourish. We’re not just a company - we’re a movement driving an inclusive and decentralised web3 era. Let’s shape the future of web3, together.

Linktree | Website | Discord | Telegram | Twitter | Blog | Medium | LinkedIn | YouTube
