Version: Firesquid



The Subsquid services stack separates data ingestion (Archives) from data transformation and presentation (squids). Archives ingest and store raw blockchain data in a normalized way.

Squids are ETL projects that ingest historical on-chain data from Archives, transforming it according to user-defined data mappers. Squids are typically configured to present this data as a GraphQL API.
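The ETL flow described above can be sketched as a plain extract/transform/load pipeline. Everything here (the types, the `Balances.Transfer` sample data, the in-memory store) is hypothetical illustration, not the actual Subsquid API:

```typescript
// Conceptual ETL sketch: extract raw items (as if from an Archive),
// transform them with user-defined mapping code, load entities into a store.
interface RawEvent { block: number; name: string; data: string }
interface TransferEntity { id: string; amount: bigint }

// Extract: pretend these events came from an Archive
function extract(): RawEvent[] {
  return [
    { block: 1, name: 'Balances.Transfer', data: '100' },
    { block: 2, name: 'Balances.Transfer', data: '250' },
  ]
}

// Transform: user-defined mapping from raw events to entities
function transform(events: RawEvent[]): TransferEntity[] {
  return events
    .filter((e) => e.name === 'Balances.Transfer')
    .map((e) => ({ id: `${e.block}-0`, amount: BigInt(e.data) }))
}

// Load: persist entities (here, just collect them in memory)
const store: TransferEntity[] = []
function load(entities: TransferEntity[]): void {
  store.push(...entities)
}

load(transform(extract()))
// store now holds 2 Transfer entities
```

In a real squid, the load step writes to a database whose entity classes are generated from the schema file, and the GraphQL API is served from that same database.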


Archives allow squids to ingest data in batches spanning multiple blocks. These 'pre-indexers' significantly decrease indexing times while keeping squids lightweight.
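To see why batching matters, consider splitting a block range into batch requests. The helper below is a hypothetical illustration of the idea, not the real Archive interface:

```typescript
// Split a block range [from, to] into contiguous batches of `size` blocks.
// Fetching 10,000 blocks in batches of 1,000 takes 10 requests
// instead of 10,000 per-block roundtrips.
interface Range { from: number; to: number }

function splitIntoBatches(from: number, to: number, size: number): Range[] {
  const batches: Range[] = []
  for (let start = from; start <= to; start += size) {
    batches.push({ from: start, to: Math.min(start + size - 1, to) })
  }
  return batches
}

const batches = splitIntoBatches(0, 9999, 1000)
console.log(batches.length) // 10
```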

See the Archives section for more information on how to use public Archives or to learn how to run an Archive locally.


Squids follow a defined project structure and are developed as regular Node.js packages. See squid-template for a reference implementation.

A typical squid implements both data mapping and a GraphQL API server, which presents the data. The Subsquid framework provides an extensive set of tools for developing squids:

  • substrate-processor fetches on-chain data from an Archive and executes user-defined mapping code against it. It offers batching and fine-grained data selection interfaces to minimize database roundtrips and optimize Archive data fetching.
  • substrate-typegen and substrate-evm-typegen generate TypeScript facade classes for Substrate and EVM log data, allowing most data mapping bugs to be caught at compile time.
  • typeorm-codegen generates entity classes from a declarative schema file.
  • graphql-server serves the data as a rich GraphQL API generated from the schema file. The generated API loosely follows the OpenCRUD standard and supports common filters and selectors out of the box. It can also be extended with custom resolvers.
  • miscellaneous Substrate tools, including a performant SCALE codec and an ss58 decoder.
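To illustrate the schema-driven tools above, here is a small declarative schema fragment of the kind consumed by typeorm-codegen and graphql-server. The `Transfer` entity and its fields are a hypothetical example, not taken from any particular squid:

```graphql
# schema.graphql: typeorm-codegen generates a TypeORM entity class from this,
# and graphql-server derives an OpenCRUD-style query API for it.
type Transfer @entity {
  id: ID!
  from: String!
  to: String!
  amount: BigInt!
  blockNumber: Int!
}
```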

The Aquarium

Squids can be deployed to the Subsquid cloud service, called the Aquarium, free of charge. Go to the Deploy Squid section for more information.