The Subsquid services stack separates data ingestion (Archives) from data transformation and presentation (squids). Archives ingest and store raw blockchain data in a normalized way.
Squids are ETL projects that ingest historical on-chain data from Archives, transforming it according to user-defined data mappers. Squids are typically configured to present this data as a GraphQL API.
Archives allow squids to ingest data in batches spanning multiple blocks. These 'pre-indexers' significantly decrease indexing times while keeping squids lightweight.
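The batching idea can be sketched as follows. `splitIntoBatches` is a hypothetical helper written for this illustration only (the real batching logic lives inside the processor); it shows how a block range is split into multi-block requests, so a squid makes one Archive round-trip per batch instead of one per block:

```typescript
// Illustrative only: split an inclusive block range into fixed-size
// batches, so each Archive request spans many blocks at once.
interface BlockRange {
  from: number
  to: number
}

function splitIntoBatches(range: BlockRange, batchSize: number): BlockRange[] {
  const batches: BlockRange[] = []
  for (let from = range.from; from <= range.to; from += batchSize) {
    batches.push({from, to: Math.min(from + batchSize - 1, range.to)})
  }
  return batches
}

// Three requests instead of 1500:
const batches = splitIntoBatches({from: 0, to: 1499}, 500)
// → [{from: 0, to: 499}, {from: 500, to: 999}, {from: 1000, to: 1499}]
```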
See the Archives section for more information on how to use public Archives or to learn how to run an Archive locally.
Squids follow a standard structure and are developed as regular Node.js packages. See squid-template for a reference implementation.
A typical squid implements both data mapping and a GraphQL API server, which presents the data. The Subsquid framework provides an extensive set of tools for developing squids:
- `substrate-processor` fetches on-chain data from an Archive and executes user-defined mapping code against it. It offers batching and fine-grained data selection interfaces to minimize database round-trips and optimize Archive data fetching.
- `substrate-typegen` and `evm-typegen` generate TypeScript facade classes for Substrate and EVM log data, allowing most data-mapping bugs to be caught at compile time.
- `typeorm-codegen` generates entity classes from a declarative schema file.
- `graphql-server` serves the data as a rich GraphQL API generated from the schema file. The API loosely follows the OpenCRUD standard and supports all common filters and selectors out of the box; it can also be extended with custom resolvers.
- miscellaneous Substrate tools, including a performant SCALE codec and an ss58 decoder.
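To illustrate the schema-driven parts of the stack, a minimal schema file might look like the following (the `Transfer` entity and its fields are invented for this example). From such a file, typeorm-codegen produces TypeORM entity classes and graphql-server derives the GraphQL API:

```graphql
# schema.graphql — a hypothetical entity definition
type Transfer @entity {
  id: ID!
  from: String!
  to: String!
  amount: BigInt!
  blockNumber: Int!
}
```

The generated API then accepts OpenCRUD-style queries against this entity; the exact filter names below follow the common `field_operator` convention and are shown as an assumption, not a verbatim transcript of the generated schema:

```graphql
query {
  transfers(where: {amount_gt: "1000000"}, orderBy: blockNumber_DESC, limit: 10) {
    from
    to
    amount
  }
}
```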
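As a taste of what the SCALE codec does, here is a stand-alone sketch of decoding a SCALE compact-encoded unsigned integer. This is not the SDK's API, just the encoding rule (the low two bits of the first byte select single-byte, two-byte, four-byte, or big-integer mode) written out directly:

```typescript
// Decode a SCALE compact-encoded unsigned integer.
// Hypothetical stand-alone helper; a production codec ships with the SDK.
function decodeCompact(bytes: Uint8Array): bigint {
  const mode = bytes[0] & 0b11
  if (mode === 0b00) {
    return BigInt(bytes[0] >> 2)                 // single byte: 0..63
  }
  if (mode === 0b01) {
    return BigInt(bytes[0] | (bytes[1] << 8)) >> 2n  // two bytes LE: 64..2**14-1
  }
  if (mode === 0b10) {
    let v = 0n                                   // four bytes LE: 2**14..2**30-1
    for (let i = 3; i >= 0; i--) v = (v << 8n) | BigInt(bytes[i])
    return v >> 2n
  }
  const len = (bytes[0] >> 2) + 4                // big-integer mode: length prefix,
  let v = 0n                                     // then `len` value bytes LE
  for (let i = len; i >= 1; i--) v = (v << 8n) | BigInt(bytes[i])
  return v
}

decodeCompact(new Uint8Array([0x04]))        // → 1n
decodeCompact(new Uint8Array([0x15, 0x01]))  // → 69n
```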