Version: Old ArrowSquid docs

Simple Substrate squid

Objective

The goal of this tutorial is to guide you through creating a simple blockchain indexer ("squid") using Squid SDK. In this example we will query the Crust storage network. Our objective will be to observe which files have been added and deleted from the network. Additionally, our squid will be able to tell us the groups joined and the storage orders placed by a given account.

We will start with the substrate squid template, then go on to run the project, define a schema, and generate TypeScript interfaces. From there, we will be able to interact directly with the Archive and generate typesafe wrappers from Crust's runtime metadata.

Experienced software developers should be able to complete this tutorial in around 10-15 minutes.

Pre-requisites

Info: This tutorial uses custom scripts defined in commands.json. The scripts are automatically picked up as sqd sub-commands.
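For reference, a commands.json entry has roughly the following shape. This is an illustrative sketch, not the full file shipped with the template; consult the template's own commands.json for the real contents:

```json
{
  "$schema": "https://cdn.subsquid.io/schemas/commands.json",
  "commands": {
    "process": {
      "description": "Start the squid processor",
      "deps": ["build", "migration:apply"],
      "cmd": ["node", "--require=dotenv/config", "lib/main.js"]
    }
  }
}
```

Each key under "commands" becomes a sqd sub-command; "deps" lists commands to run first and "cmd" is the command line to execute.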

Scaffold with sqd init

Run sqd init with a unique name for your squid. This tutorial will index data on Crust, a Substrate-based network, so use the substrate template:

sqd init substrate-crust-tutorial --template substrate
cd substrate-crust-tutorial

Run the project

Now you can follow the quickstart guide to get the project up and running. Here is a summary:

npm ci
sqd build
sqd up

sqd process # should begin to ingest blocks

# open a separate terminal for this next command
sqd serve # should begin listening on port 4350

After this test, shut down both processes with Ctrl-C and proceed.

Define the schema and generate entity classes

Next, we make changes to the data schema of the squid and define entities that we would like to track. We are interested in:

  • Files added to and deleted from the chain;
  • Active accounts;
  • Groups joined by accounts;
  • Storage orders placed by accounts.

For this, we use the following schema.graphql:

schema.graphql
type Account @entity {
  id: ID! # account address
  workReports: [WorkReport] @derivedFrom(field: "account")
  joinGroups: [JoinGroup] @derivedFrom(field: "member")
  storageOrders: [StorageOrder] @derivedFrom(field: "account")
}

type WorkReport @entity {
  id: ID! # event id
  account: Account!
  addedFiles: [[String]]
  deletedFiles: [[String]]
  extrinsicHash: String
  createdAt: DateTime
  blockNum: Int!
}

type JoinGroup @entity {
  id: ID!
  member: Account!
  owner: String!
  extrinsicHash: String
  createdAt: DateTime
  blockNum: Int!
}

type StorageOrder @entity {
  id: ID!
  account: Account!
  fileCid: String!
  extrinsicHash: String
  createdAt: DateTime
  blockNum: Int!
}

Notice that the Account entity is almost completely derived. It is there to tie the other three entities together.
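Once the squid is up and serving GraphQL (later in this tutorial), the derived fields let us traverse from an account to all of its related records in a single query. A hypothetical example:

```graphql
query {
  accounts(limit: 5) {
    id
    workReports { id blockNum }
    joinGroups { id owner }
    storageOrders { id fileCid }
  }
}
```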

To finalize this step, run the codegen tool:

sqd codegen

This will automatically generate TypeScript entity classes for our schema. They can be found in the src/model/generated folder of the project.
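The generated classes pair TypeORM column decorators with a convenience constructor that accepts a Partial of the entity. Stripped of the decorators and column metadata, the pattern is roughly this (a simplified sketch, not the actual generated code):

```typescript
// Simplified sketch of a codegen-style entity class; the real classes in
// src/model/generated also carry TypeORM decorators and relation metadata
class Account {
  id!: string

  constructor(props?: Partial<Account>) {
    // copy any provided fields onto the instance
    Object.assign(this, props)
  }
}

// entities can be built all at once from a props object...
const acc = new Account({id: 'example-account-id'})
// ...or field by field after construction
acc.id = 'another-account-id'
```

This constructor style is why the batch handler below can write `new JoinGroup({id: e.id, ...})` with only the fields known at that point and fill in the rest later.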

Generate TypeScript wrappers for events

We generate these using the squid-substrate-typegen tool. Its configuration file is typegen.json; there, we need to

  1. Set the "specVersions" field to a valid source of Crust chain runtime metadata. We'll use a URL of the Subsquid-maintained metadata service:
    "specVersions": "https://v2.archive.subsquid.io/metadata/crust",
  2. List all the Substrate pallets we will need data from. For each pallet, we list all the events, calls, storage items and constants needed.
Info: Refer to this note if you are unsure which Substrate data to use in your project.

Our final typegen.json looks like this:

typegen.json
{
  "outDir": "src/types",
  "specVersions": "https://v2.archive.subsquid.io/metadata/crust",
  "pallets": {
    "Swork": {
      "events": [
        "WorksReportSuccess",
        "JoinGroupSuccess"
      ]
    },
    "Market": {
      "events": [
        "FileSuccess"
      ]
    }
  }
}

Once done with the configuration, we run the tool with

sqd typegen
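The wrappers generated into src/types bundle the qualified event name with per-spec-version codecs exposing is() and decode() methods. A simplified, self-contained sketch of that pattern follows; it is illustrative only and does not reproduce the actual generated code, which relies on Subsquid's runtime codecs:

```typescript
// Illustrative shape of a typegen-style event wrapper (not the generated code)
interface EventRecord {
  name: string
  args: unknown
}

const fileSuccess = {
  name: 'Market.FileSuccess',
  v1: {
    // check that the record matches this event at this spec version
    is(e: EventRecord): boolean {
      return e.name === 'Market.FileSuccess'
    },
    // decode the raw args into a typed tuple
    decode(e: EventRecord): [string, string] {
      return e.args as [string, string]
    }
  }
}

// usage in a handler loop:
const record: EventRecord = {name: 'Market.FileSuccess', args: ['acc', 'cid']}
if (fileSuccess.v1.is(record)) {
  const [account, cid] = fileSuccess.v1.decode(record)
  console.log(account, cid)
}
```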

Set up the processor object

The next step is to create a SubstrateBatchProcessor object which subscribes to all the events we need. We do it at src/processor.ts:

src/processor.ts
import {
  SubstrateBatchProcessor,
  SubstrateBatchProcessorFields,
  DataHandlerContext
} from '@subsquid/substrate-processor'
import {lookupArchive} from '@subsquid/archive-registry'
import {events} from './types' // the wrappers generated in the previous section

export const processor = new SubstrateBatchProcessor()
  .setDataSource({
    archive: lookupArchive('crust', {release: 'ArrowSquid'}),
    chain: {
      url: 'https://crust.api.onfinality.io/public',
      rateLimit: 10
    }
  })
  .setBlockRange({from: 583000})
  .addEvent({
    name: [
      events.market.fileSuccess.name,
      events.swork.joinGroupSuccess.name,
      events.swork.worksReportSuccess.name
    ],
    call: true,
    extrinsic: true
  })
  .setFields({
    extrinsic: {
      hash: true
    },
    block: {
      timestamp: true
    }
  })

type Fields = SubstrateBatchProcessorFields<typeof processor>
export type ProcessorContext<Store> = DataHandlerContext<Store, Fields>

This creates a processor that

  • Uses an Archive as its main data source and a chain RPC for real-time updates. The URL of the Archive endpoint is looked up in the Archive registry; see this page for reference;
  • Subscribes to Market.FileSuccess, Swork.JoinGroupSuccess and Swork.WorksReportSuccess events emitted at heights starting at 583000;
  • Additionally subscribes to calls that emitted the events and the corresponding extrinsics;
  • Requests the hash data field for all retrieved extrinsics and the timestamp field for all block headers.

We also export the ProcessorContext type to be able to pass the sole argument of the batch handler function around safely.

Define the batch handler

Squids batch process chain data from multiple blocks. Compared to the handlers approach this results in a much lower database load. Batch processing is fully defined by processor's batch handler, the callback supplied to the processor.run() call at the entry point of each processor (src/main.ts by convention).
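The savings are easy to see in miniature: a per-item handler pays one store round-trip per entity, while a batch handler collects everything from the block range and hits the database once. The MockStore below is hypothetical, standing in for ctx.store purely to count round-trips:

```typescript
// Hypothetical store that counts round-trips, standing in for ctx.store
class MockStore {
  calls = 0
  async save(entities: object[]): Promise<void> {
    this.calls++ // one round-trip regardless of batch size
  }
}

// handlers-style: one store call per entity
async function perItem(store: MockStore, items: object[]) {
  for (const it of items) await store.save([it])
}

// batch-style: one store call for the whole batch
async function batched(store: MockStore, items: object[]) {
  await store.save(items)
}

async function main() {
  const items = Array.from({length: 100}, (_, i) => ({id: i}))
  const a = new MockStore()
  await perItem(a, items)
  const b = new MockStore()
  await batched(b, items)
  console.log(a.calls, b.calls) // prints: 100 1
}
main()
```

With a real database each call carries network and transaction overhead, which is why squids accumulate entities in memory and persist them once per batch.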

We begin defining our batch handler by importing the entity model classes and Crust event types that we generated in previous sections. We also import the processor and its types:

src/main.ts
import {Account, WorkReport, JoinGroup, StorageOrder} from './model'
import {processor, ProcessorContext} from './processor'

Let's skip the processor.run() call for now - we will come back to it in a second - and scroll down to the getTransferEvents function. In the template repository this function loops through the items contained in the context, extracts the event data and stores it in a list of objects.

For this project we are still going to extract event data from the context, but this time we have more than one event type, so we have to sort them. We also need to handle the account information. Let's start by deleting the TransferEvent interface and defining this instead:

type Tuple<T, K> = [T, K]

interface EventsInfo {
  joinGroups: Tuple<JoinGroup, string>[]
  marketFiles: Tuple<StorageOrder, string>[]
  workReports: Tuple<WorkReport, string>[]
  accountIds: Set<string>
}

Now, let's replace the getTransferEvents function with the below snippet that

  • extracts event information in a manner specific to the event's name (known from e.name);
  • stores the event information in an object (we are going to use entity classes for that) and extracts the associated account ID;
  • collects all the account IDs in a set.
import {toHex} from '@subsquid/substrate-processor'
import * as ss58 from '@subsquid/ss58'
import {Store} from '@subsquid/typeorm-store'
import {events} from './types'

function getEventsInfo(ctx: ProcessorContext<Store>): EventsInfo {
  let eventsInfo: EventsInfo = {
    joinGroups: [],
    marketFiles: [],
    workReports: [],
    accountIds: new Set<string>()
  }
  for (let block of ctx.blocks) {
    const blockTimestamp = block.header.timestamp ? new Date(block.header.timestamp) : undefined
    for (let e of block.events) {
      if (e.name === events.swork.joinGroupSuccess.name) {
        const decoded = events.swork.joinGroupSuccess.v1.decode(e)
        const memberId = ss58.codec('crust').encode(decoded[0])
        eventsInfo.joinGroups.push([
          new JoinGroup({
            id: e.id,
            owner: ss58.codec('crust').encode(decoded[1]),
            blockNum: block.header.height,
            createdAt: blockTimestamp,
            extrinsicHash: e.extrinsic?.hash
          }),
          memberId
        ])
        // add the encountered account ID to the set of unique account IDs
        eventsInfo.accountIds.add(memberId)
      }
      if (e.name === events.market.fileSuccess.name) {
        const decoded = events.market.fileSuccess.v1.decode(e)
        const accountId = ss58.codec('crust').encode(decoded[0])
        eventsInfo.marketFiles.push([
          new StorageOrder({
            id: e.id,
            fileCid: toHex(decoded[1]),
            blockNum: block.header.height,
            createdAt: blockTimestamp,
            extrinsicHash: e.extrinsic?.hash
          }),
          accountId
        ])
        eventsInfo.accountIds.add(accountId)
      }
      if (e.name === events.swork.worksReportSuccess.name) {
        const decoded = events.swork.worksReportSuccess.v1.decode(e)
        const accountId = ss58.codec('crust').encode(decoded[0])

        const addedExtr = e.call?.args.addedFiles ?? []
        const deletedExtr = e.call?.args.deletedFiles ?? []

        const addedFiles = addedExtr.map(v => v.map(ve => String(ve)))
        const deletedFiles = deletedExtr.map(v => v.map(ve => String(ve)))

        if (addedFiles.length > 0 || deletedFiles.length > 0) {
          eventsInfo.workReports.push([
            new WorkReport({
              id: e.id,
              addedFiles: addedFiles,
              deletedFiles: deletedFiles,
              blockNum: block.header.height,
              createdAt: blockTimestamp,
              extrinsicHash: e.extrinsic?.hash
            }),
            accountId
          ])
          eventsInfo.accountIds.add(accountId)
        }
      }
    }
  }
  return eventsInfo
}

Next, we want to create an entity (Account) object for every accountId in the set, then add the Account information to every event entity object. Finally, we save all the created and modified entity models into the database.

Take the code inside processor.run() and change it so that it looks like this:

// add these imports at the top of src/main.ts
import {In} from 'typeorm'
import {TypeormDatabase} from '@subsquid/typeorm-store'

processor.run(new TypeormDatabase(), async (ctx) => {
  const eventsInfo = getEventsInfo(ctx)

  let accounts = await ctx.store
    .findBy(Account, {id: In([...eventsInfo.accountIds])})
    .then(accounts => new Map(accounts.map(a => [a.id, a])))
  for (let aid of eventsInfo.accountIds) {
    if (!accounts.has(aid)) {
      accounts.set(aid, new Account({id: aid}))
    }
  }

  for (const jg of eventsInfo.joinGroups) {
    // necessary to add this field to the previously created model
    // because now we have the Account created
    jg[0].member = accounts.get(jg[1])!
  }
  for (const mf of eventsInfo.marketFiles) {
    mf[0].account = accounts.get(mf[1])!
  }
  for (const wr of eventsInfo.workReports) {
    wr[0].account = accounts.get(wr[1])!
  }

  await ctx.store.save([...accounts.values()])
  await ctx.store.insert(eventsInfo.joinGroups.map(el => el[0]))
  await ctx.store.insert(eventsInfo.marketFiles.map(el => el[0]))
  await ctx.store.insert(eventsInfo.workReports.map(el => el[0]))
})

Apply changes to the database

Squid projects automatically manage the database connection and schema via an ORM abstraction provided by TypeORM. Previously we changed the data schema at schema.graphql and reflected these changes in our Typescript code using sqd codegen. Here, we apply the corresponding changes to the database itself.

We begin by making sure that the database is at blank state:

sqd down
sqd up

Then we generate a new migration (replacing any old ones) with

sqd migration:generate

The new migration will be generated from the TypeORM entity classes we previously made out of schema.graphql with sqd codegen. Optionally, we can apply the migration right away:

sqd migration:apply

If we skip this step, the new migration will be applied the next time we run sqd process.

Launch the project

It's finally time to run the project! Run

sqd process

in one terminal, then open another one and run

sqd serve

Now you can see the results of our hard work by visiting localhost:4350/graphql in a browser and accessing the GraphiQL console.

From this window we can perform queries. This one displays info on the ten latest work reports, including all involved files and the account ID:

query MyQuery {
  workReports(limit: 10, orderBy: blockNum_DESC) {
    account {
      id
    }
    addedFiles
    deletedFiles
  }
}

Credits

This sample project is adapted from a real integration, developed by our very own Mikhail Shulgin. Credit for building it and helping with the guide goes to him.