Multichain indexing
Squids can extract data from multiple chains into a shared data sink. If the data is stored in Postgres, it can then be served as a unified multichain GraphQL API.
To do this, run one processor per source network:
Make a separate entry point (`main.ts` or equivalent) for each processor. The resulting folder structure may look like this:

```
├── src
│   ├── bsc
│   │   ├── main.ts
│   │   └── processor.ts
│   ├── eth
│   │   ├── main.ts
│   │   └── processor.ts
```

Alternatively, parameterize your processor using environment variables: you can set these on a per-processor basis if you use a deployment manifest to run your squid.
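For instance, a single parameterized `processor.ts` might read its network-specific settings from the environment. This is a minimal sketch, assuming a recent `@subsquid/evm-processor`; the variable names are made up:

```ts
// src/processor.ts
import {EvmBatchProcessor} from '@subsquid/evm-processor'
import {assertNotNull} from '@subsquid/util-internal'

// GATEWAY_URL and RPC_ENDPOINT are hypothetical variable names;
// set them per processor, e.g. in the env section of each deploy.processor
// entry of the deployment manifest or in per-processor .env files.
export const processor = new EvmBatchProcessor()
  .setGateway(assertNotNull(process.env.GATEWAY_URL))
  .setRpcEndpoint(assertNotNull(process.env.RPC_ENDPOINT))
  .setFinalityConfirmation(75)
```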
Arrange for running the processors alongside each other conveniently:
- If you are going to use `sqd run` for local runs or deploy your squid to Aquarium, list your processors in the `deploy.processor` section of your deployment manifest. Make sure to give each processor a unique name!

  ```yaml
  deploy:
    processor:
      - name: eth-processor
        cmd: [ "node", "lib/eth/main" ]
      - name: bsc-processor
        cmd: [ "node", "lib/bsc/main" ]
  ```

- Optionally, add `sqd` commands for running each processor to `commands.json` (see the example below this list).
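A possible `commands.json` setup, modeled on the `process` command of the default templates; the command names and flags here are illustrative and may need adjusting to your project:

```json
{
  "commands": {
    "process:eth": {
      "description": "Start the ETH processor",
      "deps": ["build", "migration:apply"],
      "cmd": ["node", "--require=dotenv/config", "lib/eth/main.js"]
    },
    "process:bsc": {
      "description": "Start the BSC processor",
      "deps": ["build", "migration:apply"],
      "cmd": ["node", "--require=dotenv/config", "lib/bsc/main.js"]
    }
  }
}
```

With this in place, `sqd process:eth` and `sqd process:bsc` run the individual processors.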
On Postgres
Also ensure that the state schema name for each processor is unique:
`src/bsc/main.ts`:

```ts
processor.run(
  new TypeormDatabase({
    stateSchema: 'bsc_processor'
  }),
  async ctx => {
    // ...
  }
)
```

`src/eth/main.ts`:

```ts
processor.run(
  new TypeormDatabase({
    stateSchema: 'eth_processor'
  }),
  async ctx => {
    // ...
  }
)
```

The schema and the GraphQL API are shared among the processors.
Handling concurrency
The default isolation level used by `TypeormDatabase` is `SERIALIZABLE`, the most secure and the most restrictive one. Another isolation level commonly used in multichain squids is `READ COMMITTED`, which guarantees that the execution is deterministic for as long as the sets of records that different processors read and write do not overlap.
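The isolation level can be selected when the database object is constructed. This is a minimal sketch, assuming a recent `@subsquid/typeorm-store` that exposes the `isolationLevel` option:

```ts
import {TypeormDatabase} from '@subsquid/typeorm-store'
import {processor} from './processor'

processor.run(
  new TypeormDatabase({
    stateSchema: 'eth_processor',
    // SERIALIZABLE is the default; READ COMMITTED relaxes it
    isolationLevel: 'READ COMMITTED'
  }),
  async ctx => {
    // ...
  }
)
```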
To avoid overlaps, use per-chain records for volatile data. E.g. if you track account balances across multiple chains, you can avoid overlaps by storing the balance for each chain in a different table row. When you need to combine the records (e.g. get a total of all balances across chains), use custom resolvers to do the aggregation on the GraphQL server side, as sketched after the schema below.
It is OK to use cross-chain entities to simplify aggregation. Just don't store any data in them:
```graphql
type Account @entity {
  id: ID! # evm address
  balances: [Balance!]! @derivedFrom(field: "account")
}

type Balance @entity {
  id: ID! # chainId + evm address
  account: Account!
  value: BigInt!
}
```
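A custom resolver that aggregates the per-chain `Balance` rows might look roughly like this. It is a sketch that assumes the conventional `src/server-extension/resolvers` location and the type-graphql style used by `@subsquid/graphql-server` custom resolvers; the file, class and query names are made up:

```ts
// src/server-extension/resolvers/totalBalance.ts (hypothetical file name)
import {Query, Resolver} from 'type-graphql'
import type {EntityManager} from 'typeorm'
import {Balance} from '../../model'

@Resolver()
export class TotalBalanceResolver {
  // the GraphQL server injects a factory of database transactions
  constructor(private tx: () => Promise<EntityManager>) {}

  @Query(() => String)
  async totalBalance(): Promise<string> {
    const manager = await this.tx()
    // sum the per-chain balance rows on the database side
    const row = await manager
      .getRepository(Balance)
      .createQueryBuilder('b')
      .select('COALESCE(SUM(b.value), 0)', 'total')
      .getRawOne()
    return row?.total ?? '0'
  }
}
```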
On file-store
Ensure that you use a unique target folder for each processor.
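For example, with the CSV flavor of `file-store`, each processor can write to its own subfolder. This is a minimal sketch; the table definition, column types and folder names are illustrative:

```ts
// src/eth/main.ts (the BSC processor would use e.g. new LocalDest('./data/bsc'))
import {Database, LocalDest} from '@subsquid/file-store'
import {Table, Column, Types} from '@subsquid/file-store-csv'
import {processor} from './processor'

const Transfers = new Table('transfers.csv', {
  from: Column(Types.String()),
  to: Column(Types.String()),
  value: Column(Types.Numeric())
})

const db = new Database({
  tables: {Transfers},
  dest: new LocalDest('./data/eth') // unique target folder for this processor
})

processor.run(db, async ctx => {
  // ... collect data and write rows with ctx.store.Transfers.writeMany(...)
})
```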
Example
A complete example is available here.