Indexing the Transaction Receipts

In this step-by-step tutorial we will look into a squid that indexes Fuel Network data.

Prerequisites: Node.js v20 or newer, Git, Docker.

Download the project

Begin by retrieving the template and installing the dependencies:

git clone
cd fuel-example
npm ci

Configuring the data source

"Data source" is a component that defines what data should be retrieved and where to get it. To configure the data source to retrieve the data produced by the receipt field of the fuel transaction, we initialize it like this:

const dataSource = new DataSourceBuilder()
url: '',
strideConcurrency: 3,
strideSize: 50
receipt: {
contract: true,
receiptType: true
type: ['LOG_DATA']


  • The argument of setGateway() is the address of the public Subsquid Network gateway for Fuel testnet. Check out the Subsquid Network reference pages for lists of public gateways for all supported networks.
  • The argument of addReceipt() is a set of filters that tells the processor to retrieve all receipts of type LOG_DATA.
  • The argument of setFields() specifies the exact fields we need for every data item type. In this case we request contract and receiptType for receipt data items.

See also FuelDataSource reference and the comments in main.ts of the fuel-example repo.

With a data source it becomes possible to retrieve filtered blockchain data from Subsquid Network, transform it and save the result to a destination of choice.

Decoding the event data

The other part of the squid processor (the ingester process of the indexer) is the callback function used to process batches of the filtered data, the batch handler. In the Fuel Squid SDK it is typically defined within a run() call, like this:

import {run} from '@subsquid/batch-processor'

run(dataSource, database, async ctx => {
    // data transformation and persistence code here
})


  • dataSource is the data source object described in the previous section
  • database is a Database implementation specific to the target data sink. We want to store the data in a PostgreSQL database and present it over a GraphQL API, so we provide a TypeormDatabase object here.
  • ctx is a batch context object that exposes a batch of data (at ctx.blocks) and data persistence facilities derived from db (at ctx.store). See Block data for Fuel for details on how the data batches are presented.

Batch handler is where the raw on-chain data is decoded, transformed and persisted. This is the part we'll be concerned with for the rest of the tutorial.
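
Before looking at the full handler, the core transformation can be sketched in isolation. The following is a minimal plain-TypeScript sketch with no Subsquid imports; the Receipt and ContractRow types and the countLogs helper are hypothetical stand-ins for the real SDK types. It shows the dedup-and-count pattern used below: group LOG_DATA receipts by contract address and accumulate a counter, so each contract yields a single row per batch.

```typescript
// Hypothetical standalone sketch of the batch handler's counting pattern.
// Receipt and ContractRow are simplified stand-ins for the real Subsquid types.
type Receipt = {receiptType: string, contract?: string}
type ContractRow = {id: string, logsCount: number, foundAt: number}

function countLogs(receipts: Receipt[], height: number): Map<string, ContractRow> {
    let contracts = new Map<string, ContractRow>()
    for (let r of receipts) {
        if (r.receiptType === 'LOG_DATA' && r.contract != null) {
            // Reuse the in-memory row if this contract was already seen in the batch
            let row = contracts.get(r.contract)
            if (!row) {
                row = {id: r.contract, logsCount: 0, foundAt: height}
                contracts.set(r.contract, row)
            }
            row.logsCount += 1
        }
    }
    return contracts
}
```

Batching the writes this way means one database row per contract per batch rather than one write per receipt.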

We begin by defining a database and starting the data processing:

const database = new TypeormDatabase()

run(dataSource, database, async ctx => {
    // Block items that we get from `ctx.blocks` are flat JS objects.
    // We can use the `augmentBlock()` function from `@subsquid/fuel-objects`
    // to enrich block items with references to related objects.
    let blocks = ctx.blocks.map(augmentBlock)

    let contracts: Map<String, Contract> = new Map()

    for (let block of blocks) {
        for (let receipt of block.receipts) {
            if (receipt.receiptType == 'LOG_DATA' && receipt.contract != null) {
                let contract = contracts.get(receipt.contract)
                if (!contract) {
                    contract = await ctx.store.findOne(Contract, {where: {id: receipt.contract}})
                    if (!contract) {
                        contract = new Contract({
                            id: receipt.contract,
                            logsCount: 0,
                            foundAt: block.header.height
                        })
                    }
                }
                contract.logsCount += 1
                contracts.set(contract.id, contract)
            }
        }
    }

    await ctx.store.upsert([...contracts.values()])
})

This goes through all the receipts in each block, verifies that they are of type LOG_DATA, reads the contract field from every matching receipt and saves the per-contract log counts to the database.
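
The Contract entity used in the handler is a TypeORM model generated from the squid's schema.graphql. A schema matching the fields referenced above (id, logsCount, foundAt) might look like this sketch; the actual schema ships with the fuel-example repo:

```graphql
type Contract @entity {
  id: ID!          # contract address
  logsCount: Int!  # number of LOG_DATA receipts seen
  foundAt: Int!    # height of the block where the contract was first seen
}
```

After editing the schema, models can be regenerated with npx squid-typeorm-codegen and migrations with npx squid-typeorm-migration generate.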

At this point the squid is ready for its first test run. Execute

npx tsc
docker compose up -d
npx squid-typeorm-migration apply
node -r dotenv/config lib/main.js

You can verify that the data is being stored in the database by running

docker exec "$(basename "$(pwd)")-db-1" psql -U postgres -c "SELECT * FROM contract"

Full code can be found here.