UPD Mar 30, 2022:
We’ve created a new way of building indexers. Please meet NEAR Lake.
Problem description:
Currently, at NEAR we have a tool you may be familiar with: the NEAR Indexer Framework.
I know a few teams who have built indexers using the Indexer Framework:
- NEAR Node Interfaces team. NEAR Indexer for Explorer
- Mintbase team
- Flux team. Flux Capacitor
- Fayyr team. Fayyr Indexer
- nymdex (team name unknown)
But many more teams avoid building indexers altogether, and the ones who did build them struggle.
Running an indexer is almost the same as running a NEAR node, except that you also have custom code, so you cannot simply grab prebuilt binaries whenever nearcore is updated. You effectively have to maintain a “custom NEAR node” of your own.
That can be expensive: it requires an investment of time and money, and it distracts you from your business.
Solution we see:
To create an Indexer as a Service.
We want to allow our users to focus on their business. We can achieve this by providing a tool with which they can:
- benefit from indexer features
- avoid running a NEAR node
- decrease time and money investments for maintenance
- focus on the business, not the tools.
We decided to start by providing simple yet useful tooling to check whether users actually want it.
Indexer as a Service nodes will store every single block in blob storage (AWS S3 or an alternative) as a JSON object named by its block hash. An API endpoint of Indexer as a Service will return any block by its hash.
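To make the block-storage idea concrete, here is a minimal sketch of how a client could fetch a stored block by its hash. The endpoint URL and response layout are assumptions for illustration only, not a final API.

```python
# Minimal sketch of fetching a stored block by hash from the Indexer as a
# Service blob-storage API. The URL and response layout are assumptions.
import requests

INDEXER_API = "https://indexer-service.example.com"  # hypothetical endpoint


def fetch_block(block_hash: str) -> dict:
    """Return the full block (JSON) stored under the given block hash."""
    response = requests.get(f"{INDEXER_API}/blocks/{block_hash}", timeout=10)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    block = fetch_block("<block-hash>")
    print(block)
```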
We are going to define a set of triggers that essentially represent certain kinds of events on the network:
- receiver_id is observed (receiver_id is user-provided)
- The balance of the account has changed (account is user-provided)
- Epoch has changed
- etc.
We estimate there are up to 20 types of events that could interest application developers.
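As a rough illustration of what such user-defined triggers could look like in code, here is one possible representation of a filter. The event names and fields below are assumptions, not a final list.

```python
# One possible representation of user-defined filters. Event names and
# fields are illustrative assumptions, not a final specification.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class EventType(Enum):
    RECEIVER_ID_OBSERVED = "receiver_id_observed"        # receiver_id is user-provided
    ACCOUNT_BALANCE_CHANGED = "account_balance_changed"  # account is user-provided
    EPOCH_CHANGED = "epoch_changed"
    # ... up to ~20 event types could eventually be supported


@dataclass
class Filter:
    owner: str                       # NEAR account that created the filter
    event: EventType
    receiver_id: Optional[str] = None
    account_id: Optional[str] = None
    endpoint: str = ""               # where the POST request should be sent
    enabled: bool = True
```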
In case of any event, a POST request with the relevant piece of data and the block hash will be sent to a user-provided endpoint. By “relevant piece of data” we mean the part of the block that corresponds to the user-selected event. For example, if a user is watching for “receiver_id is observed” and Indexer as a Service notices a Receipt with the receiver_id provided by the user, a POST request with the block hash and that Receipt will be sent. All other transactions, receipts, and execution outcomes will be skipped for that user as non-relevant. The user will be able to request the entire block from blob storage via the Indexer as a Service API endpoint.
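For illustration, a notification for the “receiver_id is observed” case might look roughly like the structure below; all field names are assumptions and would need to be finalized.

```python
# Hypothetical shape of the POST body sent to the user's endpoint when a
# "receiver_id is observed" filter matches. Field names are assumptions.
example_notification = {
    "block_hash": "<hash of the block containing the event>",
    "event": "receiver_id_observed",
    "data": {
        # the matching Receipt as seen by the indexer
        "receipt_id": "<receipt id>",
        "receiver_id": "<the receiver_id from the user's filter>",
        "predecessor_id": "<the account that produced the receipt>",
        "actions": ["<receipt actions>"],
    },
}
```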
The PoC will look as follows (from the user’s perspective):
[To be discussed additionally] A user buys Indexer as a Service fungible tokens on the service’s contract.
In the service contract, the user sets which predefined event they are interested in. They provide a set of parameters relevant to the event and the endpoint to which a POST request should be sent. For security reasons, the endpoint might be encrypted with a service-provided public key.
Once the event is observed on the network, a POST request is sent to the user’s endpoint. The user is charged a small yet reasonable amount for checking every block against their “filters” (to cover the CPU cost), and additionally for either the number of requests or the number of bytes sent to them.
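As a sketch of what registering such a filter could look like, the user might pass arguments along these lines to a contract method. The method name register_filter and the argument names are purely hypothetical.

```python
# Hypothetical arguments for registering a filter on the Indexer Service
# contract. The method name and argument names are assumptions.
import json

register_filter_args = {
    "event": "receiver_id_observed",
    "params": {"receiver_id": "<receiver_id to watch>"},
    # the endpoint could be encrypted client-side with a service-provided public key
    "endpoint": "<encrypted endpoint URL>",
}

# These arguments would be serialized to JSON and passed to a contract
# method such as `register_filter` via a regular NEAR function call.
print(json.dumps(register_filter_args, indent=2))
```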
Contract UI
Draft of the contract UI (made quick and dirty): https://www.figma.com/file/kc2Gnb333T98IyMslKhezC/Indexer-Service-Contract-UX-draft?node-id=0%3A1
Basically, the flow is:
- A user logs in with NEAR Wallet
- A user sees all the so-called filters created earlier (if any)
- A user can toggle filters on and off without deleting them
- A user can buy “Indexer coins” to spend them on checks and events
- A user can create a filter
Contract UX:
As for the contract (all of this is still to be discussed and clarified):
Assume a user can buy 1,000 Indexer coins for 1 NEAR to spend on checks and requests.
A user buys coins and submits their filter. The Indexer Service keeps track of the user’s balance and decides whether or not to include the user’s filters at every block-handling step.
If the balance is positive, the Indexer Service checks the block against the user’s filters. Right after the check, the service increments a counter in an “invoice”.
If an event triggers the user’s filter and the service needs to send a request, it checks the user’s balance again. If it is sufficient for a request, the service increments the counter in the “invoice” accordingly.
Once every 10 minutes (think of it as the service epoch), the service performs a function call to the service contract’s owner method (using a FUNCTION_CALL access key) to submit the invoice. Once the invoice is received, the contract decreases the balances of the users listed in it.
On the service side, the invoice is then cleared and a new 10-minute cycle starts.
P.S. If a user’s balance reaches zero in the middle of a 10-minute cycle, the service simply skips that user’s filters.
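Below is a simplified sketch of this accounting loop on the service side. The per-check and per-request prices, the data structures, and the helper functions are assumptions made for illustration.

```python
# Simplified sketch of the service-side accounting loop. Prices, data
# structures, and helper functions are illustrative assumptions.
from collections import defaultdict

CHECK_COST = 1     # Indexer coins charged per block checked against a user's filters (assumed)
REQUEST_COST = 10  # Indexer coins charged per POST request sent (assumed)

balances = {"alice.near": 1000}  # mirrored from the contract (e.g. 1,000 coins bought for 1 NEAR)
invoice = defaultdict(int)       # account_id -> coins to be charged this 10-minute cycle


def filter_matches(flt: dict, block: dict) -> bool:
    """Hypothetical matching logic for 'receiver_id is observed'."""
    watched = flt["params"].get("receiver_id")
    return any(r.get("receiver_id") == watched for r in block.get("receipts", []))


def handle_block(block: dict, filters: list) -> None:
    for flt in filters:
        owner = flt["owner"]
        # Skip filters of users whose balance is already exhausted
        if balances.get(owner, 0) - invoice[owner] < CHECK_COST:
            continue
        invoice[owner] += CHECK_COST
        if filter_matches(flt, block):
            if balances[owner] - invoice[owner] >= REQUEST_COST:
                send_post_request(flt["endpoint"], block)  # POST the relevant data
                invoice[owner] += REQUEST_COST


def flush_invoice() -> None:
    """Called once every ~10 minutes: submit the invoice to the contract and reset."""
    submit_invoice_to_contract(dict(invoice))  # function call via FUNCTION_CALL access key
    invoice.clear()                            # a new cycle starts


def send_post_request(endpoint: str, block: dict) -> None:
    ...  # stub: send the notification payload to the user's endpoint


def submit_invoice_to_contract(charges: dict) -> None:
    ...  # stub: the contract decreases user balances according to the invoice
```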
How to handle it?
We imagine a user might run a simple HTTP server that is ready to receive a specific JSON structure from the Indexer Service. After some manipulation of the data (if necessary), the handler can send a message to a Telegram bot or store the data in a MongoDB database for further use. Such handlers could even be hosted on AWS Lambda or an alternative.
The user’s code will also be able to pull the previous and next blocks to fill in potentially missing information (e.g. cross-contract calls), or perform RPC calls to fetch relevant details, if the data pushed through the POST request from Indexer as a Service is not sufficient.
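A minimal user-side handler, assuming Flask and the hypothetical payload shape sketched earlier, could look like this; forwarding to Telegram or MongoDB is left as a stub.

```python
# Minimal sketch of a user-side webhook handler, assuming Flask and the
# hypothetical payload shape sketched above. Forwarding is stubbed out.
from flask import Flask, request

app = Flask(__name__)


@app.route("/webhook", methods=["POST"])
def handle_event():
    payload = request.get_json(force=True)
    block_hash = payload["block_hash"]
    receipt = payload.get("data", {})

    # If the pushed data is not sufficient, fetch the full block from the
    # Indexer as a Service API (see fetch_block above) or query the NEAR RPC.
    print(f"Event in block {block_hash}: receipt for {receipt.get('receiver_id')}")

    # From here: send a message to a Telegram bot, store the data in MongoDB, etc.
    return {"status": "ok"}, 200


if __name__ == "__main__":
    app.run(port=8080)
```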
PoC implementation plan:
- Choose one of the possible events to implement
- Write a contract for the Indexer Service:
  - Fungible tokens
  - Possibility to choose an event and set the necessary parameters
  - User charging
- Indexer Service node:
  - Read the contract state to learn which events to track and where to send them
  - Listen to every block for the tracked events
  - Store blocks in blob storage
  - Send requests to users whose events are triggered
  - Function call to the IaaS contract to charge users
- Frontend:
  - Indexer Service contract frontend:
    - Login
    - Buy IaaS fungible tokens
    - CRUD events
    - Enable/Disable events
We encourage those who build their own indexers to tell us whether this project might be interesting to them.
We encourage those who avoid building indexers to share their use cases so we can empower them with the proper tooling.
And we encourage everyone to share their thoughts about such a service.
- Are you interested?
- Do you need something like that?
Thank you!