Are there any documents about the data sources?

There are 3 data sources:

  1. On-chain data (DeFi, NFTs, GameFi, etc.)
  2. Off-chain data (e.g., some token prices come from the Coingecko API)
  3. Data uploaded by the community

What's the difference between Footprint Analytics and its competitors?

  • No-code experience: users can easily analyse on-chain data without writing SQL
  • Cross-chain analysis supported
  • Combined on-chain and off-chain data analysis supported
  • Ability to upload your own data
  • Semantic data: users can quickly understand complex on-chain data

How can I know the time range of each data table?

Open the dashboard to see the Time period and Table Description for each data table, and click a table's hyperlink to view its details.

How do we monitor the addition of new contracts?

In the early days we collected new contract addresses from external sources. We have since iterated to an automated solution that identifies contract addresses from our own foundation tables (transactions, token_transfers, etc.). When incremental node data contains contract addresses that are not yet in the database, the mechanism collects them automatically.
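
The sketch below illustrates the general idea, not Footprint's actual pipeline: scan incremental transaction records for contract addresses (from contract-creation receipts or contract calls) and keep only those not already registered. All field names (receipt_contract_address, to_address, to_is_contract) are assumptions for illustration.

```python
from typing import Any, Dict, Iterable, Set


def find_new_contracts(
    incremental_txs: Iterable[Dict[str, Any]],
    known_contracts: Set[str],
) -> Set[str]:
    """Return contract addresses seen in incremental data but missing from the registry."""
    seen: Set[str] = set()
    for tx in incremental_txs:
        # Contract-creation transactions carry the deployed address in the
        # receipt (field name assumed).
        created = tx.get("receipt_contract_address")
        if created:
            seen.add(created.lower())
        # Calls into an existing contract reference it as the recipient
        # (flag name assumed).
        if tx.get("to_is_contract") and tx.get("to_address"):
            seen.add(tx["to_address"].lower())
    return seen - known_contracts


if __name__ == "__main__":
    known = {"0xaaa1"}
    new_txs = [
        {"to_address": "0xBBB2", "to_is_contract": True},
        {"receipt_contract_address": "0xccc3"},
    ]
    print(find_new_contracts(new_txs, known))  # {'0xbbb2', '0xccc3'}
```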

What do you mean by NFT aggregators? How can you define volume from different marketplaces?

An NFT marketplace aggregator consolidates the listing inventory of multiple marketplaces, giving users full transparency into the market and letting them buy and sell NFTs in bulk without interacting with each marketplace separately. In the event parsing process, we treat NFT aggregators as marketplaces.
Aggregator transactions therefore carry not only the marketplace name but also the aggregator name. In special cases (such as when more than two aggregators are involved), we display only the name of the first aggregator that triggered the trade.
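
As a rough illustration of this attribution rule (the data shape, an ordered list of aggregator names, and the example names are assumptions):

```python
from typing import Dict, List, Optional


def attribute_trade(marketplace: str, aggregators_in_call_order: List[str]) -> Dict[str, Optional[str]]:
    """Label a parsed NFT trade with its marketplace and, if any, its aggregator."""
    return {
        "marketplace": marketplace,
        # When several aggregators appear in the call chain, only the first
        # one (the aggregator that triggered the trade) is kept.
        "aggregator": aggregators_in_call_order[0] if aggregators_in_call_order else None,
    }


print(attribute_trade("OpenSea", ["Gem", "Genie"]))
# {'marketplace': 'OpenSea', 'aggregator': 'Gem'}
```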

How do we verify the quality of data?

There are several strategies that we use to validate data:

  • Basic validation: no unusual values are present (checked with standard statistical methods), and there are no gaps in the time series.
  • Logical (internal) validation: the data makes sense in terms of business metrics. For instance, most lending protocols require overcollateralization, so the net deposit amount should be higher than the net borrowing amount.
  • Cross-validation: finally, we compare calculated metrics such as TVL and market cap (MC) with other data platforms.
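
A minimal sketch of these three layers, assuming a pandas DataFrame of daily protocol metrics; the column names (block_date, tvl, net_deposit, net_borrow) and the tolerance threshold are assumptions, not Footprint's actual checks:

```python
import pandas as pd


def basic_validation(df: pd.DataFrame) -> bool:
    """No unusual values and no gaps in the daily time series."""
    tvl = df["tvl"]
    # Simple statistical outlier screen: flag points far outside the interquartile range.
    q1, q3 = tvl.quantile([0.25, 0.75])
    iqr = q3 - q1
    no_outliers = tvl.between(q1 - 3 * iqr, q3 + 3 * iqr).all()
    # No missing days between the first and last observation (block_date assumed daily).
    expected = pd.date_range(df["block_date"].min(), df["block_date"].max(), freq="D")
    no_gaps = len(expected) == df["block_date"].nunique()
    return bool(no_outliers and no_gaps)


def logical_validation(df: pd.DataFrame) -> bool:
    """Lending protocols are usually overcollateralized, so net deposits should exceed net borrows."""
    return bool((df["net_deposit"] >= df["net_borrow"]).all())


def cross_validation(our_value: float, reference_value: float, tolerance: float = 0.05) -> bool:
    """Compare a calculated metric (e.g. TVL or MC) against another data platform."""
    return abs(our_value - reference_value) / reference_value <= tolerance
```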