DipDup 2.0
DipDup project configuration file
Properties
Version of config specification, currently always 2.0
Name of indexer's Python package, existing or not
Mapping of datasource aliases and datasource configs
Database config
Mapping of contract aliases and contract configs
Mapping of index aliases and index configs
Mapping of template aliases and index templates
Mapping of job aliases and job configs
Mapping of hook aliases and hook configs
Hasura integration config
Sentry integration config
Prometheus integration config
Management API config
User-defined configuration to use in callbacks
Modify logging verbosity
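Put together, the top-level sections above compose a config like the following minimal sketch (the package name, datasource alias, and file path are placeholders, not defaults):

```yaml
spec_version: 2.0            # version of the config specification
package: my_indexer          # Python package with models and callbacks (placeholder)

datasources:
  tzkt_mainnet:              # arbitrary alias, referenced from indexes
    kind: tezos.tzkt
    url: https://api.tzkt.io

database:
  kind: sqlite
  path: my_indexer.sqlite3

logging: INFO
```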
Definitions
Advanced configuration of HTTP client
Number of retries after a failed request before giving up
Sleep time between retries
Multiplier for sleep time between retries
Number of requests per period ("drops" in leaky bucket)
Time period for rate limiting in seconds
Sleep time between requests when rate limit is reached
Number of simultaneous connections
Connection timeout in seconds
Request timeout in seconds
Number of items fetched in a single paginated request (when applicable)
Interval between polling requests in seconds (when applicable)
Use cached HTTP responses instead of making real requests (dev only)
Alias for this HTTP client (dev only)
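Assuming the tunable names match their descriptions above, an HTTP client override on a datasource might look like this sketch (all values are illustrative, not defaults):

```yaml
datasources:
  tzkt_mainnet:
    kind: tezos.tzkt
    url: https://api.tzkt.io
    http:
      retry_count: 5         # retries before giving up
      retry_sleep: 1         # seconds between retries
      retry_multiplier: 2    # backoff multiplier for retry sleep
      ratelimit_rate: 10     # requests ("drops") per period
      ratelimit_period: 1    # rate-limiting period in seconds
      connection_limit: 25   # simultaneous connections
      connection_timeout: 60
      request_timeout: 60
      batch_size: 1000       # items per paginated request
```

Every tunable is optional; unset fields keep the datasource defaults.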
Coinbase datasource config
always 'coinbase'
API key
API secret key
API passphrase
HTTP client configuration
Etherscan datasource config
always 'abi.etherscan'
API URL
API key
HTTP client configuration
Generic HTTP datasource config
always 'http'
URL to fetch data from
HTTP client configuration
IPFS datasource config
always 'ipfs'
IPFS node URL, e.g. https://ipfs.io/ipfs/
HTTP client configuration
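The four datasource kinds above can be declared side by side, as in this sketch (aliases, URLs, and the API key variable are placeholders):

```yaml
datasources:
  coinbase:
    kind: coinbase           # API credentials are optional
  etherscan:
    kind: abi.etherscan
    url: https://api.etherscan.io/api
    api_key: ${ETHERSCAN_API_KEY:-''}
  some_api:
    kind: http
    url: https://example.com/api
  ipfs:
    kind: ipfs
    url: https://ipfs.io/ipfs
```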
EVM node datasource config
always 'evm.node'
Ethereum node URL
Ethereum node WebSocket URL
HTTP client configuration
Number of blocks to store in the database for rollback
Subsquid datasource config
always 'evm.subsquid'
URL of Subsquid Network API
One or more evm.node datasource(s) for the same network
HTTP client configuration
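A typical EVM setup pairs an evm.subsquid datasource for historical sync with an evm.node datasource for realtime data, roughly like this sketch (aliases, URLs, and the key variable are placeholders):

```yaml
datasources:
  mainnet_node:
    kind: evm.node
    url: https://eth-mainnet.example.com/${NODE_API_KEY:-''}
    ws_url: wss://eth-mainnet.example.com/ws/${NODE_API_KEY:-''}
  mainnet_subsquid:
    kind: evm.subsquid
    url: https://v2.archive.subsquid.io/network/ethereum-mainnet
    node: mainnet_node       # used for blocks Subsquid Network hasn't indexed yet
```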
DipDup Metadata datasource config
always 'tzip_metadata'
GraphQL API URL, e.g. https://metadata.dipdup.net
HTTP client configuration
TzKT datasource config
always 'tezos.tzkt'
Base API URL, e.g. https://api.tzkt.io/
HTTP client configuration
Number of levels to keep in FIFO buffer before processing
Whether to merge realtime subscriptions
Number of blocks to keep in the database to handle reorgs
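A sketch of the two Tezos-related datasources above (aliases are placeholders; buffer and merge values are illustrative):

```yaml
datasources:
  tzkt_mainnet:
    kind: tezos.tzkt
    url: https://api.tzkt.io
    buffer_size: 0            # levels held in the FIFO buffer
    merge_subscriptions: false
  metadata:
    kind: tzip_metadata
    url: https://metadata.dipdup.net
```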
SQLite connection config
always 'sqlite'
Path to .sqlite3 file, leave default for in-memory database (:memory:)
List of tables to preserve during reindexing
Postgres database connection config
always 'postgres'
Host
User
Database name
Port
Schema name
Password
List of tables to preserve during reindexing
Connection timeout
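The two database kinds are alternatives; a sketch of each, assuming field names match the descriptions above (credentials via environment variables are placeholders):

```yaml
database:
  kind: sqlite
  path: db.sqlite3            # omit to use the in-memory default
---
database:
  kind: postgres
  host: ${POSTGRES_HOST:-db}
  port: 5432
  user: dipdup
  database: dipdup
  schema_name: public
  password: ${POSTGRES_PASSWORD:-changeme}
  immune_tables:
    - my_precious_table       # preserved during reindexing
```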
EVM contract config
always 'evm'
Contract address
Contract ABI
Alias for the contract script
Tezos contract config
always 'tezos'
Contract address
Contract code hash or address to fetch it from
Alias for the contract script
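Contract entries for both kinds might look like this sketch (aliases are placeholders and the addresses are purely illustrative):

```yaml
contracts:
  eth_usdt:
    kind: evm
    address: 0xdac17f958d2ee523a2206206994597c13d831ec7
    typename: eth_usdt        # alias for the contract script
  tezos_usdt:
    kind: tezos
    address: KT1XnTn74bUtxHfDtBmm2bGZAQfhPbvKWR8o
    typename: tezos_usdt
```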
Big map handler config
Callback name
Contract to fetch big map from
Path to big map (alphanumeric string with dots)
Big map index config
always 'tezos.tzkt.big_maps'
Index datasource to fetch big maps with
Mapping of big map diff handlers
Level to start indexing from
Level to stop indexing at
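A big map index tying the handler and index fields above together (aliases, callback name, and path are placeholders):

```yaml
indexes:
  token_metadata:
    kind: tezos.tzkt.big_maps
    datasource: tzkt_mainnet
    handlers:
      - callback: on_update_metadata   # handlers/on_update_metadata.py
        contract: registry
        path: token_metadata           # dotted path to the big map
    first_level: 0
```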
Event handler config
Callback name
Contract which emits event
Event tag
Unknown event handler config
Callback name
Contract which emits event
Event index config
always 'tezos.tzkt.events'
Datasource config
Event handlers
First block level to index
Last block level to index
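An events index sketch; a handler without a tag acts as the unknown-event handler described above (aliases and tags are placeholders):

```yaml
indexes:
  contract_events:
    kind: tezos.tzkt.events
    datasource: tzkt_mainnet
    handlers:
      - callback: on_move
        contract: my_contract
        tag: move
      - callback: on_other_event       # no tag: matches unknown events
        contract: my_contract
```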
Head block index config
always 'tezos.tzkt.head'
Index datasource to receive head blocks
Callback name
Transaction handler pattern config
always 'transaction'
Match operations by source contract alias
Match operations by destination contract alias
Match operations by contract entrypoint
Whether the operation can be absent in the operation group
Alias for operation (helps to avoid duplicates)
Origination handler pattern config
always 'origination'
Match operations by source contract alias
Match origination of exact contract
Whether the operation can be absent in the operation group
Match operations by storage only or by the whole code
Alias for operation (helps to avoid duplicates)
Smart rollup execute handler pattern config
always 'sr_execute'
Match operations by source contract alias
Match operations by destination contract alias
Whether the operation can be absent in the operation group
Alias for operation (helps to avoid duplicates)
Operation handler config
Callback name
Filters to match operation groups
Operation index config
always 'tezos.tzkt.operations'
Alias of index datasource in datasources section
List of indexer handlers
Aliases of contracts being indexed in contracts section
Types of transaction to fetch; defaults to ["transaction"]
Level to start indexing from
Level to stop indexing at
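An operations index combining the transaction and origination patterns described above (all aliases and the entrypoint are placeholders):

```yaml
indexes:
  token_ops:
    kind: tezos.tzkt.operations
    datasource: tzkt_mainnet
    contracts:
      - my_token
    types:
      - transaction
      - origination
    handlers:
      - callback: on_transfer
        pattern:
          - type: transaction
            destination: my_token
            entrypoint: transfer
      - callback: on_origination
        pattern:
          - type: origination
            originated_contract: my_token
```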
Unfiltered operation index config
always 'tezos.tzkt.operations_unfiltered'
Alias of index datasource in datasources section
Callback name
Types of transaction to fetch; defaults to ["transaction"]
Level to start indexing from
Level to stop indexing at
Token transfer handler config
Callback name
Filter by contract
Filter by token ID
Filter by recipient
Filter by sender
Token transfer index config
always 'tezos.tzkt.token_transfers'
Index datasource to use
Mapping of token transfer handlers
Level to start indexing from
Level to stop indexing at
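A token transfer index sketch (aliases and the token ID are placeholders; all filter fields are optional):

```yaml
indexes:
  transfers:
    kind: tezos.tzkt.token_transfers
    datasource: tzkt_mainnet
    handlers:
      - callback: on_transfer
        contract: my_token
        token_id: 0
```

The tezos.tzkt.token_balances index has the same shape, with handlers filtered by contract and token ID only.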
Token balance handler config
Callback name
Filter by contract
Filter by token ID
Token balance index config
always 'tezos.tzkt.token_balances'
Index datasource to use
Mapping of token balance handlers
Level to start indexing from
Level to stop indexing at
Subsquid event handler
Callback name
EVM contract
Event name
Provider of EVM contract ABIs. Datasource kind starts with 'abi.'
Subsquid events index config
always 'evm.subsquid.events'
Subsquid datasource
Event handlers
One or more evm.abi datasource(s) for the same network
Whether to use only node datasource
Level to start indexing from
Level to stop indexing and disable this index
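A Subsquid events index sketch, assuming handler fields contract and name as described above (aliases are placeholders):

```yaml
indexes:
  erc20_events:
    kind: evm.subsquid.events
    datasource: mainnet_subsquid
    handlers:
      - callback: on_transfer
        contract: eth_usdt
        name: Transfer         # event name from the contract ABI
```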
Subsquid transaction handler
Callback name
Transaction receiver
Method name
Transaction sender
Index that uses Subsquid Network as a datasource for transactions
always 'evm.subsquid.transactions'
Subsquid datasource
Transaction handlers
One or more ABI datasource(s)
Whether to use only node datasource
Level to start indexing from
Level to stop indexing at
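The transactions index mirrors the events index, filtering by sender, receiver, and method instead; a sketch with field names assumed from the descriptions above (aliases are placeholders):

```yaml
indexes:
  erc20_calls:
    kind: evm.subsquid.transactions
    datasource: mainnet_subsquid
    handlers:
      - callback: on_transfer_call
        to: eth_usdt           # transaction receiver
        method: transfer
```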
Index template config
Template alias in templates section
Values to be substituted in template (<key> -> value)
Level to start indexing from
Level to stop indexing at
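A template and its instantiation might look like this sketch; the <key> placeholders are replaced by the values mapping of each index that uses the template (all aliases are placeholders):

```yaml
templates:
  token_ops:
    kind: tezos.tzkt.operations
    datasource: <datasource>
    contracts:
      - <contract>
    handlers:
      - callback: on_transfer
        pattern:
          - type: transaction
            destination: <contract>
            entrypoint: transfer

indexes:
  usdt_ops:
    template: token_ops
    values:
      datasource: tzkt_mainnet
      contract: my_token
```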
Hook config
Callback name
Mapping of argument names and annotations (checked lazily when possible)
Wrap hook in a single database transaction
Job schedule config
Name of hook to run
Arguments to pass to the hook
Schedule with crontab syntax (* * * * *)
Schedule with interval in seconds
Run hook as a daemon (never stops)
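A hook with a scheduled job invoking it (names, schedule, and the argument are illustrative):

```yaml
hooks:
  calculate_stats:
    callback: calculate_stats   # hooks/calculate_stats.py
    atomic: true                # wrap in a single database transaction
    args:
      major: bool               # argument name -> type annotation

jobs:
  daily_stats:
    hook: calculate_stats
    crontab: "0 0 * * *"        # alternatives: an interval in seconds, or daemon mode
    args:
      major: true
```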
Config for the Hasura integration.
URL of the Hasura instance.
Admin secret of the Hasura instance.
Whether source should be added to Hasura if missing.
Hasura source for DipDup to configure, others will be left untouched.
Row limit for unauthenticated queries.
Whether to allow aggregations in unauthenticated queries.
Whether to ignore errors when applying Hasura metadata.
Whether to use camelCase instead of the default snake_case for field names.
Enable REST API both for autogenerated and custom queries.
HTTP connection tunables
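A Hasura section sketch under the field names suggested by the descriptions above (URL and secret are placeholders):

```yaml
hasura:
  url: http://hasura:8080
  admin_secret: ${HASURA_SECRET:-changeme}
  select_limit: 100          # row limit for unauthenticated queries
  allow_aggregations: false
  camel_case: true
  rest: true                 # enable REST endpoints
```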
Config for Sentry integration.
DSN of the Sentry instance
Environment; if not set, guessed from docker/ci/gha/local.
Server name; defaults to obfuscated hostname.
Release version; defaults to DipDup package version.
User ID; defaults to obfuscated package/environment.
Catch warning messages, increase verbosity.
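A minimal Sentry section; since every other field has a sensible default, only the DSN is usually set (placeholder shown):

```yaml
sentry:
  dsn: ${SENTRY_DSN:-''}
  environment: production
```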
Config for Prometheus integration.
Host to bind to
Port to bind to
Interval to update some metrics in seconds
Management API config
Host to bind to
Port to bind to
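The Prometheus and management API sections are plain host/port bindings; a sketch with illustrative values:

```yaml
prometheus:
  host: 0.0.0.0
  port: 8000
  update_interval: 1.0

api:
  host: 127.0.0.1
  port: 46339
```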
Advanced config
This section allows tuning system-wide options, either experimental or unsuitable for generic configurations.
Mapping of reindexing reasons and actions DipDup performs.
apscheduler scheduler config.
Do not start job scheduler until all indexes reach the realtime state.
Establish realtime connection and start collecting messages while sync is in progress (faster, but consumes more RAM).
Disable warning about running unstable or out-of-date DipDup version.
Number of levels to keep for rollback.
Overwrite precision if it's not guessed correctly based on project models.
Disable journaling and data integrity checks. Use only for testing.
Use different algorithm to match Tezos operations (dev only)
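An advanced section sketch exercising several of the options above (values are illustrative, not defaults; the reindex mapping pairs a reason with an action such as exception, wipe, or ignore):

```yaml
advanced:
  early_realtime: true        # collect realtime messages during sync
  postpone_jobs: true         # wait for realtime state before scheduling jobs
  rollback_depth: 2           # levels kept for rollback
  reindex:
    manual: wipe              # reason -> action mapping
    config_modified: exception
```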