# DipDup

DipDup project configuration file
| Type | object |
|---|---|
| File match | `**/dipdup.yaml`, `**/dipdup.yml`, `**/dipdup.*.yaml`, `**/dipdup.*.yml` |
| Schema URL | https://catalog.lintel.tools/schemas/schemastore/dipdup/latest.json |
| Source | https://raw.githubusercontent.com/dipdup-io/dipdup/next/schemas/dipdup-3.0.json |
Validate with Lintel:

```shell
npx @lintel/lintel check
```
## Properties
This section allows users to tune some system-wide options, either experimental or unsuitable for generic configurations.
Use a different algorithm to match Tezos operations (dev only)
Overwrite precision if it's not guessed correctly based on project models.
Establish realtime connection and start collecting messages while sync is in progress (faster, but consumes more RAM).
Do not start job scheduler until all indexes reach the realtime state.
Mapping of reindexing reasons and actions DipDup performs.
Number of blocks to keep for rollback (affects all datasources)
apscheduler scheduler config.
Disable journaling and data integrity checks. Use only for testing.
Mapping of watchdog triggers and actions DipDup performs.
Management API config
Mapping of contract aliases and contract configs
User-defined configuration to use in callbacks
Database config
Mapping of datasource aliases and datasource configs
Hasura integration config
Mapping of hook aliases and hook configs
Mapping of index aliases and index configs
Mapping of job aliases and job configs
Modify logging verbosity
MCP server config
Name of indexer's Python package, existing or not
Prometheus integration config
Mapping of runtime aliases and runtime configs
Sentry integration config
Mapping of template aliases and index templates
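Taken together, a minimal project file wires a package name, a database, a datasource, and at least one index. A sketch, assuming typical DipDup field names; the package name, address, URLs, and aliases are placeholders:

```yaml
spec_version: 3.0        # assumed required by this schema version
package: demo_indexer    # placeholder package name

database:
  kind: sqlite
  path: demo.sqlite      # leave default for an in-memory database

contracts:
  token:
    kind: tezos
    address: KT1...      # placeholder address

datasources:
  tzkt:
    kind: tezos.tzkt
    url: https://api.tzkt.io

indexes:
  token_operations:
    kind: tezos.operations
    datasources:
      - tzkt
    contracts:
      - token            # alias from the contracts section
    handlers:
      - callback: on_transfer
        pattern:
          - type: transaction
            destination: token
            entrypoint: transfer
```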
## Definitions
This section allows users to tune some system-wide options, either experimental or unsuitable for generic configurations.
Use a different algorithm to match Tezos operations (dev only)
Overwrite precision if it's not guessed correctly based on project models.
Establish realtime connection and start collecting messages while sync is in progress (faster, but consumes more RAM).
Do not start job scheduler until all indexes reach the realtime state.
Mapping of reindexing reasons and actions DipDup performs.
Number of blocks to keep for rollback (affects all datasources)
apscheduler scheduler config.
Disable journaling and data integrity checks. Use only for testing.
Mapping of watchdog triggers and actions DipDup performs.
Management API config
Host to bind to
Port to bind to
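The system-wide options above live under the `advanced` section, and the management API binds a host and port. A sketch with option names assumed from common DipDup configs:

```yaml
advanced:
  early_realtime: true      # sync faster at the cost of more RAM
  postpone_jobs: true       # wait for realtime state before scheduling jobs
  rollback_depth: 2         # blocks kept for rollback, all datasources
  reindex:                  # reason -> action mapping
    manual: wipe
    config_modified: ignore
api:
  host: 127.0.0.1
  port: 46339               # hypothetical port
```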
Coinbase datasource config
API key
HTTP client configuration
always 'coinbase'
API passphrase
API secret key
Blockvision datasource config
API key
HTTP client configuration
always 'evm.blockvision'
EVM contract config
Contract ABI
Contract address
Always evm
Alias for the contract script
Etherscan datasource config
API key
HTTP client configuration
always 'evm.etherscan'
Subsquid event handler
Callback name
EVM contract
Event name
Index that uses Subsquid Network as a datasource for events
evm datasources to use
Event handlers
Level to start indexing from
Always 'evm.events'
Level to stop indexing and disable this index
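An `evm.events` index points at one or more EVM datasources and maps ABI event names to callbacks. A sketch; kind names, the URL, and aliases are assumptions:

```yaml
datasources:
  subsquid:
    kind: evm.subsquid     # assumed kind name for the Subsquid datasource
    url: https://v2.archive.subsquid.io/network/ethereum-mainnet

indexes:
  erc20_events:
    kind: evm.events
    datasources:
      - subsquid
    handlers:
      - callback: on_transfer   # handlers/on_transfer.py in the package
        contract: usdt          # alias from the contracts section
        name: Transfer          # ABI event name
    first_level: 4634748        # hypothetical start level
```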
EVM node datasource config
HTTP client configuration
Always 'evm.node'
Number of blocks to store in the database for rollback
EVM node WebSocket URL
Sourcify datasource config
Chain ID
API key
HTTP client configuration
always 'evm.sourcify'
Subsquid datasource config
Subsquid transaction handler
Callback name
Transaction sender
Method name
Method signature
Transaction receiver
Index that uses Subsquid Network as a datasource for transactions
evm datasources to use
Transaction handlers
Level to start indexing from
always 'evm.transactions'
Level to stop indexing at
Config for the Hasura integration.
Admin secret of the Hasura instance.
Whether to allow aggregations in unauthenticated queries.
Whether to ignore errors when applying Hasura metadata.
Whether to use camelCase instead of default pascal_case for the field names.
Whether source should be added to Hasura if missing.
List of table/view names to make private.
Whether to make internal tables (prefixed with "dipdup") private.
HTTP connection tunables
Enable REST API both for autogenerated and custom queries.
Row limit for unauthenticated queries.
Hasura source for DipDup to configure, others will be left untouched.
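The Hasura options above translate to a short config block. A sketch; field names are assumed from typical DipDup configs, and the URL and secret are placeholders:

```yaml
hasura:
  url: http://hasura:8080
  admin_secret: ${HASURA_SECRET:-changeme}
  camel_case: true          # camelCase field names instead of the default
  allow_aggregations: false # restrict unauthenticated aggregations
  select_limit: 100         # row limit for unauthenticated queries
  rest: true                # enable REST API for queries
```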
Hook config
Callback name
Mapping of argument names and annotations (checked lazily when possible)
Wrap hook in a single database transaction
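A hook maps an alias to a callback, optional typed arguments, and an atomicity flag. A sketch; the hook name and argument annotations are illustrative:

```yaml
hooks:
  calculate_stats:
    callback: calculate_stats   # hooks/calculate_stats.py in the package
    atomic: false               # wrap in a single DB transaction if true
    args:                       # argument name -> type annotation
      major: bool
      depth: int
```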
Advanced configuration of HTTP client
Alias for this HTTP client (dev only)
Number of items fetched in a single paginated request (when applicable)
Number of simultaneous connections
Connection timeout in seconds
Interval between polling requests in seconds (when applicable)
Time period for rate limiting in seconds
Number of requests per period ("drops" in leaky bucket)
Sleep time between requests when rate limit is reached
Use cached HTTP responses instead of making real requests (dev only)
Use cached HTTP responses instead of making real requests (dev only)
Request timeout in seconds
Number of retries after request failed before giving up
Multiplier for sleep time between retries
Sleep time between retries
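These HTTP tunables nest under a datasource's `http` key. A sketch, assuming common DipDup option names; values are illustrative:

```yaml
datasources:
  tzkt:
    kind: tezos.tzkt
    url: https://api.tzkt.io
    http:
      retry_count: 3          # retries before giving up
      retry_sleep: 1          # seconds between retries
      retry_multiplier: 2     # backoff multiplier
      ratelimit_rate: 100     # requests ("drops") per period
      ratelimit_period: 60    # rate-limiting period in seconds
      connection_timeout: 60
      batch_size: 1000        # items per paginated request
```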
Generic HTTP datasource config
Index template config
Template alias in templates section
Values to be substituted in template (<key> -> value)
Level to start indexing from
always 'template'
Level to stop indexing at
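A template is an index config with `<key>` placeholders; an index instantiates it by naming the template and supplying values. A sketch; aliases and the substituted key are illustrative:

```yaml
templates:
  transfers:
    kind: tezos.operations
    datasources:
      - tzkt
    contracts:
      - <token>              # placeholder substituted per index
    handlers:
      - callback: on_transfer
        pattern:
          - type: transaction
            destination: <token>
            entrypoint: transfer

indexes:
  foo_transfers:
    template: transfers
    values:
      token: foo_token       # alias from the contracts section
```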
IPFS datasource config
Job schedule config
Name of hook to run
Arguments to pass to the hook
Schedule with crontab syntax (* * * * *)
Run hook as a daemon (never stops)
Schedule with interval in seconds
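A job schedules a hook either with crontab syntax or a fixed interval (presumably mutually exclusive, with `daemon` as a third mode). A sketch; hook aliases and arguments are illustrative:

```yaml
jobs:
  daily_stats:
    hook: calculate_stats   # alias from the hooks section
    crontab: "0 0 * * *"    # run at midnight
    args:
      major: true
  frequent_stats:
    hook: calculate_stats
    interval: 3600          # seconds
```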
Config for MCP server
URL of the management API
Whether to expose resources as tools for clients that don't support MCP resources
Host to bind to
Port to bind to
Postgres database connection config
Host
Connection timeout
Database name
List of tables to preserve during reindexing
always 'postgres'
Password
Port
Schema name
User
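The Postgres options above combine into a connection block. A sketch; credentials are placeholders, and `schema_name`/`immune_tables` are assumed field names:

```yaml
database:
  kind: postgres
  host: db
  port: 5432
  user: dipdup
  password: ${POSTGRES_PASSWORD:-changeme}
  database: dipdup
  schema_name: public
  immune_tables:            # preserved during reindexing
    - ipfs_cache
```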
Config for Prometheus integration.
Host to bind to
Port to bind to
Interval to update some metrics in seconds
Action that should be performed on reindexing:

- `exception`: Raise ReindexingRequiredError exception.
- `wipe`: Wipe the database and reindex from scratch (WARNING: this action is irreversible; all indexed data will be lost!).
- `ignore`: Ignore the reindexing cause and continue.
Reason that caused reindexing:

- `manual`: Manual reindexing.
- `migration`: Migration of the database schema.
- `rollback`: Rollback that couldn't be handled automatically.
- `config_modified`: Index config was modified.
- `schema_modified`: Project models or database schema were modified.
Config for Sentry integration.
Catch warning messages, increase verbosity.
DSN of the Sentry instance
Environment; if not set, guessed from docker/ci/gha/local.
Release version; defaults to DipDup package version.
Server name; defaults to obfuscated hostname.
User ID; defaults to obfuscated package/environment.
Whether to skip indexing big map history and use only current state:

- `never`: Always index big map historical updates.
- `once`: Skip history once after reindexing; process updates as usual on the next resync.
- `always`: Always skip big map history.
SQLite connection config
List of tables to preserve during reindexing
always 'sqlite'
Path to .sqlite file, leave default for in-memory database (:memory:)
Starknet contract config
Contract ABI
Contract address
Always starknet
Alias for the contract script
Subsquid event handler
Callback name
Starknet contract
Event name
Starknet events index config
Aliases of index datasources in datasources section
Event handlers
Level to start indexing from
Always 'starknet.events'
Level to stop indexing at
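A `starknet.events` index looks much like its EVM counterpart: datasource aliases plus handlers keyed by contract and event name. A sketch; the address, aliases, and callback are illustrative:

```yaml
contracts:
  eth_token:
    kind: starknet
    address: '0x049d...'    # truncated placeholder address

indexes:
  starknet_events:
    kind: starknet.events
    datasources:
      - subsquid            # alias from the datasources section
    handlers:
      - callback: on_transfer
        contract: eth_token
        name: Transfer
```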
Starknet node datasource config
Whether this datasource can be used for fetching block headers
HTTP client configuration
Always 'starknet.node'
Number of blocks to store in the database for rollback
Starknet node WebSocket URL
Subsquid datasource config
Subsquid event handler
Callback name
Event name (pallet.event)
Substrate events index config
substrate datasources to use
Event handlers
Substrate runtime
Level to start indexing from
Always 'substrate.events'
Level to stop indexing and disable this index
Substrate node datasource config
HTTP client configuration
Always 'substrate.node'
Substrate node WebSocket URL
Substrate runtime config
Always 'substrate'
Path to type registry or its alias
Subscan datasource config
API key
HTTP client configuration
always 'substrate.subscan'
Subsquid datasource config
Big map handler config
Callback name
Contract to fetch big map from
Path to big map (alphanumeric string with dots)
Big map index config
Tezos datasources to use
Mapping of big map diff handlers
Level to start indexing from
always 'tezos.big_maps'
Level to stop indexing at
Whether to skip indexing big map history and use only current state:

- `never`: Always index big map historical updates.
- `once`: Skip history once after reindexing; process updates as usual on the next resync.
- `always`: Always skip big map history.
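A `tezos.big_maps` index filters diffs by contract and big map path, with the history mode above. A sketch; aliases, callback, and path are illustrative:

```yaml
indexes:
  token_metadata:
    kind: tezos.big_maps
    datasources:
      - tzkt
    skip_history: never             # never | once | always
    handlers:
      - callback: on_update_metadata
        contract: registry          # alias from the contracts section
        path: store.token_metadata  # dotted path to the big map
```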
Tezos contract config.
Contract address
Contract code hash or address to fetch it from
Always tezos
Alias for the contract script
Event handler config
Callback name
Contract which emits event
Event tag
Event index config
Tezos datasources to use
Event handlers
First block level to index
always 'tezos.events'
Last block level to index
Unknown event handler config
Callback name
Contract which emits event
Head block index config
Callback name
tezos datasources to use
always 'tezos.head'
Type of blockchain operation. One of:

- `transaction`
- `origination`
- `migration`
- `sr_execute`
- `sr_cement`
Operation handler config
Callback name
Filters to match operation groups
Origination handler pattern config
Alias for operation (helps to avoid duplicates)
Whether the operation can be missing in the operation group
Match origination of exact contract
Match operations by source contract alias
Match operations by storage only or by the whole code
always 'origination'
Operation handler pattern config
Alias for operation (helps to avoid duplicates)
Match operations by destination contract alias
Whether the operation can be missing in the operation group
Match operations by source contract alias
always 'sr_cement'
Operation handler pattern config
Alias for operation (helps to avoid duplicates)
Match operations by destination contract alias
Whether the operation can be missing in the operation group
Match operations by source contract alias
always 'sr_execute'
Transaction handler pattern config
Alias for operation (helps to avoid duplicates)
Match operations by destination contract alias
Match operations by contract entrypoint
Whether the operation can be missing in the operation group
Match operations by source contract alias
always 'transaction'
Operation index config
tezos datasources to use
List of indexer handlers
Aliases of contracts being indexed in contracts section
Level to start indexing from
always 'tezos.operations'
Level to stop indexing at
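A `tezos.operations` index combines contract filters with handler patterns matching operations within a group. A sketch; aliases, entrypoint, and callback are illustrative:

```yaml
indexes:
  dex_operations:
    kind: tezos.operations
    datasources:
      - tzkt
    contracts:
      - dex                       # alias from the contracts section
    handlers:
      - callback: on_swap
        pattern:
          - type: transaction
            destination: dex
            entrypoint: swap
          - type: transaction
            source: dex
            optional: true        # may be missing in the operation group
```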
Unfiltered operation index config
Callback name
tezos datasources to use
Level to start indexing from
always 'tezos.operations_unfiltered'
Level to stop indexing at
Token balance handler config
Callback name
Filter by contract
Filter by token ID
Token balance index config
tezos datasources to use
Mapping of token balance handlers
Level to start indexing from
always 'tezos.token_balances'
Level to stop indexing at
Token transfer handler config
Callback name
Filter by contract
Filter by sender
Filter by recipient
Filter by token ID
Token transfer index config
tezos datasources to use
Mapping of token transfer handlers
Level to start indexing from
always 'tezos.token_transfers'
Level to stop indexing at
TzKT datasource config
Number of levels to keep in FIFO buffer before processing
HTTP client configuration
always 'tezos.tzkt'
Whether to merge realtime subscriptions
Number of blocks to keep in the database to handle reorgs
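The TzKT tunables above nest under the datasource entry. A sketch, assuming common DipDup option names:

```yaml
datasources:
  tzkt:
    kind: tezos.tzkt
    url: https://api.tzkt.io
    buffer_size: 0              # levels kept in FIFO buffer before processing
    merge_subscriptions: false  # merge realtime subscriptions
    rollback_depth: 2           # blocks kept in the database to handle reorgs
```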
DipDup Metadata datasource config
Network name, e.g. mainnet or ghostnet
HTTP client configuration
always 'tzip_metadata'
Config for the watchdog
Action to perform when watchdog timeout is reached
Watchdog timeout in seconds