1.) Validators subscribe to Arcadia through an endpoint to receive chunks from the builders. Once verified, validator connections are stored in the 'subscribedValidatorMap'.
2.) Rollups register for Shared Block Building through Arcadia once. A rollup remains a participant and is included in the list of participating rollups for the AOT auction until it opts out of Shared Block Building entirely, or opts out for a particular epoch.
3.) An auction runs every 12s (maintaining synchrony with Ethereum): during Ethereum epoch n-1, the block building rights for epoch n are auctioned. Bidders' behaviour would be static if the highest bid of the epoch-n auction were not revealed during epoch n-1, so every time a new highest bid is recorded in the auction, it is revealed publicly. The winner of the auction is determined, starts building blocks when their epoch begins, and sends them to Arcadia. The auction winner and results are sent to SEQ, the L1 we purpose-built for redistributing fees to stakers and providing a shared common ledger among rollups.
4.) There are two types of block chunks: Top of Block (ToB) and Rest of Block (RoB). ToB chunks are built by the Javelin superbuilder and contain cross-chain bundles; RoB chunks are built by MEV builders and contain transactions dedicated to a single chain. The block builder for epoch n builds blocks for the rollups (ToB chunks & RoB chunk) and propagates the chunks to Arcadia.
5.) Arcadia then simulates these chunks. Simulation removes any bad bundles and/or transactions.
6.) Once a chunk is valid, it is sent to the validators in the subscribedValidatorMap to be preconfirmed. Once the chunk is signed by a majority of the validators, it is sent back to Arcadia, which verifies the signatures and their weight.
7.) Once the preconfirmation signatures are verified, the chunk is added to a map of preconfirmed chunks, from which rollups can retrieve their chunk(s) via GetPayload requests.
# Start PostgreSQL & Redis individually:
docker run -d -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=postgres postgres
docker run -d -p 6379:6379 redis
# [optional] Start Memcached
docker run -d -p 11211:11211 memcached
# Or with docker-compose:
docker-compose up

Note: docker-compose also runs Adminer (a web frontend for Postgres) on http://localhost:8093/?username=postgres (db: postgres, username: postgres, password: postgres).
Now start the services:
# The housekeeper sets up the validators, and does various housekeeping
go run . housekeeper --network sepolia --db postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable
# Run APIs for sepolia (using a dummy BLS secret key)
go run . api --network sepolia --secret-key 0x607a11b45a7219cc61a3d9c5fd08c7eebd602a6a19a977f8d3771d5711a550f2 --db postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable
# Run Website for sepolia
go run . website --network sepolia --db postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable
# Query status
curl localhost:9062/eth/v1/builder/status
# Send test validator registrations
curl -X POST -H'Content-Encoding: gzip' localhost:9062/eth/v1/builder/validators --data-binary @testdata/valreg2.json.gz
# Delete previous registrations
redis-cli DEL boost-relay/sepolia:validators-registration boost-relay/sepolia:validators-registration-timestamp

Environment variables:

- ACTIVE_VALIDATOR_HOURS - number of hours to track active proposers in Redis (default: 3)
- API_MAX_HEADER_BYTES - HTTP maximum header bytes (default: 60_000)
- API_TIMEOUT_READ_MS - HTTP read timeout in milliseconds (default: 1_500)
- API_TIMEOUT_READHEADER_MS - HTTP read header timeout in milliseconds (default: 600)
- API_TIMEOUT_WRITE_MS - HTTP write timeout in milliseconds (default: 10_000)
- API_TIMEOUT_IDLE_MS - HTTP idle timeout in milliseconds (default: 3_000)
- API_SHUTDOWN_WAIT_SEC - how long to wait on shutdown before stopping the server, to allow draining of requests (default: 30)
- API_SHUTDOWN_STOP_SENDING_BIDS - whether the API should stop sending bids during shutdown (only useful in single-instance/testnet setups, default: false)
- BLOCKSIM_MAX_CONCURRENT - maximum number of concurrent block-sim requests (0 for no maximum, default: 4)
- BLOCKSIM_TIMEOUT_MS - builder block submission validation request timeout (default: 3000)
- BROADCAST_MODE - which broadcast mode to use for block publishing (default: consensus_and_equivocation)
- DB_DONT_APPLY_SCHEMA - disable applying the DB schema on startup (useful for connecting the data API to a read-only replica)
- DB_TABLE_PREFIX - prefix to use for DB tables (default: dev)
- GETPAYLOAD_RETRY_TIMEOUT_MS - getPayload retry timeout if the first try failed (default: 100)
- MEMCACHED_URIS - optional comma-separated list of memcached endpoints, typically used as secondary storage alongside Redis
- MEMCACHED_EXPIRY_SECONDS - item expiry timeout when using memcached (default: 45)
- MEMCACHED_CLIENT_TIMEOUT_MS - client timeout in milliseconds (default: 250)
- MEMCACHED_MAX_IDLE_CONNS - client max idle connections (default: 10)
- NUM_ACTIVE_VALIDATOR_PROCESSORS - proposer API - number of goroutines listening to the active validators channel
- NUM_VALIDATOR_REG_PROCESSORS - proposer API - number of goroutines listening to the validator registration channel
- NO_HEADER_USERAGENTS - proposer API - comma-separated list of user agents for which no bids should be returned
- ENABLE_BUILDER_CANCELLATIONS - whether to enable block builder cancellations
- REDIS_URI - main Redis URI (default: localhost:6379)
- REDIS_READONLY_URI - optional, a secondary Redis instance for heavy read operations
- DISABLE_PAYLOAD_DATABASE_STORAGE - builder API - disable storing execution payloads in the database (e.g. when using memcached for data availability redundancy)
- DISABLE_LOWPRIO_BUILDERS - reject block submissions by low-prio builders
- FORCE_GET_HEADER_204 - force a 204 as the getHeader response
- ENABLE_IGNORABLE_VALIDATION_ERRORS - enable ignorable validation errors
- USE_V2_PUBLISH_BLOCK_ENDPOINT - use the v2 publish block endpoint on the beacon node
- RUN_DB_TESTS - when set to "1", enables integration tests with Postgres using the endpoint specified by the environment variable TEST_DB_DSN
- RUN_INTEGRATION_TESTS - when set to "1", enables integration tests, currently used for testing memcached using the comma-separated list of endpoints specified by MEMCACHED_URIS
- TEST_DB_DSN - specifies the connection string using a Data Source Name (DSN) for Postgres (default: postgres://postgres:postgres@localhost:5432/postgres?sslmode=disable)
- REDIS_CONNECTION_POOL_SIZE, REDIS_MIN_IDLE_CONNECTIONS, REDIS_READ_TIMEOUT_SEC, REDIS_POOL_TIMEOUT_SEC, REDIS_WRITE_TIMEOUT_SEC (see also [the code here]
By default, the execution payloads for all block submissions are stored in Redis and also in the Postgres database,
to provide redundant data availability for getPayload responses. But the database table is not pruned automatically,
because rebuilding the indexes takes a lot of resources (a better option is using TRUNCATE).
Storing all the payloads in the database can lead to terabytes of data in this particular table. It is now also possible to use memcached as a second data availability layer. Using memcached is optional and disabled by default.
To enable memcached, you just need to supply the memcached URIs, either via environment variable (e.g.
MEMCACHED_URIS=localhost:11211) or through the command-line flag (--memcached-uris).
You can disable storing the execution payloads in the database with this environment variable:
DISABLE_PAYLOAD_DATABASE_STORAGE=1.
You can use the javelin project to validate block builder submissions: https://github.com/AnomalyFi/javelin_rpc
Here's an example systemd config:
/etc/systemd/system/geth.service
[Unit]
Description=geth
Wants=network-online.target
After=network-online.target
[Service]
User=ubuntu
Group=ubuntu
Environment=HOME=/home/ubuntu
Type=simple
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=90
Restart=on-failure
RestartSec=10s
ExecStart=/home/ubuntu/builder/build/bin/geth \
--syncmode=snap \
--datadir /var/lib/goethereum \
--metrics \
--metrics.expensive \
--http \
--http.api="engine,eth,web3,net,debug,flashbots" \
--http.corsdomain "*" \
--http.addr "0.0.0.0" \
--http.port 8545 \
--http.vhosts '*' \
--ws \
--ws.api="engine,eth,web3,net,debug" \
--ws.addr 0.0.0.0 \
--ws.port 8546 \
--ws.origins '*' \
--graphql \
--graphql.corsdomain '*' \
--graphql.vhosts '*' \
--authrpc.addr="0.0.0.0" \
--authrpc.jwtsecret=/var/lib/goethereum/jwtsecret \
--authrpc.vhosts '*' \
--cache=8192
[Install]
WantedBy=multi-user.target

Sending blocks to the validation node:
- The built-in blocksim-ratelimiter is a simple example queue implementation.
- By default, BLOCKSIM_MAX_CONCURRENT is set to 4, which allows 4 concurrent block simulations per API node.
- For production use, use the prio-load-balancer project for a single priority queue,
and disable the internal concurrency limit (set BLOCKSIM_MAX_CONCURRENT to 0).
Block builders can opt into cancellations by submitting blocks to /relay/v1/builder/blocks?cancellations=1. This may incur a performance penalty (e.g. validation of submissions taking significantly longer). See also flashbots/mev-boost-relay#348.


