Blobster Book

Documentation for Blobster

Blobster is a blazing fast and cheap Ethereum Blob storage system.

Currently, the system can be run locally against Reth's development settings and ships with a mock Consensus Layer client that sends simulated blobs to the node. We will move to Holesky once the node is done syncing (soon!).

Why?

The motivation behind creating a network that offers longer-term storage for blob data is twofold: to build tooling and support for the underexplored world of blobs, and to test Danksharding techniques in a real-world setting. We think that by taking advantage of the KZG commitments blobs already carry, we can streamline a SNARK proving system for storage nodes. By also taking advantage of erasure coding and some mixing techniques, we believe we can achieve lower storage requirements than replication while still maintaining significant economic security. These ideas are a work in progress and may change or be proven incorrect, but we think we have enough of a base to field outside opinions.
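
As a purely illustrative arithmetic example (the parameters here are assumptions, not the project's actual configuration): surviving the loss of any two nodes with plain replication means keeping three full copies of a blob, a 3x storage overhead, whereas a Reed-Solomon encoding with 4 data and 2 parity shards also survives the loss of any two shards at only 1.5x overhead.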

How?

We run a Reth node with an ExEx that detects blobs, queries the consensus layer, erasure encodes the blob data, and distributes the encoded chunks across a series of storage nodes for cheap and efficient long-term storage of blobs.

Curious about blobs? Check out this guide from Ethereum and the EIP-4844 website.

Roadmap

Currently the system can be run locally, and we are working to bring it live on Holesky after we complete the following steps.

High level goals:

  1. Finish syncing Holesky (WIP)
  2. Implement retrieval of blobs
  3. Create a SNARK proving system for storage nodes to prove they are storing data, with accompanying Merkle-proof state-root tracking
  4. Host a network that is open to outside storage node solutions
  5. Implement a Global Storage Node Register

Quick Start

  1. Clone the Repo

git clone https://github.com/align_network/blobster

  2. Run Reth and the ExEx

cargo run --bin remote-exex --release -- node --dev

  3. Run the Mock Consensus Layer

Ensure you have the SQLite development libraries installed: sudo apt-get install libsqlite3-dev

cargo run --bin mock-cl --release

  4. Send 10 random data blobs

cargo run --bin update_blocks --release

Storage Nodes

Storage nodes are currently set up with a maximum of three.

  1. Run a Storage node

cargo run --release --bin storage-node -- --node-id=1 --storage-dir=storage/node1
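
For illustration, a full local setup would presumably run all three nodes, each in its own terminal; the node IDs and storage directories for nodes 2 and 3 below are assumptions extrapolated from node 1:

cargo run --release --bin storage-node -- --node-id=1 --storage-dir=storage/node1
cargo run --release --bin storage-node -- --node-id=2 --storage-dir=storage/node2
cargo run --release --bin storage-node -- --node-id=3 --storage-dir=storage/node3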

Code

├── mock_cl
│   ├── bin
│   │   ├── mock_cl.rs
│   │   └── update_blocks.rs
│   ├── blobs.db
│   ├── Cargo.toml
│   ├── example_returned_blob.json
│   └── src
│       ├── consensus_storage.rs
│       └── lib.rs
├── remote
│   ├── bin
│   │   ├── exex.rs
│   │   ├── read.rs
│   │   └── s_node.rs
│   ├── build.rs
│   ├── Cargo.toml
│   ├── proto
│   │   ├── exex.proto
│   │   ├── exex.rs
│   │   └── mod.rs
│   └── src
│       ├── blobs.rs
│       ├── codec.rs
│       ├── example_blob_sidecar.json
│       ├── lib.rs
│       ├── sequencer
│       │   ├── sequencer.rs
│       │   └── utils.rs
│       └── sequencer.rs

Mock Consensus Layer

Folder: mock_cl

Ensure you have the SQLite development libraries installed: sudo apt-get install libsqlite3-dev

Endpoints:

/eth/v1/beacon/blob_sidecars/<block header> - returns blob sidecars for a block header that has a tx with a blob

/eth/v1/beacon/all_blobs - lists all the blobs

/eth/v1/beacon/delete_all_blobs - clears out the db

To Run: cargo run --bin mock-cl --release
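
Once the server is running, the endpoints can be exercised with curl. The port below is a placeholder for whatever address mock-cl binds to (an assumption; check the crate's configuration):

curl http://localhost:<PORT>/eth/v1/beacon/all_blobs
curl http://localhost:<PORT>/eth/v1/beacon/blob_sidecars/<block header>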

This crate is tasked with mimicking a Consensus Layer to aid in development. It is very bare bones, but it should allow you to test with a Reth ExEx in dev mode, i.e. without having to sync a full node. The update_blocks binary saves blobs to a SQLite db, and when queried this server responds with the blob data, commitment, and proof. It requires one blob per tx. We recommend following the update_blocks function to mimic sending.

ExEx

Folder: remote

Dev mode: cargo run --bin remote-exex --release -- node --dev

The Reth ExEx is adapted from Reth's Remote ExEx example.

Process:

  1. A block is found with a blob sidecar
  2. The blob data is Reed-Solomon encoded into chunks (see the sketch after this list)
  3. Chunks are randomly sent to one of three nodes (for testing purposes)
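
To make step 2 concrete, here is a minimal sketch of splitting a blob into shards with the reed-solomon-erasure crate. The 4-data/2-parity split and the encode_blob helper are illustrative assumptions; the actual parameters live in the ExEx code.

use reed_solomon_erasure::galois_8::ReedSolomon;

// Hypothetical helper: split a blob into 4 data shards + 2 parity shards.
// Any 4 of the resulting 6 shards can reconstruct the original data.
fn encode_blob(blob: &[u8]) -> Result<Vec<Vec<u8>>, reed_solomon_erasure::Error> {
    assert!(!blob.is_empty());
    let (data, parity) = (4usize, 2usize);
    let rs = ReedSolomon::new(data, parity)?;

    // Split into `data` equally sized shards, zero-padding the tail.
    let shard_len = (blob.len() + data - 1) / data;
    let mut shards: Vec<Vec<u8>> = blob
        .chunks(shard_len)
        .map(|c| {
            let mut shard = c.to_vec();
            shard.resize(shard_len, 0);
            shard
        })
        .collect();
    // Make room for the parity shards (and any missing data shards).
    shards.resize(data + parity, vec![0u8; shard_len]);

    // Fill the parity shards in place.
    rs.encode(&mut shards)?;
    Ok(shards)
}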

Roadmap:

  1. Explore saving blob data commitments in a Merkle tree so they can be easily verified once we implement ZK proofs (a purely illustrative sketch follows)
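
For a feel of what that could look like, here is a minimal sketch that folds commitment hashes into a single root using SHA-256 via the sha2 crate; the hash choice, tree shape, and merkle_root helper are all assumptions rather than the project's design:

use sha2::{Digest, Sha256};

// Hash two child nodes into their parent.
fn hash_pair(a: &[u8; 32], b: &[u8; 32]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(a);
    hasher.update(b);
    hasher.finalize().into()
}

// Fold a layer of leaf hashes (e.g. hashed KZG commitments) into a root,
// duplicating the last node on odd-sized layers.
fn merkle_root(mut layer: Vec<[u8; 32]>) -> [u8; 32] {
    assert!(!layer.is_empty());
    while layer.len() > 1 {
        if layer.len() % 2 == 1 {
            layer.push(*layer.last().unwrap());
        }
        layer = layer.chunks(2).map(|p| hash_pair(&p[0], &p[1])).collect();
    }
    layer[0]
}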

Holesky

To sync with Holesky:

Holesky Reth (EL Node):

export ETHERSCAN_API_KEY=<YOUR-KEY> && cargo run --bin remote-exex --release -- node --chain holesky --debug.etherscan --datadir /<YOUR_DIR>/holesky/reth/ --authrpc.jwtsecret /mnt/<YOUR_DIR>/holesky/jwt.hex --http --http.api all

Holesky Lighthouse (CL Node):

lighthouse bn --network holesky --checkpoint-sync-url https://holesky.beaconstate.ethstaker.cc/ --execution-endpoint http://localhost:8551 --execution-jwt /mnt/<YOUR_DIR>/holesky/jwt.hex --datadir /mnt/<YOUR_DIR>/holesky/lighthouse/ --disable-deposit-contract-sync
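
Both clients must share the same JWT secret. If you do not already have one, a standard way to generate it is:

openssl rand -hex 32 > /mnt/<YOUR_DIR>/holesky/jwt.hex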
