Create stream parameters

Creating a stream takes only a few minutes. To create a stream:

  1. Click Create stream to display the first Create stream page
  2. Set stream source parameters on the Source page
  3. Choose the exact dataset and apply transformations on the Dataset & Filter page
  4. Specify where to stream the data on the Destination page

See the detailed specifications for each page below.

Source

[Screenshot: Create stream window]

The following list describes the fields on the Create stream - Source page:

Name: A label to identify the stream. This field is randomly pre-populated, but you can enter a new name.
Network selection: The blockchain network from which Project Zero retrieves data.
Stream start: The first block of the stream. The following options are available:
  • Latest block
  • Block #
Stream end: The last block of the stream. The following options are available:
  • Never
  • Block #
  • Latest block

Dataset & filter

[Screenshot: Create stream window]

The following list describes the fields on the Create stream - Dataset & filter page:

Dataset type: The dataset that Project Zero retrieves. For more information, see Dataset specifications.
Batch size: The number of blocks in a batch.
Latest block delay: A lag from real time, specified in blocks. This delay helps ensure data consistency and reliability by allowing time for any potential changes or reorganizations in the blockchain to stabilize before processing. A higher delay value may result in slightly delayed data delivery but can help mitigate the impact of chain reorganizations on data accuracy.
Block functions: If turned on, specifies a function to filter the streamed data. For more information, see Example block functions; a purely illustrative sketch follows this list.
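The exact signature and runtime of block functions are defined by Project Zero and documented in Example block functions. The following is only a hypothetical sketch, assuming a function that receives one decoded block and returns the data to forward (or nothing to drop the block); none of these names come from the product.

```python
# Hypothetical block function sketch. The function name, the block layout,
# and the drop-on-None behavior are assumptions for illustration only;
# see "Example block functions" for the actual syntax.
def main(block):
    """Forward only blocks containing a transaction worth more than 100 ETH."""
    threshold_wei = 100 * 10**18  # transaction values assumed to be hex-encoded wei
    matching = [
        tx for tx in block.get("transactions", [])
        if int(tx.get("value", "0x0"), 16) > threshold_wei
    ]
    # Assumption: returning None drops the block from the stream,
    # while returning a value delivers it to the destination.
    return {"number": block.get("number"), "transactions": matching} if matching else None
```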

Destination

[Screenshot: Create stream window]

The following list describes the fields on the Create stream - Destination page:

Type: The destination where Project Zero delivers the streamed data. The following options are currently available:
  • Webhook - facilitates real-time data delivery to external systems or services.
  • S3 - enables you to store streamed data directly in an Amazon Simple Storage Service (S3) bucket.
  • Kafka - facilitates real-time data streaming to Apache Kafka, a distributed event streaming platform.

Stream to a webhook

[Screenshot: Webhook stream fields]

The following list describes the webhook destination fields on the Create stream - Destination page:

Reorg handling: Specifies how Project Zero handles any blockchain reorganizations encountered as part of the stream. The following options are available:
  • None - Project Zero streams the data as normal.
  • Resend - the stream resends any reorganized blocks, ensuring that the delivered data remains consistent with the latest blockchain state.
  • Rollback and resend - Project Zero performs a rollback on the data to the reorganization root before resending all of the reorganized blocks from that point onward.
For more information about blockchain reorganization and selecting a reorg method, see Blockchain Reorganisation.
Destination URL: Defines the URL of the webhook where you want Project Zero to deliver the data.
Compression: Specifies whether Project Zero compresses data with Gzip during transmission to reduce data size and optimize bandwidth usage.
Request timeout (Optional): Defines the duration in seconds after which the webhook request times out if no response is received.
Stop stream after (Optional): Defines the number of failed retries after which the stream stops.
Wait between retries (Optional): Defines the delay between retry attempts for failed webhook requests.
Custom headers: Specifies whether Project Zero adds custom headers to the webhook request for authentication or additional metadata. If turned on, the page displays Key and Value fields to define one or more header values.
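To receive a webhook stream, you need an HTTP endpoint that accepts POST requests, decompresses the Gzip payload when Compression is turned on, and checks any custom headers you configured. The following is a minimal receiver sketch using Flask; the /stream path, the X-My-Auth header, and the payload layout are illustrative assumptions, not part of Project Zero's contract.

```python
# Minimal webhook receiver sketch (Flask). The /stream path, the X-My-Auth
# header, and the payload layout are assumptions for illustration.
import gzip
import json

from flask import Flask, abort, request

app = Flask(__name__)
seen = {}  # block number -> block hash, to spot resent (reorged) blocks

@app.route("/stream", methods=["POST"])
def receive():
    # Reject requests missing the custom header configured for the stream.
    if request.headers.get("X-My-Auth") != "my-shared-secret":
        abort(401)

    body = request.get_data()
    # With Compression turned on, the payload arrives Gzip-compressed.
    if request.headers.get("Content-Encoding") == "gzip":
        body = gzip.decompress(body)

    payload = json.loads(body)
    for block in payload if isinstance(payload, list) else [payload]:
        number, block_hash = block.get("number"), block.get("hash")
        # Under the Resend reorg option, a block number can arrive again
        # with a different hash; keep only the latest version.
        if seen.get(number) not in (None, block_hash):
            print(f"Reorged block {number}: {seen[number]} -> {block_hash}")
        seen[number] = block_hash

    return "", 200  # assumption: a 2xx response acknowledges delivery

if __name__ == "__main__":
    app.run(port=8000)
```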

Stream to an S3 bucket

[Screenshot: S3 stream fields]

The following list describes the Amazon Simple Storage Service (S3) destination fields on the Create stream - Destination page:

Reorg handling: Specifies how Project Zero handles any blockchain reorganizations encountered as part of the stream. The following options are available:
  • None - Project Zero streams the data as normal.
  • Resend - the stream resends any reorganized blocks, ensuring that the delivered data remains consistent with the latest blockchain state.
For more information about blockchain reorganization and selecting a reorg method, see Blockchain Reorganisation.
Bucket name: Specifies the name of the S3 bucket where Project Zero stores the streamed data.
Endpoint: Specifies the endpoint URL of the S3 bucket.
Prefix: Specifies a prefix to organize and categorize stored data within the S3 bucket.
File type: Specifies the file format for the stored data. The following options are currently available:
  • JSON
File compression: Specifies whether Project Zero compresses data with Gzip during transmission to reduce data size and optimize bandwidth usage.
Access key ID: Specifies the access key ID associated with your AWS (Amazon Web Services) account for authentication.
Secret access key: Specifies the secret access key corresponding to the specified access key ID for authentication.
Stop stream after (Optional): Defines the number of failed retries after which the stream stops.
Wait between retries (Optional): Defines the delay between retry attempts for failed requests.
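Once objects land in the bucket, you can read them back with any S3 client. The following is a minimal sketch using boto3; the bucket name, prefix, and one-JSON-file-per-batch layout are illustrative assumptions.

```python
# Sketch: list and read streamed objects from the configured bucket.
# The bucket name, prefix, and .gz suffix convention are assumptions.
import gzip
import json

import boto3

# boto3 picks up the access key ID and secret access key from your
# environment or AWS configuration.
s3 = boto3.client("s3")

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-stream-bucket", Prefix="project-zero/"):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket="my-stream-bucket", Key=obj["Key"])["Body"].read()
        # With File compression turned on, objects are Gzip-compressed.
        if obj["Key"].endswith(".gz"):
            body = gzip.decompress(body)
        batch = json.loads(body)
        count = len(batch) if isinstance(batch, list) else 1
        print(f"{obj['Key']}: {count} record(s)")
```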

Stream to Kafka

[Screenshot: Kafka stream fields]

The following list describes the Kafka destination fields on the Create stream - Destination page:

Reorg handling: Specifies how Project Zero handles any blockchain reorganizations encountered as part of the stream. The following options are available:
  • None - Project Zero streams the data as normal.
  • Resend - the stream resends any reorganized blocks, ensuring that the delivered data remains consistent with the latest blockchain state.
  • Rollback and resend - Project Zero performs a rollback on the data to the reorganization root before resending all of the reorganized blocks from that point onward.
For more information about blockchain reorganization and selecting a reorg method, see Blockchain Reorganisation.
Topic: Specifies the Kafka topic where Project Zero publishes streamed data.
Compression: Specifies whether Project Zero compresses data with Gzip during transmission to reduce data size and optimize bandwidth usage.
Brokers: Specifies the addresses of the Kafka brokers to direct data transmission. You can add as many brokers as needed to distribute data across Kafka clusters for high availability and fault tolerance.
Acks (Optional): Specifies the acknowledgement mode for message delivery. This defines the required level of acknowledgment from Kafka brokers.
Partitions (Optional): Defines the number of partitions that the Kafka topic distributes data across for improved scalability and parallel processing.
Replicas (Optional): Defines the number of replicas for each partition to ensure fault tolerance and data redundancy within the Kafka cluster.
Username (Optional): Specifies the username for authentication with Kafka brokers, if required.
Password (Optional): Specifies the password for the specified username for authentication.
Stop stream after (Optional): Defines the number of failed retries after which the stream stops.
Wait between retries (Optional): Defines the delay between retry attempts for failed requests.
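On the consuming side, any standard Kafka client can read the published records. The following is a minimal sketch using the kafka-python package; the topic name, broker address, SASL/PLAIN mechanism, and credentials are illustrative stand-ins for the values entered in the fields above.

```python
# Sketch: consume streamed records from the configured Kafka topic.
# Topic, broker, and credential values below are placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "project-zero-stream",
    bootstrap_servers=["broker-1.example.com:9092"],
    # Assumption: the Username/Password fields map to SASL/PLAIN auth.
    security_protocol="SASL_PLAINTEXT",
    sasl_mechanism="PLAIN",
    sasl_plain_username="my-user",
    sasl_plain_password="my-password",
    auto_offset_reset="earliest",
    value_deserializer=json.loads,  # message values assumed to be JSON
)

# Gzip compression on the stream is transparent here: the Kafka client
# decompresses message batches automatically.
for message in consumer:
    block = message.value
    print(message.topic, message.partition, message.offset, block.get("number"))
```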
