Create stream parameters
Creating a stream is a matter of minutes. To create a stream:
- Click Create stream to display the first Create stream page
- Set stream source parameters on the Source page
- Choose the exact dataset and apply transformations on the Dataset & filter page
- Specify where to stream the data on the Destination page
See the detailed specifications for each page below
Source
The following table describes the fields on the Create stream - Source page:
Name | Description |
---|---|
Name | A label to identify the stream. This field is randomly pre-populated but you can enter a new name. |
Network selection | The blockchain network from which Project Zero retrieves data. |
Stream start | The first block of the stream. The following options are available: |
Stream end | The last block of the stream. The following options are available: |
Dataset & filter
The following table describes the fields on the Create stream - Dataset & filter page:
Name | Description |
---|---|
Dataset type | The dataset that Project Zero retrieves. For more information, see Dataset specifications. |
Batch size | The number of blocks in a batch. |
Latest block delay | A lag from real time specified in blocks. This delay helps ensure data consistency and reliability by allowing time for any potential changes or reorganizations in the blockchain to stabilize before processing. A higher delay value may result in slightly delayed data delivery but can help mitigate the impact of chain reorganizations on data accuracy. |
Block functions | If turned on, specifies a function to filter the data streamed. For more information, see Example block functions. |
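Taken together, Batch size and Latest block delay determine which block range a stream processes next: only blocks at least the delay distance behind the chain head are treated as stable. A minimal sketch of that logic (the function name and exact batching behavior are illustrative assumptions, not Project Zero's actual implementation):

```python
def next_batch(last_processed: int, chain_head: int,
               batch_size: int, delay: int):
    """Return the (start, end) block range of the next batch,
    or None if not enough stable blocks are available yet."""
    # Blocks closer than `delay` to the head may still be reorganized,
    # so the highest block considered stable lags the head by `delay`.
    highest_stable = chain_head - delay
    start = last_processed + 1
    end = start + batch_size - 1
    if end > highest_stable:
        return None  # wait until more blocks have stabilized
    return (start, end)
```

With a batch size of 10 and a delay of 12, a stream that last processed block 99 waits until the chain head reaches block 121 before emitting the batch 100-109.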
Destination
The following table describes the fields on the Create stream - Destination page:
Name | Description |
---|---|
Type | The destination where Project Zero delivers the streamed data. The following options are currently available: |
Stream to a webhook
The following table describes the webhook destination fields on the Create stream - Destination page:
Name | Description |
---|---|
Reorg handling | Specifies how Project Zero handles any blockchain reorganizations encountered as part of the stream. The following options are available: |
Destination URL | Defines the URL of the webhook where you want Project Zero to deliver the data. |
Compression | Specifies if Project Zero compresses the data with Gzip during transmission to reduce data size and optimize bandwidth usage. |
Request timeout (Optional) | Defines the duration in seconds after which the webhook request times out if no response is received. |
Stop stream after (Optional) | Defines the number of retries after which Project Zero stops the stream if the webhook request keeps failing. |
Wait between retries (Optional) | Defines the delay between retry attempts for failed webhook requests. |
Custom headers | Specifies if Project Zero adds custom headers to the webhook request for authentication or additional metadata. If turned on, the page displays Key and Value fields to define one or more header values. |
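On the receiving side, a webhook endpoint typically validates any custom headers and, when Compression is enabled, decompresses the Gzip payload before parsing it. A minimal sketch of such a handler (the header name `X-Auth-Token`, its value, and the JSON payload shape are hypothetical assumptions, not part of Project Zero's documented contract):

```python
import gzip
import json

def handle_webhook(headers: dict, body: bytes):
    """Validate a delivery from the stream and return the parsed payload."""
    # Reject requests missing the expected custom header
    # (hypothetical key/value configured under Custom headers).
    if headers.get("X-Auth-Token") != "expected-secret":
        raise PermissionError("missing or invalid auth header")
    # When Compression is enabled, the body arrives Gzip-compressed.
    if headers.get("Content-Encoding") == "gzip":
        body = gzip.decompress(body)
    return json.loads(body)
```

Responding quickly (within the configured Request timeout) matters here: a slow handler is indistinguishable from a failed one and consumes the retry budget.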
Stream to an S3 bucket
The following table describes the Amazon Simple Storage Service (S3) destination fields on the Create stream - Destination page:
Name | Description |
---|---|
Reorg handling | Specifies how Project Zero handles any blockchain reorganizations encountered as part of the stream. The following options are available: |
Bucket name | Specifies the name of the S3 bucket where Project Zero stores the streamed data. |
Endpoint | Specifies the endpoint URL of the S3 bucket. |
Prefix | Specifies a prefix to organize and categorize stored data within the S3 bucket. |
File type | Specifies the file format for the stored data. The following options are currently available: |
File compression | Specifies if Project Zero compresses the data with Gzip during transmission to reduce data size and optimize bandwidth usage. |
Access key ID | Specifies the access key ID associated with your AWS (Amazon Web Services) account for authentication. |
Secret access key | Specifies the secret access key corresponding to the specified access key ID for authentication. |
Stop stream after (Optional) | Defines the number of retries after which Project Zero stops the stream if the request keeps failing. |
Wait between retries (Optional) | Defines the delay between retry attempts for failed requests. |
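The Prefix field acts as a logical folder inside the bucket: every stored object's key begins with it, which keeps one stream's output grouped together. A sketch of how such keys might be laid out (the key format, zero-padding, and extension mapping are purely hypothetical illustrations, not Project Zero's actual naming scheme):

```python
def object_key(prefix: str, start_block: int, end_block: int,
               file_type: str, compressed: bool) -> str:
    """Build a hypothetical S3 object key: <prefix>/<start>-<end>.<ext>[.gz]"""
    ext = {"JSON": "json", "CSV": "csv"}[file_type]
    # Zero-padding keeps keys lexicographically sorted by block height,
    # so listing the prefix returns batches in chain order.
    key = f"{prefix.rstrip('/')}/{start_block:012d}-{end_block:012d}.{ext}"
    return key + ".gz" if compressed else key
```

Choosing a distinct prefix per stream (for example, per network) also makes it easy to apply S3 lifecycle rules or access policies to one stream's data without affecting others in the same bucket.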
Stream to Kafka
The following table describes the Kafka destination fields on the Create stream page:
Name | Description |
---|---|
Reorg handling | Specifies how Project Zero handles any blockchain reorganizations encountered as part of the stream. The following options are available: |
Topic | Specifies the Kafka topic where Project Zero publishes streamed data. |
Compression | Specifies if Project Zero compresses the data with Gzip during transmission to reduce data size and optimize bandwidth usage. |
Brokers | Specifies the addresses of the Kafka brokers to which Project Zero transmits data. You can add as many brokers as needed to distribute data across Kafka clusters for high availability and fault tolerance. |
Acks (Optional) | Specifies the acknowledgement mode for message delivery. This defines the required level of acknowledgment from Kafka brokers. |
Partitions (Optional) | Defines the number of partitions that the Kafka topic distributes data across for improved scalability and parallel processing. |
Replicas (Optional) | Defines the number of replicas for each partition to ensure fault tolerance and data redundancy within the Kafka cluster. |
Username (Optional) | Specifies the username for authentication with Kafka brokers, if required. |
Password (Optional) | Specifies the password for the specified username for authentication. |
Stop stream after (Optional) | Defines the number of retries after which Project Zero stops the stream if the request keeps failing. |
Wait between retries (Optional) | Defines the delay between retry attempts for failed requests. |
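The Stop stream after and Wait between retries fields (common to all three destinations) describe a simple retry loop: on a failed delivery, wait, try again, and stop the stream once the retry budget is exhausted. A sketch of that semantics under these assumptions (the function and its signature are illustrative, not Project Zero's implementation):

```python
import time

def deliver_with_retries(send, payload, stop_after: int,
                         wait_between: float) -> bool:
    """Attempt delivery, retrying up to `stop_after` times with a fixed
    wait between attempts. Returns True on success; False once the retry
    budget is exhausted, at which point the stream would stop."""
    for attempt in range(stop_after + 1):
        try:
            send(payload)
            return True
        except Exception:
            if attempt < stop_after:
                time.sleep(wait_between)
    return False
```

A short wait suits transient network blips; a longer wait gives a restarting destination (a redeployed webhook, a rebalancing Kafka cluster) time to come back before the stream gives up.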