Create stream

To create a stream:

  1. Click Create stream to display the first Create stream page.

  2. Enter an identifier for the stream in the Name field.

  3. Select a blockchain network from the Network selection dropdown list.

  4. Specify the date range of the stream:

    1. Select one of the following Stream start radio buttons to specify when the stream starts:
      • Latest block - begin streaming from the last available block.
      • Block # - begin streaming from a specific block.
    2. Select one of the following Stream end radio buttons to specify when the stream ends:
      • Never - continue to push real-time data indefinitely.
      • Block # - end streaming at a specific block.
      • Latest block - end streaming at the last available block.
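
The start and end settings above bound which blocks the stream delivers. As a rough sketch (the helper name and option shapes are illustrative, not part of the product), the combinations behave like this:

```javascript
// Hypothetical sketch of how Stream start / Stream end settings bound a stream.
// "latest" means the last available block at the time the stream is created.
function isBlockInStream(blockNumber, latestBlock, start, end) {
  // start: { type: "latest" } or { type: "block", number: n }
  const startBlock = start.type === "latest" ? latestBlock : start.number;
  if (blockNumber < startBlock) return false;
  // end: { type: "never" }, { type: "latest" }, or { type: "block", number: n }
  if (end.type === "never") return true; // push real-time data indefinitely
  const endBlock = end.type === "latest" ? latestBlock : end.number;
  return blockNumber <= endBlock;
}
```
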
  5. Click Continue to Dataset & Filter to display the Dataset & Filter page.

  6. From the Dataset type dropdown list, select the dataset to stream. You can click Preview dataset to display a preview of the dataset.

  7. Optionally, select the size of data batches in the Batch size (Optional) field.

  8. Optionally, enter the number of blocks to introduce a lag from real time in the Latest block delay (Optional) field.
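
Batch size and latest block delay interact roughly as follows: with a delay of d, the stream holds back the last d blocks behind the chain head, and eligible blocks are grouped into batches. This sketch assumes that behaviour (the function and its semantics are illustrative, not the product's implementation):

```javascript
// Hypothetical sketch: given the next block to emit, the current head block,
// a batch size, and a latest-block delay, compute the next batch range,
// or null if no block is far enough behind the head to emit yet.
function nextBatch(nextBlock, headBlock, batchSize, latestBlockDelay) {
  const highestEmittable = headBlock - latestBlockDelay; // lag behind real time
  if (nextBlock > highestEmittable) return null; // nothing to emit yet
  const end = Math.min(nextBlock + batchSize - 1, highestEmittable);
  return { from: nextBlock, to: end };
}
```
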

  9. Optionally, turn on the Block Functions toggle if you want to filter and transform the streamed data. Block functions are JavaScript functions that enable you to stream only the data that you require. You can use the Test button to verify that your code is valid. You can also specify a block number and click the Preview button to view the transformed data for that block. For more information, see Example block functions.
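
A block function might look like the following sketch. It assumes the stream calls a function with a block object and streams whatever it returns (returning null to drop the block); the function name, signature, and field names (number, transactions, value, hash) are assumptions for illustration, so check Example block functions for the exact contract.

```javascript
// Illustrative block function: filter out blocks with no value transfers and
// transform each remaining block into a reduced record.
function main(block) {
  // Keep only transactions that transfer a non-zero value.
  const transfers = block.transactions.filter((tx) => tx.value !== "0x0");
  if (transfers.length === 0) return null; // nothing of interest: drop block
  // Stream a reduced record instead of the full block.
  return {
    blockNumber: block.number,
    transferCount: transfers.length,
    hashes: transfers.map((tx) => tx.hash),
  };
}
```
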

  10. Click Continue to Destination to display the Destination page.

  11. Select one of the following from the Type dropdown list to specify where Project Zero streams the data:

    • Webhook
    • S3
    • Kafka

    Depending on your selection, follow the instructions for the appropriate destination type below:

    Webhook

    1. Select one of the following Reorg handling options to specify how the stream handles blockchain reorganizations:
      • None
      • Resend
      • Rollback and resend
      For more information about blockchain reorganization and selecting a reorg method, see Blockchain Reorganization.
    2. Enter the URL of the webhook in the Destination URL field.
    3. Optionally, select Gzip from the Compression dropdown list to compress data during transmission.
    4. Click Test Destination to ensure that you have configured your webhook correctly so that Project Zero can deliver data to the destination.
    5. To configure additional optional parameters, expand the Advanced options section.
      1. Change the default timeout in the Request timeout (Optional) field.
      2. Change the default number of retries in the Stop stream after (Optional) field.
      3. Change the default number of seconds between retry attempts for failed webhook requests in the Wait between retries (Optional) field.
      4. To define custom headers to add to the webhook request:
        1. Enable the Custom headers toggle.
        2. Enter the header values in the Key and Value fields.
        3. To add additional values, click Add more and enter the values in the new Key and Value fields. Repeat this step for each additional header value required.
    S3

    1. Select one of the following Reorg handling options to specify how the stream handles blockchain reorganizations:
      • None
      • Resend
      For more information about blockchain reorganization and selecting a reorg method, see Blockchain Reorganization.
    2. Enter the name of the S3 bucket in the Bucket name field.
    3. Optionally, change the endpoint URL in the Endpoint field.
    4. Enter a prefix to organize and categorize stored data within the S3 bucket in the Prefix field.
    5. Select the file format for the stored data from the File type dropdown list.
    6. Optionally, select Gzip from the Compression dropdown list to compress data during transmission.
    7. Enter the access key ID associated with your AWS account for authentication in the Access key ID field.
    8. Enter the secret access key corresponding to the specified access key ID in the Secret access key field.
    9. Click Test Destination to ensure that you have configured your S3 bucket correctly so that Project Zero can deliver data to the destination.
    10. To configure additional optional parameters, expand the Advanced options section.
      1. Change the default number of retries in the Stop stream after (Optional) field.
      2. Change the default number of seconds between retry attempts for failed delivery attempts in the Wait between retries (Optional) field.
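
The two advanced fields control retry behaviour. As a sketch, and only under the assumption that the stream retries each failed delivery up to the configured number of times, waiting the configured interval between attempts, and stops the stream once retries are exhausted:

```javascript
// Illustrative retry loop: maxRetries models Stop stream after, waitSeconds
// models Wait between retries. onWait stands in for the real wait so the
// behaviour can be observed without sleeping.
function deliverWithRetries(send, maxRetries, waitSeconds, onWait) {
  for (let attempt = 0; ; attempt++) {
    try {
      return send(); // attempt the delivery
    } catch (err) {
      if (attempt >= maxRetries) throw err; // retries exhausted: stop stream
      onWait(waitSeconds); // wait before the next attempt
    }
  }
}
```
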
    Kafka

    1. Select one of the following Reorg handling options to specify how the stream handles blockchain reorganizations:
      • None
      • Resend
      • Rollback and resend
      For more information about blockchain reorganization and selecting a reorg method, see Blockchain Reorganization.
    2. In the Topic field, enter the name of the Kafka topic where Project Zero publishes streamed data.
    3. Optionally, select Gzip from the Compression dropdown list to compress data during transmission.
    4. Specify the address of one or more Kafka brokers:
      1. Enter a broker address in the Brokers field.
      2. To enter additional brokers, click Add more and enter the address in the new Brokers field. Repeat this step for each additional broker required.
    5. Click Test Destination to ensure that you have configured your Kafka brokers correctly so that Project Zero can deliver data to the destination.
    6. To configure additional optional parameters, expand the Advanced options section.
      1. Select an acknowledgement mode for message delivery from the Acks (Optional) dropdown list.
      2. Enter the number of partitions that the Kafka topic distributes data across in the Partitions (Optional) field.
      3. Enter the number of replicas for each partition in the Replicas (Optional) field.
      4. Enter your Kafka username in the Username (Optional) field.
      5. Enter your Kafka password for the specified username in the Password (Optional) field.
      6. Change the default number of retries in the Stop stream after (Optional) field.
      7. Change the default number of seconds between retry attempts for failed delivery attempts in the Wait between retries (Optional) field.
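
To see why the Partitions value matters: Kafka assigns each record to one of the topic's partitions, typically by hashing the record key modulo the partition count, so more partitions spread the streamed data across more consumers. This toy hash mirrors that idea in spirit only; it is not Kafka's actual default partitioner.

```javascript
// Illustrative partition assignment: hash a record key and map it onto one of
// partitionCount partitions. Records with the same key always land on the
// same partition, which preserves per-key ordering.
function partitionFor(key, partitionCount) {
  let hash = 0;
  for (const ch of key) {
    hash = (hash * 31 + ch.charCodeAt(0)) | 0; // simple 32-bit string hash
  }
  return Math.abs(hash) % partitionCount;
}
```
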
  12. Click Create.