EventHubProducerClient (Azure SDK for Java Reference Documentation)

Publish date: 2024-06-02
A producer responsible for transmitting EventData to a specific Event Hub, grouped together in batches. Depending on the options specified at creation, the producer can either allow event data to be automatically routed to an available partition or send it to a specific partition.

Allowing automatic routing of partitions is recommended when:

  • The sending of events needs to be highly available.
  • The event data should be evenly distributed among all available partitions.

If no partitionId is specified, the following rules are used for automatically selecting one:

  • Distribute the events equally amongst all available partitions using a round-robin approach.
  • If a partition becomes unavailable, the Event Hubs service will automatically detect it and forward the message to another available partition.
    Create a producer that routes events to any partition

    To allow automatic routing of messages to an available partition, do not specify the partitionId when creating the EventHubProducerClient.

    EventHubClient client = new EventHubClientBuilder()
        .connectionString("event-hubs-namespace-connection-string", "event-hub-name")
        .buildClient();

    EventHubProducerClient producer = client.createProducer();
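    With no partitionId set, events published through this producer are routed by the service. As a minimal sketch of how such a producer might then be used, reusing the send(Iterable<EventData>, SendOptions) overload shown in the later samples with a default SendOptions (no partition id or partition key), which leaves routing to the service:

    // No partition id or partition key is specified, so the service distributes
    // these events across the available partitions.
    List<EventData> events = Arrays.asList(
        new EventData("first-event".getBytes(UTF_8)),
        new EventData("second-event".getBytes(UTF_8))
    );
    producer.send(events, new SendOptions());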

    Create a producer that publishes events to partition "foo"

    Developers can push events to a single partition by setting the partitionId on the SendOptions used when sending events.

    EventData eventData = new EventData("data-to-partition-foo");
    SendOptions options = new SendOptions()
        .setPartitionId("foo");

    EventHubProducerClient producer = client.createProducer();
    producer.send(eventData, options);
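    The partitionId passed to SendOptions must name a partition that exists on the Event Hub. As a rough sketch of how the available partition ids could be checked first, assuming a getPartitionIds() method as exposed by current versions of azure-messaging-eventhubs (the exact surface may differ in the version documented here):

    // List the partition ids of the Event Hub before targeting one with SendOptions.setPartitionId(String).
    for (String id : producer.getPartitionIds()) {
        System.out.println("Available partition: " + id);
    }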

    Publish events to the same partition, grouped together using SendOptions.setPartitionKey(String)

    If developers want related events to end up in the same partition, but do not require them to go to a specific partition, they can use SendOptions.setPartitionKey(String).

    In the sample below, all of the "bread" events end up in the same partition, but that could be partition 0, 1, etc. of the available partitions. All that matters to the end user is that the events are grouped together.

    final List<EventData> events = Arrays.asList(
        new EventData("sourdough".getBytes(UTF_8)),
        new EventData("rye".getBytes(UTF_8)),
        new EventData("wheat".getBytes(UTF_8))
    );

    final EventHubProducerClient producer = client.createProducer();
    final SendOptions options = new SendOptions()
        .setPartitionKey("bread");

    producer.send(events, options);

    Publish events using an EventDataBatch

    Developers can create an EventDataBatch, add the events they want to it, and publish them together. When creating a batch, developers can specify a set of options to configure it.

    In the scenario below, the developer is creating a networked video game. They want to receive telemetry about their users' gaming systems, but do not want to slow down the network with telemetry. So they limit the size of their batches to be no larger than 256 bytes. The events within the batch also get hashed to the same partition because they all share the same partition key, set via BatchOptions.setPartitionKey(String).

    final List<EventData> telemetryEvents = Arrays.asList(
        new EventData("92".getBytes(UTF_8)).addProperty("telemetry", "latency"),
        new EventData("98".getBytes(UTF_8)).addProperty("telemetry", "cpu-temperature"),
        new EventData("120".getBytes(UTF_8)).addProperty("telemetry", "fps")
    );

    final BatchOptions options = new BatchOptions()
        .setPartitionKey("telemetry")
        .setMaximumSizeInBytes(256);

    EventDataBatch currentBatch = producer.createBatch(options);

    // For each telemetry event, try to add it to the current batch.
    // When the batch is full, send it, create another batch, and add the event that did not fit.
    for (EventData event : telemetryEvents) {
        if (!currentBatch.tryAdd(event)) {
            producer.send(currentBatch);
            currentBatch = producer.createBatch(options);

            // Add the event that could not fit in the previous, full batch.
            currentBatch.tryAdd(event);
        }
    }

    // Send the final batch of remaining events.
    producer.send(currentBatch);
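    When publishing is finished, the producer should be closed so that its underlying connection resources are released. A minimal sketch, assuming the synchronous clients used throughout these samples (EventHubProducerClient implements Closeable, so try-with-resources also works):

    // Close the producer, and the client it was created from, once all events have been published.
    producer.close();
    client.close();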
