Scenario

A gcp_pubsub connector (v4.37.0) configured something like this:
```yaml
input:
  kafka: # btw, this is why we can't move to kafka_franz: https://github.com/redpanda-data/connect/issues/2745
    addresses: [ localhost:19092 ]
    topics: [ ordered-topic ]
    consumer_group: ordered-cg
    checkpoint_limit: 1 # https://docs.redpanda.com/redpanda-connect/components/inputs/kafka/#ordering
    batching:
      count: 100
      period: 1s
output:
  gcp_pubsub:
    project: <redacted>
    credentials: <redacted>
    topic: ordered-topic
    ordering_key: ordering-key
```
Expectation
After, for example, an internet interruption, the connector should automatically recover and resume publishing.
What we see
The connector does not auto-recover and keeps failing with:

```
{"@service":"redpanda-connect","label":"","level":"error","msg":"Failed to send message to gcp_pubsub: pubsub: Publishing for ordering key, ordering-key, paused due to previous error. Call topic.ResumePublish(orderingKey) before resuming publishing","path":"root.output","stream":"stream-pubsub"}
```

You have to restart the process to restore the flow.
It looks like a call to topic.ResumePublish is missing, either after a failure or before each batch.
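For context, a minimal sketch of the recovery step that appears to be missing. With the Go Pub/Sub client, a failed publish pauses that ordering key, and every later `Publish` for it fails until `topic.ResumePublish` is called for the key. The helper name (`publishWithRecovery`) and the project ID are illustrative, not part of the connector:

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/pubsub"
)

// publishWithRecovery publishes one message and, on failure, resumes the
// ordering key so the next attempt is not rejected with
// "paused due to previous error".
func publishWithRecovery(ctx context.Context, topic *pubsub.Topic, msg *pubsub.Message) error {
	res := topic.Publish(ctx, msg)
	if _, err := res.Get(ctx); err != nil {
		// Without this call the key stays paused and all subsequent
		// publishes for it keep failing, which matches the log above.
		topic.ResumePublish(msg.OrderingKey)
		return err // let the caller retry the batch
	}
	return nil
}

func main() {
	ctx := context.Background()
	client, err := pubsub.NewClient(ctx, "my-project") // hypothetical project ID
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	topic := client.Topic("ordered-topic")
	topic.EnableMessageOrdering = true // required when messages carry ordering keys
	defer topic.Stop()

	msg := &pubsub.Message{Data: []byte("payload"), OrderingKey: "ordering-key"}
	if err := publishWithRecovery(ctx, topic, msg); err != nil {
		log.Printf("publish failed (will retry): %v", err)
	}
}
```

This sketch requires real GCP credentials to run; it is only meant to show where the `ResumePublish` call would slot into the output's error path.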