Package io.debezium.connector.jdbc
Class JdbcSinkConnectorTask
java.lang.Object
org.apache.kafka.connect.sink.SinkTask
io.debezium.connector.jdbc.JdbcSinkConnectorTask
- All Implemented Interfaces:
org.apache.kafka.connect.connector.Task
public class JdbcSinkConnectorTask
extends org.apache.kafka.connect.sink.SinkTask
The main task executing streaming from the sink connector.
Responsible for lifecycle management of the streaming code.
- Author:
- Hossein Torabi
-
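The Connect runtime drives a sink task through a fixed lifecycle: start, open, repeated put/preCommit cycles, then close and stop. The sketch below illustrates that call order only; the types and method bodies are hypothetical stand-ins, not the real org.apache.kafka.connect.sink API or Debezium's implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Minimal sketch of the Kafka Connect sink-task lifecycle using
// stand-in types (the real API lives in org.apache.kafka.connect.sink).
public class SinkLifecycleSketch {

    // Records the order in which lifecycle callbacks fire.
    static final List<String> CALLS = new ArrayList<>();

    static class FakeSinkTask {
        void start(Map<String, String> props) { CALLS.add("start"); }
        void open(List<Integer> partitions)   { CALLS.add("open"); }
        void put(List<String> records)        { CALLS.add("put"); }
        Map<Integer, Long> preCommit(Map<Integer, Long> offsets) {
            CALLS.add("preCommit");
            return offsets; // a task may return only the offsets that are safe to commit
        }
        void close(List<Integer> partitions)  { CALLS.add("close"); }
        void stop()                           { CALLS.add("stop"); }
    }

    public static void main(String[] args) {
        FakeSinkTask task = new FakeSinkTask();
        // The runtime invokes the callbacks in this order:
        task.start(Map.of("connection.url", "jdbc:example"));
        task.open(List.of(0, 1));
        task.put(List.of("record-1", "record-2"));
        task.preCommit(Map.of(0, 2L));
        task.close(List.of(0, 1));
        task.stop();
        System.out.println(CALLS);
    }
}
```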
Nested Class Summary
Nested Classes
JdbcSinkConnectorTask.State -
Field Summary
Fields
- private ChangeEventSink changeEventSink
- private static final Class[] EMPTY_CLASS_ARRAY
- private static final org.slf4j.Logger LOGGER
- private final Map<org.apache.kafka.common.TopicPartition, org.apache.kafka.clients.consumer.OffsetAndMetadata> offsets
- private Method pre380OriginalRecordMethod
- private Throwable previousPutException
- private org.hibernate.SessionFactory sessionFactory
- private final AtomicReference<JdbcSinkConnectorTask.State> state
- private final ReentrantLock stateLock
- private boolean usePre380OriginalRecordAccess (there is a change in the InternalSinkRecord API between Connect 3.7 and 3.8)
Fields inherited from class org.apache.kafka.connect.sink.SinkTask
context, TOPICS_CONFIG, TOPICS_REGEX_CONFIG -
Constructor Summary
Constructors
JdbcSinkConnectorTask() -
Method Summary
Methods
- void close(Collection<org.apache.kafka.common.TopicPartition> partitions)
- private long getOriginalKafkaOffset(org.apache.kafka.connect.sink.SinkRecord record)
- private Integer getOriginalKafkaPartition(org.apache.kafka.connect.sink.SinkRecord record)
- private String getOriginalTopicName(org.apache.kafka.connect.sink.SinkRecord record)
- private void markNotProcessed(org.apache.kafka.connect.sink.SinkRecord record): Marks a single record as not processed.
- private void markProcessed(org.apache.kafka.connect.sink.SinkRecord record): Marks a sink record as processed.
- void open(Collection<org.apache.kafka.common.TopicPartition> partitions)
- Map<org.apache.kafka.common.TopicPartition, org.apache.kafka.clients.consumer.OffsetAndMetadata> preCommit(Map<org.apache.kafka.common.TopicPartition, org.apache.kafka.clients.consumer.OffsetAndMetadata> currentOffsets)
- void put(Collection<org.apache.kafka.connect.sink.SinkRecord> records)
- void start(Map<String,String> props)
- void stop()
- String version()
Methods inherited from class org.apache.kafka.connect.sink.SinkTask
flush, initialize, onPartitionsAssigned, onPartitionsRevoked
-
Field Details
-
LOGGER
private static final org.slf4j.Logger LOGGER -
EMPTY_CLASS_ARRAY
private static final Class[] EMPTY_CLASS_ARRAY -
sessionFactory
private org.hibernate.SessionFactory sessionFactory -
state
private final AtomicReference<JdbcSinkConnectorTask.State> state -
stateLock
private final ReentrantLock stateLock -
changeEventSink
private ChangeEventSink changeEventSink -
offsets
private final Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> offsets -
previousPutException
private Throwable previousPutException -
usePre380OriginalRecordAccess
private boolean usePre380OriginalRecordAccess
There is a change in the InternalSinkRecord API between Connect 3.7 and 3.8. The code now targets 3.8 and uses reflection to call the old API if the new one is not available. -
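The reflection fallback described above can be sketched as follows: try the newer accessor first and, if it is absent, look up the older method via java.lang.reflect and cache the resulting Method handle. The record class and both method names below are hypothetical stand-ins, not the real InternalSinkRecord API:

```java
import java.lang.reflect.Method;

// Sketch of a version-tolerant accessor: prefer the new method name,
// fall back to the old one via reflection (stand-in class and method
// names; not the real InternalSinkRecord).
public class ReflectiveFallback {

    // Simulates a record class that only exposes the OLD-style accessor.
    public static class OldStyleRecord {
        public String originalTopic() { return "server1.inventory.orders"; }
    }

    static String originalTopic(Object record) throws Exception {
        Method accessor;
        try {
            // Try the new accessor name first (illustrative).
            accessor = record.getClass().getMethod("topic");
        }
        catch (NoSuchMethodException e) {
            // Fall back to the old accessor; in a real task this Method
            // would be looked up once and cached in a field.
            accessor = record.getClass().getMethod("originalTopic");
        }
        return (String) accessor.invoke(record);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(originalTopic(new OldStyleRecord()));
    }
}
```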
pre380OriginalRecordMethod
private Method pre380OriginalRecordMethod -
-
Constructor Details
-
JdbcSinkConnectorTask
public JdbcSinkConnectorTask()
-
-
Method Details
-
version
public String version() -
start
public void start(Map<String,String> props)
- Specified by:
start in interface org.apache.kafka.connect.connector.Task
- Specified by:
start in class org.apache.kafka.connect.sink.SinkTask
-
put
public void put(Collection<org.apache.kafka.connect.sink.SinkRecord> records)
- Specified by:
put in class org.apache.kafka.connect.sink.SinkTask
-
open
public void open(Collection<org.apache.kafka.common.TopicPartition> partitions)
- Overrides:
open in class org.apache.kafka.connect.sink.SinkTask
-
close
public void close(Collection<org.apache.kafka.common.TopicPartition> partitions)
- Overrides:
close in class org.apache.kafka.connect.sink.SinkTask
-
preCommit
public Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> preCommit(Map<org.apache.kafka.common.TopicPartition, org.apache.kafka.clients.consumer.OffsetAndMetadata> currentOffsets)
- Overrides:
preCommit in class org.apache.kafka.connect.sink.SinkTask
-
stop
public void stop()
- Specified by:
stop in interface org.apache.kafka.connect.connector.Task
- Specified by:
stop in class org.apache.kafka.connect.sink.SinkTask
-
markProcessed
private void markProcessed(org.apache.kafka.connect.sink.SinkRecord record)
Marks a sink record as processed.
- Parameters:
record - sink record, should not be null
-
markNotProcessed
private void markNotProcessed(org.apache.kafka.connect.sink.SinkRecord record)
Marks a single record as not processed.
- Parameters:
record - sink record, should not be null
-
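markProcessed and preCommit cooperate: as each record is successfully handled, the task records the next offset to consume for that record's partition, and preCommit hands the accumulated map back to the Connect runtime so only fully processed offsets are committed. A self-contained sketch of that bookkeeping, using plain String keys and Long values as stand-ins for TopicPartition and OffsetAndMetadata:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the per-partition offset bookkeeping behind markProcessed()
// and preCommit(). String keys stand in for TopicPartition and Long
// values for OffsetAndMetadata; the real task uses the Kafka classes.
public class OffsetTrackingSketch {

    private final Map<String, Long> offsets = new HashMap<>();

    // Called after a record is successfully written to the target database.
    void markProcessed(String topicPartition, long recordOffset) {
        // Committed offsets mean "next offset to consume", hence +1.
        offsets.put(topicPartition, recordOffset + 1);
    }

    // The runtime passes in the offsets of everything it delivered; the
    // task answers with what has actually been processed and is safe to commit.
    Map<String, Long> preCommit(Map<String, Long> currentOffsets) {
        return new HashMap<>(offsets);
    }

    public static void main(String[] args) {
        OffsetTrackingSketch task = new OffsetTrackingSketch();
        task.markProcessed("orders-0", 41);
        task.markProcessed("orders-0", 42);   // a later record supersedes the earlier one
        task.markProcessed("customers-1", 7);
        System.out.println(task.preCommit(Map.of("orders-0", 100L)));
    }
}
```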
getOriginalTopicName
private String getOriginalTopicName(org.apache.kafka.connect.sink.SinkRecord record) -
getOriginalKafkaPartition
private Integer getOriginalKafkaPartition(org.apache.kafka.connect.sink.SinkRecord record) -
getOriginalKafkaOffset
private long getOriginalKafkaOffset(org.apache.kafka.connect.sink.SinkRecord record)
-