
org.apache.spark.sql

package kafka011

Linear Supertypes
AnyRef, Any

Type Members

  1. case class AssignStrategy(partitions: Array[TopicPartition]) extends ConsumerStrategy with Product with Serializable

    Specify a fixed collection of partitions.

  2. sealed trait ConsumerStrategy extends AnyRef

    Subscribe allows you to subscribe to a fixed collection of topics. SubscribePattern allows you to use a regex to specify topics of interest. Note that unlike the 0.8 integration, using Subscribe or SubscribePattern should respond to adding partitions during a running stream. Finally, Assign allows you to specify a fixed collection of partitions. All three strategies have overloaded constructors that allow you to specify the starting offset for a particular partition.
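As a sketch, the three strategies mirror the shapes listed on this page. The `TopicPartition` stand-in below is an assumption, simplified from Kafka's `org.apache.kafka.common.TopicPartition`, so the snippet is self-contained; the real strategies also carry the overloaded starting-offset constructors mentioned above.

```scala
// Simplified stand-in for org.apache.kafka.common.TopicPartition (assumption).
case class TopicPartition(topic: String, partition: Int)

// Mirror of the sealed strategy hierarchy described above.
sealed trait ConsumerStrategy
case class AssignStrategy(partitions: Array[TopicPartition]) extends ConsumerStrategy
case class SubscribeStrategy(topics: Seq[String]) extends ConsumerStrategy
case class SubscribePatternStrategy(topicPattern: String) extends ConsumerStrategy

// Choosing a strategy:
val fixedPartitions: ConsumerStrategy =
  AssignStrategy(Array(TopicPartition("events", 0), TopicPartition("events", 1)))
val fixedTopics: ConsumerStrategy = SubscribeStrategy(Seq("events", "metrics"))
val byPattern: ConsumerStrategy = SubscribePatternStrategy("metrics-.*")
```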

  3. case class KafkaContinuousInputPartition(topicPartition: TopicPartition, startOffset: Long, kafkaParams: Map[String, AnyRef], pollTimeoutMs: Long, failOnDataLoss: Boolean) extends ContinuousInputPartition[InternalRow] with Product with Serializable

    An input partition for continuous Kafka processing. This will be serialized and transformed into a full reader on executors.

    topicPartition: The (topic, partition) pair this task is responsible for.
    startOffset: The offset to start reading from within the partition.
    kafkaParams: Kafka consumer params to use.
    pollTimeoutMs: The timeout for Kafka consumer polling.
    failOnDataLoss: Whether the data reader should fail if some offsets are skipped.

  4. class KafkaContinuousInputPartitionReader extends ContinuousInputPartitionReader[InternalRow]

    A per-task data reader for continuous Kafka processing.

  5. class KafkaContinuousReader extends ContinuousReader with Logging

    A ContinuousReader for data from Kafka.

  6. class KafkaStreamDataWriter extends KafkaRowWriter with DataWriter[InternalRow]

    A DataWriter for Kafka writing. One data writer will be created in each partition to process incoming rows.

  7. class KafkaStreamWriter extends StreamWriter

    A StreamWriter for Kafka writing. Responsible for generating the writer factory.

  8. case class KafkaStreamWriterFactory(topic: Option[String], producerParams: Map[String, AnyRef], schema: StructType) extends DataWriterFactory[InternalRow] with Product with Serializable

    A DataWriterFactory for Kafka writing. Will be serialized and sent to executors to generate the per-task data writers.

    topic: The topic that should be written to. If None, the topic will be inferred from a topic field in the incoming data.
    producerParams: Parameters for Kafka producers in each task.
    schema: The schema of the input data.
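The topic fallback described for `topic` can be sketched as follows. The `resolveTopic` helper and the `Map`-based row are hypothetical stand-ins for illustration, not the factory's actual code:

```scala
// Hypothetical sketch of the topic-resolution rule described above:
// use the configured topic if present, otherwise read a "topic" field
// from the incoming row (represented here as a simple Map).
def resolveTopic(configuredTopic: Option[String], row: Map[String, String]): String =
  configuredTopic.getOrElse(
    row.getOrElse("topic",
      sys.error("no topic configured and no topic field in the incoming row")))
```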

  9. type PartitionOffsetMap = Map[TopicPartition, Long]

  10. case class SubscribePatternStrategy(topicPattern: String) extends ConsumerStrategy with Product with Serializable

    Use a regex to specify topics of interest.

  11. case class SubscribeStrategy(topics: Seq[String]) extends ConsumerStrategy with Product with Serializable

    Subscribe to a fixed collection of topics.
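For example, the `PartitionOffsetMap` alias above maps each partition to an offset. The snippet below uses a simplified `TopicPartition` stand-in (an assumption, in place of Kafka's `org.apache.kafka.common.TopicPartition`) so it runs on its own:

```scala
// Simplified stand-in for org.apache.kafka.common.TopicPartition (assumption).
case class TopicPartition(topic: String, partition: Int)

// The package's alias: one offset per partition.
type PartitionOffsetMap = Map[TopicPartition, Long]

// e.g. recording where to start reading in each partition of a topic:
val startOffsets: PartitionOffsetMap = Map(
  TopicPartition("events", 0) -> 42L,
  TopicPartition("events", 1) -> 17L)
```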

Value Members

  1. object KafkaWriterCommitMessage extends WriterCommitMessage with Product with Serializable

    Dummy commit message. The DataSourceV2 framework requires a commit message implementation, but we don't actually need to send one.
