org.apache.spark.sql.execution.streaming.sources
ContinuousMemoryStream
Companion object ContinuousMemoryStream
class ContinuousMemoryStream[A] extends MemoryStreamBase[A] with ContinuousStream
The overall strategy here is:
- ContinuousMemoryStream maintains a list of records for each partition. addData() will distribute records evenly-ish across partitions.
- RecordEndpoint is set up as an endpoint for executor-side ContinuousMemoryStreamInputPartitionReader instances to poll. It returns the record at the specified offset within the list, or null if that offset doesn't yet have a record.
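A minimal sketch of driving the stream from the driver side. Note that ContinuousMemoryStream is an internal class used primarily by Spark's own tests; the local SparkSession setup below is an assumption for illustration, and the implicit Encoder comes from `spark.implicits._`:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.execution.streaming.sources.ContinuousMemoryStream

// Local session for illustration only (assumption, not part of this API).
val spark = SparkSession.builder().master("local[2]").getOrCreate()
implicit val sqlContext: org.apache.spark.sql.SQLContext = spark.sqlContext
import spark.implicits._ // supplies the implicit Encoder[Int]

// numPartitions defaults to 2; addData distributes the records
// "evenly-ish" across those partitions and returns the resulting Offset.
val stream = new ContinuousMemoryStream[Int](id = 0, sqlContext)
val offset = stream.addData(1, 2, 3, 4)
```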
- By Inheritance
- ContinuousMemoryStream
- ContinuousStream
- MemoryStreamBase
- SparkDataStream
- AnyRef
- Any
Instance Constructors
- new ContinuousMemoryStream(id: Int, sqlContext: SQLContext, numPartitions: Int = 2)(implicit arg0: Encoder[A])
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- def addData(data: TraversableOnce[A]): connector.read.streaming.Offset
- Definition Classes
- ContinuousMemoryStream → MemoryStreamBase
- def addData(data: A*): connector.read.streaming.Offset
- Definition Classes
- MemoryStreamBase
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- val attributes: Seq[AttributeReference]
- Attributes
- protected
- Definition Classes
- MemoryStreamBase
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native()
- def commit(end: connector.read.streaming.Offset): Unit
- Definition Classes
- ContinuousMemoryStream → MemoryStreamBase → SparkDataStream
- def createContinuousReaderFactory(): ContinuousPartitionReaderFactory
- Definition Classes
- ContinuousMemoryStream → ContinuousStream
- def deserializeOffset(json: String): ContinuousMemoryStreamOffset
- Definition Classes
- ContinuousMemoryStream → MemoryStreamBase → SparkDataStream
- val encoder: ExpressionEncoder[A]
- Definition Classes
- MemoryStreamBase
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def equals(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef → Any
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable])
- def fullSchema(): StructType
- Definition Classes
- MemoryStreamBase
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def initialOffset(): connector.read.streaming.Offset
- Definition Classes
- ContinuousMemoryStream → MemoryStreamBase → SparkDataStream
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- val logicalPlan: LogicalPlan
- Attributes
- protected
- Definition Classes
- MemoryStreamBase
- def mergeOffsets(offsets: Array[PartitionOffset]): ContinuousMemoryStreamOffset
- Definition Classes
- ContinuousMemoryStream → ContinuousStream
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def needsReconfiguration(): Boolean
- Definition Classes
- ContinuousStream
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- def planInputPartitions(start: connector.read.streaming.Offset): Array[InputPartition]
- Definition Classes
- ContinuousMemoryStream → ContinuousStream
- def stop(): Unit
- Definition Classes
- ContinuousMemoryStream → SparkDataStream
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def toDF(): DataFrame
- Definition Classes
- MemoryStreamBase
- def toDS(): Dataset[A]
- Definition Classes
- MemoryStreamBase
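toDS() exposes the stream as a typed Dataset, which can then be consumed with a continuous trigger. A hedged sketch, assuming a ContinuousMemoryStream[Int] named `stream` already exists (both the name and the console sink choice are assumptions):

```scala
import org.apache.spark.sql.streaming.Trigger

// `stream` is assumed to be a ContinuousMemoryStream[Int] created beforehand;
// this is an internal/testing API, so the setup is a sketch, not a recipe.
val query = stream.toDS()          // Dataset[Int]
  .writeStream
  .format("console")
  .trigger(Trigger.Continuous("1 second"))
  .start()

stream.addData(5, 6)               // records become visible to the running query
query.stop()
```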
- lazy val toRow: Serializer[A]
- Attributes
- protected
- Definition Classes
- MemoryStreamBase
- def toString(): String
- Definition Classes
- AnyRef → Any
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()