package python
Type Members
- class BatchQueue extends AutoCloseable
  A simple queue that holds the pending batches that need to be lined up and combined with the batches coming back from Python.
- class BufferToStreamWriter extends HostBufferConsumer
- class CoGroupedIterator extends Iterator[(ColumnarBatch, ColumnarBatch)]
  Iterates over the left and right BatchGroupedIterators and returns the cogrouped data, i.e. each returned record holds the rows having the same grouping key from the two BatchGroupedIterators.
  Note: we assume the output of each BatchGroupedIterator is ordered by the grouping key.
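The cogrouping above is essentially a merge of two key-ordered streams. A minimal sketch of that logic, using plain `(key, group)` pairs and `Seq` groups instead of the real ColumnarBatch-based API (the names and types here are illustrative assumptions, not the actual class):

```scala
object CoGroupSketch {
  // Merge two iterators that are each ordered by key, pairing up the
  // groups that share a key and emitting an empty side when a key is
  // present in only one input.
  def cogroup[K: Ordering, A, B](
      left: Iterator[(K, Seq[A])],
      right: Iterator[(K, Seq[B])]): Iterator[(K, Seq[A], Seq[B])] = {
    val ord = implicitly[Ordering[K]]
    val l = left.buffered
    val r = right.buffered
    new Iterator[(K, Seq[A], Seq[B])] {
      def hasNext: Boolean = l.hasNext || r.hasNext
      def next(): (K, Seq[A], Seq[B]) = {
        if (!r.hasNext) { val (k, a) = l.next(); (k, a, Seq.empty) }
        else if (!l.hasNext) { val (k, b) = r.next(); (k, Seq.empty, b) }
        else {
          val cmp = ord.compare(l.head._1, r.head._1)
          if (cmp < 0) { val (k, a) = l.next(); (k, a, Seq.empty) }
          else if (cmp > 0) { val (k, b) = r.next(); (k, Seq.empty, b) }
          else {
            val (k, a) = l.next(); val (_, b) = r.next(); (k, a, b)
          }
        }
      }
    }
  }
}
```

The key-ordering assumption noted in the Scaladoc is what makes this single forward pass possible; without it the iterator would have to buffer whole inputs.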
- class CombiningIterator extends Iterator[ColumnarBatch]
  An iterator that combines the batches in an inputBatchQueue with the result batches in pythonOutputIter one by one. Each batch from inputBatchQueue and its counterpart from pythonOutputIter should have the same row number. In each batch returned by a call to next, the columns of the result batch are appended to the columns of the input batch.
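The column-appending step can be sketched as follows, modeling a batch as a list of columns (`Seq[Seq[Int]]`) rather than a real ColumnarBatch; the object and method names are assumptions for illustration:

```scala
object CombineSketch {
  // Zip the input batches with the Python result batches and append the
  // result columns after the input columns, checking row counts match.
  def combine(input: Iterator[Seq[Seq[Int]]],
              pyOut: Iterator[Seq[Seq[Int]]]): Iterator[Seq[Seq[Int]]] =
    input.zip(pyOut).map { case (in, out) =>
      require(in.head.size == out.head.size, "row counts must match")
      in ++ out // result columns follow the input columns
    }
}
```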
- case class GpuAggregateInPandasExec(gpuGroupingExpressions: Seq[NamedExpression], udfExpressions: Seq[GpuPythonFunction], pyOutAttributes: Seq[Attribute], resultExpressions: Seq[NamedExpression], child: SparkPlan)(cpuGroupingExpressions: Seq[NamedExpression]) extends SparkPlan with ShimUnaryExecNode with GpuPythonExecBase with Product with Serializable
  Physical node for aggregation with a group aggregate Pandas UDF.
  This plan works by sending the necessary (projected) input grouped data as Arrow record batches to the Python worker; the Python worker invokes the UDF and sends the results back to the executor. Finally, the executor evaluates any post-aggregation expressions and joins the result with the grouped key.
  This node aims at accelerating the data transfer between JVM and Python for the GPU pipeline, and at scheduling GPU resources for its Python processes.
- case class GpuArrowEvalPythonExec(udfs: Seq[GpuPythonUDF], resultAttrs: Seq[Attribute], child: SparkPlan, evalType: Int) extends SparkPlan with ShimUnaryExecNode with GpuPythonExecBase with Product with Serializable
  A physical plan that evaluates a GpuPythonUDF. The transformation of the data to Arrow happens on the GPU (practically a noop), but execution of the UDFs is on the CPU.
- abstract class GpuArrowPythonRunnerBase extends GpuPythonRunnerBase[ColumnarBatch] with GpuPythonArrowOutput
  Similar to PythonUDFRunner, but exchanges data with the Python worker via an Arrow stream.
- case class GpuFlatMapCoGroupsInPandasExec(leftGroup: Seq[Attribute], rightGroup: Seq[Attribute], udf: Expression, output: Seq[Attribute], left: SparkPlan, right: SparkPlan) extends SparkPlan with ShimBinaryExecNode with GpuPythonExecBase with Product with Serializable
  GPU version of Spark's FlatMapCoGroupsInPandasExec.
  This node aims at accelerating the data transfer between JVM and Python for the GPU pipeline, and at scheduling GPU resources for its Python processes.
- class GpuFlatMapCoGroupsInPandasExecMeta extends SparkPlanMeta[FlatMapCoGroupsInPandasExec]
- case class GpuFlatMapGroupsInPandasExec(groupingAttributes: Seq[Attribute], func: Expression, output: Seq[Attribute], child: SparkPlan) extends SparkPlan with ShimUnaryExecNode with GpuPythonExecBase with Product with Serializable
  GPU version of Spark's FlatMapGroupsInPandasExec.
  Rows in each group are passed to the Python worker as an Arrow record batch. The Python worker turns the record batch into a pandas.DataFrame, invokes the user-defined function, and passes back the resulting pandas.DataFrame as an Arrow record batch. Finally, each record batch is turned into a ColumnarBatch.
  This node aims at accelerating the data transfer between JVM and Python for the GPU pipeline, and at scheduling GPU resources for its Python processes.
- class GpuFlatMapGroupsInPandasExecMeta extends SparkPlanMeta[FlatMapGroupsInPandasExec]
- trait GpuMapInBatchExec extends SparkPlan with ShimUnaryExecNode with GpuPythonExecBase
- case class GpuMapInPandasExec(func: Expression, output: Seq[Attribute], child: SparkPlan) extends SparkPlan with GpuMapInBatchExec with Product with Serializable
- class GpuMapInPandasExecMeta extends SparkPlanMeta[MapInPandasExec]
- trait GpuPythonExecBase extends SparkPlan with GpuExec
- abstract class GpuPythonFunction extends Expression with GpuUnevaluable with NonSQLExpression with UserDefinedExpression with GpuAggregateWindowFunction with Serializable
  A serialized version of a Python lambda function. This is a special expression that needs a dedicated physical operator to execute it, and thus can't be pushed down to data sources.
- abstract class GpuPythonRunnerBase[IN] extends ShimBasePythonRunner[IN, ColumnarBatch]
  Base class of GPU Python runners that will be mixed with GpuPythonArrowOutput to produce columnar batches.
- case class GpuPythonUDAF(name: String, func: PythonFunction, dataType: DataType, children: Seq[Expression], evalType: Int, udfDeterministic: Boolean, resultId: ExprId = NamedExpression.newExprId) extends GpuPythonFunction with GpuAggregateFunction with Product with Serializable
- case class GpuPythonUDF(name: String, func: PythonFunction, dataType: DataType, children: Seq[Expression], evalType: Int, udfDeterministic: Boolean, resultId: ExprId = NamedExpression.newExprId) extends GpuPythonFunction with Product with Serializable
- trait GpuWindowInPandasExecBase extends SparkPlan with ShimUnaryExecNode with GpuPythonExecBase
- abstract class GpuWindowInPandasExecMetaBase extends SparkPlanMeta[WindowInPandasExec]
- case class GroupArgs(dedupAttrs: Seq[Attribute], argOffsets: Array[Int], groupingOffsets: Seq[Int]) extends Product with Serializable
  A helper class to pack the group-related items for the Python input.
  - dedupAttrs: the deduplicated attributes for the output of a Spark plan.
  - argOffsets: the argument offsets which will be used by the Python workers to distinguish grouping columns from data columns.
  - groupingOffsets: the grouping offsets (i.e. column indices) in the deduplicated attributes.
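How the deduplicated attributes and grouping offsets relate can be illustrated with a hypothetical helper, using attribute names as plain strings rather than Spark Attribute objects (all names here are assumptions for illustration):

```scala
object GroupArgsSketch {
  // Concatenate grouping and data attributes, drop duplicates, and
  // record the index each grouping column lands at in the deduplicated
  // list -- those indices play the role of groupingOffsets.
  def dedup(grouping: Seq[String], data: Seq[String]): (Seq[String], Seq[Int]) = {
    val dedupAttrs = (grouping ++ data).distinct
    val groupingOffsets = grouping.map(dedupAttrs.indexOf)
    (dedupAttrs, groupingOffsets)
  }
}
```

For example, a grouping column that also appears in the data columns is kept once, and its single index serves both roles.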
- class GroupingIterator extends Iterator[ColumnarBatch]
  This iterator groups the rows in the incoming batches per the window "partitionBy" specification, making sure each group goes into only one batch and each batch contains data for only one group.
- class RebatchingRoundoffIterator extends Iterator[ColumnarBatch]
  This iterator rounds incoming batches to multiples of targetRoundoff rows, if possible. The last batch might not be a multiple of it.
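The roundoff policy can be sketched over batch sizes alone (ignoring the actual row data): pending rows accumulate until a multiple of the target can be emitted, and the final emit flushes whatever remains. This is a simplified assumption about the policy, not the real ColumnarBatch implementation:

```scala
object RoundoffSketch {
  // Given incoming batch sizes, return the emitted batch sizes: each
  // emitted size is a multiple of target, except possibly the last one.
  def roundoff(sizes: Seq[Int], target: Int): Seq[Int] = {
    var pending = 0
    val out = scala.collection.mutable.ArrayBuffer[Int]()
    for (s <- sizes) {
      pending += s
      val emit = (pending / target) * target // largest emittable multiple
      if (emit > 0) { out += emit; pending -= emit }
    }
    if (pending > 0) out += pending // trailing remainder, may not be a multiple
    out.toSeq
  }
}
```

Note that the total row count is preserved; only the batch boundaries move.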
- class StreamToBufferProvider extends HostBufferProvider
Value Members
- object GpuAggregateInPandasExec extends Serializable
- object GpuPythonHelper extends Logging
- object GpuPythonUDF extends Serializable
  Helper functions for GpuPythonUDF