
When in processing time?

This part of the capability matrix covers the Beam triggering model: configurable triggering, event-time triggers, processing-time triggers, count triggers, composite triggers, allowed lateness, and timers.

Runners compared: Google Cloud Dataflow, Apache Flink, Apache Spark (RDD/DStream based), Apache Spark Structured Streaming (Dataset based), Apache Samza, Apache Nemo, Hazelcast Jet, Twister2, Python Direct FnRunner, and the Go Direct Runner.

Configurable triggering

| Runner | Support | Notes |
|---|---|---|
| Google Cloud Dataflow | Yes: fully supported | Fully supported in streaming mode. In batch mode, intermediate trigger firings are effectively meaningless. |
| Apache Flink | Yes: fully supported | |
| Apache Spark (RDD/DStream based) | Yes: fully supported | |
| Apache Spark Structured Streaming (Dataset based) | Partially: fully supported in batch mode | |
| Apache Samza | Yes: fully supported | |
| Apache Nemo | Yes: fully supported | |
| Hazelcast Jet | Yes: fully supported | |
| Twister2 | Yes: fully supported | |
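
Configurable triggering means the pipeline author can replace the default trigger on a window with any trigger the model defines. The following is a minimal Java SDK sketch of such a configuration; the class name and the `PCollection<String>` input are illustrative assumptions, not part of the matrix.

```java
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

class ConfigurableTriggeringExample {
  // Replaces the default trigger on one-minute fixed windows.
  static PCollection<String> withCustomTrigger(PCollection<String> events) {
    return events.apply(
        Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
            // Any built-in trigger can be supplied here.
            .triggering(
                AfterProcessingTime.pastFirstElementInPane()
                    .plusDelayOf(Duration.standardSeconds(30)))
            // A non-default trigger must be accompanied by an allowed lateness
            // and an accumulation mode.
            .withAllowedLateness(Duration.standardMinutes(10))
            .discardingFiredPanes());
  }
}
```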


Event-time triggers

| Runner | Support | Notes |
|---|---|---|
| Google Cloud Dataflow | Yes: yes in streaming, fixed granularity in batch | Fully supported in streaming mode. In batch mode, currently watermark progress jumps from the beginning of time to the end of time once the input has been fully consumed, thus no additional triggering granularity is available. |
| Apache Flink | Yes: fully supported | |
| Apache Spark (RDD/DStream based) | Yes: fully supported | |
| Apache Spark Structured Streaming (Dataset based) | Partially: fully supported in batch mode | |
| Apache Samza | Yes: fully supported | |
| Apache Nemo | Yes: fully supported | |
| Hazelcast Jet | Yes: fully supported | |
| Twister2 | Yes: fully supported | |
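
Event-time triggers fire based on the watermark rather than wall-clock time. The sketch below, under the same illustrative assumptions as the previous example, uses the Java SDK's AfterWatermark trigger with speculative early panes and one pane per late element.

```java
import org.apache.beam.sdk.transforms.windowing.AfterPane;
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

class EventTimeTriggerExample {
  // Fires the on-time pane when the watermark passes the end of each window.
  static PCollection<String> watermarkTriggered(PCollection<String> events) {
    return events.apply(
        Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
            .triggering(
                AfterWatermark.pastEndOfWindow()
                    // Optional speculative firings before the watermark passes.
                    .withEarlyFirings(
                        AfterProcessingTime.pastFirstElementInPane()
                            .plusDelayOf(Duration.standardSeconds(10)))
                    // Optional firings for elements arriving behind the watermark.
                    .withLateFirings(AfterPane.elementCountAtLeast(1)))
            .withAllowedLateness(Duration.standardMinutes(5))
            .accumulatingFiredPanes());
  }
}
```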


Processing-time triggers

| Runner | Support | Notes |
|---|---|---|
| Google Cloud Dataflow | Yes: yes in streaming, fixed granularity in batch | Fully supported in streaming mode. In batch mode, from the perspective of triggers, processing time currently jumps from the beginning of time to the end of time once the input has been fully consumed, thus no additional triggering granularity is available. |
| Apache Flink | Yes: fully supported | |
| Apache Spark (RDD/DStream based) | Yes: this is Spark Streaming's native model | Spark processes streams in micro-batches, where the micro-batch size is a pre-set, fixed time interval. Currently the runner takes the first window size in the pipeline and sets it as the batch interval. Any following window operations are considered processing-time windows and will affect triggering. |
| Apache Spark Structured Streaming (Dataset based) | Partially: fully supported in batch mode | |
| Apache Samza | Yes: fully supported | |
| Apache Nemo | Yes: fully supported | |
| Hazelcast Jet | Yes: fully supported | |
| Twister2 | Yes: fully supported | |
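
Processing-time triggers fire based on wall-clock time at the worker. A small Java SDK sketch (the class and method names are illustrative assumptions) that builds such a trigger; it would be passed to Window.triggering(...) as in the earlier examples.

```java
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.Repeatedly;
import org.apache.beam.sdk.transforms.windowing.Trigger;
import org.joda.time.Duration;

class ProcessingTimeTriggerExample {
  // Emits a pane roughly every 30 seconds of wall-clock time after the first
  // element of the pane arrives, for as long as the window stays open.
  static Trigger everyThirtySeconds() {
    return Repeatedly.forever(
        AfterProcessingTime.pastFirstElementInPane()
            .plusDelayOf(Duration.standardSeconds(30)));
  }
}
```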


Count triggers

| Runner | Support | Notes |
|---|---|---|
| Google Cloud Dataflow | Yes: fully supported | Fully supported in streaming mode. In batch mode, elements are processed in the largest bundles possible, so count-based triggers are effectively meaningless. |
| Apache Flink | Yes: fully supported | |
| Apache Spark (RDD/DStream based) | Yes: fully supported | |
| Apache Spark Structured Streaming (Dataset based) | Partially: fully supported in batch mode | |
| Apache Samza | Yes: fully supported | |
| Apache Nemo | Yes: fully supported | |
| Hazelcast Jet | Yes: fully supported | |
| Twister2 | Yes: fully supported | |
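
Count triggers fire once a pane has accumulated a given number of elements. A hedged Java SDK sketch (illustrative names) of a repeated count trigger:

```java
import org.apache.beam.sdk.transforms.windowing.AfterPane;
import org.apache.beam.sdk.transforms.windowing.Repeatedly;
import org.apache.beam.sdk.transforms.windowing.Trigger;

class CountTriggerExample {
  // Fires whenever at least 100 elements have arrived since the last firing.
  static Trigger everyHundredElements() {
    return Repeatedly.forever(AfterPane.elementCountAtLeast(100));
  }
}
```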


Composite triggers

| Runner | Support | Notes |
|---|---|---|
| Google Cloud Dataflow | Yes: fully supported | |
| Apache Flink | Yes: fully supported | |
| Apache Spark (RDD/DStream based) | Yes: fully supported | |
| Apache Spark Structured Streaming (Dataset based) | Partially: fully supported in batch mode | |
| Apache Samza | Yes: fully supported | |
| Apache Nemo | Yes: fully supported | |
| Hazelcast Jet | Yes: fully supported | |
| Twister2 | Partially | |
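
Composite triggers combine simpler triggers, for example firing on whichever of a count threshold or a processing-time delay is reached first. A minimal Java SDK sketch with illustrative names:

```java
import org.apache.beam.sdk.transforms.windowing.AfterFirst;
import org.apache.beam.sdk.transforms.windowing.AfterPane;
import org.apache.beam.sdk.transforms.windowing.AfterProcessingTime;
import org.apache.beam.sdk.transforms.windowing.Repeatedly;
import org.apache.beam.sdk.transforms.windowing.Trigger;
import org.joda.time.Duration;

class CompositeTriggerExample {
  // Fires whenever 100 elements have arrived OR one minute of processing time
  // has passed since the first element of the pane, whichever comes first.
  static Trigger countOrDelay() {
    return Repeatedly.forever(
        AfterFirst.of(
            AfterPane.elementCountAtLeast(100),
            AfterProcessingTime.pastFirstElementInPane()
                .plusDelayOf(Duration.standardMinutes(1))));
  }
}
```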


Allowed lateness

| Runner | Support | Notes |
|---|---|---|
| Google Cloud Dataflow | Yes: fully supported | Fully supported in streaming mode. In batch mode, no data is ever late. |
| Apache Flink | Yes: fully supported | |
| Apache Spark (RDD/DStream based) | No | |
| Apache Spark Structured Streaming (Dataset based) | No: no streaming support in the runner | |
| Apache Samza | Yes: fully supported | |
| Apache Nemo | Yes: fully supported | |
| Hazelcast Jet | Yes: fully supported | |
| Twister2 | Partially | |
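
Allowed lateness controls how long after the watermark passes the end of a window the window's state is retained, so that late data can still be assigned and processed. A minimal Java SDK sketch under the same illustrative assumptions as earlier:

```java
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

class AllowedLatenessExample {
  // Keeps each one-minute window around for two extra days so that late
  // elements arriving within that horizon still update the window's result.
  static PCollection<String> withLateness(PCollection<String> events) {
    return events.apply(
        Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
            .withAllowedLateness(Duration.standardDays(2)));
  }
}
```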


Timers

| Runner | Support | Notes |
|---|---|---|
| Google Cloud Dataflow | Partially: non-merging windows | Dataflow supports timers in non-merging windows. |
| Apache Flink | Partially: non-merging windows | The Flink Runner supports timers in non-merging windows. |
| Apache Spark (RDD/DStream based) | Partially: fully supported in batch mode | |
| Apache Spark Structured Streaming (Dataset based) | No: not implemented | |
| Apache Samza | Partially: non-merging windows | The Samza Runner supports timers in non-merging windows. |
| Apache Nemo | No: not implemented | |
| Hazelcast Jet | Partially: non-merging windows | |
| Twister2 | Partially | |
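
Timers let a DoFn register per-key, per-window callbacks that fire at a chosen event-time or processing-time instant; as the notes above indicate, most runners restrict them to non-merging windows. A minimal sketch of the Java SDK timer API, where the class name, timer id, and 30-second delay are illustrative assumptions:

```java
import org.apache.beam.sdk.state.TimeDomain;
import org.apache.beam.sdk.state.Timer;
import org.apache.beam.sdk.state.TimerSpec;
import org.apache.beam.sdk.state.TimerSpecs;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import org.joda.time.Duration;

// Timer-using DoFns require a keyed input, e.g. PCollection<KV<String, Long>>.
class QuietPeriodTimerFn extends DoFn<KV<String, Long>, KV<String, Long>> {

  // Declare a processing-time timer; TimeDomain.EVENT_TIME works analogously.
  @TimerId("flush")
  private final TimerSpec flushSpec = TimerSpecs.timer(TimeDomain.PROCESSING_TIME);

  @ProcessElement
  public void process(
      @Element KV<String, Long> element,
      @TimerId("flush") Timer flush,
      OutputReceiver<KV<String, Long>> out) {
    // (Re)arm the timer to fire 30 seconds of processing time from now.
    flush.offset(Duration.standardSeconds(30)).setRelative();
    out.output(element);
  }

  @OnTimer("flush")
  public void onFlush(OnTimerContext context) {
    // Invoked when the timer fires; real code would typically flush buffered state here.
  }
}
```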


