== Physical Plan ==
AdaptiveSparkPlan (9)
+- == Final Plan ==
   * HashAggregate (5)
   +- ShuffleQueryStage (4), Statistics(sizeInBytes=16.0 B, rowCount=1)
      +- Exchange (3)
         +- * HashAggregate (2)
            +- Scan csv (1)
+- == Initial Plan ==
   HashAggregate (8)
   +- Exchange (7)
      +- HashAggregate (6)
         +- Scan csv (1)
(1) Scan csv
Output: []
Batched: false
Location: InMemoryFileIndex [file:/data/input/depot/csv/execution/empty.csv]
ReadSchema: struct<>

(2) HashAggregate [codegen id : 1]
Input: []
Keys: []
Functions [1]: [partial_count(1)]
Aggregate Attributes [1]: [count#1372531L]
Results [1]: [count#1372532L]

(3) Exchange
Input [1]: [count#1372532L]
Arguments: SinglePartition, ENSURE_REQUIREMENTS, [plan_id=717211]

(4) ShuffleQueryStage
Output [1]: [count#1372532L]
Arguments: 0

(5) HashAggregate [codegen id : 2]
Input [1]: [count#1372532L]
Keys: []
Functions [1]: [count(1)]
Aggregate Attributes [1]: [count(1)#1372528L]
Results [1]: [count(1)#1372528L AS count#1372529L]

(6) HashAggregate
Input: []
Keys: []
Functions [1]: [partial_count(1)]
Aggregate Attributes [1]: [count#1372531L]
Results [1]: [count#1372532L]

(7) Exchange
Input [1]: [count#1372532L]
Arguments: SinglePartition, ENSURE_REQUIREMENTS, [plan_id=717203]

(8) HashAggregate
Input [1]: [count#1372532L]
Keys: []
Functions [1]: [count(1)]
Aggregate Attributes [1]: [count(1)#1372528L]
Results [1]: [count(1)#1372528L AS count#1372529L]

(9) AdaptiveSparkPlan
Output [1]: [count#1372529L]
Arguments: isFinalPlan=true