
feat: support array_append #1072

Merged
merged 8 commits into from
Nov 13, 2024

Conversation

NoeB
Contributor

@NoeB NoeB commented Nov 9, 2024

Which issue does this PR close?

Related to #1042: Adds support for array_append operation

Rationale for this change

What changes are included in this PR?

How are these changes tested?

@NoeB
Contributor Author

NoeB commented Nov 9, 2024

I am unsure if I implemented the operation correctly, so as a first step I have only implemented the array_append operation. If the approach is correct, I think I can extract it into a function and add support for the other array operations as well (those that DataFusion already supports).

@NoeB NoeB marked this pull request as ready for review November 9, 2024 15:35
@@ -2313,4 +2313,22 @@ class CometExpressionSuite extends CometTestBase with AdaptiveSparkPlanHelper {
}
}
}

test("array_append") {
Member

It would be good to also have a test where the first argument to array_append is null, but where the argument is not a literal null but an expression that evaluates to null. I am not sure how easy it is to add that test until we have support for reading arrays from Parquet (which is coming soon) so I am fine if we want to handle this as a separate issue. It does look like DataFusion's array_append does not support null as the first argument, but Spark does. Maybe we could improve DataFusion's implementation to return null if the first argument is null.

Contributor Author

Good catch, I think I managed to create a test that reproduces this scenario:
checkSparkAnswerAndOperator( df.select(array_append(expr("CASE WHEN _2=_3 THEN array(_19) END"), col("_19"))))
It fails with the following error message:

  !== Correct Answer - 10 ==                                                             == Spark Answer - 10 ==
   struct<array_append(CASE WHEN (_2 = _3) THEN array(_19) END, _19):array<timestamp>>   struct<array_append(CASE WHEN (_2 = _3) THEN array(_19) END, _19):array<timestamp>>
   [ArrayBuffer(1969-12-31 16:00:00.0, 1969-12-31 16:00:00.0)]                           [ArrayBuffer(1969-12-31 16:00:00.0, 1969-12-31 16:00:00.0)]
   [ArrayBuffer(1969-12-31 16:00:00.0, 1969-12-31 16:00:00.0)]                           [ArrayBuffer(1969-12-31 16:00:00.0, 1969-12-31 16:00:00.0)]
   [ArrayBuffer(1969-12-31 16:00:00.000001, 1969-12-31 16:00:00.000001)]                 [ArrayBuffer(1969-12-31 16:00:00.000001, 1969-12-31 16:00:00.000001)]
   [ArrayBuffer(1969-12-31 16:00:00.000002, 1969-12-31 16:00:00.000002)]                 [ArrayBuffer(1969-12-31 16:00:00.000002, 1969-12-31 16:00:00.000002)]
   [ArrayBuffer(1969-12-31 16:00:00.000003, 1969-12-31 16:00:00.000003)]                 [ArrayBuffer(1969-12-31 16:00:00.000003, 1969-12-31 16:00:00.000003)]
   [ArrayBuffer(1969-12-31 16:00:00.000003, 1969-12-31 16:00:00.000003)]                 [ArrayBuffer(1969-12-31 16:00:00.000003, 1969-12-31 16:00:00.000003)]
  ![null]                                                                                [ArrayBuffer(null)]
  ![null]                                                                                [ArrayBuffer(null)]

Should I create an issue on the DataFusion repo? Or is it unlikely that they will support it, because their implementation seems to match PostgreSQL's?

DataFusion / PostgreSQL

SELECT array_append(CASE WHEN 1=2 THEN ARRAY[1,2,3] END, 4);
returns: [4]

Spark

spark.sql("SELECT array_append(CASE WHEN 1=2 THEN ARRAY(1,2,3) END, 4)").show
returns: NULL

Member

Interesting. Let's handle this in Comet rather than modify DataFusion. One option is to fork the array_append code and modify it. Another option would be to translate array_append(a, b) into something like CASE WHEN a IS NULL THEN null ELSE array_append(a, b) END.

Contributor Author

I have now updated the branch with a version that adds a CASE statement to check for NULL. In my opinion that is less effort than forking that part of the DataFusion code.
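The CASE-based rewrite gives DataFusion's array_append Spark's null semantics. A minimal Rust sketch of those semantics (an illustration only, modeling an array value as Option<Vec<i64>>; this is not the actual Comet code, which builds a physical CaseExpr):

```rust
// Spark:               array_append(NULL, e) => NULL
// DataFusion/Postgres: array_append(NULL, e) => [e]
// The PR wraps DataFusion's function as:
//   CASE WHEN arr IS NULL THEN NULL ELSE array_append(arr, elem) END
fn spark_array_append(arr: Option<Vec<i64>>, elem: i64) -> Option<Vec<i64>> {
    // `map` only runs the append when the array is non-null,
    // mirroring the CASE WHEN rewrite above.
    arr.map(|mut a| {
        a.push(elem);
        a
    })
}

fn main() {
    assert_eq!(
        spark_array_append(Some(vec![1, 2, 3]), 4),
        Some(vec![1, 2, 3, 4])
    );
    // Spark's behaviour, not DataFusion's default:
    assert_eq!(spark_array_append(None, 4), None);
    println!("ok");
}
```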

@codecov-commenter

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 34.12%. Comparing base (845b654) to head (1715e49).
Report is 14 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff              @@
##               main    #1072      +/-   ##
============================================
- Coverage     34.46%   34.12%   -0.35%     
+ Complexity      888      883       -5     
============================================
  Files           113      113              
  Lines         43580    42832     -748     
  Branches       9658     9367     -291     
============================================
- Hits          15021    14617     -404     
+ Misses        25507    25352     -155     
+ Partials       3052     2863     -189     


vec![(is_null_expr, null_literal_expr)],
Some(array_append_expr),
)
.unwrap();
Member

can we use ? rather than unwrap here?

Contributor Author

Sure, I have added a commit that uses ? instead of unwrap.
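For context on this change: unwrap() panics on an Err, while ? returns the error to the caller, so plan construction can fail gracefully. A standalone sketch (the build_case_expr helper and error strings are hypothetical, not Comet's actual code):

```rust
// `unwrap()` would panic on Err; `?` propagates the error to the caller,
// which is what you want when building a physical plan that may fail.
fn build_case_expr(valid: bool) -> Result<String, String> {
    let inner: Result<String, String> = if valid {
        Ok("CASE WHEN a IS NULL THEN NULL ELSE array_append(a, b) END".to_string())
    } else {
        Err("unsupported expression".to_string())
    };
    let expr = inner?; // early-returns the Err instead of panicking
    Ok(expr)
}

fn main() {
    assert!(build_case_expr(true).is_ok());
    assert_eq!(
        build_case_expr(false),
        Err("unsupported expression".to_string())
    );
    println!("ok");
}
```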

@NoeB
Contributor Author

NoeB commented Nov 12, 2024

Because array_append was added in Spark 3.4, I had to rewrite the match statement in QueryPlanSerde because it would not compile with Spark 3.3. The same applies to the test, which I rewrote to use spark.sql because it also would not compile with Spark 3.3.
Let me know if this approach is ok or if you have a different option in mind. I think a lot of the array functions were added in Spark 3.4 and 3.5, so many more cases like this will exist in the future.
Regarding the Spark 4.0 failures, I could not check them yet because the tests do not seem to start on my machine with Spark 4.0. I am working on fixing that.

Member

@andygrove andygrove left a comment

LGTM pending CI. Thanks @NoeB

@NoeB
Contributor Author

NoeB commented Nov 12, 2024

Thank you @andygrove for the review.
I have updated the code to match the refactor you did yesterday; I guess that's also the reason for the failing tests.

@NoeB
Contributor Author

NoeB commented Nov 13, 2024

It seems that Spark 4.0 rewrites array_append into an array_insert expression, therefore the test is disabled for Spark 4.0+. See https://github.com/apache/spark/blob/f0d465e09b8d89d5e56ec21f4bd7e3ecbeeb318a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala#L1758.

@andygrove andygrove merged commit 9657b75 into apache:main Nov 13, 2024
74 checks passed
andygrove added a commit that referenced this pull request Dec 20, 2024
* feat: support array_append (#1072)

* feat: support array_append

* formatted code

* rewrite array_append plan to match spark behaviour and fixed bug in QueryPlan serde

* remove unwrap

* Fix for Spark 3.3

* refactor array_append binary expression serde code

* Disabled array_append test for spark 4.0+

* chore: Simplify CometShuffleMemoryAllocator to use Spark unified memory allocator (#1063)

* docs: Update benchmarking.md (#1085)

* feat: Require offHeap memory to be enabled (always use unified memory) (#1062)

* Require offHeap memory

* remove unused import

* use off heap memory in stability tests

* reorder imports

* test: Restore one test in CometExecSuite by adding COMET_SHUFFLE_MODE config (#1087)

* Add changelog for 0.4.0 (#1089)

* chore: Prepare for 0.5.0 development (#1090)

* Update version number for build

* update docs

* build: Skip installation of spark-integration  and fuzz testing modules (#1091)

* Add hint for finding the GPG key to use when publishing to maven (#1093)

* docs: Update documentation for 0.4.0 release (#1096)

* update TPC-H results

* update Maven links

* update benchmarking guide and add TPC-DS results

* include q72

* fix: Unsigned type related bugs (#1095)

## Which issue does this PR close?

Closes #1067

## Rationale for this change

Bug fix. A few expressions were failing some unsigned type related tests

## What changes are included in this PR?

 - For `u8`/`u16`, switched to use `generate_cast_to_signed!` in order to copy full i16/i32 width instead of padding zeros in the higher bits
 - `u64` becomes `Decimal(20, 0)` but there was a bug in `round()`  (`>` vs `>=`)

## How are these changes tested?

Put back tests for unsigned types

* chore: Include first ScanExec batch in metrics (#1105)

* include first batch in ScanExec metrics

* record row count metric

* fix regression

* chore: Improve CometScan metrics (#1100)

* Add native metrics for plan creation

* make messages consistent

* Include get_next_batch cost in metrics

* formatting

* fix double count of rows

* chore: Add custom metric for native shuffle fetching batches from JVM (#1108)

* feat: support array_insert (#1073)

* Part of the implementation of array_insert

* Missing methods

* Working version

* Reformat code

* Fix code-style

* Add comments about spark's implementation.

* Implement negative indices

+ fix tests for spark < 3.4

* Fix code-style

* Fix scalastyle

* Fix tests for spark < 3.4

* Fixes & tests

- added test for the negative index
- added test for the legacy spark mode

* Use assume(isSpark34Plus) in tests

* Test else-branch & improve coverage

* Update native/spark-expr/src/list.rs

Co-authored-by: Andy Grove <[email protected]>

* Fix fallback test

In one case there is a zero in index and test fails due to spark error

* Adjust the behaviour for the NULL case to Spark

* Move the logic of type checking to the method

* Fix code-style

---------

Co-authored-by: Andy Grove <[email protected]>

* feat: enable decimal to decimal cast of different precision and scale (#1086)

* enable decimal to decimal cast of different precision and scale

* add more test cases for negative scale and higher precision

* add check for compatibility for decimal to decimal

* fix code style

* Update spark/src/main/scala/org/apache/comet/expressions/CometCast.scala

Co-authored-by: Andy Grove <[email protected]>

* fix the nit in comment

---------

Co-authored-by: himadripal <[email protected]>
Co-authored-by: Andy Grove <[email protected]>

* docs: fix readme FGPA/FPGA typo (#1117)

* fix: Use RDD partition index (#1112)

* fix: Use RDD partition index

* fix

* fix

* fix

* fix: Various metrics bug fixes and improvements (#1111)

* fix: Don't create CometScanExec for subclasses of ParquetFileFormat (#1129)

* Use exact class comparison for parquet scan

* Add test

* Add comment

* fix: Fix metrics regressions (#1132)

* fix metrics issues

* clippy

* update tests

* docs: Add more technical detail and new diagram to Comet plugin overview (#1119)

* Add more technical detail and new diagram to Comet plugin overview

* update diagram

* add info on Arrow IPC

* update diagram

* update diagram

* update docs

* address feedback

* Stop passing Java config map into native createPlan (#1101)

* feat: Improve ScanExec native metrics (#1133)

* save

* remove shuffle jvm metric and update tuning guide

* docs

* add source for all ScanExecs

* address feedback

* address feedback

* chore: Remove unused StringView struct (#1143)

* Remove unused StringView struct

* remove more dead code

* docs: Add some documentation explaining how shuffle works (#1148)

* add some notes on shuffle

* reads

* improve docs

* test: enable more Spark 4.0 tests (#1145)

## Which issue does this PR close?

Part of #372 and #551

## Rationale for this change

To be ready for Spark 4.0

## What changes are included in this PR?

This PR enables more Spark 4.0 tests that were fixed by recent changes

## How are these changes tested?

tests enabled

* chore: Refactor cast to use SparkCastOptions param (#1146)

* Refactor cast to use SparkCastOptions param

* update tests

* update benches

* update benches

* update benches

* Enable more scenarios in CometExecBenchmark. (#1151)

* chore: Move more expressions from core crate to spark-expr crate (#1152)

* move aggregate expressions to spark-expr crate

* move more expressions

* move benchmark

* normalize_nan

* bitwise not

* comet scalar funcs

* update bench imports

* remove dead code (#1155)

* fix: Spark 4.0-preview1 SPARK-47120 (#1156)

## Which issue does this PR close?

Part of #372 and #551

## Rationale for this change

To be ready for Spark 4.0

## What changes are included in this PR?

This PR fixes the new test SPARK-47120 added in Spark 4.0

## How are these changes tested?

tests enabled

* chore: Move string kernels and expressions to spark-expr crate (#1164)

* Move string kernels and expressions to spark-expr crate

* remove unused hash kernel

* remove unused dependencies

* chore: Move remaining expressions to spark-expr crate + some minor refactoring (#1165)

* move CheckOverflow to spark-expr crate

* move NegativeExpr to spark-expr crate

* move UnboundColumn to spark-expr crate

* move ExpandExec from execution::datafusion::operators to execution::operators

* refactoring to remove datafusion subpackage

* update imports in benches

* fix

* fix

* chore: Add ignored tests for reading complex types from Parquet (#1167)

* Add ignored tests for reading structs from Parquet

* add basic map test

* add tests for Map and Array

* feat: Add Spark-compatible implementation of SchemaAdapterFactory (#1169)

* Add Spark-compatible SchemaAdapterFactory implementation

* remove prototype code

* fix

* refactor

* implement more cast logic

* implement more cast logic

* add basic test

* improve test

* cleanup

* fmt

* add support for casting unsigned int to signed int

* clippy

* address feedback

* fix test

* fix: Document enabling comet explain plan usage in Spark (4.0) (#1176)

* test: enabling Spark tests with offHeap requirement (#1177)

## Which issue does this PR close?

## Rationale for this change

After #1062 We have not running Spark tests for native execution

## What changes are included in this PR?

Removed the off heap requirement for testing

## How are these changes tested?

Bringing back Spark tests for native execution

* feat: Improve shuffle metrics (second attempt) (#1175)

* improve shuffle metrics

* docs

* more metrics

* refactor

* address feedback

* Fix redundancy in Cargo.lock.

* Format, more post-merge cleanup.

* Compiles

* Compiles

* Remove empty file.

* Attempt to fix JNI issue and native test build issues.

* Test Fix

* Update planner.rs

Remove println from test.

---------

Co-authored-by: NoeB <[email protected]>
Co-authored-by: Liang-Chi Hsieh <[email protected]>
Co-authored-by: Raz Luvaton <[email protected]>
Co-authored-by: Andy Grove <[email protected]>
Co-authored-by: Parth Chandra <[email protected]>
Co-authored-by: KAZUYUKI TANIMURA <[email protected]>
Co-authored-by: Sem <[email protected]>
Co-authored-by: Himadri Pal <[email protected]>
Co-authored-by: himadripal <[email protected]>
Co-authored-by: gstvg <[email protected]>
Co-authored-by: Adam Binford <[email protected]>
andygrove pushed a commit to andygrove/datafusion-comet that referenced this pull request Jan 14, 2025
* feat: support array_append

* formatted code

* rewrite array_append plan to match spark behaviour and fixed bug in QueryPlan serde

* remove unwrap

* Fix for Spark 3.3

* refactor array_append binary expression serde code

* Disabled array_append test for spark 4.0+
andygrove added a commit that referenced this pull request Jan 16, 2025
* feat: support array_append (#1072)

(The commit message repeats the full commit list from the Dec 20, 2024 commit above.)

* fix: stddev_pop should not directly return 0.0 when count is 1.0 (#1184)

* add test

* fix

* fix

* fix

* feat: Make native shuffle compression configurable and respect `spark.shuffle.compress` (#1185)

* Make shuffle compression codec and level configurable

* remove lz4 references

* docs

* update comment

* clippy

* fix benches

* clippy

* clippy

* disable test for miri

* remove lz4 reference from proto

* minor: move shuffle classes from common to spark (#1193)

* minor: refactor decodeBatches to make private in broadcast exchange (#1195)

* minor: refactor prepare_output so that it does not require an ExecutionContext (#1194)

* fix: fix missing explanation for then branch in case when (#1200)

* minor: remove unused source files (#1202)

* chore: Upgrade to DataFusion 44.0.0-rc2 (#1154)

* move aggregate expressions to spark-expr crate

* move more expressions

* move benchmark

* normalize_nan

* bitwise not

* comet scalar funcs

* update bench imports

* save

* save

* save

* remove unused imports

* clippy

* implement more hashers

* implement Hash and PartialEq

* implement Hash and PartialEq

* implement Hash and PartialEq

* benches

* fix ScalarUDFImpl.return_type failure

* exclude test from miri

* ignore correct test

* ignore another test

* remove miri checks

* use return_type_from_exprs

* Revert "use return_type_from_exprs"

This reverts commit febc1f1.

* use DF main branch

* hacky workaround for regression in ScalarUDFImpl.return_type

* fix repo url

* pin to revision

* bump to latest rev

* bump to latest DF rev

* bump DF to rev 9f530dd

* add Cargo.lock

* bump DF version

* no default features

* Revert "remove miri checks"

This reverts commit 4638fe3.

* Update pin to DataFusion e99e02b9b9093ceb0c13a2dd32a2a89beba47930

* update pin

* Update Cargo.toml

Bump to 44.0.0-rc2

* update cargo lock

* revert miri change

---------

Co-authored-by: Andrew Lamb <[email protected]>

* feat: add support for array_contains expression (#1163)

* feat: add support for array_contains expression

* test: add unit test for array_contains function

* Removes unnecessary case expression for handling null values

(The commit message repeats the commit list already shown above, from "chore: Move more expressions from core crate to spark-expr crate (#1152)" through "chore: Upgrade to DataFusion 44.0.0-rc2 (#1154)".)

* update UT

Signed-off-by: Dharan Aditya <[email protected]>

* fix typo in UT

Signed-off-by: Dharan Aditya <[email protected]>

---------

Signed-off-by: Dharan Aditya <[email protected]>
Co-authored-by: Andy Grove <[email protected]>
Co-authored-by: KAZUYUKI TANIMURA <[email protected]>
Co-authored-by: Parth Chandra <[email protected]>
Co-authored-by: Liang-Chi Hsieh <[email protected]>
Co-authored-by: Raz Luvaton <[email protected]>
Co-authored-by: Andrew Lamb <[email protected]>

* feat: Add a `spark.comet.exec.memoryPool` configuration for experimenting with various datafusion memory pool setups. (#1021)

* feat: Reenable tests for filtered SMJ anti join (#1211)

* feat: reenable filtered SMJ Anti join tests

* feat: reenable filtered SMJ Anti join tests

* feat: reenable filtered SMJ Anti join tests

* feat: reenable filtered SMJ Anti join tests

* Add CoalesceBatchesExec around SMJ with join filter

* adding `CoalesceBatches`

* adding `CoalesceBatches`

* adding `CoalesceBatches`

* feat: reenable filtered SMJ Anti join tests

* feat: reenable filtered SMJ Anti join tests

---------

Co-authored-by: Andy Grove <[email protected]>

* chore: Add safety check to CometBuffer (#1050)

* chore: Add safety check to CometBuffer

* Add CometColumnarToRowExec

* fix

* fix

* more

* Update plan stability results

* fix

* fix

* fix

* Revert "fix"

This reverts commit 9bad173.

* Revert "Revert "fix""

This reverts commit d527ad1.

* fix BucketedReadWithoutHiveSupportSuite

* fix SparkPlanSuite

* remove unreachable code (#1213)

* test: Enable Comet by default except some tests in SparkSessionExtensionSuite (#1201)

## Which issue does this PR close?

Part of #1197

## Rationale for this change

Since `loadCometExtension` in the diffs was not using `isCometEnabled`, `SparkSessionExtensionSuite` was not using Comet. Once enabled, some test failures were discovered

## What changes are included in this PR?

`loadCometExtension` now uses `isCometEnabled`, which enables Comet by default
Temporarily ignore the failing tests in SparkSessionExtensionSuite

## How are these changes tested?

existing tests

* extract struct expressions to folders based on spark grouping (#1216)

* chore: extract static invoke expressions to folders based on spark grouping (#1217)

* extract static invoke expressions to folders based on spark grouping

* Update native/spark-expr/src/static_invoke/mod.rs

Co-authored-by: Andy Grove <[email protected]>

---------

Co-authored-by: Andy Grove <[email protected]>

* chore: Follow-on PR to fully enable onheap memory usage (#1210)

* Make datafusion's native memory pool configurable

* save

* fix

* Update memory calculation and add draft documentation

* ready for review

* ready for review

* address feedback

* Update docs/source/user-guide/tuning.md

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* Update docs/source/user-guide/tuning.md

Co-authored-by: Kristin Cowalcijk <[email protected]>

* Update docs/source/user-guide/tuning.md

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* Update docs/source/user-guide/tuning.md

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* remove unused config

---------

Co-authored-by: Kristin Cowalcijk <[email protected]>
Co-authored-by: Liang-Chi Hsieh <[email protected]>

* feat: Move shuffle block decompression and decoding to native code and add LZ4 & Snappy support (#1192)

* Implement native decoding and decompression

* revert some variable renaming for smaller diff

* fix oom issues?

* make NativeBatchDecoderIterator more consistent with ArrowReaderIterator

* fix oom and prep for review

* format

* Add LZ4 support

* clippy, new benchmark

* rename metrics, clean up lz4 code

* update test

* Add support for snappy

* format

* change default back to lz4

* make metrics more accurate

* format

* clippy

* use faster unsafe version of lz4_flex

* Make compression codec configurable for columnar shuffle

* clippy

* fix bench

* fmt

* address feedback

* address feedback

* address feedback

* minor code simplification

* cargo fmt

* overflow check

* rename compression level config

* address feedback

* address feedback

* rename constant

* chore: extract agg_funcs expressions to folders based on spark grouping (#1224)

* extract agg_funcs expressions to folders based on spark grouping

* fix rebase

* extract datetime_funcs expressions to folders based on spark grouping (#1222)

Co-authored-by: Andy Grove <[email protected]>

* chore: use datafusion from crates.io (#1232)

* chore: extract strings file to `strings_func` like in spark grouping (#1215)

* chore: extract predicate_functions expressions to folders based on spark grouping (#1218)

* extract predicate_functions expressions to folders based on spark grouping

* code review changes

---------

Co-authored-by: Andy Grove <[email protected]>

* build(deps): bump protobuf version to 3.21.12 (#1234)

* extract json_funcs expressions to folders based on spark grouping (#1220)

Co-authored-by: Andy Grove <[email protected]>

* test: Enable shuffle by default in Spark tests (#1240)

## Which issue does this PR close?

## Rationale for this change

Because `isCometShuffleEnabled` is false by default, some tests were not reached

## What changes are included in this PR?

Removed `isCometShuffleEnabled` and updated spark test diff

## How are these changes tested?

existing test

* chore: extract hash_funcs expressions to folders based on spark grouping (#1221)

* extract hash_funcs expressions to folders based on spark grouping

* extract hash_funcs expressions to folders based on spark grouping

---------

Co-authored-by: Andy Grove <[email protected]>

* fix: Fall back to Spark for unsupported partition or sort expressions in window aggregates (#1253)

* perf: Improve query planning to more reliably fall back to columnar shuffle when native shuffle is not supported (#1209)

* fix regression (#1259)

* feat: add support for array_remove expression (#1179)

* wip: array remove

* added comet expression test

* updated test cases

* fixed array_remove function for null values

* removed commented code

* remove unnecessary code

* updated the test for 'array_remove'

* added test for array_remove in case the input array is null

* wip: case array is empty

* removed test case for empty array

* fix: Fall back to Spark for distinct aggregates (#1262)

* fall back to Spark for distinct aggregates

* update expected plans for 3.4

* update expected plans for 3.5

* force build

* add comment

* feat: Implement custom RecordBatch serde for shuffle for improved performance (#1190)

* Implement faster encoder for shuffle blocks

* make code more concise

* enable fast encoding for columnar shuffle

* update benches

* test all int types

* test float

* remaining types

* add Snappy and Zstd(6) back to benchmark

* fix regression

* Update native/core/src/execution/shuffle/codec.rs

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* address feedback

* support nullable flag

---------

Co-authored-by: Liang-Chi Hsieh <[email protected]>
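
The motivation for a hand-rolled shuffle-block encoder can be sketched with simple length-prefixed framing (a hypothetical illustration of the general technique, not Comet's actual wire format): each block is written as a little-endian `u32` length followed by its payload, so the decoder can walk the buffer without per-block schema negotiation.

```python
import struct

def encode_blocks(blocks):
    """Frame each block as [u32 little-endian length][payload bytes]."""
    out = bytearray()
    for block in blocks:
        out += struct.pack("<I", len(block))
        out += block
    return bytes(out)

def decode_blocks(buf):
    """Walk the buffer, reading one length-prefixed block at a time."""
    blocks, offset = [], 0
    while offset < len(buf):
        (length,) = struct.unpack_from("<I", buf, offset)
        offset += 4
        blocks.append(buf[offset:offset + length])
        offset += length
    return blocks
```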

* docs: Update TPC-H benchmark results (#1257)

* fix: disable initCap by default (#1276)

* fix: disable initCap by default

* Update spark/src/main/scala/org/apache/comet/serde/QueryPlanSerde.scala

Co-authored-by: Andy Grove <[email protected]>

* address review comments

---------

Co-authored-by: Andy Grove <[email protected]>

* chore: Add changelog for 0.5.0 (#1278)

* Add changelog

* revert accidental change

* move 2 items to performance section

* update TPC-DS results for 0.5.0 (#1277)

* fix: cast timestamp to decimal is unsupported (#1281)

* fix: cast timestamp to decimal is unsupported

* fix style

* revert test name and mark as ignore

* add comment

* Fix build after merge

* Fix tests after merge

* Fix plans after merge

* fix partition id in execute plan after merge (from Andy Grove)

---------

Signed-off-by: Dharan Aditya <[email protected]>
Co-authored-by: NoeB <[email protected]>
Co-authored-by: Liang-Chi Hsieh <[email protected]>
Co-authored-by: Raz Luvaton <[email protected]>
Co-authored-by: Andy Grove <[email protected]>
Co-authored-by: KAZUYUKI TANIMURA <[email protected]>
Co-authored-by: Sem <[email protected]>
Co-authored-by: Himadri Pal <[email protected]>
Co-authored-by: himadripal <[email protected]>
Co-authored-by: gstvg <[email protected]>
Co-authored-by: Adam Binford <[email protected]>
Co-authored-by: Matt Butrovich <[email protected]>
Co-authored-by: Raz Luvaton <[email protected]>
Co-authored-by: Andrew Lamb <[email protected]>
Co-authored-by: Dharan Aditya <[email protected]>
Co-authored-by: Kristin Cowalcijk <[email protected]>
Co-authored-by: Oleks V <[email protected]>
Co-authored-by: Zhen Wang <[email protected]>
Co-authored-by: Jagdish Parihar <[email protected]>
andygrove added a commit that referenced this pull request Jan 18, 2025
* feat: support array_append (#1072)

* feat: support array_append

* formatted code

* rewrite array_append plan to match spark behaviour and fixed bug in QueryPlan serde

* remove unwrap

* Fix for Spark 3.3

* refactor array_append binary expression serde code

* Disabled array_append test for spark 4.0+

* chore: Simplify CometShuffleMemoryAllocator to use Spark unified memory allocator (#1063)

* docs: Update benchmarking.md (#1085)

* feat: Require offHeap memory to be enabled (always use unified memory) (#1062)

* Require offHeap memory

* remove unused import

* use off heap memory in stability tests

* reorder imports

* test: Restore one test in CometExecSuite by adding COMET_SHUFFLE_MODE config (#1087)

* Add changelog for 0.4.0 (#1089)

* chore: Prepare for 0.5.0 development (#1090)

* Update version number for build

* update docs

* build: Skip installation of spark-integration  and fuzz testing modules (#1091)

* Add hint for finding the GPG key to use when publishing to maven (#1093)

* docs: Update documentation for 0.4.0 release (#1096)

* update TPC-H results

* update Maven links

* update benchmarking guide and add TPC-DS results

* include q72

* fix: Unsigned type related bugs (#1095)

## Which issue does this PR close?

Closes #1067

## Rationale for this change

Bug fix. A few expressions were failing some unsigned type related tests

## What changes are included in this PR?

 - For `u8`/`u16`, switched to use `generate_cast_to_signed!` in order to copy full i16/i32 width instead of padding zeros in the higher bits
 - `u64` becomes `Decimal(20, 0)` but there was a bug in `round()`  (`>` vs `>=`)

## How are these changes tested?

Put back tests for unsigned types
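
The two fixes can be illustrated with a minimal Rust sketch (the helper names here are illustrative, not taken from Comet's source): widening an unsigned value into a wider signed type preserves the numeric value, and a half-up rounding helper must compare the doubled remainder with `>=`, not `>`.

```rust
// Hypothetical sketch of the two unsigned-type fixes.

// Widening u8 into i16 copies the full numeric value, rather than
// reinterpreting the bit pattern inside a narrower signed type.
fn widen_u8(v: u8) -> i16 {
    v as i16 // 255u8 becomes 255i16, not -1
}

// u64 maps to Decimal(20, 0); a half-up rounding division must use
// `>=` so that an exact half rounds away from zero.
// With `>` instead, round_div(5, 2) would wrongly return 2.
fn round_div(n: u64, d: u64) -> u64 {
    let q = n / d;
    let rem = n % d;
    if rem * 2 >= d { q + 1 } else { q }
}

fn main() {
    println!("{}", widen_u8(255));
    println!("{}", round_div(5, 2));
}
```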

* chore: Include first ScanExec batch in metrics (#1105)

* include first batch in ScanExec metrics

* record row count metric

* fix regression

* chore: Improve CometScan metrics (#1100)

* Add native metrics for plan creation

* make messages consistent

* Include get_next_batch cost in metrics

* formatting

* fix double count of rows

* chore: Add custom metric for native shuffle fetching batches from JVM (#1108)

* feat: support array_insert (#1073)

* Part of the implementation of array_insert

* Missing methods

* Working version

* Reformat code

* Fix code-style

* Add comments about spark's implementation.

* Implement negative indices

+ fix tests for spark < 3.4

* Fix code-style

* Fix scalastyle

* Fix tests for spark < 3.4

* Fixes & tests

- added test for the negative index
- added test for the legacy spark mode

* Use assume(isSpark34Plus) in tests

* Test else-branch & improve coverage

* Update native/spark-expr/src/list.rs

Co-authored-by: Andy Grove <[email protected]>

* Fix fallback test

In one case there is a zero index and the test fails due to a Spark error

* Adjust the behaviour for the NULL case to match Spark

* Move the logic of type checking to the method

* Fix code-style

---------

Co-authored-by: Andy Grove <[email protected]>
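
The 1-based position handling and negative-index behavior discussed above can be modeled in a few lines of Python (a simplified sketch of the semantics, not Comet's native implementation; the legacy-flag offset and the padding behavior for far-negative positions are assumptions):

```python
def array_insert(arr, pos, item, legacy=False):
    """Simplified model of Spark-style array_insert with 1-based positions.

    Negative positions count from the end; `legacy` mimics the mode
    toggled by spark.sql.legacy.negativeIndexInArrayInsert (assumption).
    """
    if arr is None:
        return None
    if pos == 0:
        raise ValueError("position must not be zero (indexes are 1-based)")
    if pos > 0:
        idx = pos - 1
    else:
        idx = len(arr) + pos + (0 if legacy else 1)
    out = list(arr)
    if idx < 0:
        # pad with nulls at the front (assumed padding behavior)
        return [item] + [None] * (-idx) + out
    while len(out) < idx:
        out.append(None)  # pad with nulls when inserting past the end
    out.insert(idx, item)
    return out
```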

* feat: enable decimal to decimal cast of different precision and scale (#1086)

* enable decimal to decimal cast of different precision and scale

* add more test cases for negative scale and higher precision

* add check for compatibility for decimal to decimal

* fix code style

* Update spark/src/main/scala/org/apache/comet/expressions/CometCast.scala

Co-authored-by: Andy Grove <[email protected]>

* fix the nit in comment

---------

Co-authored-by: himadripal <[email protected]>
Co-authored-by: Andy Grove <[email protected]>

* docs: fix readme FGPA/FPGA typo (#1117)

* fix: Use RDD partition index (#1112)

* fix: Use RDD partition index

* fix

* fix

* fix

* fix: Various metrics bug fixes and improvements (#1111)

* fix: Don't create CometScanExec for subclasses of ParquetFileFormat (#1129)

* Use exact class comparison for parquet scan

* Add test

* Add comment

* fix: Fix metrics regressions (#1132)

* fix metrics issues

* clippy

* update tests

* docs: Add more technical detail and new diagram to Comet plugin overview (#1119)

* Add more technical detail and new diagram to Comet plugin overview

* update diagram

* add info on Arrow IPC

* update diagram

* update diagram

* update docs

* address feedback

* Stop passing Java config map into native createPlan (#1101)

* feat: Improve ScanExec native metrics (#1133)

* save

* remove shuffle jvm metric and update tuning guide

* docs

* add source for all ScanExecs

* address feedback

* address feedback

* chore: Remove unused StringView struct (#1143)

* Remove unused StringView struct

* remove more dead code

* docs: Add some documentation explaining how shuffle works (#1148)

* add some notes on shuffle

* reads

* improve docs

* test: enable more Spark 4.0 tests (#1145)

## Which issue does this PR close?

Part of #372 and #551

## Rationale for this change

To be ready for Spark 4.0

## What changes are included in this PR?

This PR enables more Spark 4.0 tests that were fixed by recent changes

## How are these changes tested?

tests enabled

* chore: Refactor cast to use SparkCastOptions param (#1146)

* Refactor cast to use SparkCastOptions param

* update tests

* update benches

* update benches

* update benches

* Enable more scenarios in CometExecBenchmark. (#1151)

* chore: Move more expressions from core crate to spark-expr crate (#1152)

* move aggregate expressions to spark-expr crate

* move more expressions

* move benchmark

* normalize_nan

* bitwise not

* comet scalar funcs

* update bench imports

* remove dead code (#1155)

* fix: Spark 4.0-preview1 SPARK-47120 (#1156)

## Which issue does this PR close?

Part of #372 and #551

## Rationale for this change

To be ready for Spark 4.0

## What changes are included in this PR?

This PR fixes the new test SPARK-47120 added in Spark 4.0

## How are these changes tested?

tests enabled

* chore: Move string kernels and expressions to spark-expr crate (#1164)

* Move string kernels and expressions to spark-expr crate

* remove unused hash kernel

* remove unused dependencies

* chore: Move remaining expressions to spark-expr crate + some minor refactoring (#1165)

* move CheckOverflow to spark-expr crate

* move NegativeExpr to spark-expr crate

* move UnboundColumn to spark-expr crate

* move ExpandExec from execution::datafusion::operators to execution::operators

* refactoring to remove datafusion subpackage

* update imports in benches

* fix

* fix

* chore: Add ignored tests for reading complex types from Parquet (#1167)

* Add ignored tests for reading structs from Parquet

* add basic map test

* add tests for Map and Array

* feat: Add Spark-compatible implementation of SchemaAdapterFactory (#1169)

* Add Spark-compatible SchemaAdapterFactory implementation

* remove prototype code

* fix

* refactor

* implement more cast logic

* implement more cast logic

* add basic test

* improve test

* cleanup

* fmt

* add support for casting unsigned int to signed int

* clippy

* address feedback

* fix test

* fix: Document enabling comet explain plan usage in Spark (4.0) (#1176)

* test: enabling Spark tests with offHeap requirement (#1177)

## Which issue does this PR close?

## Rationale for this change

After #1062 we have not been running Spark tests for native execution

## What changes are included in this PR?

Removed the off heap requirement for testing

## How are these changes tested?

Bringing back Spark tests for native execution

* feat: Improve shuffle metrics (second attempt) (#1175)

* improve shuffle metrics

* docs

* more metrics

* refactor

* address feedback

* fix: stddev_pop should not directly return 0.0 when count is 1.0 (#1184)

* add test

* fix

* fix

* fix

* feat: Make native shuffle compression configurable and respect `spark.shuffle.compress` (#1185)

* Make shuffle compression codec and level configurable

* remove lz4 references

* docs

* update comment

* clippy

* fix benches

* clippy

* clippy

* disable test for miri

* remove lz4 reference from proto

* minor: move shuffle classes from common to spark (#1193)

* minor: refactor decodeBatches to make private in broadcast exchange (#1195)

* minor: refactor prepare_output so that it does not require an ExecutionContext (#1194)

* fix: fix missing explanation for then branch in case when (#1200)

* minor: remove unused source files (#1202)

* chore: Upgrade to DataFusion 44.0.0-rc2 (#1154)

* move aggregate expressions to spark-expr crate

* move more expressions

* move benchmark

* normalize_nan

* bitwise not

* comet scalar funcs

* update bench imports

* save

* save

* save

* remove unused imports

* clippy

* implement more hashers

* implement Hash and PartialEq

* implement Hash and PartialEq

* implement Hash and PartialEq

* benches

* fix ScalarUDFImpl.return_type failure

* exclude test from miri

* ignore correct test

* ignore another test

* remove miri checks

* use return_type_from_exprs

* Revert "use return_type_from_exprs"

This reverts commit febc1f1.

* use DF main branch

* hacky workaround for regression in ScalarUDFImpl.return_type

* fix repo url

* pin to revision

* bump to latest rev

* bump to latest DF rev

* bump DF to rev 9f530dd

* add Cargo.lock

* bump DF version

* no default features

* Revert "remove miri checks"

This reverts commit 4638fe3.

* Update pin to DataFusion e99e02b9b9093ceb0c13a2dd32a2a89beba47930

* update pin

* Update Cargo.toml

Bump to 44.0.0-rc2

* update cargo lock

* revert miri change

---------

Co-authored-by: Andrew Lamb <[email protected]>

* feat: add support for array_contains expression (#1163)

* feat: add support for array_contains expression

* test: add unit test for array_contains function

* Removes unnecessary case expression for handling null values

* chore: Move more expressions from core crate to spark-expr crate (#1152)

* move aggregate expressions to spark-expr crate

* move more expressions

* move benchmark

* normalize_nan

* bitwise not

* comet scalar funcs

* update bench imports

* remove dead code (#1155)

* fix: Spark 4.0-preview1 SPARK-47120 (#1156)

## Which issue does this PR close?

Part of #372 and #551

## Rationale for this change

To be ready for Spark 4.0

## What changes are included in this PR?

This PR fixes the new test SPARK-47120 added in Spark 4.0

## How are these changes tested?

tests enabled

* chore: Move string kernels and expressions to spark-expr crate (#1164)

* Move string kernels and expressions to spark-expr crate

* remove unused hash kernel

* remove unused dependencies

* chore: Move remaining expressions to spark-expr crate + some minor refactoring (#1165)

* move CheckOverflow to spark-expr crate

* move NegativeExpr to spark-expr crate

* move UnboundColumn to spark-expr crate

* move ExpandExec from execution::datafusion::operators to execution::operators

* refactoring to remove datafusion subpackage

* update imports in benches

* fix

* fix

* chore: Add ignored tests for reading complex types from Parquet (#1167)

* Add ignored tests for reading structs from Parquet

* add basic map test

* add tests for Map and Array

* feat: Add Spark-compatible implementation of SchemaAdapterFactory (#1169)

* Add Spark-compatible SchemaAdapterFactory implementation

* remove prototype code

* fix

* refactor

* implement more cast logic

* implement more cast logic

* add basic test

* improve test

* cleanup

* fmt

* add support for casting unsigned int to signed int

* clippy

* address feedback

* fix test

* fix: Document enabling comet explain plan usage in Spark (4.0) (#1176)

* test: enabling Spark tests with offHeap requirement (#1177)

## Which issue does this PR close?

## Rationale for this change

After #1062 we have not been running Spark tests for native execution

## What changes are included in this PR?

Removed the off heap requirement for testing

## How are these changes tested?

Bringing back Spark tests for native execution

* feat: Improve shuffle metrics (second attempt) (#1175)

* improve shuffle metrics

* docs

* more metrics

* refactor

* address feedback

* fix: stddev_pop should not directly return 0.0 when count is 1.0 (#1184)

* add test

* fix

* fix

* fix

* feat: Make native shuffle compression configurable and respect `spark.shuffle.compress` (#1185)

* Make shuffle compression codec and level configurable

* remove lz4 references

* docs

* update comment

* clippy

* fix benches

* clippy

* clippy

* disable test for miri

* remove lz4 reference from proto

* minor: move shuffle classes from common to spark (#1193)

* minor: refactor decodeBatches to make private in broadcast exchange (#1195)

* minor: refactor prepare_output so that it does not require an ExecutionContext (#1194)

* fix: fix missing explanation for then branch in case when (#1200)

* minor: remove unused source files (#1202)

* chore: Upgrade to DataFusion 44.0.0-rc2 (#1154)

* move aggregate expressions to spark-expr crate

* move more expressions

* move benchmark

* normalize_nan

* bitwise not

* comet scalar funcs

* update bench imports

* save

* save

* save

* remove unused imports

* clippy

* implement more hashers

* implement Hash and PartialEq

* implement Hash and PartialEq

* implement Hash and PartialEq

* benches

* fix ScalarUDFImpl.return_type failure

* exclude test from miri

* ignore correct test

* ignore another test

* remove miri checks

* use return_type_from_exprs

* Revert "use return_type_from_exprs"

This reverts commit febc1f1.

* use DF main branch

* hacky workaround for regression in ScalarUDFImpl.return_type

* fix repo url

* pin to revision

* bump to latest rev

* bump to latest DF rev

* bump DF to rev 9f530dd

* add Cargo.lock

* bump DF version

* no default features

* Revert "remove miri checks"

This reverts commit 4638fe3.

* Update pin to DataFusion e99e02b9b9093ceb0c13a2dd32a2a89beba47930

* update pin

* Update Cargo.toml

Bump to 44.0.0-rc2

* update cargo lock

* revert miri change

---------

Co-authored-by: Andrew Lamb <[email protected]>

* update UT

Signed-off-by: Dharan Aditya <[email protected]>

* fix typo in UT

Signed-off-by: Dharan Aditya <[email protected]>

---------

Signed-off-by: Dharan Aditya <[email protected]>
Co-authored-by: Andy Grove <[email protected]>
Co-authored-by: KAZUYUKI TANIMURA <[email protected]>
Co-authored-by: Parth Chandra <[email protected]>
Co-authored-by: Liang-Chi Hsieh <[email protected]>
Co-authored-by: Raz Luvaton <[email protected]>
Co-authored-by: Andrew Lamb <[email protected]>

* feat: Add a `spark.comet.exec.memoryPool` configuration for experimenting with various datafusion memory pool setups. (#1021)

* feat: Reenable tests for filtered SMJ anti join (#1211)

* feat: reenable filtered SMJ Anti join tests

* feat: reenable filtered SMJ Anti join tests

* feat: reenable filtered SMJ Anti join tests

* feat: reenable filtered SMJ Anti join tests

* Add CoalesceBatchesExec around SMJ with join filter

* adding `CoalesceBatches`

* adding `CoalesceBatches`

* adding `CoalesceBatches`

* feat: reenable filtered SMJ Anti join tests

* feat: reenable filtered SMJ Anti join tests

---------

Co-authored-by: Andy Grove <[email protected]>

* chore: Add safety check to CometBuffer (#1050)

* chore: Add safety check to CometBuffer

* Add CometColumnarToRowExec

* fix

* fix

* more

* Update plan stability results

* fix

* fix

* fix

* Revert "fix"

This reverts commit 9bad173.

* Revert "Revert "fix""

This reverts commit d527ad1.

* fix BucketedReadWithoutHiveSupportSuite

* fix SparkPlanSuite

* remove unreachable code (#1213)

* test: Enable Comet by default except some tests in SparkSessionExtensionSuite (#1201)

## Which issue does this PR close?

Part of #1197

## Rationale for this change

Since `loadCometExtension` in the diffs was not using `isCometEnabled`, `SparkSessionExtensionSuite` was not using Comet. Once enabled, some test failures were discovered

## What changes are included in this PR?

`loadCometExtension` now uses `isCometEnabled`, which enables Comet by default
Temporarily ignore the failing tests in SparkSessionExtensionSuite

## How are these changes tested?

existing tests

* extract struct expressions to folders based on spark grouping (#1216)

* chore: extract static invoke expressions to folders based on spark grouping (#1217)

* extract static invoke expressions to folders based on spark grouping

* Update native/spark-expr/src/static_invoke/mod.rs

Co-authored-by: Andy Grove <[email protected]>

---------

Co-authored-by: Andy Grove <[email protected]>

* chore: Follow-on PR to fully enable onheap memory usage (#1210)

* Make datafusion's native memory pool configurable

* save

* fix

* Update memory calculation and add draft documentation

* ready for review

* ready for review

* address feedback

* Update docs/source/user-guide/tuning.md

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* Update docs/source/user-guide/tuning.md

Co-authored-by: Kristin Cowalcijk <[email protected]>

* Update docs/source/user-guide/tuning.md

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* Update docs/source/user-guide/tuning.md

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* remove unused config

---------

Co-authored-by: Kristin Cowalcijk <[email protected]>
Co-authored-by: Liang-Chi Hsieh <[email protected]>

* feat: Move shuffle block decompression and decoding to native code and add LZ4 & Snappy support (#1192)

* Implement native decoding and decompression

* revert some variable renaming for smaller diff

* fix oom issues?

* make NativeBatchDecoderIterator more consistent with ArrowReaderIterator

* fix oom and prep for review

* format

* Add LZ4 support

* clippy, new benchmark

* rename metrics, clean up lz4 code

* update test

* Add support for snappy

* format

* change default back to lz4

* make metrics more accurate

* format

* clippy

* use faster unsafe version of lz4_flex

* Make compression codec configurable for columnar shuffle

* clippy

* fix bench

* fmt

* address feedback

* address feedback

* address feedback

* minor code simplification

* cargo fmt

* overflow check

* rename compression level config

* address feedback

* address feedback

* rename constant

* chore: extract agg_funcs expressions to folders based on spark grouping (#1224)

* extract agg_funcs expressions to folders based on spark grouping

* fix rebase

* extract datetime_funcs expressions to folders based on spark grouping (#1222)

Co-authored-by: Andy Grove <[email protected]>

* chore: use datafusion from crates.io (#1232)

* chore: extract strings file to `strings_func` like in spark grouping (#1215)

* chore: extract predicate_functions expressions to folders based on spark grouping (#1218)

* extract predicate_functions expressions to folders based on spark grouping

* code review changes

---------

Co-authored-by: Andy Grove <[email protected]>

* build(deps): bump protobuf version to 3.21.12 (#1234)

* extract json_funcs expressions to folders based on spark grouping (#1220)

Co-authored-by: Andy Grove <[email protected]>

* test: Enable shuffle by default in Spark tests (#1240)

## Which issue does this PR close?

## Rationale for this change

Because `isCometShuffleEnabled` is false by default, some tests were not reached

## What changes are included in this PR?

Removed `isCometShuffleEnabled` and updated spark test diff

## How are these changes tested?

existing test

* chore: extract hash_funcs expressions to folders based on spark grouping (#1221)

* extract hash_funcs expressions to folders based on spark grouping

* extract hash_funcs expressions to folders based on spark grouping

---------

Co-authored-by: Andy Grove <[email protected]>

* fix: Fall back to Spark for unsupported partition or sort expressions in window aggregates (#1253)

* perf: Improve query planning to more reliably fall back to columnar shuffle when native shuffle is not supported (#1209)

* fix regression (#1259)

* feat: add support for array_remove expression (#1179)

* wip: array remove

* added comet expression test

* updated test cases

* fixed array_remove function for null values

* removed commented code

* remove unnecessary code

* updated the test for 'array_remove'

* added test for array_remove in case the input array is null

* wip: case array is empty

* removed test case for empty array

* fix: Fall back to Spark for distinct aggregates (#1262)

* fall back to Spark for distinct aggregates

* update expected plans for 3.4

* update expected plans for 3.5

* force build

* add comment

* feat: Implement custom RecordBatch serde for shuffle for improved performance (#1190)

* Implement faster encoder for shuffle blocks

* make code more concise

* enable fast encoding for columnar shuffle

* update benches

* test all int types

* test float

* remaining types

* add Snappy and Zstd(6) back to benchmark

* fix regression

* Update native/core/src/execution/shuffle/codec.rs

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* address feedback

* support nullable flag

---------

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* docs: Update TPC-H benchmark results (#1257)

* fix: disable initCap by default (#1276)

* fix: disable initCap by default

* Update spark/src/main/scala/org/apache/comet/serde/QueryPlanSerde.scala

Co-authored-by: Andy Grove <[email protected]>

* address review comments

---------

Co-authored-by: Andy Grove <[email protected]>

* chore: Add changelog for 0.5.0 (#1278)

* Add changelog

* revert accidental change

* move 2 items to performance section

* update TPC-DS results for 0.5.0 (#1277)

* fix: cast timestamp to decimal is unsupported (#1281)

* fix: cast timestamp to decimal is unsupported

* fix style

* revert test name and mark as ignore

* add comment

* chore: Start 0.6.0 development (#1286)

* start 0.6.0 development

* update some docs

* Revert a change

* update CI

* docs: Fix links and provide complete benchmarking scripts (#1284)

* fix links and provide complete scripts

* fix path

* fix incorrect text

* feat: Add HasRowIdMapping interface (#1288)

* fix style

* fix

* fix for plan serialization

---------

Signed-off-by: Dharan Aditya <[email protected]>
Co-authored-by: NoeB <[email protected]>
Co-authored-by: Liang-Chi Hsieh <[email protected]>
Co-authored-by: Raz Luvaton <[email protected]>
Co-authored-by: Andy Grove <[email protected]>
Co-authored-by: KAZUYUKI TANIMURA <[email protected]>
Co-authored-by: Sem <[email protected]>
Co-authored-by: Himadri Pal <[email protected]>
Co-authored-by: himadripal <[email protected]>
Co-authored-by: gstvg <[email protected]>
Co-authored-by: Adam Binford <[email protected]>
Co-authored-by: Matt Butrovich <[email protected]>
Co-authored-by: Raz Luvaton <[email protected]>
Co-authored-by: Andrew Lamb <[email protected]>
Co-authored-by: Dharan Aditya <[email protected]>
Co-authored-by: Kristin Cowalcijk <[email protected]>
Co-authored-by: Oleks V <[email protected]>
Co-authored-by: Zhen Wang <[email protected]>
Co-authored-by: Jagdish Parihar <[email protected]>
andygrove added a commit that referenced this pull request Jan 21, 2025
…exec - 20240121 (#1316)

* feat: support array_append (#1072)

* feat: support array_append

* formatted code

* rewrite array_append plan to match spark behaviour and fixed bug in QueryPlan serde

* remove unwrap

* Fix for Spark 3.3

* refactor array_append binary expression serde code

* Disabled array_append test for spark 4.0+
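
The null handling that came up in review can be summarized with a tiny Python model of Spark's `array_append` semantics (an illustrative sketch, not Comet's native code): a null array yields null, while a null element is appended as-is.

```python
def array_append(arr, elem):
    """Model of Spark-style array_append: a null array yields null,
    while a null element is appended to the array as null."""
    if arr is None:
        return None
    return list(arr) + [elem]
```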

* chore: Simplify CometShuffleMemoryAllocator to use Spark unified memory allocator (#1063)

* docs: Update benchmarking.md (#1085)

* feat: Require offHeap memory to be enabled (always use unified memory) (#1062)

* Require offHeap memory

* remove unused import

* use off heap memory in stability tests

* reorder imports

* test: Restore one test in CometExecSuite by adding COMET_SHUFFLE_MODE config (#1087)

* Add changelog for 0.4.0 (#1089)

* chore: Prepare for 0.5.0 development (#1090)

* Update version number for build

* update docs

* build: Skip installation of spark-integration  and fuzz testing modules (#1091)

* Add hint for finding the GPG key to use when publishing to maven (#1093)

* docs: Update documentation for 0.4.0 release (#1096)

* update TPC-H results

* update Maven links

* update benchmarking guide and add TPC-DS results

* include q72

* fix: Unsigned type related bugs (#1095)

## Which issue does this PR close?

Closes #1067

## Rationale for this change

Bug fix. A few expressions were failing some unsigned-type-related tests

## What changes are included in this PR?

 - For `u8`/`u16`, switched to use `generate_cast_to_signed!` in order to copy full i16/i32 width instead of padding zeros in the higher bits
 - `u64` becomes `Decimal(20, 0)` but there was a bug in `round()`  (`>` vs `>=`)
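
As a rough illustration (not the actual Comet code, which uses the `generate_cast_to_signed!` macro), the two pitfalls described above can be sketched in Rust: widening a byte behaves differently depending on whether the bits are interpreted as unsigned (zero-extension) or signed (sign-extension), and a half-away-from-zero rounding boundary check must use `>=` rather than `>` so exact halves round correctly. The function names and the rounding convention here are illustrative assumptions:

```rust
// Interpreting the byte as unsigned and widening zero-extends:
// every value in 0..=255 is preserved.
fn zero_extend(v: u8) -> i16 {
    v as i16
}

// Interpreting the same bits as signed and widening sign-extends:
// bit pattern 0xFF becomes -1 instead of 255.
fn sign_extend(v: u8) -> i16 {
    (v as i8) as i16
}

// Rounding half away from zero for non-negative x: the boundary
// test must be `>=`, not `>`, or exact .5 fractions round down.
fn round_half_up(x: f64) -> i64 {
    let frac = x - x.trunc();
    if frac >= 0.5 {
        x.trunc() as i64 + 1
    } else {
        x.trunc() as i64
    }
}

fn main() {
    assert_eq!(zero_extend(255), 255);
    assert_eq!(sign_extend(255), -1);
    assert_eq!(round_half_up(2.5), 3);
    assert_eq!(round_half_up(2.4), 2);
    println!("ok");
}
```

Which widening is correct depends on whether the source column is genuinely unsigned; mixing the two interpretations is exactly the kind of mismatch that surfaces only for values above the signed midpoint.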

## How are these changes tested?

Put back tests for unsigned types

* chore: Include first ScanExec batch in metrics (#1105)

* include first batch in ScanExec metrics

* record row count metric

* fix regression

* chore: Improve CometScan metrics (#1100)

* Add native metrics for plan creation

* make messages consistent

* Include get_next_batch cost in metrics

* formatting

* fix double count of rows

* chore: Add custom metric for native shuffle fetching batches from JVM (#1108)

* feat: support array_insert (#1073)

* Part of the implementation of array_insert

* Missing methods

* Working version

* Reformat code

* Fix code-style

* Add comments about spark's implementation.

* Implement negative indices

+ fix tests for spark < 3.4

* Fix code-style

* Fix scalastyle

* Fix tests for spark < 3.4

* Fixes & tests

- added test for the negative index
- added test for the legacy spark mode

* Use assume(isSpark34Plus) in tests

* Test else-branch & improve coverage

* Update native/spark-expr/src/list.rs

Co-authored-by: Andy Grove <[email protected]>

* Fix fallback test

In one case there is a zero index and the test fails due to a Spark error

* Adjust the behaviour for the NULL case to match Spark

* Move the logic of type checking to the method

* Fix code-style

---------

Co-authored-by: Andy Grove <[email protected]>

* feat: enable decimal to decimal cast of different precision and scale (#1086)

* enable decimal to decimal cast of different precision and scale

* add more test cases for negative scale and higher precision

* add check for compatibility for decimal to decimal

* fix code style

* Update spark/src/main/scala/org/apache/comet/expressions/CometCast.scala

Co-authored-by: Andy Grove <[email protected]>

* fix the nit in comment

---------

Co-authored-by: himadripal <[email protected]>
Co-authored-by: Andy Grove <[email protected]>
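
The decimal-to-decimal cast above can be sketched roughly on Arrow's `Decimal128` layout (an `i128` unscaled value plus a scale): rescale by a power of ten, then check the result against the target precision. This is a hypothetical helper under assumed half-away-from-zero rounding, not the Comet implementation:

```rust
// Decimal modeled as an unscaled i128 plus a scale, mirroring Arrow's
// Decimal128 layout. Returns None when the result overflows the target
// precision (Spark returns null or raises, depending on ANSI mode).
fn cast_decimal(unscaled: i128, scale: i32, to_precision: u32, to_scale: i32) -> Option<i128> {
    let delta = to_scale - scale;
    let rescaled = if delta >= 0 {
        // Increasing scale: multiply by a power of ten (may overflow).
        unscaled.checked_mul(10i128.checked_pow(delta as u32)?)?
    } else {
        // Decreasing scale: divide and round half away from zero.
        let div = 10i128.pow((-delta) as u32);
        let (q, r) = (unscaled / div, unscaled % div);
        let adj = if r.abs() * 2 >= div {
            if unscaled >= 0 { 1 } else { -1 }
        } else {
            0
        };
        q + adj
    };
    // The rescaled value must fit within to_precision digits.
    let max = 10i128.checked_pow(to_precision)? - 1;
    if rescaled.abs() <= max { Some(rescaled) } else { None }
}

fn main() {
    // 123.45 (precision 5, scale 2) cast to scale 1 rounds to 123.5
    assert_eq!(cast_decimal(12345, 2, 4, 1), Some(1235));
    // 123.45 cast to scale 3 is exact: 123.450
    assert_eq!(cast_decimal(12345, 2, 6, 3), Some(123450));
    // 999 does not fit in precision 2 -> overflow
    assert_eq!(cast_decimal(999, 0, 2, 0), None);
    println!("ok");
}
```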

* docs: fix readme FGPA/FPGA typo (#1117)

* fix: Use RDD partition index (#1112)

* fix: Use RDD partition index

* fix

* fix

* fix

* fix: Various metrics bug fixes and improvements (#1111)

* fix: Don't create CometScanExec for subclasses of ParquetFileFormat (#1129)

* Use exact class comparison for parquet scan

* Add test

* Add comment

* fix: Fix metrics regressions (#1132)

* fix metrics issues

* clippy

* update tests

* docs: Add more technical detail and new diagram to Comet plugin overview (#1119)

* Add more technical detail and new diagram to Comet plugin overview

* update diagram

* add info on Arrow IPC

* update diagram

* update diagram

* update docs

* address feedback

* Stop passing Java config map into native createPlan (#1101)

* feat: Improve ScanExec native metrics (#1133)

* save

* remove shuffle jvm metric and update tuning guide

* docs

* add source for all ScanExecs

* address feedback

* address feedback

* chore: Remove unused StringView struct (#1143)

* Remove unused StringView struct

* remove more dead code

* docs: Add some documentation explaining how shuffle works (#1148)

* add some notes on shuffle

* reads

* improve docs

* test: enable more Spark 4.0 tests (#1145)

## Which issue does this PR close?

Part of #372 and #551

## Rationale for this change

To be ready for Spark 4.0

## What changes are included in this PR?

This PR enables more Spark 4.0 tests that were fixed by recent changes

## How are these changes tested?

tests enabled

* chore: Refactor cast to use SparkCastOptions param (#1146)

* Refactor cast to use SparkCastOptions param

* update tests

* update benches

* update benches

* update benches

* Enable more scenarios in CometExecBenchmark. (#1151)

* chore: Move more expressions from core crate to spark-expr crate (#1152)

* move aggregate expressions to spark-expr crate

* move more expressions

* move benchmark

* normalize_nan

* bitwise not

* comet scalar funcs

* update bench imports

* remove dead code (#1155)

* fix: Spark 4.0-preview1 SPARK-47120 (#1156)

## Which issue does this PR close?

Part of #372 and #551

## Rationale for this change

To be ready for Spark 4.0

## What changes are included in this PR?

This PR fixes the new test SPARK-47120 added in Spark 4.0

## How are these changes tested?

tests enabled

* chore: Move string kernels and expressions to spark-expr crate (#1164)

* Move string kernels and expressions to spark-expr crate

* remove unused hash kernel

* remove unused dependencies

* chore: Move remaining expressions to spark-expr crate + some minor refactoring (#1165)

* move CheckOverflow to spark-expr crate

* move NegativeExpr to spark-expr crate

* move UnboundColumn to spark-expr crate

* move ExpandExec from execution::datafusion::operators to execution::operators

* refactoring to remove datafusion subpackage

* update imports in benches

* fix

* fix

* chore: Add ignored tests for reading complex types from Parquet (#1167)

* Add ignored tests for reading structs from Parquet

* add basic map test

* add tests for Map and Array

* feat: Add Spark-compatible implementation of SchemaAdapterFactory (#1169)

* Add Spark-compatible SchemaAdapterFactory implementation

* remove prototype code

* fix

* refactor

* implement more cast logic

* implement more cast logic

* add basic test

* improve test

* cleanup

* fmt

* add support for casting unsigned int to signed int

* clippy

* address feedback

* fix test

* fix: Document enabling comet explain plan usage in Spark (4.0) (#1176)

* test: enabling Spark tests with offHeap requirement (#1177)

## Which issue does this PR close?

## Rationale for this change

After #1062 we have not been running Spark tests for native execution

## What changes are included in this PR?

Removed the off heap requirement for testing

## How are these changes tested?

Bringing back Spark tests for native execution

* feat: Improve shuffle metrics (second attempt) (#1175)

* improve shuffle metrics

* docs

* more metrics

* refactor

* address feedback

* fix: stddev_pop should not directly return 0.0 when count is 1.0 (#1184)

* add test

* fix

* fix

* fix

* feat: Make native shuffle compression configurable and respect `spark.shuffle.compress` (#1185)

* Make shuffle compression codec and level configurable

* remove lz4 references

* docs

* update comment

* clippy

* fix benches

* clippy

* clippy

* disable test for miri

* remove lz4 reference from proto

* minor: move shuffle classes from common to spark (#1193)

* minor: refactor decodeBatches to make private in broadcast exchange (#1195)

* minor: refactor prepare_output so that it does not require an ExecutionContext (#1194)

* fix: fix missing explanation for then branch in case when (#1200)

* minor: remove unused source files (#1202)

* chore: Upgrade to DataFusion 44.0.0-rc2 (#1154)

* move aggregate expressions to spark-expr crate

* move more expressions

* move benchmark

* normalize_nan

* bitwise not

* comet scalar funcs

* update bench imports

* save

* save

* save

* remove unused imports

* clippy

* implement more hashers

* implement Hash and PartialEq

* implement Hash and PartialEq

* implement Hash and PartialEq

* benches

* fix ScalarUDFImpl.return_type failure

* exclude test from miri

* ignore correct test

* ignore another test

* remove miri checks

* use return_type_from_exprs

* Revert "use return_type_from_exprs"

This reverts commit febc1f1.

* use DF main branch

* hacky workaround for regression in ScalarUDFImpl.return_type

* fix repo url

* pin to revision

* bump to latest rev

* bump to latest DF rev

* bump DF to rev 9f530dd

* add Cargo.lock

* bump DF version

* no default features

* Revert "remove miri checks"

This reverts commit 4638fe3.

* Update pin to DataFusion e99e02b9b9093ceb0c13a2dd32a2a89beba47930

* update pin

* Update Cargo.toml

Bump to 44.0.0-rc2

* update cargo lock

* revert miri change

---------

Co-authored-by: Andrew Lamb <[email protected]>

* feat: add support for array_contains expression (#1163)

* feat: add support for array_contains expression

* test: add unit test for array_contains function

* Removes unnecessary case expression for handling null values

* update UT

Signed-off-by: Dharan Aditya <[email protected]>

* fix typo in UT

Signed-off-by: Dharan Aditya <[email protected]>

---------

Signed-off-by: Dharan Aditya <[email protected]>
Co-authored-by: Andy Grove <[email protected]>
Co-authored-by: KAZUYUKI TANIMURA <[email protected]>
Co-authored-by: Parth Chandra <[email protected]>
Co-authored-by: Liang-Chi Hsieh <[email protected]>
Co-authored-by: Raz Luvaton <[email protected]>
Co-authored-by: Andrew Lamb <[email protected]>

* feat: Add a `spark.comet.exec.memoryPool` configuration for experimenting with various datafusion memory pool setups. (#1021)

* feat: Reenable tests for filtered SMJ anti join (#1211)

* feat: reenable filtered SMJ Anti join tests

* feat: reenable filtered SMJ Anti join tests

* feat: reenable filtered SMJ Anti join tests

* feat: reenable filtered SMJ Anti join tests

* Add CoalesceBatchesExec around SMJ with join filter

* adding `CoalesceBatches`

* adding `CoalesceBatches`

* adding `CoalesceBatches`

* feat: reenable filtered SMJ Anti join tests

* feat: reenable filtered SMJ Anti join tests

---------

Co-authored-by: Andy Grove <[email protected]>

* chore: Add safety check to CometBuffer (#1050)

* chore: Add safety check to CometBuffer

* Add CometColumnarToRowExec

* fix

* fix

* more

* Update plan stability results

* fix

* fix

* fix

* Revert "fix"

This reverts commit 9bad173.

* Revert "Revert "fix""

This reverts commit d527ad1.

* fix BucketedReadWithoutHiveSupportSuite

* fix SparkPlanSuite

* remove unreachable code (#1213)

* test: Enable Comet by default except some tests in SparkSessionExtensionSuite (#1201)

## Which issue does this PR close?

Part of #1197

## Rationale for this change

Since `loadCometExtension` in the diffs was not using `isCometEnabled`, `SparkSessionExtensionSuite` was not using Comet. Once enabled, some test failures were discovered

## What changes are included in this PR?

`loadCometExtension` now uses `isCometEnabled`, which enables Comet by default
Temporarily ignore the failing tests in SparkSessionExtensionSuite

## How are these changes tested?

existing tests

* extract struct expressions to folders based on spark grouping (#1216)

* chore: extract static invoke expressions to folders based on spark grouping (#1217)

* extract static invoke expressions to folders based on spark grouping

* Update native/spark-expr/src/static_invoke/mod.rs

Co-authored-by: Andy Grove <[email protected]>

---------

Co-authored-by: Andy Grove <[email protected]>

* chore: Follow-on PR to fully enable onheap memory usage (#1210)

* Make datafusion's native memory pool configurable

* save

* fix

* Update memory calculation and add draft documentation

* ready for review

* ready for review

* address feedback

* Update docs/source/user-guide/tuning.md

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* Update docs/source/user-guide/tuning.md

Co-authored-by: Kristin Cowalcijk <[email protected]>

* Update docs/source/user-guide/tuning.md

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* Update docs/source/user-guide/tuning.md

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* remove unused config

---------

Co-authored-by: Kristin Cowalcijk <[email protected]>
Co-authored-by: Liang-Chi Hsieh <[email protected]>

* feat: Move shuffle block decompression and decoding to native code and add LZ4 & Snappy support (#1192)

* Implement native decoding and decompression

* revert some variable renaming for smaller diff

* fix oom issues?

* make NativeBatchDecoderIterator more consistent with ArrowReaderIterator

* fix oom and prep for review

* format

* Add LZ4 support

* clippy, new benchmark

* rename metrics, clean up lz4 code

* update test

* Add support for snappy

* format

* change default back to lz4

* make metrics more accurate

* format

* clippy

* use faster unsafe version of lz4_flex

* Make compression codec configurable for columnar shuffle

* clippy

* fix bench

* fmt

* address feedback

* address feedback

* address feedback

* minor code simplification

* cargo fmt

* overflow check

* rename compression level config

* address feedback

* address feedback

* rename constant

* chore: extract agg_funcs expressions to folders based on spark grouping (#1224)

* extract agg_funcs expressions to folders based on spark grouping

* fix rebase

* extract datetime_funcs expressions to folders based on spark grouping (#1222)

Co-authored-by: Andy Grove <[email protected]>

* chore: use datafusion from crates.io (#1232)

* chore: extract strings file to `strings_func` like in spark grouping (#1215)

* chore: extract predicate_functions expressions to folders based on spark grouping (#1218)

* extract predicate_functions expressions to folders based on spark grouping

* code review changes

---------

Co-authored-by: Andy Grove <[email protected]>

* build(deps): bump protobuf version to 3.21.12 (#1234)

* extract json_funcs expressions to folders based on spark grouping (#1220)

Co-authored-by: Andy Grove <[email protected]>

* test: Enable shuffle by default in Spark tests (#1240)

## Which issue does this PR close?

## Rationale for this change

Because `isCometShuffleEnabled` is false by default, some tests were not reached

## What changes are included in this PR?

Removed `isCometShuffleEnabled` and updated spark test diff

## How are these changes tested?

existing test

* chore: extract hash_funcs expressions to folders based on spark grouping (#1221)

* extract hash_funcs expressions to folders based on spark grouping

* extract hash_funcs expressions to folders based on spark grouping

---------

Co-authored-by: Andy Grove <[email protected]>

* fix: Fall back to Spark for unsupported partition or sort expressions in window aggregates (#1253)

* perf: Improve query planning to more reliably fall back to columnar shuffle when native shuffle is not supported (#1209)

* fix regression (#1259)

* feat: add support for array_remove expression (#1179)

* wip: array remove

* added comet expression test

* updated test cases

* fixed array_remove function for null values

* removed commented code

* remove unnecessary code

* updated the test for 'array_remove'

* added test for array_remove in case the input array is null

* wip: case array is empty

* removed test case for empty array

* fix: Fall back to Spark for distinct aggregates (#1262)

* fall back to Spark for distinct aggregates

* update expected plans for 3.4

* update expected plans for 3.5

* force build

* add comment

* feat: Implement custom RecordBatch serde for shuffle for improved performance (#1190)

* Implement faster encoder for shuffle blocks

* make code more concise

* enable fast encoding for columnar shuffle

* update benches

* test all int types

* test float

* remaining types

* add Snappy and Zstd(6) back to benchmark

* fix regression

* Update native/core/src/execution/shuffle/codec.rs

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* address feedback

* support nullable flag

---------

Co-authored-by: Liang-Chi Hsieh <[email protected]>

* docs: Update TPC-H benchmark results (#1257)

* fix: disable initCap by default (#1276)

* fix: disable initCap by default

* Update spark/src/main/scala/org/apache/comet/serde/QueryPlanSerde.scala

Co-authored-by: Andy Grove <[email protected]>

* address review comments

---------

Co-authored-by: Andy Grove <[email protected]>

* chore: Add changelog for 0.5.0 (#1278)

* Add changelog

* revert accidental change

* move 2 items to performance section

* update TPC-DS results for 0.5.0 (#1277)

* fix: cast timestamp to decimal is unsupported (#1281)

* fix: cast timestamp to decimal is unsupported

* fix style

* revert test name and mark as ignore

* add comment

* chore: Start 0.6.0 development (#1286)

* start 0.6.0 development

* update some docs

* Revert a change

* update CI

* docs: Fix links and provide complete benchmarking scripts (#1284)

* fix links and provide complete scripts

* fix path

* fix incorrect text

* feat: Add HasRowIdMapping interface (#1288)

---------

Signed-off-by: Dharan Aditya <[email protected]>
Co-authored-by: NoeB <[email protected]>
Co-authored-by: Liang-Chi Hsieh <[email protected]>
Co-authored-by: Raz Luvaton <[email protected]>
Co-authored-by: Andy Grove <[email protected]>
Co-authored-by: KAZUYUKI TANIMURA <[email protected]>
Co-authored-by: Sem <[email protected]>
Co-authored-by: Himadri Pal <[email protected]>
Co-authored-by: himadripal <[email protected]>
Co-authored-by: gstvg <[email protected]>
Co-authored-by: Adam Binford <[email protected]>
Co-authored-by: Matt Butrovich <[email protected]>
Co-authored-by: Raz Luvaton <[email protected]>
Co-authored-by: Andrew Lamb <[email protected]>
Co-authored-by: Dharan Aditya <[email protected]>
Co-authored-by: Kristin Cowalcijk <[email protected]>
Co-authored-by: Oleks V <[email protected]>
Co-authored-by: Zhen Wang <[email protected]>
Co-authored-by: Jagdish Parihar <[email protected]>