
[pull] main from apache:main #110

Merged
pull[bot] merged 3 commits into buraksenn:main from apache:main
Apr 16, 2026

Conversation


@pull pull bot commented Apr 16, 2026

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)

Can you help keep this open source service alive? 💖 Please sponsor : )

erenavsarogullari and others added 3 commits April 16, 2026 07:19
…ting issue (#20469)

## Which issue does this PR close?
- Closes #20466.

## Rationale for this change
Currently, the Spark `slice` function accepts a Null array and returns `NULL`
for such queries. The DataFusion-Spark `slice` function also
needs to return `NULL` when a Null array is passed.
**Spark Behavior** (tested with latest Spark master):
```
> SELECT slice(NULL, 1, 2);
+-----------------+
|slice(NULL, 1, 2)|
+-----------------+
|             null|
+-----------------+
```

**DF Behaviour:**
Current:
```
query error
SELECT slice(NULL, 1, 2);
----
DataFusion error: Internal error: could not cast array of type Null to arrow_array::array::list_array::GenericListArray<i32>.
This issue was likely caused by a bug in DataFusion's code. Please help us to resolve this by filing a bug report in our issue tracker: https://github.com/apache/datafusion/issues
```
New:
```
query ?
SELECT slice(NULL, 1, 2);
----
NULL
```

## What changes are included in this PR?
Explained in the first section.

## Are these changes tested?
Added new UT cases for both `slice.rs` and `slice.slt`.

## Are there any user-facing changes?
Yes. Currently the `slice` function returns an error message for `Null` array
inputs; after this change it returns `NULL`, so end users get the expected
result instead of an error.
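The NULL-in, NULL-out rule above can be sketched without the Arrow machinery. `Value` and `spark_slice` below are hypothetical stand-ins for the real kernel, not DataFusion's API, and only a positive 1-based `start` is modeled:

```rust
// `Value` is a hypothetical stand-in for an Arrow array argument.
#[derive(Debug, Clone, PartialEq)]
enum Value {
    Null,
    List(Vec<i64>),
}

// Hypothetical slice kernel: `start` is 1-based, as in Spark.
fn spark_slice(input: &Value, start: usize, length: usize) -> Result<Value, String> {
    match input {
        // NULL in, NULL out: short-circuit before any downcast to a list array,
        // which is where the old code produced the internal cast error.
        Value::Null => Ok(Value::Null),
        Value::List(items) => {
            let begin = start.saturating_sub(1); // 1-based -> 0-based
            Ok(Value::List(
                items.iter().skip(begin).take(length).cloned().collect(),
            ))
        }
    }
}

fn main() {
    // Mirrors `SELECT slice(NULL, 1, 2)` returning NULL instead of an error.
    assert_eq!(spark_slice(&Value::Null, 1, 2).unwrap(), Value::Null);
    assert_eq!(
        spark_slice(&Value::List(vec![10, 20, 30, 40]), 2, 2).unwrap(),
        Value::List(vec![20, 30])
    );
}
```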

---------

Co-authored-by: Martin Grigorov <martin-g@users.noreply.github.com>
## Which issue does this PR close?


Update tokio from 1.51 to 1.52

## Rationale for this change


Allow experimenting with a new setting
`Builder::enable_eager_driver_handoff` in tokio, among other
improvements in the latest tokio version.

## What changes are included in this PR?


Tokio version update in Cargo.toml
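A minimal sketch of what such a bump looks like in a workspace `Cargo.toml`; the feature list shown is illustrative, not taken from this PR:

```toml
[workspace.dependencies]
# Bump the minimum tokio version (previously pinned to 1.51).
tokio = { version = "1.52", features = ["rt-multi-thread", "macros"] }
```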

## Are these changes tested?


All existing tests pass

## Are there any user-facing changes?

No

…ushDecoderStreamState (#21663)

## Which issue does this PR close?

- Closes #21662.

## Rationale for this change

Miri detects a Stacked Borrows violation in
`PushDecoderStreamState::transition`. A nested `async` block captures
`&mut self` as a single opaque mutable reference. At the `.await` on
`get_byte_ranges`, the future yields, and the `Unique` tag on the borrow
stack is invalidated by a `SharedReadOnly` retag. When the future
resumes, `push_ranges` attempts a two-phase retag through the
now-invalidated tag.

This was found by [Apache DataFusion
Comet](https://github.com/apache/datafusion-comet), which runs Miri in
CI.

## What changes are included in this PR?

Remove the nested `async` block in the `NeedsData` arm of
`PushDecoderStreamState::transition` and inline the IO
(`get_byte_ranges`) and CPU (`push_ranges`) operations as separate
statements. Since `transition` is already an `async fn`, the `.await`
works directly in the loop body. Without the nested block, the compiler
can split the borrows of `self.reader` and `self.decoder` into disjoint
field borrows, keeping the borrow stack valid across the yield point.

Also removes the now-unused `parquet::errors::ParquetError` import.
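The fixed shape can be illustrated with a self-contained sketch. `Stream`, `get_byte_ranges`, and the trivial `block_on` executor here are hypothetical stand-ins, not the parquet crate's types:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Trivial single-future executor, just enough to drive the example.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn clone(p: *const ()) -> RawWaker { RawWaker::new(p, &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is not moved after being pinned here.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

// Hypothetical stand-in for the decoder stream state.
struct Stream {
    reader: Vec<Vec<u8>>, // pending byte ranges (stands in for the IO side)
    decoder: Vec<u8>,     // decoded output (stands in for the CPU side)
}

impl Stream {
    // IO step: borrows only the reader field, never the whole `self`.
    async fn get_byte_ranges(reader: &mut Vec<Vec<u8>>) -> Vec<u8> {
        reader.pop().unwrap_or_default()
    }

    // Fixed shape: no nested `async` block capturing `&mut self` as one
    // opaque borrow. The IO await and the CPU step are separate statements,
    // so the compiler splits `self.reader` and `self.decoder` into disjoint
    // field borrows that stay valid across the yield point.
    async fn transition(&mut self) {
        let ranges = Self::get_byte_ranges(&mut self.reader).await; // IO
        self.decoder.extend(ranges);                                // CPU
    }
}

fn main() {
    let mut s = Stream { reader: vec![vec![1, 2]], decoder: Vec::new() };
    block_on(s.transition());
    assert_eq!(s.decoder, vec![1, 2]);
}
```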

## Are these changes tested?

Covered by existing parquet reader tests. The original violation was
caught by Miri, which DataFusion does not currently run in CI.

## Are there any user-facing changes?

No.
@pull pull bot locked and limited conversation to collaborators Apr 16, 2026
@pull pull bot added the ⤵️ pull label Apr 16, 2026
@pull pull bot merged commit 8b47f45 into buraksenn:main Apr 16, 2026
