feat: Add support for UnboundPartitionSpec
#98
Comments
I'd like to have a try.
Thanks!
Sorry, I am confused about this issue. According to iceberg#4360, I think UnboundPartitionSpec provides a build method without a schema and can later be bound to a schema. It seems that the primary action in the bind function is retrieving the source type from the schema and initializing the transform with that source type.
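Concretely, the two-phase flow I have in mind looks roughly like this (names are placeholders, not the final API; it assumes the struct sketches later in this thread):

```rust
// Hypothetical usage sketch: build first, bind later.
fn example(schema: &Schema) -> Result<PartitionSpec, String> {
    // 1. Build the spec without a schema.
    let unbound = UnboundPartitionSpec {
        fields: vec![UnboundPartitionField {
            source_id: 1,
            name: "ts_day".to_string(),
            transform: Transform::Day,
        }],
    };
    // 2. Later, bind it to a concrete schema; bind looks up the source
    //    column's type and validates the transform against it.
    unbound.bind(schema, /*spec_id=*/ 0)
}
```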
Hi @my-vegetable-has-exploded, sorry for the confusion.
Though we can use some other techniques, for example make them As with
Sorry for my misunderstanding and thanks for your patience @liurenjie1024. Without

```rust
pub struct UnboundPartitionField {
    /// A source column id from the table's schema.
    pub source_id: i32,
    /// A partition name.
    pub name: String,
    /// A transform that is applied to the source column to produce a partition value.
    pub transform: Transform,
}
```
```rust
pub struct UnboundPartitionSpec {
    pub fields: Vec<UnboundPartitionField>,
}

impl UnboundPartitionSpec {
    pub fn bind(&self, schema: SchemaRef) -> Result<PartitionSpec> {
        // work in progress
        todo!()
    }
}
```
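For the bind step itself, here is a minimal self-contained sketch of what I'd expect it to do, using simplified stand-ins for `Schema`, `Transform`, and the bound types (the real iceberg-rust types, and the transform/type validation, will differ):

```rust
use std::collections::HashMap;

/// Simplified stand-in for a partition transform.
#[derive(Debug, Clone, PartialEq)]
pub enum Transform {
    Identity,
    Day,
    Bucket(u32),
}

/// Stand-in schema: source column id -> column type name.
pub struct Schema {
    pub fields: HashMap<i32, String>,
}

#[derive(Debug, Clone, PartialEq)]
pub struct PartitionField {
    pub source_id: i32,
    pub field_id: i32,
    pub name: String,
    pub transform: Transform,
}

#[derive(Debug, Clone, PartialEq)]
pub struct PartitionSpec {
    pub spec_id: i32,
    pub fields: Vec<PartitionField>,
}

pub struct UnboundPartitionField {
    pub source_id: i32,
    pub name: String,
    pub transform: Transform,
}

pub struct UnboundPartitionSpec {
    pub fields: Vec<UnboundPartitionField>,
}

impl UnboundPartitionSpec {
    /// Bind against a schema: verify each source column exists, then assign
    /// partition field ids starting from 1000 (Iceberg's PARTITION_DATA_ID_START).
    pub fn bind(&self, schema: &Schema, spec_id: i32) -> Result<PartitionSpec, String> {
        let mut fields = Vec::with_capacity(self.fields.len());
        for (i, f) in self.fields.iter().enumerate() {
            // Binding fails if the source column is missing from the schema.
            schema
                .fields
                .get(&f.source_id)
                .ok_or_else(|| format!("source column {} not in schema", f.source_id))?;
            // The real implementation would also validate that the transform
            // can be applied to the source column's type.
            fields.push(PartitionField {
                source_id: f.source_id,
                field_id: 1000 + i as i32,
                name: f.name.clone(),
                transform: f.transform.clone(),
            });
        }
        Ok(PartitionSpec { spec_id, fields })
    }
}
```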
Hi @my-vegetable-has-exploded, I mean I don't want to add
Got it, thanks!
I think I'm still misunderstanding the UnboundPartitionSpec binding process, especially how the spec_id is determined when committing a transaction.
Cool, I'll take a look. |
We can use the Java implementation as a reference.
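On the Java side, the spec id is assigned at commit time by `TableMetadata#reuseOrCreateNewSpecId`: if a structurally identical spec already exists its id is reused, otherwise the new spec gets the highest existing id plus one. A rough sketch of that logic, reusing the simplified `PartitionSpec` above (`compatible_with` is a hypothetical equality check):

```rust
/// Sketch of commit-time spec id assignment, mirroring Java's
/// TableMetadata#reuseOrCreateNewSpecId.
fn reuse_or_create_spec_id(existing: &[PartitionSpec], new_spec: &PartitionSpec) -> i32 {
    existing
        .iter()
        // Reuse the id of a structurally identical spec if one exists...
        .find(|s| compatible_with(s, new_spec))
        .map(|s| s.spec_id)
        // ...otherwise allocate the highest existing id plus one.
        .unwrap_or_else(|| existing.iter().map(|s| s.spec_id).max().map_or(0, |id| id + 1))
}

/// Hypothetical compatibility check: same fields in the same order,
/// ignoring spec_id and field_id.
fn compatible_with(a: &PartitionSpec, b: &PartitionSpec) -> bool {
    a.fields.len() == b.fields.len()
        && a.fields.iter().zip(&b.fields).all(|(x, y)| {
            x.source_id == y.source_id && x.name == y.name && x.transform == y.transform
        })
}
```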