Iceberg CDC streaming source #37191
Conversation
Summary of Changes

Hello @ahmedabu98, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request delivers an Iceberg Change Data Capture (CDC) streaming source for Apache Beam. It allows Beam pipelines to consume a continuous stream of changes from an Iceberg table, including inserts, deletes, and updates. The implementation handles different types of change events, processing straightforward additions or removals directly, while employing a reconciliation step for complex update scenarios. This significantly expands Beam's capabilities for real-time data integration with Iceberg.
/gemini review
Code Review
This pull request introduces a significant new feature: an Iceberg CDC (Change Data Capture) streaming source. The overall design is solid, leveraging Beam's splittable DoFns and CoGroupByKey for reconciling changes. The code is well-structured into different components for scanning, reading, and reconciling changelogs.
I've identified a few areas for improvement, mainly around cleaning up TODO comments and debugging statements. My specific comments are below.
There is one critical issue I couldn't comment on directly due to tooling limitations: in SerializableDataFile.java, the equals and hashCode methods have not been updated to include the newly added fields (dataSequenceNumber, fileSequenceNumber, firstRowId). This violates the Object contract and can lead to subtle bugs. Please ensure this is fixed.
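As a sketch of the kind of fix meant here (the class below is a stripped-down stand-in showing only the three new fields; the real SerializableDataFile has many more that must also stay in both methods):

```java
import java.util.Objects;

// Stripped-down stand-in for SerializableDataFile: only the newly added
// fields are shown; the existing fields must remain in both methods as well.
class SerializableDataFile {
  private final Long dataSequenceNumber;
  private final Long fileSequenceNumber;
  private final Long firstRowId;

  SerializableDataFile(Long dataSequenceNumber, Long fileSequenceNumber, Long firstRowId) {
    this.dataSequenceNumber = dataSequenceNumber;
    this.fileSequenceNumber = fileSequenceNumber;
    this.firstRowId = firstRowId;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof SerializableDataFile)) {
      return false;
    }
    SerializableDataFile that = (SerializableDataFile) o;
    // without these checks, two files differing only in sequence numbers
    // or first row ID would wrongly compare equal
    return Objects.equals(dataSequenceNumber, that.dataSequenceNumber)
        && Objects.equals(fileSequenceNumber, that.fileSequenceNumber)
        && Objects.equals(firstRowId, that.firstRowId);
  }

  @Override
  public int hashCode() {
    // hash the same fields that equals compares, keeping the two consistent
    return Objects.hash(dataSequenceNumber, fileSequenceNumber, firstRowId);
  }
}
```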
sdks/java/io/iceberg/src/main/java/org/apache/beam/sdk/io/iceberg/ReadUtils.java (resolved)
sdks/java/io/iceberg/src/main/java/org/apache/beam/sdk/io/iceberg/ReadUtils.java (resolved)
```java
// TODO: use TableCache here
Table table = scanConfig.getTable();
```

The TODO comment suggests using TableCache. This is a good practice to avoid repeatedly loading table metadata, which can be expensive. Please use TableCache.get(element.getKey()) here. You will also need to add a @Setup method that initializes the cache with TableCache.setup(scanConfig);.

Suggested change:

```java
Table table = TableCache.get(element.getKey());
```
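For illustration, a minimal sketch of the suggested pattern, assuming the DoFn lives in the same package as TableCache, IcebergScanConfig, and SnapshotInfo (the DoFn name and output type here are hypothetical; only the TableCache.setup/TableCache.get calls come from this review):

```java
import java.util.List;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import org.apache.iceberg.Table;

// Hypothetical DoFn shape showing the cache lifecycle.
class ChangelogScannerSketch extends DoFn<KV<String, List<SnapshotInfo>>, String> {
  private final IcebergScanConfig scanConfig;

  ChangelogScannerSketch(IcebergScanConfig scanConfig) {
    this.scanConfig = scanConfig;
  }

  @Setup
  public void setup() {
    // initialize the cache once per DoFn instance, instead of reloading
    // table metadata for every element
    TableCache.setup(scanConfig);
  }

  @ProcessElement
  public void processElement(@Element KV<String, List<SnapshotInfo>> element) {
    // look up (and lazily load) the table for this key
    Table table = TableCache.get(element.getKey());
    // ... scan the snapshot interval using `table` ...
  }
}
```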
```java
// TODO: use TableCache
Table table = scanConfig.getTable();
```

The TODO comment suggests using TableCache. This is a good practice to avoid repeatedly loading table metadata. Please use TableCache.get(element.getKey().getTableIdentifierString()) here. You will also need to add TableCache.setup(scanConfig); to the @Setup method, as in the sketch above.

Suggested change:

```java
Table table = TableCache.get(element.getKey().getTableIdentifierString());
```
sdks/java/io/iceberg/src/main/java/org/apache/beam/sdk/io/iceberg/cdc/ReadFromChangelogs.java (outdated, resolved)
sdks/java/io/iceberg/src/main/java/org/apache/beam/sdk/io/iceberg/cdc/ReconcileChanges.java (resolved)
/gemini review
Code Review
This pull request introduces a new Iceberg CDC (Change Data Capture) streaming source, which is a significant and valuable addition. The overall architecture is well-designed, with a clear separation of concerns for scanning changelogs, reading data, and reconciling changes to identify updates. The implementation of serializable classes for Iceberg objects is also a good approach.
However, there are a few critical issues that need to be addressed before this can be merged. The most important is that the output of the CDC source does not include any metadata about the type of change (INSERT, DELETE, UPDATE), which is fundamental for a CDC stream. Additionally, there are several leftover debugging statements and some dead code that should be cleaned up. I've left specific comments with suggestions on how to address these points.
```java
public PCollection<Row> expand(PBegin input) {
  Table table =
      scanConfig
          .getCatalogConfig()
          .catalog()
          .loadTable(TableIdentifier.parse(scanConfig.getTableIdentifier()));

  PCollection<KV<String, List<SnapshotInfo>>> snapshots =
      MoreObjects.firstNonNull(scanConfig.getStreaming(), false)
          ? unboundedSnapshots(input)
          : boundedSnapshots(input, table);

  // scan each interval of snapshots and create groups of changelog tasks
  PCollectionTuple changelogTasks =
      snapshots
          .apply(Redistribute.byKey())
          .apply(
              "Create Changelog Tasks",
              ParDo.of(new ChangelogScanner(scanConfig))
                  .withOutputTags(
                      UNIDIRECTIONAL_CHANGES, TupleTagList.of(BIDIRECTIONAL_CHANGES)));

  // for changelog ordinal groups that have UNIDIRECTIONAL changes (i.e. all deletes, or all
  // inserts), take the fast approach of just reading and emitting CDC records.
  PCollection<Row> uniDirectionalCdcRows =
      processUniDirectionalChanges(
          changelogTasks.get(UNIDIRECTIONAL_CHANGES).setCoder(ChangelogScanner.OUTPUT_CODER));

  // changelog ordinal groups that have BIDIRECTIONAL changes (i.e. both deletes and inserts)
  // will need extra processing (including a shuffle) to identify any updates
  PCollection<Row> biDirectionalCdcRows =
      processBiDirectionalChanges(
          changelogTasks.get(BIDIRECTIONAL_CHANGES).setCoder(ChangelogScanner.OUTPUT_CODER));

  // Merge UNIDIRECTIONAL and BIDIRECTIONAL outputs
  return PCollectionList.of(uniDirectionalCdcRows)
      .and(biDirectionalCdcRows)
      .apply(Flatten.pCollections());
}
```
This is a great start on a CDC source for Iceberg. However, a critical piece of functionality is missing: the output PCollection<Row> does not contain any information about the type of change (e.g., INSERT, DELETE, UPDATE). A CDC stream is not very useful without this metadata.
Several TODO comments in the code confirm this is not yet implemented:
- ReadFromChangelogs.java: `// TODO: output with DELETE kind`
- ReconcileChanges.java: `// TODO: output as UPDATE_BEFORE kind`, `// TODO: output as UPDATE_AFTER kind`, `// TODO: output as DELETE kind`
To address this, the output Row schema should be augmented to include a field for the change type. A common practice is to add a change_type string field. Alternatively, you could align with Beam's ChangeStream format. Without this, consumers of this source cannot distinguish between different types of changes.
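As a sketch of the first option (the helper and field names below are placeholders, not part of this PR), the data row could be nested under a wrapper schema that carries the change type:

```java
import org.apache.beam.sdk.schemas.Schema;
import org.apache.beam.sdk.values.Row;

// Hypothetical helper: wraps each data Row with a change_type marker so
// consumers can tell inserts, deletes, and update halves apart.
class CdcRowSketch {
  static Schema cdcSchema(Schema dataSchema) {
    return Schema.builder()
        .addStringField("change_type") // e.g. "INSERT", "DELETE", "UPDATE_BEFORE", "UPDATE_AFTER"
        .addRowField("record", dataSchema)
        .build();
  }

  static Row tagged(Schema cdcSchema, String changeType, Row record) {
    return Row.withSchema(cdcSchema).addValues(changeType, record).build();
  }
}
```

Each TODO site listed above would then emit a tagged row with the appropriate constant instead of the bare data row.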
```java
Row id = structToBeamRow(snapshotId, recId, recordIdSchema, rowAndSnapshotIDBeamSchema);
outputReceiver.get(keyedTag).outputWithTimestamp(KV.of(id, row), timestamp);
} else { // fast path
  System.out.printf("[UNIDIRECTIONAL] -- Output(%s, %s)\n%s%n", snapshotId, timestamp, row);
```
```java
public void processElement(
    @Element KV<Row, CoGbkResult> element,
    @Timestamp Instant timestamp,
    OutputReceiver<Row> out) {
  CoGbkResult result = element.getValue();
  System.out.println("xxx [MIXED] Process timestamp: " + timestamp);

  // iterables are lazy-loaded from the shuffle service
  Iterable<TimestampedValue<Row>> deletes = result.getAll(DELETES);
  Iterable<TimestampedValue<Row>> inserts = result.getAll(INSERTS);

  boolean hasDeletes = deletes.iterator().hasNext();
  boolean hasInserts = inserts.iterator().hasNext();

  if (hasInserts && hasDeletes) {
    // UPDATE: row ID exists in both streams
    // - emit all deletes as 'UPDATE_BEFORE', and all inserts as 'UPDATE_AFTER'
    // - emit extra inserts as 'UPDATE_AFTER'
    // - ignore extra deletes (TODO: double check if this is a good decision)
    Iterator<TimestampedValue<Row>> deletesIterator = deletes.iterator();
    Iterator<TimestampedValue<Row>> insertsIterator = inserts.iterator();
    while (deletesIterator.hasNext() && insertsIterator.hasNext()) {
      // TODO: output as UPDATE_BEFORE kind
      TimestampedValue<Row> updateBefore = deletesIterator.next();
      out.outputWithTimestamp(updateBefore.getValue(), updateBefore.getTimestamp());
      System.out.printf("[MIXED] -- UpdateBefore\n%s\n", updateBefore);

      // TODO: output as UPDATE_AFTER kind
      TimestampedValue<Row> updateAfter = insertsIterator.next();
      out.outputWithTimestamp(updateAfter.getValue(), updateAfter.getTimestamp());
      System.out.printf("[MIXED] -- UpdateAfter\n%s\n", updateAfter);
    }
    while (insertsIterator.hasNext()) {
      // TODO: output as UPDATE_AFTER kind
      TimestampedValue<Row> insert = insertsIterator.next();
      out.outputWithTimestamp(insert.getValue(), insert.getTimestamp());
      System.out.printf("[MIXED] -- Added(extra)\n%s\n", insert);
    }
  } else if (hasInserts) {
    // INSERT only
    for (TimestampedValue<Row> rec : inserts) {
      System.out.printf("[MIXED] -- Added\n%s\n", rec);
      out.outputWithTimestamp(rec.getValue(), rec.getTimestamp());
    }
  } else if (hasDeletes) {
    // DELETE only
    for (TimestampedValue<Row> rec : deletes) {
      // TODO: output as DELETE kind
      System.out.printf("[MIXED] -- Deleted\n%s\n", rec);
      out.outputWithTimestamp(rec.getValue(), rec.getTimestamp());
    }
  }
}
```
```java
// TODO: remove this (unused)
@SuppressWarnings("method.invocation")
public BeamDeleteFilter(
    FileIO io,
    SerializableChangelogTask scanTask,
    Schema tableSchema,
    Schema projectedSchema,
    List<DeleteFile> deleteFiles) {
  super(scanTask.getDataFile().getPath(), deleteFiles, tableSchema, projectedSchema);
  this.io = io;
  this.asStructLike = new InternalRecordWrapper(requiredSchema().asStruct());
}

// TODO: remove this (unused)
@SuppressWarnings("method.invocation")
public BeamDeleteFilter(FileIO io, ContentScanTask<?> scanTask, List<DeleteFile> deleteFiles) {
  super(
      scanTask.file().location(),
      deleteFiles,
      scanTask.spec().schema(),
      scanTask.spec().schema());
  this.io = io;
  this.asStructLike = new InternalRecordWrapper(requiredSchema().asStruct());
}
```
```java
// TODO: remove this (unused)
@SuppressWarnings("method.invocation")
public BeamDeleteReader(
    FileIO io,
    SerializableChangelogTask scanTask,
    Schema tableSchema,
    Schema projectedSchema,
    List<DeleteFile> deleteFiles) {
  super(scanTask.getDataFile().getPath(), deleteFiles, tableSchema, projectedSchema);
  this.io = io;
  this.asStructLike = new InternalRecordWrapper(requiredSchema().asStruct());
}

// TODO: remove this (unused)
@SuppressWarnings("method.invocation")
public BeamDeleteReader(FileIO io, ContentScanTask<?> scanTask, List<DeleteFile> deleteFiles) {
  super(
      scanTask.file().location(),
      deleteFiles,
      scanTask.spec().schema(),
      scanTask.spec().schema());
  this.io = io;
  this.asStructLike = new InternalRecordWrapper(requiredSchema().asStruct());
}
```
```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import org.apache.beam.sdk.schemas.AutoValueSchema;
import org.apache.beam.sdk.schemas.annotations.DefaultSchema;
import org.apache.beam.sdk.schemas.annotations.SchemaFieldNumber;
import org.apache.beam.vendor.guava.v32_1_2_jre.com.google.common.base.Equivalence;
import org.apache.beam.vendor.guava.v32_1_2_jre.com.google.common.collect.Maps;
```
```java
// TODO: use TableCache here
Table table = scanConfig.getTable();
```
No description provided.