Hi guys,
I just got my hands on the latest release of ksqlDB and started testing it.
Right now I'm facing the problem that, in contrast to the small example on your landing page, a table shows duplicate entries when the same message is inserted repeatedly. According to issue #530 this should be fixed.
I set up a test environment with this tutorial and ran the following commands in the CLI:
CREATE STREAM input_stream_json (id STRING, diff STRING, name STRING, date STRING)
  WITH (VALUE_FORMAT='JSON', KAFKA_TOPIC='input', KEY='id', PARTITIONS=1, REPLICAS=1);

CREATE TABLE dedup_table (id STRING, diff STRING)
  WITH (
    kafka_topic = 'input',
    key = 'id',
    value_format = 'json'
  );

CREATE STREAM output_stream AS SELECT s.id, s.diff, s.name, s.date FROM input_stream_json s
  WHERE s.diff != 'dedup_table.diff';
The desired output is that output_stream keeps only messages that are unique with respect to the diff value. However, it contains all messages (and so does the table that is supposed to filter out the duplicate entries).
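As a side note, the WHERE clause above compares s.diff against the literal string 'dedup_table.diff' rather than against the table's column; referencing the table's current value would require a stream-table join. A minimal sketch of such a variant (hypothetical, assuming a join on id is what's intended; not verified against this ksqlDB version):

```sql
-- Hypothetical stream-table join: enrich each incoming message with the
-- table's last-seen diff for the same id, then keep only rows whose diff
-- changed (or whose id has never been seen, i.e. t.diff IS NULL).
CREATE STREAM output_stream AS
  SELECT s.id, s.diff, s.name, s.date
  FROM input_stream_json s
  LEFT JOIN dedup_table t ON s.id = t.id
  WHERE t.diff IS NULL OR s.diff != t.diff;
```

This is only a sketch of the intent; whether the table is updated early enough relative to the stream for this to deduplicate reliably is exactly the kind of behavior the question is asking about.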
Any help or advice would be highly appreciated. Thanks in advance.