I'm trying to compact Iceberg data files with the code snippet below.
SparkActions
.get()
.rewriteDataFiles(table)
.option("target-file-size-bytes", (128 * 1024 * 1024).toString)
.execute()
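For context, table is an org.apache.iceberg.Table handle. I obtain it roughly like this (a minimal sketch; spark is the active SparkSession and the table name db.events is a placeholder for my real table):

import org.apache.iceberg.Table
import org.apache.iceberg.spark.Spark3Util

// Resolve the Iceberg Table handle through the Spark session's catalog;
// "db.events" stands in for the actual database.table name.
val table: Table = Spark3Util.loadIcebergTable(spark, "db.events")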
However, I'm unable to apply a timestamp filter to it without getting an error. I have tried the filters below, but all of them fail with the errors shown (a full snippet combining the action and the filter follows the list):
- filter(Expressions.greaterThan("track_ts", "2024-04-29 00:00:00")) : Exception in thread "main" java.time.format.DateTimeParseException: Text '2024-04-29 00:00:00' could not be parsed at index 10
- filter(Expressions.greaterThan("track_ts", "2024-04-29")) : Exception in thread "main" java.time.format.DateTimeParseException: Text '2024-04-29' could not be parsed at index 10
- filter(Expressions.greaterThan("track_ts", Timestamp.valueOf("2024-04-29 00:00:00"))) : Exception in thread "main" java.time.format.DateTimeParseException: Text '2024-04-29 00:00:00' could not be parsed at index 10
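Putting it together, this is roughly the full action with the first filter applied (imports included; table is loaded as shown above, and this combination still fails with the first DateTimeParseException):

import org.apache.iceberg.expressions.Expressions
import org.apache.iceberg.spark.actions.SparkActions

// Compact data files, restricting the rewrite to rows matching the filter.
// With the string literal below this throws DateTimeParseException at runtime.
SparkActions
.get()
.rewriteDataFiles(table)
.filter(Expressions.greaterThan("track_ts", "2024-04-29 00:00:00"))
.option("target-file-size-bytes", (128 * 1024 * 1024).toString)
.execute()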
For reference, this is a sample value of the track_ts column, which is of type Timestamp: 2024-04-26 20:39:38
What am I doing wrong, or what approach will work?