I really don't get a lot of this criticism. For example, who is using Iceberg with hundreds of concurrent committers, especially at the scale mentioned in the article (10k rows per second)? Using Iceberg or any table format over object storage would be insane in that case. But for your typical Spark application, you have one main writer (the Spark driver) appending or merging a large number of records in > 1 minute microbatches, plus maybe a handful of maintenance jobs for compaction and retention; Iceberg's concurrency system works fine there.
If you have a use case like the ones the author describes, maybe use an in-memory cloud database with tiered storage, or a plain RDBMS. Iceberg (and similar formats) work great for the use cases for which they're designed.
> But for your typical spark application, you have one main writer (the spark driver) appending or merging a large number of records...
A single writer not making it fall over doesn't prove the multi-writer architecture scales.
I have caused issues with 500 concurrent writers on embarrassingly parallel workloads, and I have watched people choose sharding schemes to accommodate Iceberg's metadata throughput rather than the natural/logical sharding of the underlying data.
Last I half-knew (so check me), Spark may have done some funky stuff to work around the Iceberg shortcomings. That is useless if you're not using Spark. If scalability of the architecture requires a funky client in one language and a cooperative backend, we might as well be sticking HDF5 on Lustre. HDF5 on Lustre never fell over for me in the 1000+ embarrassingly parallel concurrent writer use case (massive HPC turbulence restart files with 32K concurrent writers, per https://ieeexplore.ieee.org/abstract/document/6799149).
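The contention both comments are circling is Iceberg's optimistic concurrency: every commit must atomically swing a single pointer to the current table metadata, so concurrent committers conflict and retry. A toy sketch of that scheme in pure Python (this illustrates the general optimistic-commit pattern, not Iceberg's actual code; all names here are made up):

```python
import threading

class ToyCatalog:
    """Stand-in for a catalog holding one table-metadata pointer.
    Commits succeed only via compare-and-swap on the version."""
    def __init__(self):
        self._lock = threading.Lock()
        self.version = 0    # "current metadata" pointer
        self.conflicts = 0  # failed commit attempts, across all writers

    def commit(self, based_on: int) -> bool:
        with self._lock:
            if self.version == based_on:  # nobody committed since we read
                self.version += 1
                return True
            self.conflicts += 1           # someone else won; caller retries
            return False

def writer(cat: ToyCatalog, commits: int):
    for _ in range(commits):
        while True:
            snapshot = cat.version  # read the current metadata version
            # a real writer would rewrite manifests against `snapshot` here
            if cat.commit(snapshot):
                break

def run(n_writers: int, commits_each: int = 25) -> int:
    cat = ToyCatalog()
    threads = [threading.Thread(target=writer, args=(cat, commits_each))
               for _ in range(n_writers)]
    for t in threads: t.start()
    for t in threads: t.join()
    assert cat.version == n_writers * commits_each  # every commit lands
    return cat.conflicts

print("1 writer, conflicts:", run(1))   # always 0: nothing to race with
print("16 writers, conflicts:", run(16))
```

With one writer the loop never retries, which is the regime the grandparent says works fine; piling on writers only adds retry work, since all of them serialize on the same pointer.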
Great analysis of what Iceberg does, but I don't agree with so much of the criticism.
It is very basic compared to a database, and even when you go into the details of databases, there are many things that don't make sense if your yardstick is doing the absolute best thing.
You could criticize Parquet in a similar way if you went through its spec, but because it is open and so popular, people are going to use it no matter what.
If you need more performance, efficiency, simplicity, etc., just don't use Parquet internally, and instead convert between your own format and Parquet.
Or you can build on top of Parquet with external indices, keeping metadata in memory and having a separate WAL for consistency.
Similarly, it should be possible to build on top of the Iceberg spec to create something like a DB server that is efficient.
It is unlikely that something usable for so many use cases would also be the most technically pure and sensible option.
I think this criticism is missing the order-of-magnitude aspect -- I agree, people do not choose the most technically pure option. But one that launches on day 1, can be used from SQL or Python in a few lines of code, works across any cloud provider, and basically "just works" is an order of magnitude simpler than using Iceberg, at least in my experience in Python. It's always been odd how every non-JVM client for Iceberg has supported reads, but never writes...
People don't choose tech for technical purity, but they often do choose it for simplicity & ease of use.
Yeah that's been our biggest issue in this ecosystem (the non-JVM clients). They can't do writes and are often far behind on feature parity with the blessed JVM clients.
I am currently considering whether it is worth moving our stack from Hive-style tables to Iceberg.
Iceberg is obviously technically more competent, but the Hive tables are just so nice because the data is almost orthogonal to the tables.
You can throw away a table and recreate it in minutes, and vice versa: you can edit the data and the table will adapt.
I am so used to this that I am worried about losing this flexibility with Iceberg.
Maybe a mix is the way to go.
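That orthogonality has a concrete mechanical cause: a Hive-style external table resolves its contents by listing a directory at read time, while an Iceberg-style table reads only the files its committed metadata names. A toy illustration in Python (made-up helper names; real Iceberg tracks files via manifest files, not a plain list):

```python
import os
import tempfile

def hive_style_scan(table_dir: str) -> list[str]:
    # Hive-ish external table: the table IS the directory listing,
    # so data edited behind the table's back shows up immediately.
    return sorted(f for f in os.listdir(table_dir) if f.endswith(".parquet"))

def iceberg_style_scan(manifest: list[str]) -> list[str]:
    # Iceberg-ish table: readers trust the committed file list,
    # so out-of-band files stay invisible until a commit registers them.
    return sorted(manifest)

d = tempfile.mkdtemp()
open(os.path.join(d, "part-0.parquet"), "w").close()
manifest = ["part-0.parquet"]  # registered through a table commit

# Someone drops a file into the directory without going through the table:
open(os.path.join(d, "part-1.parquet"), "w").close()

print(hive_style_scan(d))            # ['part-0.parquet', 'part-1.parquet']
print(iceberg_style_scan(manifest))  # ['part-0.parquet']
```

It cuts both ways: the listing behavior is the flexibility the parent likes, and the manifest behavior is what buys Iceberg snapshot isolation and time travel.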
TFA is very well written, by the way. From my perspective, I see Iceberg as Hive tables 2.0: it solves a lot of the Hive-related problems, but not all generic database problems. So all the new features are positive for me.
But my only gripe is - is the added complexity worth it?
If you only use a tool for the use cases it's designed for, how are you gonna come up with a blog post to bitch about it? :)