Redshift does not fit into the AWS ecosystem. If you use Kinesis, you get up to 500 partitions with a bunch of tiny files, so now I have to build a pipeline after Kinesis that puts all of it into one S3 file, only to then import it into Redshift, which might again put it on S3-backed storage for its own file shenanigans.
ClickHouse, even chdb's in-memory magic, has a better S3 consumer than Redshift. It sucks up those Kinesis files like nothing.
It's a mess.
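For what it's worth, the whole "pipeline" on the ClickHouse side is a single query; a minimal sketch, assuming Kinesis dropped gzipped JSON under a prefix like s3://my-bucket/firehose/ (bucket, prefix, format, and credential handling are all placeholders here):

    -- read everything Kinesis wrote, straight from S3, no staging step
    SELECT count()
    FROM s3('https://my-bucket.s3.amazonaws.com/firehose/**/*.gz', 'JSONEachRow');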
Not to mention none of its column optimizations work, and the data footprint of gapless timestamp columns is not basically zero as it is in any serious OLAP, it is massive. So the way to improve performance is to just align everything on the same timeline so its computation engine does not need to figure out how to join stuff that is actually time-aligned.
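Concretely, the alignment trick looks something like this (a sketch in plain SQL that also runs on Redshift; the quotes/trades tables and the 1-second grid are invented): bucket both sides onto the same timeline first, so the join degenerates into a cheap equi-join:

    -- pre-bucket both sides to the same 1-second grid
    SELECT a.ts, a.price, b.volume
    FROM (SELECT date_trunc('second', ts) AS ts, max(price) AS price
          FROM quotes GROUP BY 1) a
    JOIN (SELECT date_trunc('second', ts) AS ts, sum(volume) AS volume
          FROM trades GROUP BY 1) b
      ON a.ts = b.ts;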
I really can’t figure out how anyone can do seriously big computations with Redshift. Maybe people like waiting hours for their SQL to execute and think software is just that slow.
You realize “the pipeline” you have to build is literally just an Athena SQL statement, “Create table select * from…”. Yes, you can run this directly against S3 and it will create one big file.
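In other words (a sketch; the table names and output location are made up, and raw_events stands for a Glue/Athena table over the Kinesis output prefix):

    -- CTAS: compact the tiny Kinesis files into Parquet at a new location
    CREATE TABLE events_compacted
    WITH (format = 'PARQUET',
          external_location = 's3://my-bucket/compacted/')
    AS SELECT * FROM raw_events;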
I have a sneaking suspicion that you are trying to use Redshift as a traditional OLTP database. Are you also normalizing your tables like an OLTP database instead of like an OLAP one?
And if you are using any OLAP database for OLTP, you're doing it wrong. It's also a simple “process” to move data back and forth between Aurora MySQL or Postgres, either by federating your OLTP database with Athena (handwavy because I haven't done it), or the way I have done it: use one SELECT statement to export to S3 and another to load into your OLTP database.
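Roughly like this (a sketch of the export-and-reload route; the role ARN, bucket, and table names are placeholders):

    -- Redshift side: export the result of a SELECT to S3 as CSV
    UNLOAD ('SELECT id, day, total FROM daily_report')
    TO 's3://my-bucket/exports/daily_report_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-unload'
    FORMAT AS CSV;

    -- Aurora MySQL side: pull those files straight back in
    LOAD DATA FROM S3 PREFIX 's3://my-bucket/exports/daily_report_'
    INTO TABLE daily_report
    FIELDS TERMINATED BY ',';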
And before you say you shouldn't have to do this: you have always needed some process to take data from its normalized form to a denormalized form for reporting and analytics.
Source: doing boring enterprise stuff, including databases, since 1996, plus eight years working with AWS services outside AWS (startups and consulting companies) and inside AWS (Professional Services, no longer there).
Why are you doing this manually? There is a built-in way of going from Kinesis Data Streams to Redshift.
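If I remember the streaming-ingestion setup correctly, it is roughly this (the role ARN and stream name are placeholders):

    CREATE EXTERNAL SCHEMA kds FROM KINESIS
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-streaming';

    -- the materialized view is what actually consumes the stream;
    -- refreshing it pulls in newly arrived records
    CREATE MATERIALIZED VIEW events_stream AS
        SELECT approximate_arrival_timestamp,
               json_parse(from_varbyte(kinesis_data, 'utf-8')) AS payload
        FROM kds."my-stream";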
These things cost money; Redshift handling live ingestion from Kinesis is tricky.
There is no need for Athena; Redshift ingestion is a simple query that reads from S3. I don't want to copy 10TB of data just to have it in one file. And yes, the default storage is a bit better than S3, but for an OLAP database there seems to be no proper column compression, and the data footprint is too big, resulting in slow reads if one is not careful.
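That ingestion query being a plain COPY (a sketch; the table, bucket, and role are placeholders):

    COPY events
    FROM 's3://my-bucket/firehose/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
    FORMAT AS JSON 'auto';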
I mentioned ClickHouse; the data obviously does not have an OLTP schema.
I don't have normalized data. As I mentioned, the ClickHouse consumer goes through 10TB of blobs and ends up with 15GB of postprocessed data in like 5-10 minutes; the slowest part is downloading from S3.
I am not willing to pay 10k+ a month for something that absolutely sucks compared to a proper OLAP db.
Redshift is just made for some very specific, bloated, throw-as-many-software-pipelines-as-you-can, pay-as-much-money-as-you-can workflows that I just don't find valuable. Its compute engine and data representation are just laughably slow. Yes, it can be as fast as you want by throwing parallel units at it, but that's a complete waste of money.
Thanks for having this discussion with me. I believe I don't want a time series database. I want to be able to invent new queries and throw them at a schema, or create materialized views to have better queries etc. I just don't find Snowflake or Redshift anywhere close to what they're selling.
I think these systems are optimized for something else, probably organizational scale: predictable low-value workloads, large teams that just throw their shit at it and it works on a daily basis, and of course, it costs a lot.
My experience, after renting a $1k EC2 instance and slurping all of S3 onto it in a few hours while Redshift was unable to do the same, made me not consider these systems reliable for anything other than ritualistic, performative, low-value work.
I've told you my background. I'm telling you that you are using the wrong tool for the job. It's not an issue with the database. Even if you did need an OLAP database like Redshift, you are still treating it like an OLTP database as far as your ETL job goes. You really need to do some additional research.
I do not need JOINs. I do not need single-row lookups or updates. I need a compute engine and efficient storage.
I need fast consumers, and I need good materialized views.
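By good materialized views I mean the ClickHouse kind (a sketch; table and column names are invented), where the view maintains a rollup incrementally as data is inserted:

    CREATE MATERIALIZED VIEW events_daily
    ENGINE = SummingMergeTree
    ORDER BY (sensor_id, day)
    AS SELECT
        sensor_id,
        toDate(ts) AS day,
        sum(value) AS total
    FROM raw_events
    GROUP BY sensor_id, day;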
I am not treating anything like an OLTP database; my opinion of OLTP is even harsher. They can't even handle the data from S3 without insane amounts of work.
I do not even think in terms of OLTP, OLAP, or whatever. I am thinking in terms of what queries over what data I want to run, and how to do that with the feature set available.
If necessary, I will align all PostgreSQL tables on a timeline of discrete timestamps instead of storing things as intervals, to allow faster sequential processing.
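In PostgreSQL terms, something like this (a sketch; the sensor table and the 1-second resolution are made up): materialize a gapless grid and hang the readings off it, rather than storing (start, end) intervals:

    -- build a discrete 1-second timeline and left-join the irregular readings onto it
    SELECT g.ts, s.value
    FROM generate_series(timestamptz '2024-01-01',
                         timestamptz '2024-01-02',
                         interval '1 second') AS g(ts)
    LEFT JOIN sensor s ON s.ts = g.ts;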
I am saying that these systems as a whole are incapable of many of the things I've tried to use them for. I have managed to use other systems and do many more valuable things, because those systems are actually capable.
It is laughable that the task of loading data from S3 into whatever schema you want is better done by tech outside of the AWS universe.
I can paste this whole conversation into an LLM unprompted and I don’t really see anything I am missing.
The only part I am surely missing is the nontechnical considerations, which I do not care about at all outside of a business context.
I know things are nuanced and there are companies with PBs of data doing something with Redshift, but people do random stuff with Oracle as well.
And you honestly still haven't addressed the main point: you are literally using the wrong tool for the job and didn't do your research to find the right tool. Even a cursory overview of Redshift (or Snowflake) tells you that it should be used for bulk inserts, aggregation queries, etc.
Did you research how you should structure your tables for optimum performance in OLAP databases? Did you research the pros and cons of a column-based storage engine like Redshift versus the standard row-based storage engine in a traditional RDBMS? Not to mention that, depending on your use case, you might need Elasticsearch.
This is completely a you problem for not doing your research and using the worst possible tool for your use case. Seriously, reach out to an SA at AWS and they can give you some free advice; you are literally doing everything wrong.
ClickHouse is column-based storage, and I can also apply delta compression, where gapless timestamp columns have basically zero storage cost. I can apply Gorilla as well and get nice compression out of irregular columns. I am aware of Redshift's AZ64-encoded columns, and they are a letdown.
I can change the sort order, same as in Redshift with its sort keys, to improve compression and compute. Redshift does not really exploit this sort-key configuration as much as it could.
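On the ClickHouse side, that looks something like this (a sketch; the table and columns are invented): per-column codecs, with the sort order doing double duty for compression ratio and read locality:

    CREATE TABLE ticks (
        ts    DateTime64(3) CODEC(DoubleDelta, ZSTD),  -- gapless timestamps compress to almost nothing
        id    UInt32        CODEC(ZSTD),
        value Float64       CODEC(Gorilla)             -- good fit for irregular float series
    ) ENGINE = MergeTree
    ORDER BY (id, ts);  -- the sort key drives both compression and scan speed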
My own assessment is that I'm extremely skilled at making any kind of DB system yield to my will and get it to its limits.
I have never used Redshift, ClickHouse, or Snowflake with one-by-one inserts. I have mentioned S3 consumers (a library or a service, optimized to work well with the autoscaling done by S3, respecting SlowDown -- something Redshift itself is incapable of respecting -- and achieving enormous download rates -- some of the consumers I've used completely saturate the 200Gbps limits of some EC2 machines at AWS). These consumers cannot be used in a one-by-one setting; the whole point is to have an insanely fast pipelining system with batched processing, interleaving network downloads with CPU compute, so that in the end any kind of data repackaging and compression is negligible compared to the download. You can then predict how long the system will take to ingest just by knowing your peak download speed, because the actual compute is fully optimized and pipelined.
Now, it might just be that Redshift has bugs and I should report them, but I have not had the experience of AWS reacting quickly to any of the reports I've made.
I disagree; it's not a me problem. I am a bit surprised, after all I've written, that you're still implying I want OLTP and am using the wrong tool for the job. There are just some tools I would never pick because they don't work as advertised, and Redshift is one of them. There are much better in-memory compute engines that work directly with S3, and you can create any kind of trash low-value pipeline with them; if you reach the memory limits of your compute system, there are much better compute engine + storage combos than Redshift. My belief is that Redshift is purely a nontechnical choice.
Now, to steelman you, if you're saying:
* data warehouse as managed service,
* cost efficiency via guardrails,
* scale by policy, not by expertise,
* optimize for nontechnical teams,
* hide the machinery,
* use AWS-native glue (Glue, Athena, Kinesis, DMS), bloated, slow, or expensive as it is,
* predictable monthly bill,
* preventing S3 abuse,
* preventing runaway parallelism,
* avoiding noisy-neighbor incidents (either by protecting me or protecting AWS infra),
* intentionally constrained to satisfy all of the above,
then yes, I agree, I am definitely using the wrong tool. But as I said, if the value proposition is nontechnical, I do not really care about it.
> My own assessment is that I'm extremely skilled at making any kind of DB system yield to my will and get it to its limits.
Yes, and according to my assessment I'm also very good in bed and extremely handsome.
But there is an existence proof here: you are running into issues, yet millions of people use AWS services and know how to use the right tool for the job.
I'm not defending Redshift for your use case; I'm saying you didn't do your research and you did absolutely everything wrong. From my cursory research of ClickHouse, I probably would have chosen that too for your use case.
I did not do anything wrong. I had no choice with Redshift; I had instructions from above. I made it work really well for what it can do, and I was surprised how much it sucks even when it has its own data inside it and just has to do compute. As a completely closed system, it's not impressive at all. It has absolutely shameful GROUP BY SQL, completely inefficient sort-key and compression semantics, and it absolutely can't attach itself to Kinesis directly without costing you insane amounts of money, because, as you already know, Redshift is not a live service (you won't use it by connecting directly to it and expecting good performance); it's primarily a parallel compute engine.
Your assessment of me is flawed. You haven't really shown any kind of low-level expertise in how these systems actually work; you've just name-dropped OLTP and OLAP as if that means anything at all. What is Timescale (now TigerData), OLTPOLAPBLAPBLAP? If someone tells you to use Timescale, you have to figure out how to use it and make the system yield to your will. If the system sucks, it yields harder; if the system is well designed, it's absolutely beautiful. For example, I would never use Timescale either, yet you can go on their page and see unicorns using it. I have no idea why, but let them have their fun. There are successful companies using Elasticsearch for IoT telemetry, so who am I to argue I wouldn't do that as well.
There's nothing wrong with using PostgreSQL for time-series data; you just need to know how to use it. At some point, scaling-wise, it will fail, but you're deciding on tradeoffs.
So yes, my assessments have a good track record, not only of myself but of others as well. I am extremely open to any kind of precise criticism, I have been wrong a bazillion times, and I take part in these kinds of passionate discussions on the internet because I am aware I can absolutely be convinced of the other side. Otherwise, I would have quit a long time ago.