For my work device I've disabled Liquid Glass completely. The accessibility options to reduce transparency and increase contrast improve the readability of the system a lot.
Booting a 15-year-old Mac a while ago, I was surprised how clean the interface actually is. The Dock and Desktop look a lot better in the old versions; the age mostly shows in apps like Finder, which do look a bit dated.
I really hope someone at Apple is going to make the call to drastically reduce the Liquid Glass design and start complying with their own UX guidelines again.
I agree. It’s a neat idea and I’d be interested in seeing the details. A downloadable tarball is a lot better than nothing, but it still makes more work to evaluate a random project than I’m inclined to perform. It makes me assume the commit history is ugly in some way (being charitable and assuming the code itself isn’t). Hearing that it’s developed within a monorepo of unrelated projects and experiments isn’t inspiring either. Anyway, perhaps someone else will download the source and report back.
Edit: To be clear, I’m not saying any of those things are true, just that those are the first thoughts I have when someone says their source is open but makes it difficult to view. In this age in which it’s so trivial and commonplace to make source easily viewable.
In my experience they've always been this way. Real engineers lose to idiots who can knock out a nice WordPress website. Now it's real engineers losing to idiots who can vibe out a nice UI.
You’re describing task reallocation, but the bigger second-order effect is where the firm can now source the remaining human judgment.
AI reduces the penalty for weak domain context. Once the work is packaged like that, the “thinking part” becomes far easier to offshore because:
- Training time drops as you’re not teaching the whole craft, you’re teaching exception-handling around an AI-driven pipeline.
- Quality becomes more auditable because outputs can be checked with automated review layers.
- Communication overhead shrinks with fewer back-and-forth cycles when AI pre-fills and structures the work.
- Labor arbitrage expands and the limiting factor stops being “can we find someone locally who knows our messy process” and becomes “who is cheapest who can supervise and resolve exceptions.”
So yeah, the jobs mostly remain and some people become more valuable. But the clearing price for that labor moves toward the global minimum faster than it used to.
The impact won’t show up as “no jobs”; it’s already showing up as stagnant or declining Western salaries, thinner career ladders, and more of the value captured by the firms that own the workflows rather than the people doing the work.
Isn't that what a well-run company does when creating a process? Bureaucracy and process reduce the penalty of weak domain context and are in fact designed to obviate that need. They "diffuse" the domain knowledge into a set of specifications, documents, and processes. AI may be able to accelerate that, or subsume that bureaucracy. But since when has the limiting factor been "finding someone locally who knows the process"? Once you document a process, the power of computing means you can outsource any of that you want, no? Again, AI may subsume all the back-office or bureaucratic work. Perhaps it will totally restructure the way humans organize labor, run companies, and coordinate. But that system will have to select for a different set of skills than "filling out n forms quickly and accurately." The wage stagnation etc. predates AI and might be due to other structural factors.
Not necessarily. That's the old "I made Twitter in a weekend" joke.
Being able to technically replicate a product doesn't mean your company will be successful. What makes a company successful are sales forces, internal processes, and luck. All three are extremely difficult to replicate: sales forces are based on a human network you have to build, internal processes are either organic or kept secret, and luck can only be provoked by staying alive long enough, which means you need money.
I think something around that scale (say maybe 20 employees, but definitely not hundreds) was possible even before LLMs got popular, but the people involved needed to be talented and focused. I'm not sure AI will really change that, though.
The salary compression point is the one I find hardest to push back on. Accounting BPO to the Philippines was already growing fast pre-AI - firms like TOA Global were scaling rapidly. With AI reducing the training overhead for domain-specific work, that arbitrage gets even easier. The remaining barrier is local regulatory knowledge (UK tax law, Companies House requirements, etc.) but even that erodes when you're mostly supervising exceptions rather than doing the full work yourself.
"it is already showing up as stagnant or declining Western salaries"
Real median salary, and real median wages are both rising for the last couple years. Maybe they would have risen faster if there was no AI, but I don't think you can say there has been a discernible impact yet.
I’d like a source for that. College graduates are no longer at an employment advantage compared to their uneducated peers. The average age of a new hire increased by 2 years over the past 4 years.
Young people in the west have definitely seen declining salaries, if only by virtue of the fact that they’re not being offered at all.
I don't think that's true, if you trust Gemini at least: "In 2025, U.S. software engineer pay is barely keeping pace with inflation, with median compensation growing 2.67% year-over-year compared to 2.7% inflation. While salaries held steady or increased during the 2021-2023 inflationary period, many professionals reported that real purchasing power remained stagnant or dipped, making it difficult to get ahead."
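Taking those quoted figures at face value (2.67% nominal growth against 2.7% inflation), the real, inflation-adjusted change works out slightly negative. A quick sketch of the arithmetic, using the standard Fisher relation rather than the common nominal-minus-inflation shortcut:

```python
def real_growth(nominal: float, inflation: float) -> float:
    """Real growth rate given nominal growth and inflation (Fisher relation)."""
    return (1 + nominal) / (1 + inflation) - 1

# Figures quoted above; illustrative only.
nominal = 0.0267    # 2.67% year-over-year median compensation growth
inflation = 0.027   # 2.7% inflation

print(f"{real_growth(nominal, inflation):+.4%}")
```

This lands at roughly -0.03% per year, i.e. real purchasing power essentially flat to marginally declining, which is consistent with the "barely keeping pace" framing.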
This is why (personal experience) I am seeing a lot of full-stack jobs compared to specialized backend, FE, and ops roles. AI does 90% of the job of a senior engineer (or so the CEOs believe), so companies now want someone who can do the full "100" rather than just supply the missing "10". That remaining 90 is now coming from an amalgamation of other responsibilities.
In my mind we will have a bimodal set of skills in software development: likely something like a product engineer (an engineer who is also a product manager -- this person conceptualizes features and systemically considers the software as a whole in terms of ergonomics, business sense, and the delight in building something used by others) and something like a deep-in-the-weeds engineer (an engineer who innovates on the margins of high performance, tuning, and deep improvements to libraries and other things of that nature). The former will need to skill up in rapid context switching, keeping the full model of the customer journey in mind, while also executing with enough technical rigor to prevent inefficiencies. The latter will need to be able to dive extremely deeply into nuanced subjects like fine-tuning the garbage collector, compiler, network performance, or internal parts of the DOM or OS or similar.
I would expect a lot of product engineering to specialize further into domains like healthtech, fintech, adtech, etc. While the in-the-weeds engineering will be platform, infra, and embedded systems type folks.
Actually, ideally I'd love to dig deep into and specialize in database management system internals. I think data engineering in general is the underdiscussed but fundamental necessity for any sort of application, AI or otherwise, but especially for any concept of a data warehouse.
Liquid Glass is really killing my love for Apple products. I'll probably get a Framework and an Android phone for my next device purchases.
They really need to just admit it was a bad move and make like Sonic.