Hacker News | sealeck's comments

Go has a critical mass that Swift clearly doesn't (i.e. there are many, many companies who have net profits of >$1bn and write most of their server software in Go).

Additionally Google isn't selling Go as a product in the same way as Apple does Swift (and where Google does publish public Go APIs it also tends to use them in the same way as their users do, so the interests are more aligned)...


> Additionally Google isn't selling Go as a product in the same way as Apple does Swift

Hmm, Apple isn't selling Swift as a product either; it's literally what they needed for their own platform, much like how GOOG needed Go for their server work.


Apple is selling Swift as a product - it is their preferred interface for constructing iOS applications!

Kubernetes and Docker: the CNCF is full of commercial products.

> Rust wasn’t designed for any specific platform

I suspect that Mozilla being the primary developer and sponsor for many years actually meant that compatibility with all major platforms was prioritised; Mozilla obviously care about stuff working on Windows and run lots of builds on Windows, and I imagine a number of Firefox developers, even if it isn't their daily driver, at least own a Windows machine for testing Windows-specific stuff!

I call out Windows because I think generally software people go for Mac > Linux > Windows (although Mac > Linux may be slowly changing due to liquid glass).


Is liquid glass really that bad? I left Mac years ago due to other annoyances. It was my daily driver for a decade and change. But I couldn't get used to the iOSification and the dependence on Apple cloud services for most new features. When I started with Mac OS X Jaguar it was just a really good commercial UNIX. It got even better with Tiger and Leopard.

But in the later years I spent every release looking at fancy new features I couldn't use, because I don't use Apple exclusively (and I don't use iOS at all; too closed for me). So almost no features appealed to me, while each release usually broke some part of the workflow I did use.

While I did hate the 'flat' redesign after Mavericks, that on its own was not really a deal-breaker. Just an annoyance.

I'm kinda surprised liquid glass is so bad that people actually leave over it. Or is it more like the last straw?


> Is liquid glass really that bad?

No, but every release of macOS has a noisy minority declaring it, or some feature of it, the end of the Mac. Some people will genuinely hate it, in the way that nothing can be universally loved; some people will abandon Macs over it; most people don't feel strongly about it at all.

Maybe there are even some people out there who love it.

I can barely tell the difference between the Mac I use that's been upgraded and the Mac that hasn't (due to its age), because I'm not spending my time at the computer staring at the decor. The contents of the application windows are the same.


> Is liquid glass really that bad?

I don’t like it, but I think the claims of mass exodus are unlikely.

It feels a lot like the situation when Reddit started charging for their API: Everywhere you looked you could find claims that it was the end of Reddit, but in the end it was just a vocal minority. Reddit’s traffic patterns didn’t decline at all.


Liquid Glass really is that bad. Not because the visual design is especially bad (not my cup of tea, but it's okay), but because all of macOS is now incredibly janky. Even Spotlight is a janky mess now, with lots of broken animations.

> Is liquid glass really that bad?

It's unfinished. For example, the more rounded windows require that scrollbars and other widgets be more inset, and so on. The system doesn't seem to handle this automatically, so many apps look broken, even Apple's first-party ones.


If the median UK salary is >£35,000, I really wonder how you arrive at the conclusion that missing a flight will set you back "years or decades"...

> If the median UK salary is >£35,000, I really wonder how you arrive at the conclusion that missing a flight will set you back "years or decades"...

Ok, now take that figure and deduct tax, housing, food, utilities and so on: how much do you think is disposable/saveable? Then take the typical cost of a last-minute replacement flight and compare those two numbers.


Clever

Not really….

I actually don't think this is true, and certainly, of the people who cover LLMs, Simon Willison is one of the more critical and measured.

I'd love to see what happens if you hook this renderer up to AFL++...

I thought the point of Turso was to offer better concurrency than SQLite currently does, à la https://turso.tech/blog/beyond-the-single-writer-limitation-... and https://penberg.org/papers/penberg-edgesys24.pdf

Would be great if one of the Turso developers can clarify :)


Honestly, if you care about that level of concurrency, it raises the question of why you are using an in-process database in the first place.

It's not just about performance: having an in-process MVCC engine would simplify the implementation of many single-machine concurrent applications. Currently you usually have to combine SQLite with some kind of concurrency primitives, which is extremely painful because most OS-level concurrency primitives are really easy to misuse: it's trivial to accidentally introduce deadlocks, and very hard to spot and remove them ahead of time. For examples of hard-to-spot concurrency bugs, see https://fly.io/blog/corrosion/ and https://rfd.shared.oxide.computer/rfd/400
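To make that concrete, here's a minimal sketch (my own illustration; the `Mutex` class and the lock names are hypothetical, not from any library mentioned in the thread) of the kind of in-process locking people bolt onto SQLite today. Lock-ordering mistakes with primitives like this are exactly the hard-to-spot deadlocks referenced above:

```typescript
// Hypothetical async mutex: each lock() waits for the previous
// holder's release before resolving.
class Mutex {
  private tail: Promise<void> = Promise.resolve();

  // Returns a release function once the lock is acquired.
  async lock(): Promise<() => void> {
    let release!: () => void;
    const prev = this.tail;
    this.tail = new Promise<void>((resolve) => (release = resolve));
    await prev; // wait until the previous holder releases
    return release;
  }
}

const dbLock = new Mutex();
const cacheLock = new Mutex();

// Both tasks acquire dbLock before cacheLock: a consistent order.
// If one task acquired them in the reverse order, each could end up
// holding one lock while waiting forever on the other, and neither
// the compiler nor a typical test run would flag it.
async function task(name: string, log: string[]): Promise<void> {
  const releaseDb = await dbLock.lock();
  const releaseCache = await cacheLock.lock();
  log.push(name);
  releaseCache();
  releaseDb();
}

async function main(): Promise<string> {
  const log: string[] = [];
  await Promise.all([task("a", log), task("b", log)]);
  return log.join(",");
}

main().then((order) => console.log(order)); // both tasks complete: "a,b"
```

An in-process MVCC engine would let both tasks just open transactions and let the database resolve the conflict, instead of the application hand-rolling this ordering discipline.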

One reason is that the architecture and maintenance are much simpler.

Why is writing code to execute a program using the fewest instructions possible on a virtual machine a waste of time?

The expected time you spend on it is much less than the expected time they'll spend on it.

you don't get paid for it

The assumption that Treasuries are safe depends upon the guarantor (the US government) being reliable. Currently this is, ahistorically, far from being the case, and therefore people are investigating other places to park capital.

I think the other question is how far away this is from a "working" browser. It isn't impossible to render a meaningful subset of HTML (especially when you use external libraries to handle a lot of this). The real difficulty is doing this (a) quickly, (b) correctly and (c) securely. All of those are very hard problems, and also quite tricky to verify.

I think this kind of approach is interesting, but it's a bit sad that Cursor didn't discuss how they close the feedback loop: testing/verification. As generating code becomes cheaper, I think effort will shift to how we can more cheaply and reliably determine whether an arbitrary piece of code meets a desired specification. For example, did they use https://web-platform-tests.org/, fuzz testing (e.g. feed in random webpages and inform the LLM when the fuzzer finds crashes), etc.? I would imagine truly scaling long-running autonomous coding would put an emphasis on this.

Of course, Cursor may well have done this, but it wasn't discussed in any depth in their blog post.
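The fuzz-loop idea above can be sketched in a few lines (this is my guess at the shape of such a loop, not Cursor's actual setup; `naiveParse` is a hypothetical stand-in for a real renderer): generate random tag soup with a seeded RNG, feed it to the component under test, and collect any input that crashes it so the failing case can be handed back to the LLM.

```typescript
// Hypothetical stand-in for the renderer: tracks angle-bracket depth
// and "crashes" (throws) on a stray closing bracket.
function naiveParse(html: string): number {
  let depth = 0;
  for (const ch of html) {
    if (ch === "<") depth++;
    if (ch === ">") depth--;
    if (depth < 0) throw new Error(`unbalanced '>' in: ${html}`);
  }
  return depth;
}

// Tiny linear congruential generator so fuzz runs are reproducible.
let seed = 42;
const rng = (): number => {
  seed = (seed * 1103515245 + 12345) % 2147483648;
  return seed / 2147483648;
};

// Random "webpage": a short string of HTML-ish atoms.
function randomHtml(): string {
  const atoms = ["<div>", "</div>", "<p>", "text", "<", ">"];
  let out = "";
  const len = 1 + Math.floor(rng() * 8);
  for (let i = 0; i < len; i++) {
    out += atoms[Math.floor(rng() * atoms.length)];
  }
  return out;
}

const crashes: string[] = [];
for (let i = 0; i < 1000; i++) {
  const input = randomHtml();
  try {
    naiveParse(input);
  } catch {
    crashes.push(input); // each crashing input is feedback for the agent
  }
}
console.log(`crashing inputs found: ${crashes.length}`);
```

A real setup would swap `naiveParse` for the engine under test and minimise each crashing input before reporting it, but the feedback shape is the same: concrete failing inputs, not just a pass/fail bit.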

I really enjoy reading your blog and it would be super cool to see you look at approaches people have to ensuring that LLM-produced code is reliable/correct.


Yeah, I'm hoping they publish a lot more about this project! It deserves way more than the few sentences they've shared about it so far.


I’m interested to see how much more they share about the project

I think the current approach will simply never scale to a working browser.

To leverage AI to build a working browser you would imo need the following:

- A team of humans with some good ideas on how to improve on existing web engines.

- A clear architectural story written not by agents but by humans. Architecture does not mean high-level diagrams only. At each level of abstraction, you need humans to decide what makes sense and only use the agent to bang out slight variations.

- A modular and human-overseen agentic-loop approach: one agent can keep running to try to fix a specific CSS feature (like grid), with a human expert reviewing the work at some interval (I'm not sure how fine-grained it should be). This is actually very similar to running an open-source project: you have code owners and a modular review process, not just an army of contributors committing whatever they want. And a "judge agent" is not the same thing as a human code owner as reviewer.

Example on how not to do it: https://github.com/wilsonzlin/fastrender/blob/19bf1036105d4e...

This rendering loop architecture makes zero sense, and it does not implement web standards.

> in the HTML Standard, requestAnimationFrame is part of the frame rendering steps (“update the rendering”), which occur after running a task and performing a microtask checkpoint

> requestAnimationFrame callbacks run on the frame schedule, not as normal tasks.

This is BS: "update the rendering" is specified as just another task, which means it needs to be followed by a microtask checkpoint. See https://html.spec.whatwg.org/multipage/#event-loop-processin...

Following the spec doesn't mean you cannot optimize rendering tasks relative to other tasks in your implementation, but the above is not that; it's classic AI BS.
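The checkpoint behaviour in question is observable in a few lines of Node (my own illustration, not from the repo: `setTimeout` callbacks model tasks, `queueMicrotask` models the microtask queue):

```typescript
// Each task is followed by a microtask checkpoint before the next
// task starts, which is the ordering the spec mandates.
const order: string[] = [];

setTimeout(() => {
  // task 1 (think: a rendering-related task)
  order.push("task 1");
  queueMicrotask(() => order.push("microtask after task 1"));
}, 0);

setTimeout(() => {
  // task 2 only runs after task 1's microtasks have drained
  order.push("task 2");
}, 0);

setTimeout(() => {
  console.log(order.join(" -> ")); // task 1 -> microtask after task 1 -> task 2
}, 0);
```

Whatever scheduling priority an engine gives rendering work internally, it still has to preserve this task/checkpoint interleaving to be spec-conformant.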

Understanding Web standards and translating them into an implementation requires human judgement.

Don't use an agent to draft your architecture; an expert in web standards with an interest in agentic coding is what is required.

Message to Cursor CEO: next time, instead of lighting up those millions on fire, reach out to me first: https://github.com/gterzian


How much effort would it take GenAI to write a browser/engine from scratch, for GenAI to consume (and generate) all the web artifacts generated by humans and GenAI? (This only needs to work in headless CI.)

How much effort would it take for a group of humans to do it?


I'm not sure what you mean by your first sentence in terms of product.

But in general, my guess at an answer (supported by the results of the experiment discussed in this thread) is that:

- GenAI left unsupervised cannot write a browser/engine, or any other complex software. What you end up with is just chaos.

- A group of humans using GenAI and supervising its output could write such an engine (or any other complex software), and in theory be more productive than a group of humans not using GenAI: the humans could focus on the conceptual bottlenecks, and the AI could bang out the features that require only the translation of already-established architectural patterns.

When I write "conceptual bottlenecks" I don't mean standing in front of a whiteboard full of diagrams. What I mean is any work that gives proper meaning and functionality to the code: it can be at the level of an individual function, or of the project as a whole. It can also be outside the code itself, such as when you describe the desired behavior of (some part of) a program in TLA+.

For an example, see: https://medium.com/@polyglot_factotum/on-writing-with-ai-87c...


That is a wonderful write up.

“This is a clear indication that while the AI can write the code, it cannot design software”

To clarify what I mean by a product: if we want to design a browser system (engine + chrome) from scratch to optimize the human-computer symbiosis (Licklider), what would be the best approach? Who should take the roles of making design decisions, implementation decisions, engineering decisions and supervision?

We can imagine a whole system with humans out of the loop; that would be a huge unit and integration test with no real application.

Then human can study it and learn from it.

Or the other way around: we have already made a huge mess of engineering beasts, and machines will learn to fix our mess, or make it worse by an order of magnitude.

I don’t have an answer.

I used to be a big fan of TDD and now I am not; the testing system is a big mess by itself.


> That is a wonderful write up.

Thanks.

> what would be the best approach?

I don't know but it sounds like an interesting research topic.

