wgpu isn't a renderer though, it's an abstraction layer. It's honestly hard for me to imagine it ever being faster than writing DirectX or Metal directly. It has many advantages, like that it runs in browsers and is memory safe (and in the case of Dawn, has great error messages). But it's hard for it to ever be as fast as the native APIs it calls for you.
I think most non-trivial cross-platform graphics applications eventually end up with some kind of hardware abstraction layer. The interesting part is comparing how wgpu performs vs. something custom developed for that application, especially if their renderer is mostly GPU-bound anyway. wgpu definitely has some level of overhead, but so do all of the other custom abstraction layers out there.
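To make the comparison concrete, here's a rough sketch (in Rust, with made-up names like GpuBackend and draw_indexed, not wgpu's actual API) of the kind of hand-rolled hardware abstraction layer an application might otherwise build: one trait the renderer programs against, and one implementation per native API. Whether the per-call indirection comes from wgpu or from your own trait, a mostly GPU-bound renderer pays something similar either way.

```rust
// Hypothetical sketch of a hand-rolled hardware abstraction layer.
// Names are illustrative, not from wgpu or any real engine.

/// The cross-platform surface the rest of the renderer programs against.
trait GpuBackend {
    fn create_buffer(&mut self, bytes: &[u8]) -> BufferId;
    fn draw_indexed(&mut self, buffer: BufferId, index_count: u32);
}

#[derive(Clone, Copy)]
struct BufferId(u32);

/// One backend per native API; each call forwards (and validates),
/// which is exactly where the abstraction overhead lives.
struct MetalBackend {/* MTLDevice, MTLCommandQueue, ... */}
struct D3D12Backend {/* ID3D12Device, command allocators, ... */}

impl GpuBackend for MetalBackend {
    fn create_buffer(&mut self, _bytes: &[u8]) -> BufferId {
        // would create and fill an MTLBuffer here
        BufferId(0)
    }
    fn draw_indexed(&mut self, _buffer: BufferId, _index_count: u32) {
        // would encode the draw into a Metal render command encoder here
    }
}

impl GpuBackend for D3D12Backend {
    fn create_buffer(&mut self, _bytes: &[u8]) -> BufferId {
        // would create a committed resource and copy the data here
        BufferId(0)
    }
    fn draw_indexed(&mut self, _buffer: BufferId, _index_count: u32) {
        // would record the draw on a D3D12 command list here
    }
}

/// Renderer code only ever sees the trait, never the native API.
fn render_frame(backend: &mut dyn GpuBackend) {
    let vbuf = backend.create_buffer(&[0u8; 64]);
    backend.draw_indexed(vbuf, 3);
}
```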
Sometimes people say that they don't understand something just to emphasize how much they disagree with it. I'm going to assume that's not what you're doing here, and lay out the chain of reasoning. Step one is that some beings are able to do "more things" than others. For example, if humans wanted bats to go extinct, we could probably make it happen. If any quantity of bats wanted humans to go extinct, they definitely could not make it happen. So humans are more powerful than bats.
The reason humans are more powerful isn't because we have lasers or anything, it's because we're smart. And we're smart in a somewhat general way. You know, we can build a rocket that lets us go to the moon, even though we didn't evolve to be good at building rockets.
Now imagine that there was an entity that was much smarter than humans. Stands to reason it might be more powerful than humans as well. Now imagine that it has a "want" to do something that does not require keeping humans alive, and that alive humans might get in its way. You might think that any of these are extremely unlikely to happen, but I think everyone should agree that if they were to happen, it would be a dangerous situation for humans.
In some ways, it seems like we're getting close to this. I can ask Claude to do something, and it kind of acts as if it wants to do it. For example, I can ask it to fix a bug, and it will take steps that could reasonably be expected to get it closer to solving the bug, like adding print statements and things of that nature. And then most of the time, it does actually find the bug by doing this. But sometimes it seems like what Claude wants to do is not exactly what I told it to do. And that is somewhat concerning to me.
> Now imagine that it has a "want" to do something that does not require keeping humans alive […]
This belligerent take is so very human, though. We just don't know how an alien intelligence would reason or what it would want. It could equally well be pacifist in nature, whereas we typically conquer and destroy anything we come into contact with. Extrapolating from our own behavior to the conclusion that an AGI would try to do the same isn't reasonable, though.
There are some basic reasoning steps about the environment we live in that don't only apply to humans, but also to other animals and generally to any goal-driven being. Such as "an agent is more likely to achieve its goal if it keeps on existing", or "in order to keep existing, it's beneficial to understand what other acting beings want and are capable of", or "in order to keep existing, it's beneficial to be cute/persuasive/powerful/ruthless", or "in order to more effectively reach its goals, it is beneficial for an agent to learn about the rules governing the environment it acts in".
Some of these statements derive from the dynamics of the current environment we're living in, such as that we're acting beings competing for scarce resources. Others follow even more straightforwardly from logic, such as that you have more options for agency if you stay alive/turned on.
These goals are called instrumental goals, and they are subgoals that apply to most if not all terminal goals an agentic being might have. Therefore any agent that is trained to achieve a wide variety of goals within this environment will likely optimize itself towards some or all of these subgoals, no matter which outer optimization process it was trained by, be it evolution, selective breeding of cute puppies, or RLHF.
And LLMs already show these self-preserving behaviors in experiments, where they resist being turned off and, e.g., attempt to blackmail humans.
Compare these generally agentic beings with, e.g., a chess engine like Stockfish, which is trained/optimized as a narrow AI in a very different environment. It also strives for the survival of its pieces to further its goal of maximizing winning percentage, but the inner optimization is less apparent than with LLMs, where you can read the inner chain-of-thought reasoning about the environment.
The AGI may very well have pacifistic values, or it may not, or it may target a terminal goal for which human existence is irrelevant or even a hindrance. What can be said is that when the AGI has a human or superhuman level of understanding of the environment, it will converge toward understanding these instrumental subgoals too, and target them as needed.
And then, some people think that most of the optimal paths towards reaching some terminal goal the AI might have don't contain any humans or much of what humans value in them, and thus it's important to solve the AI alignment problem first to align it with our values before developing capabilities further, or else it will likely kill everyone and destroy everything you love and value in this universe.
Another assumption based on a human way of reasoning. We don't even begin to understand how an octopus perceives the world; neither do we know if they are on the same level of intelligence, because we have no methodology for comparing different intelligences; we can't even define consciousness.
Not just bats. I'm pretty sure humans are already capable of driving to extinction any species we want, even cockroaches or microbes. It's a political problem, not a technical one. I'm not even a superintelligence, and I've got a good idea what would happen if we dedicated 100% of our resources to an enormous mega-project of pumping nitrous oxide into the atmosphere. N2O's 20-year global warming potential is 273 times that of carbon dioxide, and the raw materials are just air and energy. Get all our best chemical engineers working on it, turn all our steel into chemical plants, burn through all our fissionables to power it. Safety doesn't matter. The beauty of this plan is that the effects continue compounding even after it kills all the maintenance engineers, so we'll definitely get all of them. Venus 2.0 is within our grasp.
Of course, we won't survive the process, but the task didn't mention collateral damage. As an optimization problem it will be a great success. A real ASI probably will have better ideas. And remember, every prediction problem is more reliably solved with all life dead. Tomorrow's stock market numbers are trivially predictable when there's zero trade.
Rust + React is a beautiful combination. For my project, I use Rust for the actually complicated logic that needs to be correct and performant. And then I just use React for the UI. It works pretty great. The communication between the two with wasm-bindgen and tsify is just so easy. It's almost as if they're the same language. It's really crazy, honestly. A feat of engineering.
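For anyone curious what that boundary looks like, here's a minimal sketch of the wasm-bindgen + tsify pattern. The SimulationInput/SimulationResult/run_simulation names are made up for illustration, not from my actual project.

```rust
// Minimal sketch of the Rust <-> TypeScript boundary described above.
// The type and function names are hypothetical; the wasm-bindgen + tsify
// derive pattern is the real part.
use serde::{Deserialize, Serialize};
use tsify::Tsify;
use wasm_bindgen::prelude::*;

#[derive(Tsify, Serialize, Deserialize)]
#[tsify(into_wasm_abi, from_wasm_abi)]
pub struct SimulationInput {
    pub steps: u32,
    pub seed: u32,
}

#[derive(Tsify, Serialize, Deserialize)]
#[tsify(into_wasm_abi, from_wasm_abi)]
pub struct SimulationResult {
    pub total: f64,
}

/// The "actually complicated" logic lives in Rust; the React side calls this
/// like an ordinary typed import from the generated wasm package.
#[wasm_bindgen]
pub fn run_simulation(input: SimulationInput) -> SimulationResult {
    let total = (0..input.steps).fold(input.seed as f64, |acc, i| acc + i as f64);
    SimulationResult { total }
}
```

On the TypeScript side, the generated .d.ts exposes SimulationInput and SimulationResult as plain interfaces, so the React components get full type checking when calling run_simulation.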
React Vello seems super cool, by the way. Thanks for sharing it!
I just wish they would use bigger models. These ones always write stuff that makes no sense. For example, it says "the same principle applies to mobile games" and then describes a different principle.
There are some bots on HN who write much more coherently and get a decent # of upvotes. I was only able to catch one because the comment started with something along the lines of "Here's a smart response for a technical audience about _____".
I find AI extremely useful for that. "What did I brush over?" "What did I assume the reader might know when they might not?" "Can you fact-check every claim for me and provide references?" It is incredibly high ROI and I think it makes the final piece better.
What I dislike about reading AI writing is that it's dumb but sounds smart. If it were smart, I wouldn't mind reading it. Here's an example: it's always full of metaphors that make no sense, like a closet so full of junk that it topples out as soon as you open the door. (See what I mean? The metaphor has a superficial resemblance to the topic at hand but doesn't clarify the subject at all, and therefore muddies the waters as you try to understand what I might have been intending to communicate with it.)
Handy is awesome! And easy to fork. I highly recommend building it from source and submitting PRs if there are any features you want. The author is highly responsive and open to vibe-coded PRs as long as you do a good job. (Obviously you should read the code and stand by it before you submit a PR, but I just mean he doesn't flatly reject all AI code like some other projects do.) I submitted a PR recently to add an onboarding flow for macOS that just got merged, so now I'm hooked.
> It seems more reasonable to me to assume that meeting basic shelter needs includes having a private room to oneself
Why would that be reasonable? College students and young adults usually have roommates. I don't feel it's inhumane.
> The only reason to argue otherwise is to try to drive down the wage further
Another reason to argue otherwise is because you care about the truth. Even if you and I agree on the ends, if you use the means of exaggerating or stretching the truth to get there, you are never on my side. Saying that you need to not have roommates to live is an exaggeration.
> Renting a private room was possible on nearly any wage 50 years ago
You will never find any data to support that, because it isn't true. 50 years ago, flophouses were common. You would share a bedroom with others, with a shared kitchen and bathroom between multiple bedrooms. In college, I lived in a housing-coop network where we slept two to a room. 50 years ago, they slept 4 or 6 to a room in my exact house.
> and the only reason it seems out of reach for many now is because purchasing power has been slowly stagnating for decades, while housing costs have soared in recent times
This is true. But there is a very natural reason why. Look at nearly any US city and see how many more jobs there are in that city than there were 50 years ago. Then look at how many more homes there are in that city than there were 50 years ago. You will see that the number of new jobs far exceeds the number of new homes. The result is that wealthier people bid up the housing, while poorer people are forced to live outside the city and commute. So why have so few new homes been built? It doesn't help that building new homes is largely illegal (e.g. buildings with 3 or more apartments are illegal in 70% of San Francisco).
Please direct your anger in the right direction! It's not generally the case that billionaires own thousands of homes, hoarding them while the poor live on the street. It's more often the case that the population has increased while the number of homes in places people want to live has stayed the same. The *only* solution is to increase the number of homes in places people want to live. Raising the minimum wage, taxing the rich, fighting corporations, adding rent control laws: none of that will solve the root of the problem, which is that the growth rate of homes in cities is far slower than the growth in the number of people wanting to live there!
Whenever something like this comes out, it's a good moment to find people with no critical thinking skills who can safely be ignored. Driving a Waymo like an RC car from the Philippines? You can barely talk over Zoom with someone in the Philippines without bitrate and lag issues.
The idea that we would A/B test handwritten vs. typed to see what would improve retention is focusing on the wrong thing. It's like A/B testing mayo or no mayo on your Big Mac to see which version is the healthier meal. No part of the school system is optimized for retention. It's common for students to take a biology class in 9th grade and then never study biology again for the rest of their lives. Everyone knows they won't remember any biology by the time they graduate, and no one cares.
We know what increases retention: active recall and (spaced) repetition. These are basic principles of cognitive science that have been empirically demonstrated many times. Please try implementing those before demanding that teachers run A/B tests over what font to write the homework assignments in.
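To make that concrete, here's a toy Leitner-style scheduler in Rust. The interval table and names are illustrative, not taken from any particular study or product; the point is just that correctly recalled items get pushed further out while forgotten ones come back quickly.

```rust
// Toy Leitner-style spaced-repetition scheduler. Intervals and names are
// made up for illustration.

/// Days to wait before the next review, indexed by the current correct streak.
const INTERVALS_DAYS: [u32; 5] = [1, 3, 7, 16, 35];

struct Card {
    prompt: String,
    correct_streak: usize,
    due_in_days: u32,
}

impl Card {
    fn new(prompt: &str) -> Self {
        Card { prompt: prompt.to_string(), correct_streak: 0, due_in_days: 0 }
    }

    /// Active recall happens outside this function (the student answers from
    /// memory); here we only reschedule based on whether recall succeeded.
    fn review(&mut self, recalled_correctly: bool) {
        if recalled_correctly {
            let idx = self.correct_streak.min(INTERVALS_DAYS.len() - 1);
            self.due_in_days = INTERVALS_DAYS[idx];
            self.correct_streak += 1;
        } else {
            // Forgotten items reset and come back the next day.
            self.correct_streak = 0;
            self.due_in_days = 1;
        }
    }
}

fn main() {
    let mut card = Card::new("What does mRNA do?");
    for outcome in [true, true, false, true] {
        card.review(outcome);
        println!("{} -> review again in {} days", card.prompt, card.due_in_days);
    }
}
```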