As the author says, most incidents of this kind, in most of the world, are protesters vs. police, and the police have a substantial amount of control over whether the situation escalates or not, including just opening up with tear gas.
Clashing football ultras are basically the only case where this doesn't happen.
(I've never been near a tear gas kind of event, but I did witness the Met Police deploy "kettling" for the first time in May 2001, close enough that if I'd not paid attention to the police lines forming up I would have been imprisoned uncomfortably for eight hours.)
For colours to look natural you need your white light to contain lots of different wavelengths; this is usually measured as CRI (Ra). Artificial-looking LEDs are easily 10x cheaper than photography-grade LEDs. Also, this guy is probably paying taxes and handling stuff the proper legal way. If you order from Alibaba, chances are you’ll not be paying taxes. Plus if they offer a 5-year warranty, they probably need to keep some money around for repairs.
That means it doesn’t need depth. Depth is helpful for getting good point locations, but SLAM on multiple frames should also work.
I’m guessing that they are researching this for AR or robot navigation. Otherwise, the focus on accurately dividing the scene into objects wouldn’t make sense to me.
Segmentation in 2D is mostly a solved problem (Segment Anything is pretty fucking great). Segmentation in 3D is also fairly well done. You can use DINOv2 to do 3D object detection and segmentation.
The difficult part _after_ that is interacting with the object. Sparse and semi-dense point clouds can be generated and refined in real time, but they are point clouds, not meshes. This means that interacting with the object accurately is super hard, because it's not a simple mesh that can be tested/interacted with; it's a bunch of points around the edges.
Where this is useful is that it allows you to generate a mostly plausible simple 3D model that can act as a stand-in for any further interactions. In VR you can use it as a collision object for physics. For robotics you can use it to plan interactions (i.e. place objects on the table).
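As a minimal sketch of that stand-in idea (all names here are my own invention, and an axis-aligned bounding box is the crudest possible proxy; a real system would fit an oriented box, convex hull, or coarse mesh):

```python
import numpy as np

def aabb_proxy(points: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Fit an axis-aligned bounding box to an (N, 3) point cloud.

    The box is a stand-in "mesh" for a segmented object: cheap to build
    from a sparse cloud and trivial to test collisions against.
    """
    return points.min(axis=0), points.max(axis=0)

def collides(point: np.ndarray, box: tuple[np.ndarray, np.ndarray]) -> bool:
    """Point-vs-box collision test, the kind a physics engine would run."""
    lo, hi = box
    return bool(np.all(point >= lo) and np.all(point <= hi))

# A fake sparse cloud scattered inside the unit cube
cloud = np.random.default_rng(0).random((500, 3))
box = aabb_proxy(cloud)
```

The same box can then back a physics collider in VR or a "can I place an object here" check in a planner, without ever building a watertight mesh from the raw points.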
It's also a step in the direction of answering "whose" object it is, rather than "what" the object is. "Whose water bottle is this" is much, much harder to answer with machines (without markers) than "is this a water bottle" or "where is the water bottle in this scene".
The US has become a world leader in suing curious people into submission. As soon as you touch any commercially available tech and do anything that the manufacturer dislikes, you're at risk, thanks to § 1201:
"No person shall circumvent a technological measure that effectively controls access to a work protected under this title."
Extending your digital camera with new firmware? Illegal.
Inventing a custom ink or add-on for your printer? Illegal.
Repairing a tractor? Or a ventilator? Illegal.
How do you expect anyone to get world-leading science done in this environment?
These are bad things but I have a hard time seeing these as the reason why science is lagging.
Science is lagging in the US because the US has destroyed viable careers in science.
Who does the hard work to get a PhD in a scientific field knowing that they'll be saddled with hundreds of thousands of dollars in debt and that there's a good chance they'll have no employment opportunities afterward? Especially with the recent destruction of public-sector scientific jobs, it's probably the worst time ever to get a degree in a field of science.
People do not graduate with a STEM PhD with hundreds of thousands in debt; that is not how the education system works pretty much anywhere in the world.
Your PhD might not put you into hundreds of thousands of dollars of debt, but your undergrad very much might in the US. And then you'd have to choose to start a PhD while already carrying hundreds of thousands in debt.
This is the truth. I would love to go back to school and do research to get my PhD. But going back to living on sub-minimum wage to work 80+ hours a week is just not something I can do at this stage of life.
In fairness, that sounds like extending the capabilities of something that already exists. For personal use that should be okay. For commercial use, that would run afoul of IP, unless we're talking about open source, though even then you might have obligations depending on the license.
If you want to start from square 1, using your own IP, you should be able to.
Now, sure it sucks that you can’t do those things you mention for ordinary use, which we should, but you are still able to come up with your own ground-up solution for commercial purposes.
Indeed, this is how companies like Facebook got a head start: they created scrapers for MySpace that made the transition easier. If you try to do the same today, they will likely throw you in federal prison for "tampering" or commit lawfare so heinous it'll feel like a war crime.
> If you want to start from square 1, using your own IP, you should be able to.
That's not remotely how any progress has ever been made in the history of the human race. Newton himself said he stood on the shoulders of giants. Or as Sagan said, to bake an apple pie from scratch, you have to first invent the Universe.
A clever patch to an existing thing is exactly how you get to the next big thing after enough patches.
I don't think people realize how important incremental improvements are. Before open-source software took off, everyone either licensed a proprietary library or invented an ad-hoc solution. If the proprietary library was discontinued, you often couldn't extend or improve it (and even if you could, you couldn't share your changes), so you started over, either from scratch or with another vendor redoing the same work.
This is also why we have so much e-waste: once a manufacturer ditches a product, its usefulness is permanently limited, both by law and practicality. Copyright expires eventually, but so far in the future that we'll all be dead by then.
They do, on paper, but many countries hardly enforce them. For example, the EU has more caveats to its section-1201-style insanity; China simply doesn't care at all. These copyright treaties are useless in practice and harmful because they ossify a bad system.
I think some people overblow the lawsuit risk in the US. It really does suck here; however, one benefit for certain types of innovation is that the US has a lot of IP protection infrastructure, which stifles innovation in a lot of ways but also makes investment easier in some cases.
This is true in a way. We are all very free to research and innovate, it is just when you get it in your mind to actually make any money that the lawsuits show up.
Legally risky research, if it has high enough rewards, will eventually end up in the hands of extremely large companies that have the legal backing to do anything they want.
Momentum is a large part. I also do think there's somewhat of a motivation that once you've gotten to the top you can sue people who try to displace you into oblivion - ye olde classic "temporarily embarrassed millionaires" syndrome.
That landing page example is devastatingly bad. You start with a page that has usage numbers, uptime, 24/7 support, and a customer rating above the fold. You end up with a page that lacks all of these advantages and instead looks bland, has horrible typography, and has even less text contrast.
In line with that, the Dashboard looks more organized in the "after" picture, but that's because it lost most of its useful information.
Author here. I agree that wasn't a strong example. I wasn't happy with the outcome of those before/after examples; it was rushed before the launch, and I shouldn't have shipped it. I mostly use these commands on smaller, targeted sections of projects that I unfortunately can't screenshot. The case study examples were rushed and didn't communicate the value, so I've removed them for now, until I can fill in better, real examples.
AI is helping me solve all the issues that using AI has caused.
Wordpress has a pretty good export and Markdown is widely supported. If you estimate 1 month of work to get that into NextJS, then maybe the latter is not a suitable choice.
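To illustrate how small the simple case can be (a hypothetical helper I'm sketching, not anyone's actual migration code; it only strips tags, whereas a real migration would use a proper HTML-to-Markdown converter):

```python
import re
import xml.etree.ElementTree as ET

# Namespace used by WordPress's WXR (RSS-based) export for post bodies
NS = {"content": "http://purl.org/rss/1.0/modules/content/"}

def wxr_to_markdown(xml_text: str) -> dict[str, str]:
    """Map post title -> crude Markdown body for every <item> in a WXR export."""
    root = ET.fromstring(xml_text)
    posts = {}
    for item in root.iter("item"):
        title = item.findtext("title") or "untitled"
        html = item.findtext("content:encoded", default="", namespaces=NS)
        body = re.sub(r"<[^>]+>", "", html).strip()  # crude tag strip
        posts[title] = f"# {title}\n\n{body}\n"
    return posts
```

Getting the raw posts out is the easy part; the month of work usually hides in templates, routing, images, and plugin-generated markup.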
it's wild that somehow with regard to AI conversations lately someone can say "I saved 3 months doing X" and someone can willfully and thoughtfully reply "No you didn't, you're wrong" without hesitation.
I feel bad for AI opponents mostly because it seems like the drive to be against the thing is stronger than the drive towards fact or even kindness.
My $0.02: I am saving months of effort using AI tools to fix old (PRE-AI, PREHISTORIC!) codebases that have literally zero AI technical debt associated with them.
I'm not going to bother with the charts & stats, you'll just have to trust me and my opinion like humans must do in lots of cases. I have lots of sharp knives in my kitchen, too -- but I don't want to have to go slice my hands on every one to prove to strangers that they are indeed sharp -- you'll just have to take my word.
Slice THEIR hands. They might say yours are rigged.
I'm a non-dev and the things I'm building blow me away. I think many of the people criticizing are perhaps more on the execution side and have a legitimate craft they are protecting.
If you're more on the managerial side, and I'd say a trusting manager not a show me your work kind, then you're more likely to be open and results oriented.
From a developer POV, or at least my developer POV, less code is always better. The best code is no code at all.
I think getting results can be very easy, at first. But I force myself to not just spit out code, because I've been burned so, so, so many times by that.
As software grows, the complexity explodes. It's not linear like the growth of the software itself, it feels exponential. Adding one feature takes 100x the time it should because everything is just squished together and barely working. Poorly designed systems eventually bring velocity to a halt, and you can eventually reach a point where even the most trivial of changes are close to impossible.
That being said, there is value in throwaway code. After all, what is an Excel workbook if not throwaway code? But never let the throwaway become a product, or grow too big. Otherwise, you become a prisoner. That cheeky little Excel workbook can turn into a full-blown backend application sitting on a share drive, and it WILL take you a decade to migrate off of it.
yeah AI is perfect at refactoring and cleaning things up, you just have to instruct it. I've improved my code significantly by asking it to clean up a messy application and refactor functions into pure ones that I can use & test. Without creating new bugs.
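A toy example of the "refactor to pure" move described above (names are hypothetical):

```python
# Before: calculation tangled with I/O, so the only way to check the
# total is to capture stdout.
def print_invoice_total(items):
    total = sum(price * qty for price, qty in items)
    print(f"Total: {total:.2f}")

# After: the calculation is a pure function that can be unit-tested
# directly; the I/O wrapper stays thin.
def invoice_total(items):
    """Sum of price * quantity over (price, qty) pairs."""
    return sum(price * qty for price, qty in items)

def print_invoice_total_pure(items):
    print(f"Total: {invoice_total(items):.2f}")
```

Once the pure core is extracted, tests cover `invoice_total` without any stdout capturing, which is exactly the kind of mechanical transformation an LLM can do across a whole codebase.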
You can use AI to simplify software stacks too; only your imagination limits you. How do you see things working with far fewer abstraction layers?
I remember coding BASIC with POKE/PEEK assembly inside it, and the same with Turbo Pascal with inline assembly (C/C++ has similar extern abilities). Perhaps you want no more web or UI (TUI?). Once you imagine what you are looking for, you can label it and go from there.
I am a (very) senior dev with decades of experience. And I, too, am blown away by the massive productivity gains I get from the use of coding AIs.
Part of the craft of being a good developer is keeping up with current technology. I can't help thinking that those who oppose AI are not protecting legitimate craft, but are covering up their own laziness when it comes to keeping up. It seems utterly inconceivable to me that anyone who has kept up would oppose this technology.
There is a huge difference between vibe coding and responsible professional use of AI coding assistants (the principal one, of course, being that AI-generated code DOES get reviewed by a human).
But that being said, I am enormously supportive of vibe coding by amateur developers. Vibe coding is empowering technology that puts programming power into the hands of amateur developers, allowing them to solve the problems that they face in their day-to-day work. Something that we've been working toward for decades! Will it be professional-quality code? No. Of course not. Will it do what it needs to do? Invariably, yes.
It is wild. I must admit I have a bit of Gell-Mann amnesia when it comes to HN comments. I often check them to see what people think about an article, but every time the article touches on something I know deeply, I realize it’s all just know-it-all puffery. Then I forget, and check again when it’s about one of the many things I do not know much about.
My cofounder is extremely technically competent, but all these people are like good luck with your spaghetti vibe code. It’s humorous.
Just look at the METR study. All predictions were massive time savings but all observations were massive time losses. That's why we don't believe you when you say you saved time.
You should know better than to form an opinion from one study. I could show you endless examples of a study concluding untrue things, endless…
I’ve been full time (almost solo) building an ERP system for years and my development velocity has gone roughly 2x. The new features are of equal quality, everything is code reviewed, everything is done in my style, adhering to my architectural patterns. Not to mention I’ve managed to build a mobile app alongside my normal full time work, something I wouldn’t have even had the time to attempt to learn about without the use of agents.
So do you think I’m lying or do you just think my eyes are deceiving me somehow?
I think any measurement of development velocity is shaky, especially when measured between two different workflows, and especially when measured by the person doing the development.
Such an estimate is far less reliable than your eyes are.
So if people want to do more and better studies, that sounds great. But I have a good supply of salt for self-estimates. I'm listening to your input, but it's much easier for your self-assessment to have issues than you're implying.
It’s a very good point. I have full control and everything is incredibly uniform, and more recently designed with agents in mind. This must make things significantly easier for the LLM.
The work was moving the many landing pages & content elements to NextJS, so we can test, iterate, and develop faster while having a more stable system. This was a 10-year-old website with a very large custom WordPress codebase and many plugins.
The content is still in WordPress backend & will be migrated in the second phase.
I strongly disagree. I’ve yet to find an AI that can reliably summarise emails, let alone understand nuance or sarcasm. And I just asked ChatGPT 5.2 to describe an Instagram image. It didn’t even get the easily OCR-able text correct. Plus it completely failed to mention anything sports or stadium related. But it was looking at a cliche baseball photo taken by a fan inside the stadium.
I have had ChatGPT read text in an image, give me a 100% accurate result, and then claim not to have the ability and to have guessed the previous result when I ask it to do it again.
I'm still trying to find humans that do this reliably too.
To add on, 5.2 seems to be kind of lazy when reading text in images by default. Fed an image, it may give just the first word or so, but coming back with the prompt 'read all the text in the image' makes it do a better job.
With one in particular that I tested, I thought it was hallucinating some of the words, but there was a picture within the picture with small words it saw that I had missed the first time.
I think a lot of AI capabilities are kind of munged to end users because they limit how much GPU is used.
The Quest 3 works offline with ALVR streaming over a private (non-Internet-connected) WiFi network. Together with my 3090 I get 8K @ 120fps with 20ms latency over a WiFi 6E dongle. I had to manually install the DKMS module for the dongle on Pop!_OS, but apart from that it just works. ALVR starts SteamVR and then I use Steam to start the game. Proton seems to use Vulkan for rendering.
ouch. Must be weird living where you live.