Hacker News | silentkat's comments

I like to call this Frieren's Demon. In that show, it is explained that demons evolved with no common ancestor to humans, yet they speak human language, having learned it to hunt humans. This gives them a fundamentally different understanding of words and language.

Now, I don't personally believe this is an intelligence at all, but it's possible I'm wrong. What we have with these machines is a different evolutionary reason for speaking our language (we built it to speak our language ourselves). Its understanding of our language, and of our images, is completely alien. If it is an intelligence, I could believe that the way it makes mistakes in image generation, and the strange logical mistakes it makes that no human would make, are simply a result of that alien understanding.

After all, a human artist learning to draw hands makes mistakes, but those mistakes are rooted in a human understanding (e.g. the effects of perspective when translating a 3D object to 2D). The machine, with its different understanding of what a hand is, will instead render extra fingers (it does not conceptualize a hand as a 3D object at all).

Though, again, I still just think it's an incomprehensible amount of data going through a really impressive pattern matcher. The result is still language out of a machine, which is really interesting. The only reason I'm not super confident it is not an intelligence is because I can't really rule out that I am not an incomprehensible amount of data going through a really impressive pattern matcher, just built different. I do, however, feel like I would know a real intelligence after interacting with it for long enough, and none of these models feel like a real intelligence to me.


>it does not conceptualize a hand as a 3D object at all

Oh but it does; it's an emergent property. The biggest finding in Sora was exactly that: an internal conceptualization of 3D space and objects. Extra fingers in older models were the result of insufficient fidelity in this conceptualization, and of architectural artifacts around small, semantically dense details.


Oh, really. Very interesting. Any links on this? I'm curious if they tried to map that 3D understanding in a way we could read it (e.g. putting it into Blender somehow).

My work has required us all to be "AI Native". I am AI skeptical but am the type of person to try to do what is asked to the best of my ability. I can be wrong, after all.

There is some real power in AI, for sure. But as I have been working with it, one thing is very clear. Either AI is not even close to a real intelligence (my take), or it is an alien intelligence. As I develop a system where it iterates on its own contexts, it definitely becomes probabilistically more likely to do the right thing, but the mistakes it makes become even more logic-defying. It's the coding equivalent of a hand with extra fingers.

I'm only a few weeks into really diving in. Work has given me infinite tokens to play with. I'm building my own orchestrator system that's purely programmatic and spawns agents to do work. Treat them as functions: defined inputs, defined outputs. Don't give an agent more than one goal; I find that giving it the goal of building a system often leads it to assert that the system works when it does not, so the verifier is a different agent. I know this is not new thinking; as I said, I am new.
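A minimal sketch of the shape of it, in Python. call_agent is a hypothetical stand-in for whatever model API you have; all the names and signatures here are mine, not any particular vendor's:

    import json

    def call_agent(system_prompt: str, task: str) -> str:
        # Hypothetical stand-in for a single LLM call; swap in your
        # provider's API. One goal in, raw text out.
        raise NotImplementedError

    def builder(spec: str) -> str:
        # One goal only: produce code for the spec. No self-verification.
        return call_agent("You write code. Output only code.", spec)

    def verifier(spec: str, code: str) -> bool:
        # A separate agent judges the work; it never fixes anything.
        verdict = call_agent(
            'You review code against a spec. Reply with JSON like '
            '{"pass": true, "reason": "..."}.',
            f"Spec:\n{spec}\n\nCode:\n{code}",
        )
        try:
            return bool(json.loads(verdict)["pass"])
        except (json.JSONDecodeError, KeyError, TypeError):
            return False  # an unparseable verdict counts as a failure

    def orchestrate(spec: str, max_attempts: int = 3) -> str | None:
        # The orchestrator is plain program logic; the agents are just
        # functions inside it, with defined inputs and defined outputs.
        for _ in range(max_attempts):
            code = builder(spec)
            if verifier(spec, code):
                return code
        return None  # give up loudly instead of trusting the builder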

For me the most useful way to think about it has been considering LLMs to be a probabilistic programming language. It won't really error out, it'll just try to make it work. This attitude has made it fun for me again. Love learning new languages and also love making dirty scripts that make various tasks easier.
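A rough sketch of what that attitude looks like in practice, reusing the hypothetical call_agent from above: since this "language" never raises its own errors, you write the error handling yourself and sample until something validates:

    def extract_price(text: str) -> float:
        # The model always returns *something*, so validation is the
        # only thing that makes it "error out" like a normal language.
        for _ in range(3):  # each run is a fresh sample
            raw = call_agent(
                "Extract the total price as a bare number. No prose.",
                text,
            )
            try:
                return float(raw.strip().lstrip("$"))
            except ValueError:
                continue  # it "made it work" in a shape we can't use
        raise ValueError("no parseable price after 3 attempts")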


Oh, no, I had these grand plans to avoid this issue. I had been running into it with various low-effort lifts, but now I'm worried that it will stay a problem.


I’m at a big tech company. They proudly cited productivity gains measured in commits (already nonsense). 47% more commits, 17% less time per commit. That works out to about 22% more total time spent coding (1.47 × 0.83 ≈ 1.22). They’re burning us out and acting like the AI slop is “unlocking” productivity.

There’s some neat stuff, don’t get me wrong. But every additional tool so far has started strong and then fallen over. Always.

Right now there’s this “orchestrator” nonsense. Cool in principle, but as someone who wrote automation scripts all the time before, it’s not impressive. I spent $200 to automate some bug finding and fixing. It found and fixed the easy stuff (still pretty neat), and then “partially verified” that it fixed the other stuff.

The “partial verification” was it justifying why it was okay that it was broken.

The company has mandated we use this technology. I have an “AI Native” rating. We’re being told to put out at least 28 commits a month. It’s nonsense.

They’re letting me play with an expensive, super-high-level, probabilistic language, so I’m having a lot of fun. But I’m not going to lie, I’m very disappointed. I got this job a year ago, with 12 years of programming experience; it’s my first big tech job. I was hoping to learn a lot. I know my use of data to prioritize work could be better, and I was sold on their use of data. I’m sure some teams here use data really well, but I’m just not impressed.

And I’m not even getting into the people gaming the metrics to look good while actually making more work for everyone else.


Management is just stupid sometimes. We had a similar metric at my last company and my manager's response was "well how else are we supposed to measure productivity?", and that was supposed to be a legitimate answer.


The benefits of AI accrue either toward incremental revenue generation or toward cost savings.

It's not rocket science to measure, actually. The issue is most people don't know how to think properly to invent the right proxies.


Lol, it's gonna take longer than it should for this to play out.

Sunk cost fallacy is very real, for all involved. Especially the model producers and their investors.

Sunk cost fallacy is also real for devs who are now giving up how they used to work: they've made a sunk investment in learning to use LLMs etc. Hence the 'there's no going back' comments that crop up on here.

As I said in this thread: anyone who can think straight (I'm referring to those who adhere to fundamental economic principles) can see what's going on from a mile away.


It’s a form of contrastive reduplication, used to emphasize the realness of the experience, versus secondhand experience, like interviewing those who have the actual experience.

Also consider a phrase like “work work” versus “school work”. For someone who both works a paid job and goes to school, clarifying that they need to do “work work” makes sense.

