
The same way text models improved.

Remind me what that was?

More trainable parameters, more data, higher quality data.

Imagine thinking that Scott fucking Hanselman isn't qualified to work on Windows. Jesus dude.

As far as I know, he would be more into application programming, not systems programming (if anything). But I might be wrong.

Mark Russinovich would be a different story, but he seems to be mostly concerned with Azure nowadays.


What is he known for? Glancing at his github he seems very oriented around windows (which supports your point), but I wouldn't even know what to look for beyond that.

He is an evangelist for cloud and .NET. He was one of the 5 public faces who moved .NET to open source in 2011 and beyond.

Anything concrete? Because "corporate evangelist" is very low in the ranking of roles I'd trust, and .NET evangelist even less.

The .NET evangelists of that generation have been more than fine. No bullshit, no false promises, no lies, etc. Focused on communicating the change towards Linux, the performance work, and so much more.

Scott is well connected in the dev and Azure divisions. He has headlined dev conferences, etc. But as an evangelist he only carries information. This change will not happen because of him, but maybe with him. And I do not believe so. Too much money at risk.



Yeah, correct. There is no reason to believe he can change that for Windows. But he will know where to raise it, at least. It still will not change, though. Too much money at risk.

He is a full time nauseating AI shill. If you happen to listen to his recent appearance on Software Engineering Radio podcast, you may just die of cringe. I had my final straw moment on AI hype during that podcast and my first I wish someone would bully that nerd moment.

Scott Hanselman is awesome and has good connections among the dev and cloud folks (Scott Guthrie's corner). He has some influence, but he cannot shape Windows like that. I think what he's saying is that he advocates for it, but I have zero hope.

I have zero hope, but he MIGHT have the ear of the people making the decisions and he might make a good point how if all of the "family tech support" people get pissed off about account requirements and move to Linux ... they'll start suggesting that to their families too which is kinda bad in the long run.

Even though it's been said many times that MS doesn't give a fuck about personal users, their money comes almost 100% from companies using Windows. Gaming computers etc. are a rounding error.


Qualified and can bend the org are 2 different things. Although he can probably bend the org too!

I've literally never heard of this man whom you think is so notable his very name would imply his qualification. He's not that noteworthy, dude.

I dunno, I've been reading his material for 20 years, so I guess my perspective is different? His posts from even more than a decade ago have more than demonstrated his competency.

My roommates and I literally bought a pizza with our stash of bitcoins. So yes, we fully understand how this feels.

I've been using AI/LLMs for 3 years non-stop and feel like I've barely scratched the surface of learning how to wield them to their full potential. I can’t imagine the mindset of thinking these tools don’t take extreme dedication and skill to master.

That's what directories of files are for. The file system as a cognitive twitter.

count_the_files_in_this_folder.bat

```

@echo off
setlocal

set "PROMPT=%~n0"

rem Use set, not setx: setx only affects future sessions, so the
rem script would read an empty key on first run. (Key is a placeholder.)
set "OPENAI_API_KEY=sk-proj-x"

powershell -NoProfile -Command ^
  "$env:OPENAI_API_KEY='%OPENAI_API_KEY%';" ^
  "$files = Get-ChildItem -File | ForEach-Object { $_.Name };" ^
  "$filesList = $files -join ', ';" ^
  "$systemPrompt = 'You are only allowed to respond with executable PowerShell commands. The folder contains these files: ' + $filesList;" ^
  "$userPrompt = '%PROMPT%';" ^
  "$body = @{ model='gpt-5.4'; messages=@(@{role='system'; content=$systemPrompt}, @{role='user'; content=$userPrompt}) } | ConvertTo-Json -Depth 5;" ^
  "$response = Invoke-RestMethod -Uri 'https://api.openai.com/v1/chat/completions' -Method Post -Headers @{ Authorization = 'Bearer ' + $env:OPENAI_API_KEY; 'Content-Type' = 'application/json' } -Body $body;" ^
  "$psCommand = $response.choices[0].message.content;" ^
  "Write-Host '---AI OUTPUT---';" ^
  "Write-Host $psCommand;" ^
  "Invoke-Expression $psCommand"

pause

```
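For anyone not on Windows, the gag translates to a few lines of Python: the script's filename becomes the user prompt, the directory listing becomes context, and the model's reply would be (unwisely) executed. This is a hedged sketch, not the original poster's code — `build_request` is a hypothetical helper, the model name is a placeholder, and it only constructs the request payload rather than calling the API or running anything.

```python
import json
from pathlib import Path


def build_request(script_name: str, folder: str) -> dict:
    """Build a chat-completion payload where the script's own filename
    is the user prompt and the folder's file listing is the context."""
    files = ", ".join(p.name for p in Path(folder).iterdir() if p.is_file())
    return {
        "model": "gpt-4o",  # placeholder; any chat model name goes here
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are only allowed to respond with executable "
                    f"shell commands. The folder contains these files: {files}"
                ),
            },
            {"role": "user", "content": script_name},
        ],
    }


# The payload you would POST to the chat completions endpoint:
payload = build_request("count_the_files_in_this_folder", ".")
print(json.dumps(payload, indent=2))
```

The `Invoke-Expression` step at the end of the batch version is the punchline: whatever the model returns gets executed sight unseen.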


Generating 3000 lines of esoteric rendering code within minutes, to rasterize generative graphics of anything you can imagine, and it just works? From natural language instructions. Seriously think about that, my dude.

That is amazing, but this specific example doesn’t seem all that different from what a compiler does, just another level of abstraction higher.

But that's not what AGI is. Restructuring data the way they do is very impressive but it's fundamentally different from novel creativity.

I hear this constantly. Can you produce something novel, right here, demonstrably, that an LLM couldn’t have produced? Nobody ever can, but it’s sure easy to claim.

I'm going to assume you mean this seriously, so I will answer with that in mind.

Yes, I can. - I can build an unusual but functional piece of furniture, not describe it, not design it. I can create a chair and sit on it. An LLM is just an algorithm; I am a physically embodied intelligence in a physical world.

- I can write a good piece of fiction. LLMs have not demonstrated the ability to do that yet. They can write something similar, but it fails on multiple levels if you've been reading any of the most recent examples.

- I can produce a viable natural intelligence capable of doing anything human beings do (with a couple of decades of care, and training, and love). One of the perks of being a living organism, but that is an intrinsic part of what I am.

- I can have a novel thought, a feeling, based on qualia that arise from a system of hormones, physics, complex actions and inhibitors, outrageously diverse senses, memories, quirks. Few of which we've even begun to understand let alone simulate.

- And yes, I can both count the 'r's in strawberry and make you feel a reflection of the joy I feel when my granddaughter's eyes shine when she eats a fresh strawberry, and I think how close we came to losing her one night when someone put 90 rounds through the house next door, just a few feet from where she and her mother were sleeping.

So yeah, I'm sure I can create things an LLM can't.


So the only thing I am seeing here is physical or personal (I have no idea how you feel or what your emotions are. You are a black box just as an LLM is a black box.)

The only thing you mentioned is the fiction, and I would happily take the bet that an LLM could win out against a skilled person based on a blind vote.


Me personally? No. Us collectively? Absolutely.

Was an individual mind responsible for us as humanity landing on the moon? No. Could an individual mind have achieved this feat? Also no.

Put differently, we should be comparing the compressed blob of human knowledge against humanity as a collective rather than as individuals.

Of course, if my individual mind could be scaled such that it could learn and retain all of human knowledge in a few years, then sure, that would be a fair comparison.


I want to see an LLM create an entirely novel genre of music that synthesizes influences from many different other genres and then spreads that genre to other musicians. None of this insulated crap. Actual cultural spread of novel ideas.

take my pound of flesh.

Highlife music

Such an epic self-burn. Nice!

Imagine believing humans don’t make the same mistakes. You live in a different universe than me, buddy.

Sometimes we repeat mistakes. But humans are capable of occasionally learning. I've seen it!

I've always wanted a better way to test programmers' debugging skills in an interview setting. Like, sometimes just working problems gets at it, but usually it's just the "can you re-read your own code and spot a mistake" sort of debugging.

Which is not nothing, and I'm not sure how LLMs do on that style; I'd expect them to be able to fake it well enough on common mistakes in common idioms, which might get you pretty far, and fall flat on novel code.

The kind of debugging that makes me feel cool is when I see or am told about a novel failure in a large program, and my mental model of the system is good enough that this immediately "unlocks" a new understanding of a corner case I hadn't previously considered. "Ah, yes, if this is happening it means that precondition must be false, and we need to change a line of code in a particular file just so." And when it happens and I get it right, there's no better feeling.

Of course, half the time it turns out I'm wrong, and I resort to some combination of printf debugging (to improve my understanding of the code) and "making random changes", where I take swing-and-a-miss after swing-and-a-miss changing things I think could be the problem and testing to see if it works.

And that last thing? I kind of feel like it's all LLMs do when you tell them the code is broken and ask them to fix it. They'll rewrite it, tell you it's fixed and ... maybe it is? It never understands the problem well enough to fix it.


I mean, that is not what they are writing buddy.
