Hacker News | stavros's comments

Those really make my phone overheat, so I avoid them. Didn't they heat yours up?

Sometimes the phone is warm, I wouldn't even say hot. Could be because I bought lower wattage wireless chargers - I don't need it to charge fast, I just need it to top up the battery.

The only time my phone has given me a message about heat was, indeed, in a phone holder in the car, but it wasn't even charging. We are experiencing a heat wave in Australia right now though, and the car had been sitting in the sun in a car park for an hour.


They’re a joke. I used to have one in my car, and the combination of sunlight & internally produced heat would make my phone shut off & display an “iPhone is too hot” message, even when it was cold outside.

I think that's just sun+charging, not wireless charging specific.

I switched to wired charging with the phone mounted in the same spot and the heat issue went away. Wireless charging produces a lot more heat than wired.

Time for a new Vapor Cooled iPhone?!

/s


I don't think this advice is useful. You're going to use your devices, so you won't control the temperature or, largely, the charge percentage.

I think good advice is to keep your devices as cool as you can (ie don't leave your cars in sunlight when there's shade), which you probably did anyway, and keep the battery between 20% and 80% as much as possible. If the battery is going to stay unused for a while, leave it at 3.8V (or close to it), or at 50%.

Batteries are ultimately consumables, so don't stress too much. Just care for them as much as convenient, and that's it.


> I think good advice is to keep your devices as cool as you can (ie don't leave your cars in sunlight when there's shade),

In some climates, such as where I live, the larger issue is the cold in the winter. From what I understand, Li-ion batteries don't like being charged below 0 C. And it is not uncommon for it to dip to -15 C or even -20 C here.

Really, from what I understand, batteries want to be kept above freezing but cool. So yeah, don't leave it in direct sunlight in the middle of summer. The more difficult problem is the winter (unless you happen to have a heated garage).


Yeah, they lose capacity temporarily when it's very cold. Most EVs now precondition the battery before charging by heating it up.

> You're going to use your devices, so you won't control the temperature or, largely, the charge percentage.

> I think good advice is to keep your devices as cool as you can (...), and keep the battery between 20% and 80% as much as possible.

Yeah that's kinda what I meant. Where it's easy or possible to do so (for eg lots of modern laptops & phones allow charge limits), it's better to follow these guidelines.

> Batteries are ultimately consumables, so don't stress too much. Just care for them as much as convenient, and that's it.

Yeah I agree (and that's what I meant by my last sentence), however, a lot of people (including eg my dad!) end up having battery issues while being unaware that they can do things to protect their hardware.

For example, my phone has enough capacity to last the whole day even at 60% of its capacity. I've set it to stop charging at 80% (the lowest possible SOC limit) for this reason. On my laptop, I frequently reduce it to 60%, as I use it plugged in.
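For the curious: on many Linux laptops that limit is just a sysfs knob. A rough sketch; the `BAT0` name and the `charge_control_end_threshold` attribute are assumptions that happen to hold on ThinkPads and some other vendors, so check what your machine actually exposes:

```shell
# See the current cap, if the firmware exposes one
# (battery name and attribute vary by vendor; BAT0 is an assumption)
cat /sys/class/power_supply/BAT0/charge_control_end_threshold

# Cap charging at 80% (needs root); some firmwares also expose
# charge_control_start_threshold so the battery isn't topped up constantly
echo 80 | sudo tee /sys/class/power_supply/BAT0/charge_control_end_threshold
```

If the file doesn't exist, your vendor may instead ship a tool like `tlp` that wraps the same firmware interface.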

> I don't think this advice is useful.

I'm afraid I don't get what's not helpful? We're probably talking past each other.


It came across to me as "keep your batteries always under 0 C", which obviously almost nobody can do, and it leads to a sense of "eh, I won't go to these lengths, might as well do nothing", which is counterproductive.

I see the same reaction with healthy eating, where people are so put off by extremely militant advice that they think "I can't eat only vegetables all day, fuck it, I'll eat these three cheeseburgers".

I agree with your second comment, the first one just could be misconstrued as very hard-to-follow advice.


That's backwards. At too low a temperature, batteries start to take damage during discharge or (especially) charge, so 0 C is the lowest temperature at which you should charge them; 5 C would be better.

It's a concern mainly for e.g. offgrid batteries being used in the winter.


I know, but "as cool as possible (up to zero degrees C at least)" is conflicting, and kind of means "below zero degrees".

They probably just performed a standard cyberattack on the radar systems before sending in the troops, it doesn't have to be the same weapon for both.

It's not, but at least it will be equally ungreen.

What OpenCode primitive did you use to implement this? I'd quite like a "senior" Opus agent that lays out a plan, a "junior" Sonnet that does the work, and a senior Opus reviewer to check that it agrees with the plan.

You can define the tools that agents are allowed to use in the opencode.json (also works for MCP tools I think). Here’s my config: https://pastebin.com/PkaYAfsn

The models can call each other if you reference them using @username.
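Roughly, the shape is something like this. This is a sketch from memory, not my actual config: the agent names, model IDs, and exact field names are guesses, so treat the pastebins as authoritative:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "manager": {
      "description": "Senior agent that plans and reviews; read-only",
      "model": "anthropic/claude-opus-4-5",
      "tools": { "write": false, "edit": false }
    },
    "worker": {
      "description": "Junior agent that does the implementation work",
      "model": "anthropic/claude-sonnet-4-5",
      "prompt": "{file:./prompts/worker.md}"
    }
  }
}
```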

This is the .md file for the manager : https://pastebin.com/vcf5sVfz

I hope that helped!


This is excellent, thank you. I came up with half of this while waiting for this reply, but the extra pointers about mentioning with @ and the {file} syntax really helps, thanks again!

Because you have other benefits, so we'd really like to switch over to you, but we can't unless you support this dealbreaker feature that your competitor we're currently using has.

No, parent said they’d go talk to the competitor. They didn’t say they were already with them. Don’t change the scenario.

Because you have other benefits, so we'd really like to use you, but we can't unless you support this dealbreaker feature that your competitor has.

Aren't we all tired by this anti-AI stuff? Use it if you want to, don't use it if you don't want to, I just don't really want to hear about your personal opinion on it any more.

I do hope you comment the same thing on the pro-AI articles from people trying to sell you a product. The internet is now infested with those, and without these articles you might think everybody has collectively lost their mind and still thinks we'll all be replaced in the next 6 months.

I use AI; what I'm tired of is shills and post-apocalyptic prophets.


I use AI, I pay a subscription to Google. I use it for work. I use it for learning. I use it for entertainment.

I am still concerned with how it's going to impact society going forward. The idea of what this is being used for by those with a monopoly on the use of violence is terrifying: https://www.palantir.com/platforms/aip/

Am I a shill or a post-apocalyptic prophet?


Yes, those of us who use AI yet aren't shills or hypers, and who still have our critical-thinking receptors left in our brains, are tired of both sides exaggerating and hyping/dooming.

People would do much better if they just stopped listening so much and started thinking and doing a bit more. But as a lazy person, I definitely understand why it's hard, it requires effort.


"Look at how I use this cool new technology" tends to be much more interesting to me than "this new technology has changed my job and I refuse to use it because I'm afraid".

Obviously it’s far more nuanced than that. I’d say there are several categories where a reasonable person could have reservations (or not) about LLMs:

Copyright issues (related to training data and inference), openness (OSS, model parameters, training data), sovereignty (geopolitically, individually), privacy, deskilling, manipulation (with or without human intent), AGI doom. I have a list but not in front of me right now.


Yes, and those are interesting topics to discuss. "AI is useless and I refuse to use it and hate you if you do" isn't, yet look at most of the replies here.

I hope we’ll eventually reach enough fatigue that either of the two gets 0 comments and we move on

> Yes, and those are interesting topics to discuss. "AI is useless and I refuse to use it and hate you if you do" isn't...

Did you read Mr. Bushell's policy [0], which is linked to by TFA? Here's a very relevant pair of sentences from the document:

  Whilst I abstain from AI usage, I will continue to work with clients and colleagues who choose to use AI themselves. Where necessary I will integrate AI output given by others on the agreement that I am not held accountable for the combined work.
And from the "Ensloppification" article [1], also linked by TFA:

  I’d say [Declan] Chidlow verges towards AI apologism in places but overall writes a rational piece. [2] My key takeaway is to avoid hostility towards individuals†. I don’t believe I’ve ever crossed that line, except the time I attacked you [3] for ruining the web.
  
  † I reserve the right to “punch up” and call individuals like Sam Altman a grifter in clown’s garb.
Based on this information, it doesn't seem that Mr. Bushell will hate anyone for using "AI" tools... unless they're CEO pushers.

Or are you talking in generalities? If you are, then I find the unending stream of hype articles from folks using this quarter's hottest tool to be extremely uninteresting. It's important for folks who object to the LLM hype train to publish and publicize articles as a counterpoint to the prevailing discussion.

As an aside, the LLM hype reminds me of the hype for Kubernetes (which I was personally enmeshed in for a great many years), as well as the Metaverse and various varieties of Blockchain hype (which I was merely a bystander for).

[0] <https://dbushell.com/ai/>

[1] <https://dbushell.com/2025/05/30/ensloppification/>

[2] link in the pull quote being discussed to: <https://vale.rocks/posts/ai-criticism>

[3] inline link to: <https://dbushell.com/2025/05/15/slopaganda/>


That's a very thorough takedown of something the guy you're replying to never said. The end of their comment was "yet look at most of the replies here".

That's an exceedingly unkind summation of the piece in question.

I wasn't talking about the piece in question, which just says "BTW I don't use AI".

> this new technology has changed my job and I refuse to use it because I'm afraid

You're confusing fear with disgust. Nobody is afraid of your slop, we're disgusted by it. You're making a huge sloppy mess everywhere you go and then leaving it for the rest of us to clean up, all while acting like we should be thankful for your contribution.


No, I'm tired of AI being pushed as this amazing way to make everybody go 100% faster, while being able to lay off 90% of the people.

And for some reason the CxO suite and upper management have completely drunk the Kool-Aid.

In the past new technology was adopted sparingly, to figure out whether the juice was worth the squeeze.

However with AI it feels like a lot of places are (trying to go|going) all in, both in their work, and integrating it into the products, regardless of whether it makes sense.

But most importantly, I think pushback is needed because if AI succeeds in the way it is currently advertised and sold, it's a lot more people than 'just' the Software Engineers that are going to lose their jobs.

Which is great for all those companies, who would then have a lot fewer people on payroll.

But on the other hand, a lot of the money spent on these companies is discretionary spending. Guess what's the first thing to be cut when you lose your job?


Aren't we all tired by this pro-AI stuff? Use it if you wanna ruin the planet. Don't use it if you care about maintaining skill.

I just don't really wanna hear about your pro-AI peddling anymore.


AI is now a mainstream technology and well within the area of topics discussed on this board. Are we going to sit around and pretend it’s 2021? It’s like getting annoyed that all we talk about is computers.

The annoying part is that AI is getting shoved down every throat it can find. It's Blockchain and NFTs all over again.

There's nothing wrong with blockchain. In fact, I find this community's swing back to centralized services distasteful.

“No blockchain” != “centralized”

Mastodon ain’t a blockchain and isn’t centralized.


Unlike blockchain, Mastodon isn't decentralized either.

Eh, like everything on the Internet, the anti crowd is becoming more obnoxious than the pro crowd ever was. It has become an identity thing, more than a technical thing, and it always sucks when it devolves into that.

Is wasting massive fucktons of water and electricity an identity thing? Is it identity that RAM now costs 10x what it used to?

What are you going to spend that water and electricity in the US on instead?

It is still a technical thing, though. AI-generated code is outright buggy when it's not mediocre, but the pro-AI crowd is pretending you can guardrail and test-suite your way to good generated code. As if painting a picture in negative space is somehow less work than painting it directly. And that's when you know all the requirements (the picture) upfront.

> Eh, like everything on the Internet, the anti crowd is becoming more obnoxious than the pro crowd ever was.

In your highly objective opinion, of course.


I'm not, I'm tired of hearing about it. If someone is forcing you to read these articles then that sounds like you are in a really shitty situation. Blink twice if you need help.

I find some of it interesting. I'm very interested in understanding why others' experience of using genAI is so vastly different to my own.

(For me it's been as transformational a change as discovering I could do my high school homework on a word processor in the 90s when what I suspect was undiagnosed dyspraxia made writing large volumes of text by hand very painful).

I'm also interested in understanding if the envisaged transformation of developers into orchestrators, supervisors, tastemakers and curators is realistic, desirable or possible. And if that is even the correct mental model.


Sure, me too, but most discourse I've seen is just knee-jerk reaction of the form "AI is entirely useless", which is just basically noise.

I don’t like “AI is useless” as an argument because

* it is basically invalidated by somebody saying “well I find it useful”

* it is easy to believe it’s on a path toward usefulness

OTOH it is worth keeping in mind that we haven’t seen what a profitable AI company looks like. If nothing else this technology has massive potential for enshittification…


I agree with you entirely. On the other hand, I love that nobody will ever be able to take the current open models away from us.

I am not tired by this anti-AI stuff. As a person who uses it in very limited capacity, also as an ML/computer vision developer and researcher with 10 years of commercial experience with it, I want much more anti AI stuff.

Low-quality (low-precision) news, code, marketing, diagnoses, articles, books, food, entertainment (shorts, TikTok), and engineering are, in my opinion, the biggest problem of the 21st century so far.

Low-quality AI usage decisions, low-quality AI marketing, retraining, placement, and investments are accelerating the worst trends even more. It's like Soviet nuclear trains: just because nuclear is powerful and real doesn't mean most of its applications made any sense.

So as a pro-AI person and AI-builder in general, I want more anti-AI-slop content, more pro-discipline opinions.


I think you have hit on something. The problem isn't the tech, it’s the eye of the user.

The same person who ignores a crooked door frame or a CSS overflow now has a "mostly right" button to bring mediocrity to scale. We unfortunately aren't invested in teaching craftsmanship as a society.


Both sides are against slop, what we are arguing about is basically the position that “AI can be used for useful things” vs the “all AI is slop” positions, the latter being based on hasty generalization fallacies (some AI is slop so all AI is slop).

Sorry, but I don't think so.

I think without AI, the effort of producing slop code or art that sort of looks like the real thing at first glance is, let's say, 5% of the effort needed for the real thing that actually works flawlessly. LLMs and diffusion models bring it down to 0.5%.

They are also really good at faking comprehension, which makes distinguishing real expertise from phoney cosplay harder for busy managers, officials, execs, politicians, etc.

So while AI CAN be used for useful things, it very rarely is and it requires more discipline than most people are willing to invest.

Also, the way AI is trained on stolen and random low quality content is deeply disturbing.

So yeah, while I'd like anti-AI rants to be more precise and nuanced, in general AI in 2026 is mostly a misuse of a technology with great potential.


Both sides are against slop made by AI. That both sides can make slop is true, but irrelevant.

Still so many misconceptions about AI on HN, but I guess it’s just how techies are these days.


My CEO sent a company-wide email this week saying "AI use is mandatory for all developers". Until this kind of mandatory bullshit stops I'm happy to see other people fighting the good fight and publicly saying that they want to keep doing a job they actually enjoy.

Many of my coworkers have embraced AI coding and the quality of our product has suffered for it. They deliver bad, hard-to-support software that technically checks some boxes and then rush on to produce more slop. It feels like a regression to the days of measuring LOC as a proxy for productivity.


It seems that poor tech leaders fear they won't be able to move on to their next job if they don't put "implemented AI efficiencies" on their CV now. It's up to us grunts to work out how to actually make it not suck.

What's your exit strategy? If I got a letter like that, I'd either be out switching jobs at the first opportunity, or I'd ignore it until I got fired for refusing to comply, while hoping that disaster strikes before that happens, or maybe just hoping that no one notices.

I'm seeing top-down AI usage pushed by the same types of leaders and companies who love to outsource and are happy with shit shovelled over the wall while devs firefight production bugs forever. It's just a good reminder that they don't care a bit about quality.

I’m actually more sick of hearing about AI like literally all the time in all forms of media. I’m also sick of seeing AI created content which is so obviously low quality and often unchecked and just thrown out into the world.

Also when I hear another human suggest using AI for ____, my perception of them is that they are an unserious person.

So in my opinion AI has had a net-negative effect on the world so far. Reading through this person's AI policy resonates with me. It tells me they are a thoughtful individual who cares about their work and the broader implications of using AI.


> Aren't we all tired of this anti-AI stuff?

Let's do a quick analysis of the amount of money put forth to push AI:

> OpenAI has raised a total of $57.9B over 9 funding rounds

> Groq has raised a total of $1.75 billion as of September, 2025

Well, we could go on, but I think that's probably a good enough start.

I looked into it, but I wasn't able to find information on funding rounds that David Bushell had undergone for his anti-AI agenda. I assume he didn't get paid for it, so I guess it's about $0.

Meanwhile:

- My mobile phone keyboard has "AI"

- Gmail has "AI". Google docs has "AI". At one point every app was becoming a chat app, then a TikTok clone. Now every app is a ChatGPT or Gemini frontend.

- I'm using a fork of Firefox that removes most of the junk, and there's still some "AI" in the form of Link Preview summaries.

- Windows has "AI". Notepad has "AI". MS Paint has "AI".

- GitHub stuck an AI button in place of where the notifications button was, then, presumably after being called every single slur imaginable about 50000 times per day, moved it thirty or so pixels over and added about six more AI buttons to the UI. They have a mildly useful AI code review feature, but it's surprisingly half-baked considering how heavily it is marketed. And I'm not even talking about the actual models being limited, the integration itself is lame. I still consider it mildly useful for catching typos, but that is not worth several billion dollars of investment.

- Sometimes when I log into Hacker News, more than half of the posts are about AI. Sometimes you get bored of it, so you start trying to look at entries that are not overtly about AI, but find that most of those are actually also about AI, and if not specifically about AI, goes on a 20 minute tangent about AI at some point.

- Every day every chat every TV program every person online has been talking about AI this AI that for literally the past couple of years. Literally.

- I find a new open source project. Looks good at first. Start to get excited. Dig deeper, things start to look "off". It's not as mature or finished as it looks. The README has a "Directory Structure" listing for some odd reason. There's a diagram of the architecture in a fixed-width font, but the whitespace is misaligned on some lines. There are comments in the code that reference things like "but the user requested..." as if the code wasn't written by the user. Because it wasn't, and worse, it wasn't read by them either. They posted it as if they wrote it, making no mention at all that it was prompts they didn't read, wasting everyone's time with half-baked crapware.

And you're tired of anti-AI sentiment? Well God damn, allow me to Stable Diffusion generate the world's smallest violin and synthesize a song to play on it using OpenAI Jukebox.

I'm not really strictly against AI entirely, but it is the most overhyped technology in human history.


> Sometimes when I log into Hacker News, more than half of the posts are about AI.

And I don't ever see it under a fifth, anymore. There is a Hell of a marketing push going on, and it's genuinely hard to tell the difference between the AI true believers and the marketing bots.


My Samsung TV from 2013 is a smart TV with AI voice control features.

My scanner from 2003 has OCR.

Gaming has a very rich history with AI innovations. 1996's Creatures is a standout example.

AI has always been everywhere around you. AI predates me and you. The reason you're hearing about it now is because of the capability increase brought about by the lower cost of greater scale. But it's still in the uncanny valley. It is still flawed. To paraphrase John McCarthy, that's what makes it AI [^1].

I know you're fatigued from hearing about AI for the last three years. But I have been hearing about it for decades with the same magnitude of excitement and dismissal from pro-AI and anti-AI critics. Alan Turing laid the foundations for the technological singularity in 1950. Discourse has accelerated since the early '80s and '90s through writers like Vernor Vinge and computer scientists like Ray Kurzweil.

I encourage you to pay attention. Not to recent hype, but to what is actually happening. Steady innovation as always. That you were blindsided by it is curious.

[^1]: https://en.wikipedia.org/wiki/AI_effect


Sorry, but what you are talking about is AI (old). What I'm talking about is "AI" (new). It's different. Video games had AI (old). Notepad in 2026 has "AI" (new). Very different.

I could explain the difference but it's beyond my pay grade.


You deliberately read an article entitled “You can’t pay me to prompt” then complained about having to hear about anti-AI blog posts?

They said they're tired of anti-AI commentary, not that they're tired of complaining about anti-AI commentary!

Well played, sir. Well played.

We're in this in-between phase where we gradually all start to use AI. There is no escaping.

Resistance is futile; you will be assimilated?

Is it so hard to understand why people are reacting against this argument?


In fact you are right, there is no escape from this assimilation, at least I do not see how. And the outcome might be worse than becoming the Borg. Nobody can tell right now.

There's resistance but on the other hand there was resistance against light bulbs, trains with engines, automatic press, phones, television, a global internet,...


> There's resistance but on the other hand there was resistance against light bulbs, trains with engines, automatic press, phones, television, a global internet,...

There was also resistance against fascism, slavery, Ponzi schemes, the privatisation of public goods, the devaluation of the Humanities, ...


I don't see fascism, slavery, etc. as part of progress; they are more of a political/ethical choice. AI is the next step in human progress, and that is why I see its use as inevitable. Of course there still need to be checks and balances, just like with free trade, free speech, capitalism, private property, etc., to keep things fair and balanced. But it is never perfect.

Very tired

> I just don't really want to hear about your personal opinion on it any more.

And I don't want to hear about how the world of software engineering has been revolutionized because you always hated programming with a passion, but can now instead pay $200 to have Claude bring your groundbreaking B2B SaaS Todo app idea to life, yet that's basically all I hear about in any tech discussion space.

You should ask your AI assistant to explain to you why people would go out of their way to take a stand against this.


You should ask an AI why this impression is very wrong.

> Aren't we all tired by this anti-AI stuff?

It's fine to be tired of this. What is not fine is pretending your beliefs/feelings represent everybody else's.

No one is forcing you to read the article. He is as free to write what he wants as you are to complain about it. Balanced, like all things should be.


Nah, I'm tired of AI dominating 90% of the posts, and of the slop machine. People who use AI can't shut up about it.

There's a good reason for that. Because other AI users are listening. This is like choosing a car or a work tool, except they meaningfully progress every 6 months (more often if you restrict yourself to local). So you need to get an impression on what to use next before switching, unless you want to review every single one yourself.

There are entire sites dedicated to car reviews. This is a hackers website. Makes sense that the most evolving tool for the job is most discussed.

What else is really changing? CSS added a couple new properties? C++ new standard still didn't add modules (but the year changed!)?


You should ask yourself why nothing interesting is happening anymore. There's a reason for that.

Definitely people who hate AI can’t shut up about it. 90% of the comments on HN seem to just be people hating on AI.

Everyone else is just busy using it to get work done.


Honestly, it feels different to me. I have the distinct impression that the pro-AI side is much more desperate to normalize usage and have AI-based achievements recognized as equivalent, rooted in fears of inadequacy. It's about hoping everyone stops with the "you didn't make that, the AI did".

“One side has the impression and/or believes that the other side is more vocal and less sincere” is as old humanity and hasn’t changed with AI.

Maybe they should stop trying to gavage us with it, then.

Frankly, I never understand this usage of "we". Who is "we"? An honest post would be "I'm tired of this anti-AI stuff". (And I feel you as I'm as bored with "look what my claude produced" posts.)

Unfortunately, I didn't manage to figure out how to make their hardware work without a Home Assistant installation. I'd really love to do that, so if anyone has any info on how their protocol works, please do tell.

I looked at their Wyoming docs online but couldn't really see how to even let it find the server, and the ESPHome firmware it runs offered similarly few hints.


Can't really fault them when this exists:

https://github.com/anthropics/claude-code


What even is this repo? It's very deceptive.

Issue tracker for submitting bug reports that no one ever reads or responds to.

Now that's not fair, I'm sure they have Claude go through and ignore the reports.

Unironically, yes. If you file a bug report, expect a Claude bot to mark it as a duplicate of other issues already reported and close it. Upon investigation you will find either:

(1) a circular chain of duplicate reports, all closed; or

(2) a game of telephone where each issue is subtly different from the next, eventually reaching an issue that has nothing at all to do with yours.

At no point along the way will you encounter an actual human from Anthropic.


By the way, I reverse engineered the Claude Code binary and started sharing different code snippets (on Twitter/Bluesky/Mastodon/Threads). There's a lot of code there, so I'm looking for requests in terms of which part of the code to share and analyze what it's doing. One of the requests I got was about the LSP functionality in CC. Anything else you would find interesting to explore there?

I'll post the whole thing in a Github repo too at some point, but it's taking a while to prettify the code, so it looks more natural :-)


Not only would this violate the ToS, but a newer native version of Claude Code also precompiles most JS source files into JavaScriptCore's internal bytecode format, so reverse engineering will soon become much more annoying, if not harder.

Claude Code is very good at reverse engineering. I reverse engineer Apple products on my MacBook all the time to debug issues.

Also some WASM there too... though WASM is mostly limited to Tree Sitter for language parsing. Not touching those in phase 1 :-)

> Not only this would violate the ToS

What specific parts of the ToS does "sharing different code snippets" violate? Not that I don't believe you, just curious about the specifics as it seems like you've already dug through it.


Using GitHub as an issue tracker for proprietary software should be prohibited. Not that it would be, these days.

Codeberg at least has some integrity around such things.


That must be the worst repo I have ever seen.

OpenCode is amazing, though.

I switched to OpenCode a few weeks ago. What a pleasant experience. I can finally resume subagents (which has been broken in CC for weeks), copy the source of the assistant's output (even over SSH), have different main agents, have subagents call subagents... Beautiful.

Especially that RCE!

A new one or one previously patched?
