> For example, .js files are supported for loading locally
Technically yes, but the last time I tried that it gave me CORS errors. You can start your browser with CORS disabled, but that makes sharing your .html file impossible. So back we go to inlining stuff :)
The whole issue is why I stopped using in-editor LLMs and won't use agents for "real" work. I can't be sure of what context they want to grab. With the good ol' copy-paste into the web UI I can be 100% sure what $TECHCORP sees, and I can integrate whatever it spits out by hand, which acts as the first round of "code review". (Much like you would read over Stack Overflow code back in the day.)
If you want to build some greenfield auxiliary tools, fine, agents make sense. But I find that even Gemini's web UI has gotten good enough to create multiple files instead of putting everything in one file.
This way I also don't get locked in to any provider.
The leakage issue is real. Before there was a way to use "GPT Pro" models on enterprise accounts, I had a separate work-sponsored Pro-tier account. First thing I did was disable "improve models for everyone." One day I look and, wouldn't you know it, it had somehow been enabled again. I had to report the situation to security.
As far as lock-in, though, that's been much less of a problem. It's insanely easy to switch because these tools are largely interchangeable. Yes, this project is currently built around Claude Code, but that's probably a one-hour spike away from flexibility.
I actually think the _lack_ of lock-in is the single biggest threat to the hyperscalers. The technology can be perfectly transformative and still not profitable, especially given the current business model. I have Qwen models running on my Mac Studio that give frontier models a run for their money on many tasks. And I literally bought this hardware in a shopping mall.
This fits them, I would say. I was, at one point, chasing the dragon of software minimalism. Y'know, using lynx to browse the web, using suckless software and so on. I used KISS Linux for a while and even tried to make a package for suckless's slock (which, iirc, was only accepted after someone from the KISS team basically redid my build scripts). So I kinda see myself as a fan of dylan, and he was a great influence on my formative years. (Edit: "formative years" apparently means 0-8. I meant more like 16-20 - my bad.)
What always struck me about dylan araps's software was that the minimalism didn't come from a lack of scope or complexity, but rather from the approach of using as much of the tools that were "already there" as possible (at least that's how I interpret it - I might be wrong).
The pure bash bible describes how to do common tasks in pure bash - tasks that were usually done with external tools like sed and awk. Later came the pure sh bible, doing the same for POSIX sh and thereby shedding the dependency on bashisms.
To me this represented a chase to go "deeper".
And this part clearly seems to still be a driving force to this day. Farming and producing olive oil and wine by hand, no chemicals, no bullshit - that sounds like dylan araps alright. Looking at the website you can also see this spirit everywhere. Go ahead and disable JS on https://wild.gr/wine - the image slider still works. It uses inputs and CSS transforms. That makes the markup ever so slightly more complicated, and future changes ever so slightly more involved (unless the code is generated). But let's not kid ourselves, how often are those kinds of web elements updated?
It is once again a project that excels at "using what's already there", and I personally really like that. Even though I am not rocking KISS Linux and DWM anymore, this way of thinking is still with me to this day, and I believe it was taught to me by dylan araps. For that I am thankful!
I agree with both of your points, since I use LLMs for things I am not good at and don't give a single poop about. The only things I've done with LLMs are these three examples from the last two years:
- Some "temporary" tool I built years ago as a pareto-style workaround broke. (As temporary tools do after some years). Its basically a wrapper that calls a bunch of XSLs on a bmecat.xml every 3-6 months. I did not care to learn XSL back then and I dont care to do it now. Its arcane and non-universal - some stuff only works with certain XSL processors. I asked the LLM to fix stuff 20 times and eventually it got it. Probably got that stuff off my back another couple years.
- Some third-party tool we use has a timer feature with a bug where it sets a cookie every time you see a timer, once per timer (for whatever reason... the timers are set to end at a certain time and there is no reason to attach them to a user). The cookies have a lifetime of one year. We run time-limited promotions twice a week, so that means two cookies a week for no reason. Eventually our WAF got triggered because it has a rule to block requests when the headers are crazy long - which they were, because of the cookies. I asked an LLM to give me a script that clears the cookie when it's older than 7 days, because I remember the last time I hacked together cookie stuff it also felt very "wtf" in a JavaScript kinda way and I did not care to relive that pain. This was in place for some weeks, until the third-party tool fixed the cookie lifetime.
- We list products on a marketplace. The marketplace has their own category system. We have our own category system. Frankly, theirs kinda sucks for our use case because it lumps a lot of stuff together, but we needed to "translate" the categories anyway. So I exported all the unique "breadcrumbs" we have and gave those, plus the categories from the marketplace, to an LLM one by one by looping through the list (roughly sketched after this list). I then had an apprentice from another dept., who has vastly more product knowledge than me, look over that list in a day. The alternative would have been to have said apprentice do that stuff by hand, which is a task I would have personally HATED, so I tried to lessen the burden for them.
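For the XSL one, a minimal sketch of what such a wrapper can look like in Python with lxml (the stylesheet names are made up, and note that lxml only speaks XSLT 1.0 - which is exactly the "only works with certain processors" pain):

    # hypothetical wrapper: run a chain of stylesheets over a BMEcat export
    from lxml import etree

    STYLESHEETS = ["strip_fields.xsl", "map_units.xsl", "final_layout.xsl"]  # made-up names

    doc = etree.parse("bmecat.xml")
    for path in STYLESHEETS:
        transform = etree.XSLT(etree.parse(path))
        doc = transform(doc)              # feed each step's output into the next
        for entry in transform.error_log:
            print(entry)                  # surface XSLT warnings/errors

    doc.write("bmecat_out.xml", xml_declaration=True, encoding="utf-8", pretty_print=True)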
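And the category mapping was roughly this kind of loop (sketched with the OpenAI Python client; the model, file names and prompt are placeholders for whatever you actually have):

    # hypothetical breadcrumb -> marketplace-category mapping loop
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    marketplace_categories = open("marketplace_categories.txt").read()

    with open("our_breadcrumbs.txt") as src, open("mapping.csv", "w") as out:
        for breadcrumb in (line.strip() for line in src if line.strip()):
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model
                messages=[
                    {"role": "system",
                     "content": "Answer with exactly one of these marketplace "
                                "categories:\n" + marketplace_categories},
                    {"role": "user", "content": breadcrumb},
                ],
            )
            out.write(f"{breadcrumb};{resp.choices[0].message.content.strip()}\n")

The resulting list is what the apprentice then reviewed by hand.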
All of these examples ran on the free tier of whatever I used.
We also use a vector search at work: 300,000 products, with weekly updates of the vector DB.
We pay 250 €/mo for all of the Qdrant instances across all environments, and like 5-10 € in OpenAI tokens. And we can easily switch whatever embedding model we use at any time. We can even self-host a model.
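For anyone curious, the whole pipeline is basically this (a rough sketch with qdrant-client and the OpenAI embeddings endpoint; the collection name, model and payload fields are made up - swapping the embedding model really is just changing one string and the vector size):

    # hypothetical weekly rebuild of the product vector index
    from openai import OpenAI
    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, PointStruct, VectorParams

    openai_client = OpenAI()
    qdrant = QdrantClient(url="http://localhost:6333")

    products = [  # stand-in for the real product feed
        {"id": 1, "text": "Blue widget, pack of 10"},
        {"id": 2, "text": "Red widget, single"},
    ]

    embeddings = openai_client.embeddings.create(
        model="text-embedding-3-small",  # the swappable part
        input=[p["text"] for p in products],
    ).data

    qdrant.recreate_collection(  # wipe and rebuild, matching a weekly refresh
        collection_name="products",
        vectors_config=VectorParams(size=len(embeddings[0].embedding), distance=Distance.COSINE),
    )
    qdrant.upsert(
        collection_name="products",
        points=[
            PointStruct(id=p["id"], vector=e.embedding, payload={"text": p["text"]})
            for p, e in zip(products, embeddings)
        ],
    )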
I always thought the whole argument was about explicitly using the em dash and/or en dash, aka — and –.
Because while people OBVIOUSLY use dashes in writing, humans usually fall back on the (technically incorrect) hyphen, aka the "minus symbol" - because that's what's available on the keyboard and basically no one cares.
Seems like, in the biggest game of telephone called the internet, this has devolved into "using any form of dash = AI".
The funniest thing I see is people going "Eww, you used AI for this and it's bad because of that - I can tell because I used this other AI service, which said what you wrote was 90% AI", completely failing to grasp the irony.
- Barely literate native English speakers not comprehending even minimally sophisticated grammatical constructs.
- Windows-centric people not understanding that you can trivially type em-dash (well, en-dash, but people don’t understand the difference either) on Mac by typing - twice.
Fair. I was probably just projecting. I can't even figure out when to use a comma in my native language. So caring about which type of dash was used feels overly sophisticated to me - because I don't care myself.
Do you have any resources here? The /r/seo subreddit seems very superficial coming from a web agency background, so it's hard to tell legit cases from obvious oversights. Often people make a post there describing a legit-sounding issue just to let it shine through that they are essentially doing SEO spam.
It's something you'll experience if you publish many sites over time.
Can't point to any definitive sources; many of the reputable search-related blogs are now just Google shills.
Or if you search for content which you know exists on the web and it suddenly takes an unusual amount of coaxing (e.g. half a sentence in quotes, if you remember it correctly word for word) before it brings up the page you're looking for.
Like, isn't this a well-known thing that happens constantly, whether you're a user or run any websites? Relying on search engine ranking algorithms is Russian roulette for businesses, sadly - at least unless you outbid the competition to show your own page as an advertisement when someone searches your business's name.
What "changes in methodology applied by Google in September" are you referring to? There surely is a public announcement that can be shared? Most curious to hear as a shop I built is experiencing massive issues since august / september 2025
I ran into the same thing! My site still isn't indexed and I would REALLY like to not change the URL (it's a shop and the URL is printed on stuff) - redirects are my last resort.
But basically what happened: In August 2025 we finished the first working version of our shop. After some weeks I wanted to accelerate indexing, because only ~50 of our pages were indexed, so I submitted the sitemap - and everything got de-indexed within days. For the longest time I thought it was content quality, because we sell niche trading cards and the descriptions are all one-liners I made in Excel ("This is $cardname from $set for your collection or deck!"). And because it's single trading cards, we have 7,000+ products that are very similar. (We did do all the product images ourselves - I thought Google would like this, but alas.)
But later we added binders and whole sets and took a lot of care with their product data. The frontpage also got a massive overhaul - no shot. Not one page in the index. We still get traffic from marketplaces and our older non-shop site. The shop itself lives on a subdomain (shop.myoldsite.com). The normal site also has a sitemap, but that one was submitted in 2022. I later rewrote how my sitemaps were generated and deleted the old ones in Search Console, hoping this would help. It did not. (The old sitemap was generated by the shop system and was very large. Some forums mentioned that it's better to create a chunked sitemap, so I made a script that creates lists with 1000 products at a time as well as an index for them - roughly sketched below.)
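For reference, the chunking script is nothing fancy - roughly this (the domain and the URL source are placeholders):

    # hypothetical chunked-sitemap generator: 1000 URLs per file plus an index
    from pathlib import Path

    CHUNK_SIZE = 1000
    BASE = "https://shop.myoldsite.com"                   # placeholder domain
    urls = [f"{BASE}/product/{i}" for i in range(7001)]   # stand-in for the real product URLs

    out_dir = Path("sitemaps")
    out_dir.mkdir(exist_ok=True)

    chunk_files = []
    for n, start in enumerate(range(0, len(urls), CHUNK_SIZE), 1):
        name = f"sitemap-products-{n}.xml"
        body = "\n".join(f"  <url><loc>{u}</loc></url>" for u in urls[start:start + CHUNK_SIZE])
        (out_dir / name).write_text(
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{body}\n</urlset>\n"
        )
        chunk_files.append(name)

    index_body = "\n".join(f"  <sitemap><loc>{BASE}/{n}</loc></sitemap>" for n in chunk_files)
    (out_dir / "sitemap-index.xml").write_text(
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{index_body}\n</sitemapindex>\n"
    )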
Later observations are:
- Both sitemaps I deleted in GSC are still getting crawled and are STILL THERE. You can't see them in the overview, but if you have the old links they still appear as normal.
- We eventually started submitting product data to Google Merchant Center as well. It works 100% fine and our products are getting found and bought. The clicks even still show up in Search Console!!!! So I have a shop with 0 indexed pages in GSC that gets clicks every day. WTHeck?
So like... I don't even know anymore. Maybe we also have to restart like the person in the blog did, move the shop to a new domain, and NEVER give Google a sitemap. If I really go that route I will probably delete the cronjob that creates the sitemap, in case Google finds it by itself. But also, like, what the heck? I worked at a web agency for 5 years and created a new webpage about every 2-8 weeks, so I launched roughly 50-70 webpages and shops, and I NEVER saw this happen. Is it an AI hallucinating? Is it anti-spam gone too far? Is it a straight-up bug that they don't see? Who knows. I don't.
(Good article though, and I hope some other people chime in and Googlers browsing HN see this stuff.)
Let's go one level deeper: what's the reason the vim designer's keyboard had hjkl as arrow keys? Because it made sense for them to be on the home row. I still use arrow keys though.