ofalkaed's comments

I finally started reading The Book of Disquiet, which I have had sitting about for years. Quite surprised by it; it has a far greater range than people make it out to have, and some amazing humor. It is turning out to be one of the most fascinating books I have read.


Style is pretty much a set of neurodivergent tics. Having a cohesive style requires understanding them so you don't argue with yourself mid-sentence, unless you have a good reason to do so that works toward your style. The absence of those tics is the absence of identifiable style, and essentially AI.


You just summarized my entire post better than I did.

That distinction between "unintentional mess" and "intentional style" is exactly what I'm trying to figure out. I think I've been treating my tics as bugs for so long that I forgot they were actually the only unique data points I had.


Admittedly, I skimmed over much of your post; it is a bit roundabout and takes the scenic route (I also have a tendency to take the scenic route). If you want to use AI in your writing but maintain your style, perhaps it would be best to use AI to help structure your post instead of structuring your post for AI to write it. This will probably save you just as much time while keeping things in your own voice.

Structure is where the vast majority have issues, not style. Their neurodivergent tics are going to come out regardless of how much they fight them, unless they invest a good deal of time studying style in general and their own style in particular.


Fair point on the skimming. I definitely let the scenic route take over in this one; ironically, getting lost in the weeds is how I found the voice!

I love the idea of using it for structure only. My fear was that if AI touches the structure, it inevitably sanitizes the voice too. But there's probably context I can provide to avoid that.


Give the AI the topic you want to write about and a few of the supporting ideas you want to touch on, ask it to elaborate on them, bounce back and forth with it, and question all of its suggestions by asking it about them directly. Take what you like. Repeat a few times until you are happy with the result. ChatGPT seems better at this sort of thing than Claude in my experience; it is better at picking up on your style of developing an idea and working within it, as long as you remember to question it when it moves away from that style and remind it of your way of thinking about things.

10 or 15 minutes should give you a basic outline you can ask it to flesh out into a more complete one. You can do things like tell it to emphasize a point and build the outline around that point, or even tell it to reach that point in a roundabout fashion which only emphasizes it at the end, or anything else that suits your way of developing an idea. After that you just need to write it, and if you do things right, that part will be easy.

AI is a great tool for this stuff, and the big models are smart enough to learn your style and intents, becoming more mentor than shortcut. But like a mentor, you need to put in some time with them so they can figure you out and you can figure them out.


I don't think your passion is programming if AI erodes it, your passion was the external validation which programming provided. TFA kind of touches on this but I think dances around it trying to find a way to not admit it. External validation is nice but absolutely destructive as a passion.


I learned linux by using Arch back in the days when pacman -Syu was almost certain to break something, and there was a good chance it would break something unique to your install. This was also back in the days when most people were not connected to the internet 24/7 and many did not have internet at all. I updated when I went to the library, which was generally a weekly thing, but sometimes it would be a month or two, and the system breakage that resulted was rococo. Something was lost by Arch becoming stable and not breaking regularly; the breakage was what drove the wiki, and fixing all the things pacman broke taught you a great deal and taught you quickly. Stability is not all that it is cracked up to be; it has its uses but is not the solution to everything.


>>Something was lost by Arch becoming stable and not breaking regularly

Only a Linux user would consider the instability of a Linux distro to be a good thing.


If your goal is to learn how it works this was great, a new challenge every day.

Perhaps we need a chaosmonkey Linux distro.

Also FreeBSD did this well recently, migrating libc and libsys in the wrong order so you have no kernel API. That was fun.


I was hit with that on a remote box in another country! Should have researched the upgrade more before I did it, but it is a personal thing and not work.

However, my IPMI motherboard and FreeBSD's integrated ZFS boot environments might be considered cheating...


You don't become a mechanic without fixing broken down cars. So in that sense, the shittier the car, the better.

My Linux story is similar. In retrospect I learned it on hard mode, because Gentoo was the first distro I used (as in really used). And Gentoo, especially back around 2004 or so, really gave you fully automatic, armour-piercing, double-barreled footguns.


Gentoo footguns were (are) the best!

That you could always just boot from the CD and start again was nice. I think I reinstalled 4-5 times the "first time" before I got it where I wanted to be.


If you choose not to upgrade, it is stable. There is no QA department for Linux (or Windows; they were let go around 2015), so someone has to endure the instability if there is to be any progress. We should all thank those who run nascent software so those who run stable distros can have stability.


It is the sort of mentality required to reach the place in computing which linux has. There is a decent chance you have linux running on something you own even if you do not run it on your computer, and even if you don't, you do use the internet.


I've contributed 32 edits (1 new page) in the past 10 years, so despite being stable, there are still many things to add and fix!

Sadly, the edit volume will likely drop as LLMs are now the preferred source for technical Linux info/everything...


At the same time, I suspect resources like the Arch Wiki are largely responsible for how good AI is at fixing this kind of stuff. So I'm hoping that somehow people realize this and can continue contributing good human-written content (in general).


> So I'm hoping that somehow people realize this and can continue contributing good human-written content (in general).

AI walled-gardens break the feedback loop: authors seeing view-counts and seeing "[Solved] thank you!" messages helps morale.


Definitely, being an unpaid LLM trainer for big corporations while nobody actually reads your work is not very encouraging. I wonder what the future will bring.


I do think we will, at some point, face a knowledge crisis because nobody will be willing to upload the new knowledge to the internet.

Then the LLM companies will notice, and they’ll start to create their own updated private training data.

But that may be a new centralization of knowledge, which was already the case before the internet. I wonder if we are heading toward some sort of equilibrium between LLMs and the web, or toward some sort of centralization/decentralization cycles.

I also have some hope that LLMs will annihilate the commercial web of "generic" content, and that may bring back the old web, where the point was the human behind the content (be it a web page or a discussion). But that is what I'd like, not a forecast.


I wouldn't be surprised if LLM companies end up sponsoring certain platforms / news sites, in exchange for being able to use their content of course.

The problem with LLMs is that a single token (or even a single book) isn't really worth that much. It's not like human writing, where we'll pay far more for "Harry Potter" and "The Art of Computer Programming" than for some romance trash with three reads on Kindle.


It's happening: https://wikimediafoundation.org/news/2026/01/15/wikipedia-ce... (although editors paid directly by the Foundation are not the majority of Wikipedia).


This is perhaps true from the "language model" point of view, but surely from the "knowledge" point of view an LLM is prioritising a few "correct" data sources?

I wonder about this a lot when I ask LLMs niche technical questions. Often there is only one canonical source of truth. Surely it's somehow internally prioritising the official documentation? Or is it querying the documentation in the background and inserting it into the context window?


LLM companies already do this. Both Reddit and Stack Overflow turned to shit (but much more profitable shit) when they sold their archives to the AI companies for lots of money.


I kind of fear the same. At the same time I wonder if structured information will gain usefulness. Something like man pages is already a great resource for humans, but at the same time could be used for autocompletion and for LLMs. Maybe not in the current format, but in the same vein.

But longer-form tutorials, or even books with background, might suffer more. I wonder how big the market for nice books on IT topics will be in the future. A wiki is probably in the worst place: it cannot be changed with an MR the way man pages could be, and you do not get the same reward compared to publishing a book.


> nobody will be willing to upload the new knowledge to the internet

I think there will be differences based on how centralized the repository of knowledge is. Even if textbooks and wikis largely die out, I imagine individuals such as myself will continue to keep brief topic specific "cookbook" style collections for purely personal benefit. There's no reason to be averse to publishing such things to github or the like and LLMs are fantastic at indexing and integrating disparate data sources.

Historically sorting through 10k different personal diaries for relevant entries would have been prohibitive but it seems to me that is no longer the case.


Absolutely. Even though I don't use arch (btw), the wiki is still a fantastic configuration reference for many packages: systemd, acpi, sensors, and NetworkManager are all ones I've used it for fairly recently.

You see it referenced everywhere as a fantastic documentation source. I’d love seeing that if I were a contributor


Also, if it's not correct, someone else will edit it. But with an LLM it's just the LLM and you, and if you correct it, it's not like the fix will automatically reach all the other users.


I just installed Arch (EndeavourOS) and the LLM did not help. The problems were new and the LLM's answers were out of date. I wasted about 5 hours. Arch's wiki and EndeavourOS's forums were much better. YMMV.


They may be preferred, but in a lot of cases they’re pretty terrible.

I had a bit of a heated debate with ChatGPT about the best way to restore a strange, broken mdadm setup. It was very confidently wrong, and battled its point until I posted terminal output.

Sometimes I feel it’s learnt from the more belligerent side of OSS maintenance!


Why would you bother arguing with an LLM? If you know the answer, just walk away and have a better day. It is not like it will learn from your interaction.


Maybe GP knew the proposed solution couldn't have worked, without knowing the actual solution?


The Gell-Mann amnesia effect? If you can't trust an LLM to assist with troubleshooting in a domain you are very familiar with (mdadm), then why trust it in one you are less familiar with, such as zfs or k8s?


I think my position is that I don't trust it carte blanche at all, but if I know what I'm doing, then it's helpful as a doc source. In this case, I wasn't too familiar with the specific cli tools, flags, etc., and it was a shortcut.


Because I wasn’t 100% sure of the solution myself, and wanted to talk through how to actually implement the theory of what I wanted to do. I knew that what it was suggesting was 100% wrong, but I was not sure of the best path.


Arguing with an LLM is silly because you’re dealing with two adversarial effects at once:

- As the context window grows, the LLM will become less intelligent [1]

- Once your conversation takes a bad turn, you have effectively “poisoned” the context window, and are asking an algorithm to predict the likely continuation of text that is itself incorrect [2]. (It emulating the “belligerent side of OSS maintenance” is probably quite true!)

If you detect or suspect misunderstanding from an LLM, it is almost always best to remove the inaccuracies and try again. (You could, for example, ask your question again in a new chat, but include your terminal output + clarifications to get ahead of the misunderstanding, similar to how you might ask a fresh Stack Overflow question).

(It’s also a lot less fun to argue with an LLM, because there’s no audience like there is in the comments section with which to validate your rhetorical superiority!)

[1] https://news.ycombinator.com/item?id=44564248
[2] https://news.ycombinator.com/item?id=43991256


I knew roughly the right path, and wanted guidance on that (cli guidance specifically). It was refusing to help, saying it wouldn't work! It did work…


> It was very confidently wrong, and battled its point

The "good" news is a lot of newer LLMs are grovelling, obsequious yes-men.


I think it all comes down to curiosity, and I dare think that that's one of the main reasons why someone will be using Arch instead of the plethora of other distros.

Now, granted, I don't usually ask an LLM for help whenever I have an issue, so I may be missing something, but to me, the workflow is "I have an issue. What do I do?", and you get an answer: "do this". Maybe if you just want stuff to work well enough out of the box while minimizing time doing research, you'll just pick something other than Arch in the first place and be on your merry way.

For me, typically, I just want to fix an annoyance rather than a showstopping problem. And, for that, the Arch Wiki has a tremendous value. I'll look up the subject, and then go read the related pages. This will more often than not open my eyes to different possibilities I hadn't thought about, sometimes even for unrelated things.

As an example, I was looking something up about my mouse the other day and ended up reading about thermal management on my new-to-me ThinkPad (never had one before).


Depends on how AI-pilled you are. I set Claude loose on my terminal and just have it fix shit for me. My python versions got all tuckered and it did it instead of me having to fuck around with that noise.


I'm not there yet. Not on my work system anyway.

Seen too many batshit answers from chatgpt when I know the answer but don't remember the exact command.


I learned linux on debian first. The xserver (x11 or whatever its old name was) was not working, so I had to use the command line. I had a short debian handbook and worked through it slowly. Before that I had SUSE and a SUSE handbook with a GUI, which was totally useless. I then went on to use knoppix, kanotix, sidux, and GoboLinux, and eventually ended up with slackware. These days I tend to use manjaro, despite the drawback that is systemd. Manjaro kind of feels like a mix between arch and slackware. (I compile from source, so for the most part I only really need a base, excluding a few things; I tend to disable most systemd unit files as I don't really need anything systemd offers. Sadly, distributions such as slackware have kind of died; they are not dead, but being too slow in updates, with no stable releases in years, is the hallmark of deadness.)


Slackware only does long term stable releases but Slackware current is a rolling release that does not really feel like a rolling release because of how Slackware provides a full and complete system as the base system. I avoided Slackware current for years because I did not want to deal with the hassle of rolling release, but it is almost identical in experience to using the release.


I actually got a lot of Linux knowledge from the SUSE handbooks, back when I was still buying a boxed copy in the book store because of the slow internet connections of the early 2000s. For Linux content nowadays, the Arch wiki is still one of my most used resources, although I have not used Arch in years.


> The xserver (x11 or what as its old name)

It was XFree86 until around the mid-00s, after which the X.org fork took over.


> Arch becoming stable and not breaking regularly

I believe this to be the entire ecosystem, not just Arch. It's been a long while since something like moving to 64bit happened. Or swapping out init systems.


Other good examples: LinuxThreads to NPTL (for providing pthreads), XFree86 to Xorg.

I was using Gentoo at the time, which meant recompiling the world (in the first case) or everything GUI (in the second case). With a strict order of operations to not brick your system. Back then, before Arch existed (or at least before it was well known), the Gentoo wiki was known to be a really good resource. At some point it languished and the Arch wiki became the goto.

(I haven't used Gentoo in well over a decade at this point, but the Arch wiki is useful whether I'm using Arch at home or other distros at work.)


I'm on Gentoo without the precompiled packages, except for very large applications. Gentoo wiki is not a match for Arch wiki for its sheer content and organization. But Gentoo wiki contains some stuff that Arch wiki doesn't. For example, what kernel features are needed for a certain application and functionality, and how a setup differs between systemd and other inits. I find both wikis quite informative in combination.


Arch was young in those days, but I think fairly well known? We were quite vocal, opinionated, and interjecting our views everywhere by the time of the XFree86/Xorg switch. Perhaps it is just my view from being a part of it, but I remember encountering the Arch reputation everywhere I went. Or maybe it is just the nostalgia influencing me.


Could be. I don't remember Arch being on my radar at that point though. But it wasn't long after I switched from Gentoo to Arch (and then Debian for a decade due to lack of stability before going back to Arch 7 years ago or so).

A few years before the Xorg thing there was also the 2.4 to 2.6 kernel switchover. I think I maybe was using Slackware at that point, and I remember building my own kernel to try out the new shiny thing. You also had to update some userspace packages if I remember correctly: things like modprobe/insmod at the very least.


2.6 was also the switch from OSS to ALSA, which caused some fun; ALSA really was not ready for prime time.


Oh yeah, you just unlocked a forgotten memory. I was actually lucky there, I had a SoundBlaster Live 5.1 which worked just fine on ALSA (hardware mixing even worked out of the box). But I remember lots of other people complaining on IRC about it.


And once ALSA was working pretty well for me, PulseAudio came too soon.


Most distros were stable well before Arch, because Arch worked out most of the bugs for them and documented them on its wiki. Arch and other bleeding-edge distros are still a big part of the stability of linux; even if they don't break all that often anymore, they find a lot of the edge cases before they become issues for the big distros. In 2005 it was not difficult to have a stable linux install; you may have had to hunt a bit for the right hardware, and it may have taken a while to get things working, but once things were working they tended to stay stable. I can only think of one time Slackware broke things for me since I started using it around 2005: its taking on PulseAudio caused me some headaches, but I use Slackware for audio work and am not their target, so this is to be expected. Crux was rock solid for me into the 10s; nearly a decade of use and not even a hiccup.


> back in the days when pacman -Syu was almost certain to break something and there was a good chance it would break something unique to your install

This was still the case when I switched to arch in like 2016 lol


Not to mention that they broke EAC only a few years ago.


About a year ago, when I installed Arch, my first Linux distro, most things were great. However, while testing some commands in pacman, I noticed a bunch of Python-related packages (even though I hadn't installed them myself). Since I needed some disk space, I figured deleting them wouldn't hurt. Unfortunately, I couldn't boot again. I guess the Python-related ones were dependencies of Hyprland and Quickshell.


This brings back memories, same here!

I even bookmarked a page to remember how to rebuild the kernel, because I could always expect it to break.


I didn't really get into custom kernels until I started using Crux. A few years after I started using Arch, I got sick of the rolling release and Arch's constant breakages, so I started looking into the alternatives; that brought me to Crux (which Arch was based on) and Slackware (which was philosophically the opposite of Arch without sacrificing the base understanding of the OS). Crux would probably have won out over Slackware if it were not for the switch to 64-bit; when confronted with having to recompile everything, I went with the path of least resistance. Crux is what taught me to compile a kernel; in my Arch days I was lucky when it came to hardware and only had to change a few things in the config, which the Arch wiki guided me through.

Crux is a great distro for anyone ok with a source distro, and I think it might be the best source distro; unlike the more common source distros, it does not do most of the work for you. I also love its BSD influence, which came in very handy when I started to explore the BSDs and FreeBSD, which is my fallback for when Patrick dies or steps back. Crux deserves more attention.


Arch always turned me off with its rolling release schedule, and I wasn't that impressed with pacman, to be honest. I used to love Slack, but they lost their way trying to compete with Ubuntu and the like. I remember thinking how ridiculous it was for mplayer to have samba as a dependency, and the community saying a full install was the intended way to run Slack. I ran it as a minimalist without issues until they started wanting to compete in the desktop space.

The best successor I've found is Alpine. It's minimal and secure by design, has an excellent package manager (I much prefer apk to pacman or xbps, or apt and rpm for that matter), has stable and LTS releases while letting people who want to be rolling release do so by running edge. People think it's only for containers but it has every desktop package anyone would need, all up to date and well maintained. Their wiki isn't at Arch's level, but it's pretty damn good in its own right.


I like alpine because it tries to be relatively simple (openrc, busybox, …). My only issue is programs relying on glibc for no reason (although you can use Flatpak). But I’m using openbsd now.


I was never a fan of openbsd; a lot of the security claims are misplaced, bordering on theater. glibc support isn't so bad in Alpine; there are compatibility packages that work for most things if there isn't a flatpak.


Arch linux will still happily blow itself up if you skip updates for too long.

It's to the point where if I see 'archlinux-keyring' in my system update, I immediately abort and run through the manual process of updating keys. That has prevented any Arch nuclear disasters for the last couple of years.
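
For anyone curious, a minimal sketch of that manual process (assuming a standard Arch install with root access; your steps may differ):

```shell
# Upgrade the keyring on its own first, so the signatures of the
# rest of the update can be verified with current keys:
sudo pacman -Sy archlinux-keyring
sudo pacman -Su

# If signature errors persist, reinitialize and repopulate the keyring:
sudo pacman-key --init
sudo pacman-key --populate archlinux
```

These commands require an Arch system and root, so treat them as a reference rather than something to paste blindly.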


I started using Arch in 2016 and it was stable back then. Are you describing an earlier era?


Not OP, but I used Arch for a while in 2011, and at some point an update moved /bin to /usr/bin or something like that and gave me an unbootable system. This was a massive inconvenience, and it took me many hours to un-hose that system, so I switched to Ubuntu. Then Ubuntu became terrible with snaps and other user-hostile software, so I switched to PopOS; then I got frustrated with out-of-date software and Cosmic being incomplete, and am now on Arch with KDE.

Back then I used Arch because I thought it would be cool and it's what Linux wizards use. Now Arch has gotten older, I've gotten older, and now I'm using Arch again because I've become (more of a) Linux wizard.


The silly move from /bin to /usr/bin broke lots of distros. It probably would have worked out if they'd had cp --reflink=auto --update to help ease migrating files from /bin to /usr/bin and then just symlinked /bin to /usr/bin. However, any setup where /usr is a distinct filesystem from / would then hard-require an initramfs to set that up before handoff.
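
A rough dry run of that merge-then-symlink idea, staged in a scratch directory rather than the live filesystem (this is an illustration of GNU cp's --update behavior, not any distro's actual migration script):

```shell
# Build a fake root so nothing touches the real /bin.
root=$(mktemp -d)
mkdir -p "$root/bin" "$root/usr/bin"

echo old > "$root/bin/sh"              # stale copy in /bin
echo new > "$root/usr/bin/sh"          # newer copy already in /usr/bin
touch -d "2001-01-01" "$root/bin/sh"   # make the /bin copy clearly older
echo legacy > "$root/bin/oldtool"      # file that exists only in /bin

# Merge: --update skips files whose /usr/bin copy is newer,
# while files unique to /bin are carried over.
cp -a --update "$root/bin/." "$root/usr/bin/"
rm -r "$root/bin"
ln -s usr/bin "$root/bin"              # relative symlink, /bin -> /usr/bin

cat "$root/bin/sh"                     # prints "new"
cat "$root/bin/oldtool"                # prints "legacy"
```

The real migrations were hairier precisely because they had to run on the filesystem they were rewriting, with packages still owning paths on both sides.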

The python (meaning python2) transition was even sillier. Breaking changes to the API, and they wanted to (and did!) force re-pointing the command to python3? That's still actively breaking stuff today in places forsaken enough to have to support python2 legacy systems.


Arch + KDE is pretty sweet. It looks gorgeous out of the box, and gives you a system that mostly just works but is still everything you love about Arch


Also not OP, I gave up Arch around 2011 as well after I wasn't able to mount a USB pendrive at the uni, as I was rushing somewhere. This was embarrassing and actually a serious issue, took some time to fix upstream and finding workaround was also annoying. This is when I gave up on it and never looked back, but I did, indeed, learn all about Linux internals from dailying Arch for 3 or so years.


> This was also back in the days when most were not connected to the internet 24/7 and many did not have internet

That does sound significantly longer ago than 2016 ;)


The switch to systemd is the last time I FUBARed my system. 2012 it looks like?? I simply did not even remotely understand what I was doing.


Systemd was the end of Arch for me, my rarely used Arch install was massively broken by its first update in ~6 months largely because of systemd. With some work I got things sorted out and working again only to fall into a cycle of breaking the system as I discovered that systemd was very different from what I was used to and did not like me treating it like SysV. Going 6 months without updates would most likely have caused issues with Arch regardless of how stable it had gotten even without the systemd change, but my subsequent and repeated breaking of the system made me realize I no longer had any interest in learning new system stuff, I just wanted a system that would stay out of my way and let me do the things I wanted to use the system for.

I do miss Arch but there is no way I am going to keep up with updates, I will do them when I discover I can't install anything new and then things will break because it has been so long since my last update. Slackware is far more suited to my nature but it will never be Arch.


> I simply did not even remotely understand what I was doing

Why do I miss the stupid unconscious bravery of those days :)


This would be back in the 00s. I would guess that Arch got stable around 2010? I was using Slackware as my primary system by then so don't know exactly when it happened, someone else can probably fill in the details. I started using Arch when it was quite new, within the first year or two.


I had somebody’s pgp key break something yesterday :) Learned about archlinux-keyring.


> Something was lost by Arch becoming stable and not breaking regularly

...a smooth sea never made a skilled sailor


Being at 0.16 right now does not mean much. From what I gather, he is more focused on the semantics right now, trying to avoid getting bitten by a lack of foresight down the road, as almost every language is. Things will probably start moving more quickly as the language solidifies.


Any backwards compatible language will accumulate hindsight errors. It's practically inevitable.


>Also Jai is like C++ in complexity, while Zig is similar to C, very simple language.

And most importantly, Zig is aiming to be a C++ replacement with the simplicity of C; it is not trying to replace C.


I think you meant to say Jai, not Zig.


Good luck with that; it is basically Modula-2 with C-like syntax, and we aren't even getting into the whole ecosystem that it is missing out on.

Any C++ or C replacement will need to win the hearts of mainstream OS and game console vendors; otherwise it will remain yet another wannabe candidate.

Those vendors already have their own languages, alongside their own C and C++ compilers, and are only now starting to warm up to Rust.

Zig or any other candidate will have a very hard time being considered.


So no one should even try, because they will never win over all of the C/C++ crowd and are thus doomed to fail and forever be a wannabe? I think Andrew has gone about things in a good way: going back to C and exploiting hindsight, not trying to offer everything as quickly as possible. Extend C but keep C interoperability, and do both better than C++, instead of trying to be the next big thing; and he goes about it in a very deliberate and calculated way. He may not succeed, but the effort has given us a great deal.


One should try, while being aware of the realities of language adoption.

I disagree that Zig is that great a language; it would have been, had we been talking about the programming language ecosystem of the 1990s, not the 21st century.

Use-after-free problems should not be something we still need to worry about, when tooling like PurifyPlus traces back to 1992.


I don't think Andrew believes Zig is going to kill C or C++; he probably has hope, but I think he is aware of the reality. He found a way to make a living on something he was passionate about.

Use-after-free is a fact of life until something kills C, but the realities of language adoption are against that. Zig seems interesting and worthwhile in offering a different perspective on the problem, and it does so in a way more agreeable than Rust or the like for all those who love C and are averse to large, complex languages. The realities of language adoption are as much for Zig as against it; large numbers of people are still drawn to C, and Zig seems to do a better job of addressing why so many are drawn to it than the alternatives do.


For that to matter, OS vendors that only care about C on their platforms have to also care about Zig.

Otherwise the only users are going to be the ones happy to do some yak shaving with the vendor tools instead of writing the actual application code.

It also ignores that C doesn't stand still, the competition is C2y, not C89.


I mostly agree with Ilove1234561, whose account was created 25 minutes ago, apparently just to reply to this post, which raises some questions. But I am willing to be surprised.


I have yet to get an ad with ChatGPT. I think it only gives ads when you are asking about consumerist-type things, which I never do, but that seems completely reasonable to me, and I would not have an issue with it giving me ads if I were to ask it which ketchup is best, or even how to make ketchup. As an incompetent programmer, Claude has not worked well for me, and ChatGPT has never given me ads when asking programming questions.


I always like seeing Forth submissions, and TFA is one of the sources I referenced when I first started getting into Forth. But I think the bias towards Forth being primarily useful because it is small and easy to port harms Forth more than it helps it, and the same goes for the emphasis put on the repl and interactive development. Forth always getting reduced to these few tricks keeps most from ever seeing it as the high-level language it really is. If I were a blogger I would write a long blog post about the joys and power of using a well-developed and mature Forth like gforth, developing through source files instead of interactively on the repl; but I am not a blogger, so I will just make my occasional short rant on HN. Small and easy to port is not of much use to most devs; let's start advocating Forth in the context of most devs instead of the tiny minority.


Some are placing their bets and hoping to win big, some are just having fun exploring the new technology, and most are probably like me: they have found the few areas where the technology works for them and ignore everything else. It is a fairly amazing technology. As a humanities sort who has no clue when it comes to programming but also has some things I want to make, I find LLMs invaluable, and not in the sense that they can write my programs for me (I am not capable enough in programming to get them to do what I want) but in the sense that they can present things in a way a humanities sort can understand, so I can write those programs I want.

A recent prompt of mine:

>Write a dialogue where H.P. Lovecraft and David Foster Wallace are tasked with developing a DSL for audio synthesis and composition. They are trying to sell each other on their preferred language; Lovecraft wants Rust, Wallace Zig. Their discussion should be deeper than just the talking points and include code examples. Thomas Pynchon is also there; he is a junior dev and enamored with C. He occasionally interjects into the discussion and generally fails to follow it, but somehow manages to make salient points. Lovecraft never addresses Pynchon or C but alludes to the horror; Wallace tries to include and encourage Pynchon but mostly ignores him.

I pretty much knew how ChatGPT would use each of them: Lovecraft would view everything other than Rust as some lurking unnameable horror, Pynchon would seem random and nonsensical, and Wallace would vainly attempt to explore all options without weighing in, and that is exactly what happened. Being able to get this sort of information within contexts I understand is amazing; 15 or 20 minutes later I had more direction than I ever had before, and it answered a dozen or so language-agnostic questions I had regarding implementing such a DSL, which is really what I was after. And it gave me some good laughs.

A year ago I thought these projects would never move beyond being ideas. Whenever I tried to get help from people, they just told me what I was doing wrong and tried to send me down the path of their ideals, how they think things should be done, which was probably more a failure on my part than theirs, my being a humanities sort and an incompetent programmer. LLMs have infinite patience and time. Those people I sought help from in the past were not completely wrong, but they did try to lead me down a wrong path for whatever reason, partially because they don't have infinite time and patience, but also because they thought they were right.

Henry James has been walking me through the PureData source code this past week.

Not everyone is hyping AI/LLMs; that is just the bias of tech communities and the like. Most of us see it about the same way we see a blender or toaster. I can't remember the last time AI/LLMs came up in general conversation for me.

