Yeah, there's a lot more work and personal touch that went into this (and the previous piece) than just "write prompt -> copy/paste into substack".
It's really interesting to hear about others who have been exploring generating fiction with Claude. I clearly need to do some more work based on some of the comments, but it has been really interesting discovering and coming up with different techniques, both LLM-assisted and manual, to end up with something I felt confident enough about to put out.
I'd be curious to hear more about your experience!
I run a product that generates interactive fiction (for search-engine reasons I don't mention it in my comments, but there's a link to an April Fools' landing page in my post history where you can try it).
Because it's productized I need to "one-shot" the output, so I focus a lot on post-training models these days, but I've also used tricks like running wordfreq to find recently overused words and feeding the list back to the model as words that cannot be used in the next generation.
Models couldn't always follow instructions like that (pink elephant problem), but recently they're getting better at it.
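The overused-words trick above can be sketched in a few lines. A hedged, stdlib-only sketch: the commenter's actual approach uses the `wordfreq` library for real corpus baselines, so the `BASELINE` dictionary, the thresholds, and the `overused_words` function here are all illustrative stand-ins, not their implementation:

```python
import re
from collections import Counter

# Stand-in baseline frequencies. The real approach would pull these from
# the `wordfreq` library (e.g. word_frequency("tapestry", "en")).
BASELINE = {
    "the": 0.05, "and": 0.03,              # common function words
    "tapestry": 1e-7, "shimmering": 1e-6,  # classic LLM-ism candidates
}

def overused_words(recent_texts, top_n=5, min_count=3):
    """Rank words appearing far more often in recent generations
    than their baseline corpus frequency would predict."""
    words = re.findall(r"[a-z']+", " ".join(recent_texts).lower())
    counts = Counter(words)
    total = len(words) or 1
    scores = {}
    for word, count in counts.items():
        if count < min_count:
            continue  # too rare to call "overused"
        baseline = BASELINE.get(word, 1e-5)  # generic prior for unseen words
        scores[word] = (count / total) / baseline
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

The returned list can then be appended to the next prompt as "words you must not use in this generation", which is the pink-elephant-style instruction mentioned above.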
Thanks! Yeah, there were a couple I decided to leave in rather than try to rework, as I wasn't trying to hide that it was written with AI, more trying to add more variety to the storytelling. I'm sure as I do more of these I'll be able to recognize them a lot more easily. I have been toying with the idea of working them more into characters' dialogue in the future, as I've already noticed some people I know speaking in LLM-isms.
I'm particularly allergic to LLM-isms; if you look at my comment history, I'm constantly complaining about LLM-written text. I am genuinely quite surprised to have read that much LLM-generated text and been happy to do so.
I am also extremely interested in thinking about where software development is going, so I really appreciated the ideas that went into this.
Since you seem open to feedback, I want to add that I felt the generated images were a negative addition. Maybe they wouldn't be if they also got a little polish - the labels in them were particularly bad.
Ahh cool, I'll dig through your comment history tonight :) I will say, I suspect we're only in the early stages of LLM writing's equivalent of "autotune", while we all collectively figure out what's tasteful use, what isn't, what it might be like to use autotune as an instrument itself, and then what gets overused. So it'll probably get a lot worse before it gets better.
And thanks for the note about the images, I'll take that into account! I only really just started this project and am going to keep iterating as I learn to use the tools better and I find the right visual language for it.
Since you seem in the mood to give feedback ;) If you take a quick glance at the previous story, do you feel the same way about the images in that one or was it just this one's that you found particularly unpolished?
I think in this you are the autotune, trying to bring the raw LLM writing in tune and make it palatable.
I did read your previous story (not as polished, but still interesting) and noticed that in the image linked to "beautiful but the Mandarin module has a tone recognition bug that makes it nearly impossible for non-native speakers", the characters shown were Hebrew rather than Chinese. Interesting... I might have another look and translate them.
Just wanted to say that I've felt the same about the images. To me it was likely the text in them that for some reason had an AI feel to it. Great story though; I was in awe when I learned it was AI-generated.
I would have preferred to see a disclaimer at the top about how this story was Put Together[tm], but I also agree that it is a pretty fine piece of writing overall. Which brings me to my initial point...
> Over the last couple months, I've been building world bibles, writing and visual style guides, and other documents for this project [...] about two weeks of additional polish work to cut out a lot of fluff and a lot of the LLM-isms.
The amount of work and wall time expended sounds about right. You have discovered / stumbled upon the relatively well-known but little-appreciated job of a publishing editor. It takes a lot of nitty-gritty work and built-up domain knowledge ("world bibles") to direct a piece of writing - and its author - to a level where you confidently believe that you have captured the intent and desired tone of the piece, while keeping it sufficiently tight, engaging and interesting / non-patronising enough for its audience.
Disclosure: I did about a decade of freelance writing around the turn of the millennium, and have had the privilege of being schooled by a small group of good old-school journalists. I then had a publishing editor assigned for a separate project, from whom I learned even more about writing.
I appreciate the question and I think the answer is much longer and more nuanced than can really effectively fit into this form factor. I think this question is getting asked right now about all art forms because of AI and from a lot of different people.
My short answer to “why should I care about the mathematical model output from the human artistic input” is “I think we’re all figuring that out right now!” And I’m pretty sure the answer isn’t “you shouldn’t care at all”. Especially if the mathematical model output from the human artistic input expresses what the human wants to express at a quality level that passes that human’s “Taste Gap” (https://www.youtube.com/watch?v=91FQKciKfHI)
I’m sure we could go back and forth about this a lot (and happy to keep this conversation going, I truly do feel like exploring and discussing, this is very interesting to me!) so happy to dig into any aspect with you :)
I will say that I think what's happening is that we're seeing more people explore art forms they couldn't before because of mechanical skill gaps, and that's interesting in the same way that synthesizers, sampling, and software instruments were for music, and I imagine digital art tools were for physical art, and digital photography was for film photography (which itself did the same to painting). It's an interesting time to be alive!
but ai doesn't bridge a mechanical skill gap in this instance. there was nothing stopping you writing this story or drawing those pictures. juxtaposing language models against synthesizers chopping up discrete samples is just not a fair comparison. by prompting ai, one does not even remotely serve to fully engage their imagination in producing creative output (the dictionary definition of art). yes you could be seen to be using a tool to make art. in this instance, using that tool is an act of outsourcing your imagination to the distilled creativity of humanity. at this point the definition of tool must be reduced to I/O alone.
regarding your personal input, this is an order of magnitude less imaginative compared to tapping some keyboard keys. it's not your imagination that produced the majority of this story; it's unfair to claim any aspect of this process except your prompts. which is why i asked for the prompts. i'm not here to hate on your artistic expression, just as i'm not here to listen to the sum total of humanity's creativity that has been poked and prodded into maximising shareholder value. some people might be interested in that - frankly i doubt they would be, if they empathized with a painter or writer or producer (or had any clue how easy it is to manipulate humans). me myself, i'm here for your creativity and yours alone. not that of anthropic (who, like other AI companies, stole it).
by pushing out this work, there's nothing stopping you from having inadvertently acted as a conduit for a corporation to deliver its message. how do you know that you haven't accidentally pushed out a work with hidden messages embedded within? do you know how good llms are at encoding and decoding hidden meaning?
Ugh yeah, I had an aside about the right-to-repair fights still going on indefinitely into the future that I ended up cutting. I kept the title because it seemed like a warning the characters would see on everything they bought, even if they ignored it. I'm sure I'll explore the idea more in the future though, I plan to explore insurance and liability and law at some point too.
Thank you for this comment, I'm so glad it made you feel a little bit better about the future, if even for a little while!
This is really the whole idea behind this project with Near Zero. I think there's a lot of anxiety out there around AI and the future, I was there for a while too. Ultimately I've ended up pretty optimistic about it all, and inspired by what the group at Protocolized is doing, found science fiction a great way to help express that.
There's another rule you seemingly did not know: please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.[1]
I feel like this ultimately boils down to something similar to nocode vs code debates that you mention. (Is openclaw having these flowcharts similar to nocode territory?)
At some point, code is more efficient at doing so. Maybe people will then have that code itself be generated by AI, but then once again you are one hallucination away from a security nightmare, or doesn't it become an openclaw-type thing once again?
But even after that, the question ultimately boils down to responsibility. AI can't bear responsibility, and there are projects that need responsibility, because that's how things can stay secure.
I think the conclusion from this is that we need developers in the loop for responsibility and checks, even if AI-generated code stays prevalent. We're already seeing some developers called "slop janitors", in the sense that they will remove the slop from the codebase.
I do believe that the end reason behind it is responsibility. We need someone to be accountable for code, and we need someone who understands it to take a look and prevent things from going south, in case your project requires security, which is all but necessary for anything production-related, not just basic tinkering.
Yeah, responsibility and accountability are also areas I'd like to explore. I'm mostly digging through this artifact I created with Claude to look at first-order and second-order effects, and then "traffic jams" in the sense of "good science fiction doesn't predict the car, it predicts the traffic jam", and what kinds of roles might pop up to solve those issues: https://claude.ai/public/artifacts/39e718fa-bc4b-4f45-a3d5-5...
I've mostly been digging through my own version of that and trying to find things I find interesting and seeing what kinds of stories we can build about what a day in that job might look like.
Thanks! Yeah this was AI assisted. As an experiment I started asking Claude to explain things to me with a fiction story and it ended up being really good, so I started seeing how far I could take it.
I’m pleasantly surprised this was AI assisted so deeply that inconsistencies like that slipped by you. The writing is really extraordinary. It made me want to read for fun again for the first time in decades. Thank you!
Funny, I was talking to a friend the other day about some thoughts on branding and he commented "as someone with a background in marketing & advertising communications, it's wild to watch a software engineer learn the value of branding and marketing from first principles".
I guess I'm also learning the value of working with an editor from first principles... over the last couple weeks before publishing I read through and made edits to this piece at least twice a day and still didn't catch this.
I don't think that phrase means what you are trying to say here.
What it doesn't mean:
- learning by doing
I believe it generally means: a formalization that comes after a subject is understood so well that you can reduce it to "first principles" that imply the rest. Or, the production of a hypothesis by deduction from widely-accepted principles.
Honestly, I read that passage as Carol realizing as she spoke that she had been underwatering that spot semi-consciously the whole time. That’s one of the things about expertise gained by doing. We don’t always realize exactly what we’re doing well enough to communicate it until we reflect on it later.
Has it bugged any of you that everyone you talk to has their own unique prompting technique that they swear by and none of them are exactly the same but still seem to kind of work?
It reminds me a bit of the endless discussions among analog photographers regarding the different chemicals and methods that can be used for developing black and white film. Everyone is, of course, convinced that their particular development method achieves optimal results. But no-one ever really does any proper controlled tests.
An old photography handbook from the 1950s drily remarks that the proliferation of developing agents "merely increases the number of different methods by which identical results may be obtained".
I think that, due to the nature of language, the prompting technique that you use is often indeed the best, for you, since it allows you to express yourself "naturally" and thus have a more consistent and effective session, with the model adopting a similar style and using similar abstractions when building.