The blog post said that the Iran war costs the US at least 1 billion USD per day. The US is incredibly rich and can afford the cost. What I don't see being discussed: What if the US (and Israel) does not put troops on the ground in Iran, but continues relentless, daily aerial bombing... forever (1/2/3 years)? I am not saying that you can control a country from air superiority only (this has been widely discussed by military strategists -- it cannot), but you can endlessly bomb their military assets. What would happen? Honestly, I don't know. I don't think it has been done in the last 50 years of war. (Please provide counterexamples if you know any.)
That's one way to make sure people living under aerial bombing firmly support a regime defending their sovereignty, thereby legitimizing the Islamic Republic. Example: even with boots on the ground against them, the Taliban didn't get any weaker in the end.
The US bombed basically all of the Iraqi military in 1991, yet the war didn't end and Iraq didn't leave Kuwait until troops on the ground went in. Air power alone cannot control territory or compel political change.
"There are a lot of people who say that bombing can never win a war. Well, my answer to that is that it has never been tried yet, and we shall see." - Sir Arthur Harris
The response is as applicable now as it was then. Time will tell.
I don't think we could see a bombing campaign like the one we've seen so far anywhere near that length of time. Partly for munitions reasons and partly for target reasons. There is only so much stuff to blow up and only so many bombs to blow things up with. We can't produce them at anywhere near the rate that would be required just to do this for years.
Many of their military assets are underground out of reach of bombers. And you need somewhere to stage out of. Probably not the Gulf bases that are being wiped by missiles and drones at the moment. The aircraft carriers have been having issues and are being pushed back out of missile range. So it becomes more difficult and expensive to keep the bombing up.
I mean the answer to underground facilities is you just keep bombing the entrance, which is exactly what they've done. Iran still has insane supply levels of ballistic missiles, so the US/Israel are eradicating their transporter-erector-launcher (TEL) fleet.
The second the first bomb hit, the Republican Guard went from a standing military force to a guerrilla army, similar in a lot of ways to what the US faced in Iraq, just vastly better-trained and better-equipped. The US couldn't subdue Iraq with hordes of troops on the ground for years, so why would anyone imagine an air-only campaign would have better results against a stronger and larger opponent?
> In a different scenario there'd be no motivation for a country like Iraq or Jordan to help.
While unprovable, I think the sentiment is too strong for Jordan. They have pretty good relations with Israel, and have been using their own fighter jets to down some drones from Iran. If anything, it is good practice for their air force.
First, hat tip on that Guardian article that you shared. The map of desalination plants around the Persian Gulf is excellent.
My first thought looking at it: Why does Saudi Arabia have desal plants in Riyadh? It is 100s of km away from the Persian Gulf! Maybe they want some far away from the Gulf for security reasons? Else, it looks weird. I imagine that they need to pump sea (salty) water from the Gulf to Riyadh, desal it, then pump back the waste water. Quite a journey.
Some background for interested readers: Sophie Schmidt is the daughter of the former Google CEO Eric Schmidt. She accompanied him in Jan 2013 on a (state-sponsored? humanitarian?) visit to North Korea.
My favourite part of the blog post is when she visits The Kim Il Sung University e-Library, "or as I like to call it, the e-Potemkin Village".
You must be young. Do you not think there was similarly acronym-infested tech speak in the 1980s, 1990s, 2000s, and 2010s? All of those decades also had plenty of certification testing.
I disagree. HN discussions seem to have wildly liberal views of US copyright law and, in particular, fair use. Gamers Nexus is surely commercial because they either make money (1) directly from YouTube, (2) directly from adverts / product placements, or (3) indirectly from merch.
I agree with the parent poster's point: "If news organizations can copy each other's clips of official speeches, who would bother going out and making such recordings?" When you see a head of state (or other VIP) making a speech and they show the media, there are normally 10+ different camera crews. If competitors can claim "fair use" for any of that footage, why would so many different media outlets send camera crews? The question answers itself.
A good counterpoint for fair use would be Wikipedia. They are very conservative about claiming fair use. I assume they have had pro bono (or not) lawyers review their policy and uses to confirm the strength of their claims. After hundreds of hours of reading Wiki, I can recall only once or twice ever seeing an artifact claim fair use. I think it was a severely downscaled photo of a no-longer-living person.
I think Wikipedia's relatively conservative (one might say erring on the side of safety) stance on fair use is easy to understand: they have a bank account stuffed to the brim with cash, minimal spend on hosting and developers compared to their income and savings, and copyright lawsuits are one of the very few legal surfaces they expose.
Additionally, folks don't like to rely on fair use because the tests, though well articulated, are inherently subjective and must be decided by a judge or jury. As a result, it's the sort of defense one wants to have available but not depend on if possible.
Re: commercial use, in the US, just because a work is commercial does not automatically mean it loses fair use protection. Commerciality is only one factor of the four to be considered. Commercial parodies, for example, can still be fair use, especially where the work is transformative. IOW commerciality may weigh against fair use, but it is not dispositive. Google v Oracle involved fair use which was clearly commercial, for example.
GN's case would also be helped by the nature of the information being factual as opposed to artistic.
There are a lot of factors in whether or not an org can successfully take something to trial. Venue, judge, representation, jury selection, evidentiary rulings, all kinds of stuff. An imbalance in representation could easily swing it. So when I say that I think GN has a reasonable case, it's just me using the Supreme Court's rubric and some theoretical idealized court room which doesn't really exist. All I can say is that a good job could be done in arguing it. Whether or not GN could afford that work, or would want to, IDK.
Perhaps you should re-read what I wrote for comprehension. 50% of their spending may be on tech, but their total spending is only 4% of their income. Apparently I'm more familiar with their financial statements than you.
I think people misunderstand the 4 tests. They are not in-or-out tests. Commercial use doesn't mean it's not fair use. Each factor is weighed against others.
In this case the purpose is critique or review, which justifies fair use: the clip is only a small part of the video, GN isn't in the same business as BB and isn't a substitute for BB's work, and the clip was a recording of a factual event that didn't have a substantial creative element.
I also had no idea this was LLM generated. After reading your comment, I had a similar emotional reaction.
Thinking deeper, it seems prudent that we tag submissions like this with a prefix. Example: "LLM: ". This would be similar to "Show HN: ". While we cannot control what the original sources choose to disclose, we can fill that gap ourselves.
My point: I agree with you: It is misleading that the blog post does not include a preface explaining it was written by an LLM (and ideally, the author's motivation to use an LLM). However, it is still a good blog post that has generated some thoughtful discussion on HN.
Because 'quality' is a misnomer. LLM writing has quality in the same way that a press release from a big company has quality, or a professional contract written by a lawyer has quality. It is functional, generally typo-free and conforms to most standards but that doesn't mean it has flavor or spice to it.
Creative writing is the intent to convey feelings and thoughts, to create atmosphere. Here's a great example of the failure to do so, in a way that even most terrible writers would avoid.
> “It just said harvest,” she told Tom. She was sitting in one of the plastic chairs, holding a cup of the adequate coffee.
The coffee in this story is conveyed as being 'perfectly adequate'. But how do you convey adequacy? When you simply say 'the coffee is adequate' there's nothing there. It could be conveyed by establishing that the coffee is always perfectly room temperature, or has the mere hint of bitterness and sweetness, or that it tastes like every other brand out there. In many respects this story is exactly the same as the 'perfectly adequate' coffee: functional, unexciting and ultimately flavorless.
This "flavorlessness" is all over the story, and paired with the obviously genAI images is how I realized as I read that this was either generated or at the least deeply driven by AI.
It constantly described facial expressions, tones of voice, and other emotional cues in generic, dry terms that communicated nothing but the abstract notion of "this person felt a particular way about what happened and it's up to you, the reader, to imagine what that feeling was."
It felt very much like it was prompted to "show, don't tell," by someone who has no idea what that phrase actually means.
As a professional programmer with a deep background in literature and music, this is yet another example that if you aren't an expert in a field, you will get mediocre results at best from an LLM, while being deceived into thinking they're great.
Five years ago and before, the blog post author would have gone to Fiverr and asked for an artist from a developing country to create some illustrations. There are many, many images on the Internet from five years (and before) that look similar. I object to your use of the adverb "obviously".
No, I clocked the AI images before I noticed the text. I think the "obviously" is earned.
You are correct that a previous era would have included a bunch of Fiverr images that would be in sort of that style, but it's not the style that's the problem. None of the images say more than the text that they're illustrating. It's subtle, but once you notice the lack of information density it becomes starkly apparent.
I took that phrase differently. The story makes the point that the AIs fail when metrics of quality can't be expressed in words. The use of a bare "adequate" reinforces the opacity of the coffee's quality. Certainly it would have worked well to use more words to convey specifics of the "adequacy" as you mention, but IMO that would have undercut the link back to the theme of human ineffability.
Obviously everyone's mileage may vary, but I didn't see this as a huge defect, and actually felt it worked pretty well.
In the hands of Douglas Adams or Kurt Vonnegut it could be spun into a whole recurring motif.
In this case it's merely...adequate. Almost captures the density of ideas packed into something like "The ships hung in the sky in much the same way that bricks don't" but doesn't quite manage the same effect.
because we typically want to know the writer of a piece. we want to know where to lay credit.
every book you buy has an author credited. articles in newspapers and magazines have photographer and author attributions.
asking an ai to write you a story does not make you an author. if you ask someone to take a photo for you, you don’t magically get to say “look at this photograph, i’m a photographer.” if you ask someone to bake you a wedding cake, and then claim you baked it, you’re a fraud.
Because you need to do some pre-filtering on where to focus your attention, and you want to make sure the author put some thought into the article without having to analyze it.
Due to LLMs making the cost of publishing "thoughts" extremely low, there's now an over-supply of content that looks decent on the surface, but that in reality the author has probably spent less time on than the reader will.
Are we really so far down into the LLM denial mindset that we consider an author spending multiple months crafting this to be "worthless" and less investment than your casual reading?
No, I believe this is a great post. It’s awesome. Even more so because it’s AI generated, as it shows what AI can do when given a lot of quality material to work with.
I’m just talking about the general topic about the usefulness of an “this is AI generated” classifier.
> general topic about the usefulness of an “this is AI generated” classifier.
exactly what i'm trying to get at too. And my thesis is that this classification method is pointless - it's just as pointless as saying things like "this author went to harvard", or "he/she came from a poor background".
Don't we already have these filters in place? I only saw this because it was highly-upvoted on HN, for example - I don't read every new submission. I also read things sent by friends and family, shared by curators I trust, etc.
Of course these systems may eventually break down, but for now they seem to work.
why does it bother you to give attribution? why do you think crediting the writer impacts how the piece stands?
we have pop musicians who produce massive hits under their names and the song writers are still given credit in liner notes and in the tracks details on spotify or wherever.
if it’s created by a bot, i’d take it even further and say which version of which model actually generated it should be declared. why would anyone be against giving proper attribution?
We like writing because the fact that we can create good writing says something about ourselves. If AI can create writing that surpasses, say, a Tolstoy or George Eliot, that will fundamentally change our self-perception. Is that a good thing or bad thing? Well, let's first cross the bridge of an LLM writing War & Peace and see how we feel.
If someone couldn't be bothered to write it, I certainly can't be bothered to read it. I did not bother to read the article involved because the continual piss stain on the images, the website itself, and a few key phrases tipped me off to the fact that it was all generated.
When you interact with art, you do so to interact with the author and the point they want to make. Writing is something where a skilled writer will be able to make a point tersely and have it stick, knowing where to embellish and where to keep it simple. All of those are artistic decisions that a machine apparently now "can do", but not with any coherency. Generative AI may be able to fake the composition process, but the point of composition is that it reveals something about the human.
The holder of the reins of slop is not an artist; this is plain to see because they do not interact or engage with their work on the same level as an artist. The produced slop is not art, because it cannot be engaged with on the same level.
Imagine if you had an auto cake making machine that decides on its own the best time to make cake. It adds the ingredients, stirs, turns the oven on, and leaves the finished cake on the counter for you.
People start opening bakeries consisting entirely of cakes baked by the automatic machines. The owners of these machines have no idea whether the cakes have a bit too much flour or were slightly over-stirred. In some cases, they haven't even tried the cakes.
Who gets to claim they made the cake?
By contrast, there are others who carefully tune their machines to make sure everything is perfect. They adjust the mixing settings and ingredient proportions. They experiment and iterate. They taste test throughout the process. And what they give to the public tastes every bit as good as a homemade cake.
The first group is creating slop. The second group, I think, is baking. And OP is in the second group.
Replace "oven" with a dishwasher or a washing machine for your clothes. Those things do exactly all of this. Yet we still complain about washing clothes and doing the dishes, even though it is far less effort than anything our parents did, or their parents before them.
If you commission a baker, another person, with wants and desires of their own, is involved.
If you use an AI, there isn't.
Either way, it's clear that the author (yes, the author) put a lot of work into this by iterating and shaping it to what he wanted, and that's a lot more than sprinkles.
> If you commission a baker, another person, with wants and desires of their own, is involved.
> If you use an AI, there isn't.
What is the functional difference here? You are commissioning (see: prompting) someone (see: an AI) for a piece of work, or artwork or whatever. The output is out of your control, and I don't think the presence or absence of a human on the other end materially matters.
If we had hyper-advanced ovens from The Jetsons where we could type a prompt using a fold-out keyboard and it would magically generate whatever cake we ask of it: did we or did we not bake that cake? And I do not think it is clear the author put a lot of work iterating and shaping it into what he wanted; we have zero insight into that.
I didn't say the difference was functional. If you don't think the presence of a human on the other end matters (materially or not), feel free to continue this conversation with an LLM simulation of me. You can even prompt it so that you logically triumph and convince "me".
I'm asking you to explain what the actual difference is and you're avoiding the question.
If we had a complete black box where you submitted Prompt and out came Thing, and you had zero clue what said black box actually did, could you claim creation over Thing? What does knowing that it's a human vs LLM make materially different in terms of whether or not you created it?
Why would I give him the same credit I would give a writer.
Or why would I give a writer the same credit I would give someone who created the AI prompts and scaffolding to generate this?
Being unhappy about not being able to call oneself an author ends up betraying a lack of confidence in the work or process.
In the end writer, dancer, actor, whatever - these titles come from their impact.
There will be a different name for this, and eventually there will be something made that is good enough that people will be spellbound. At which point it's going to be named something else.
Ironically, the story can be read as gesturing in that direction, as it's ostensibly about giving a new title to a particular job.
In general, though, I think part of the mistake people keep making is that they try to imitate what would be valuable to engage with if a human wrote it, in an attempt to claim the role of an author of a book or whatever. There are likely artforms that are unique to what an LLM can facilitate, but trying to imitate human artforms is going to give you stunted results. The AI is very good at imitating the form but not the substance.
Once we stop trying to generate and pass off AI essays, novels, choose your own adventure stories, and all the other human genres as being human writing, we'll have a chance to figure out actually interesting artistic forms.
> Creating something without the effort previous works involved, can and do affect the context and understanding of it
not really. Unless you place value on _effort_ rather than being objectively outcome-based. Someone digging a hole with a spoon doesn't make it a better hole than a jackhammer does.
I maintain that the work itself - that is, the contents of what is being expressed - is the sole measure of how good the work is. Not the authorship, LLM-usage or otherwise.
The context exists whether it's LLM generated or not, because the context sits broadly in society, culture, and manifests in the mind of the reader.
> how would LLMs fair when the content of the work itself is about “Something made by a human”.
it would fare just as well as if the same words had been written by a human, provided the contents are sound and have good meaning - conversely, slop is slop, regardless of whether it was written by an LLM or a human.
My point at the grandparent post is that there's a lot of blind discrimination based on the origin of a work - if it was written by or with the help of an LLM, then it automatically deserves less attention, and/or its content's worth is diminished. All without actually discussing the content.
Largely, I agree with you. One famous counterpoint about labeling works of art with the author: The Economist (the magazine) does not add the author to most of their articles.
> because we typically want to know the writer of a piece. we want to know where to lay credit.
Does the average person really care all the time? Maybe about the outlet it comes from as a whole (factuality, political lean), but more rarely the exact author. Many don’t even have the critical skills for any of it and consume whatever content is chosen for them by whatever algorithm is there. We probably should care, I just don’t think a lot of us do.
For me, needing to know that something’s written by AI serves threefold purposes:
1) acknowledging that it might be slop that someone threw together with no effort (important in regards to spam)
2) acknowledging that depending on the model the factuality might be low when it comes to anything niche (though people are wrong too, often enough)
3) mentally preparing myself for AI bullshit slop language, like “It’s not X, it’s Y.”, or just choose not to engage with it (it's the same disgust reaction as when I find a PDF and realize it's just scanned images, not proper text)
In general, unless the goal is either human interaction or a somewhat rare case of wanting to read a specific blog etc., most of the time I don’t categorically care whether something was lovingly created by a human or shoved out by a half baked version of Skynet - only that it’s good enough for whatever metrics I want to evaluate it by. I’m not ashamed of it and maybe that’s why I don’t take an issue with AI generated code either, as long as it’s good enough (sometimes better than what people write, other times quite shit when the models and harnesses are bad).
In Peter Watts's Blindsight, the aliens understand language as spam, a hostile attempt to waste their time, and respond by opening fire.
Reading LLM slop without warning makes me see their point of view.
I think there are useful ways to engage with LLM writing, but they are often very different from those for human writing.
A human writer, a good one, often has ideas that are denser than the words on the page, and close reading is rewarded by helping you unpack the many implications.
With AI writing, there are usually fewer ideas than words, and so it requires a different kind of engagement. Either the human prompter behind it didn't supply enough ideas, or they were noncommittal enough that their very indecision got baked in.
LLMs are very prone to hedging and circling around a point while not saying much of anything. Maybe it is the easiest way to respond to RLHF incentives and corporate-speak training data. Or maybe they're just intrinsically stuck on being unable to find the right next token so they just endlessly spiral around via all of the wrong ones. Either way, there's often a whole lot of cotton candy text that dissolves when you try to look at it more closely.
can't reply to your comment below so i will comment here
> why does it bother you to give attribution? why do you think crediting the writer impacts how the piece stands?
clearly it does to you?
thing is, this is a fool's errand to try to police what people credit when there is zero capability of verification and enforcement
the current social norms still value authorship, so people will just take or omit credit as they see most advantageous, even if it's merely an ego advantage, which it typically is but a proxy for brand building
what will happen if/when the currency of attribution is completely altered? hard to predict
my prediction is that track record will be considerably more important, not less, but human merit will be increasingly seen as irrelevant
People don’t want to self-disclose their use of AI I’ve noticed, especially the ones that put the least effort into using it. So this will only work for a small portion of the AI content.
Yeah, I am loving the public mudslinging over shit from 10 years ago, like high school girls fighting. This is like the FAANG version of the TV show Suits. We can call it FAANGs and use Midjourney to create the cover art and give the actors vampire fangs.
On a more serious note, it seems like any hyper competitive company eventually spirals into an awful, toxic working env.