I think you overestimate the effect that would have on the kind of people that most need that sort of humility.
Look at what happened with William Shatner and Jeff Bezos when they came back from space. Shatner started to say something about what an impactful experience it was, but Bezos cut him off and was like “Woo! Partay!” and switched his attention to a magnum of champagne.
Jeff went up two flights earlier, in July 2021 on NS-16. Shatner was on NS-18 in October.
I don't know if it's a thing that wears off, if Bezos was just in business-mode the entire time, or if he just didn't want someone monologuing right after getting back.
What better solution do you have in mind? This scenario is AI being used as a tool to eliminate toil. It’s not replacing human creativity, or anything like that.
If you have a problem with that, then you should also have a problem with computers in general.
But maybe you do have a problem with computers - after all, they regularly eliminate jobs. In that case, AI is only special in its potentially greater effectiveness at doing what computers have always been used to do.
But most of us use computers in various ways even if we have qualms about such things. In practice, the same already applies to AI, and likely will for you too, in future.
In a lot of my AI assisted writing, the prompt is an order of magnitude larger than the output.
Prompt: here are 5 websites, 3 articles I wrote, 7 semi-relevant markdown notes, the invitation for the lecture I'm giving, a description of the intended audience, and my personal plan and outline.
Output: draft of a lecture
And then the review, the iteration, feedback loops.
The result is thoroughly a collaboration between me and AI. I am confident that this is getting me past writer's block, and is helping me build better arcs in my writing and lectures.
The result is also thoroughly what I want to say. If I'm unhappy with parts, then I add more input material, iterate further.
I assure you that I spend hours preparing a 10-minute pitch. With AI.
Great example. Just give me the links you would give to the LLM. I also have an LLM and can use it if I want to, or I can read the links and notes. But I have zero interest in reading or hearing a lecture that you yourself find too tedious to write.
You have less interest in sifting through multiple articles and wiki pages sent to you by a stranger along with a prompt than in the one paragraph that same stranger selected as their curated point.
And pretending you'd act otherwise is precisely the kind of "anti-AI virtue signaling" that serves as a negative mind virus.
AI is full of hype, but the delusion and head-in-the-sand reactions are worse by a mile.
Then let him curate it as his central point. If he finds even that too tedious to do, I have absolutely no interest in reading the output of a program he fed the context to (particularly since I also have access to that program).
No pretending here. I don't ever ask an LLM for a summary of something which I then send to people, because I have more respect for my co-workers than that. Nor do I want their (almost certainly inaccurate) LLM summary. It's the 2020s equivalent of "let me Google that for you": I can ask the bag of words to weigh in myself; if I'm asking a person it's because I want that person's thoughts.
The original comment was saying that the AI would be both the writer now and the reader, in future. That's how the toil is eliminated. Instead of reading or searching through a series of release notes, you can just ask questions about what you're specifically looking for.
> If writing something is too tedious for you, at least respect my time as the reader
"If comprehending something is too tedious for you..."
Seriously, don't jump to indignant rhetoric before you're sure you've understood the discussion.
What's the point of the AI writer in that use case? Just send your prompt to my AI. And for that matter, since prompting is in plain English, why not just send your prompt directly to me, and I'll choose whether or not to prettify it through an AI, as I prefer.
The point is that it summarizes the context. That's an important optimization, because context and tokens are both limited resources. I do something similar all the time when working with coding models: once you've done a bunch of work, you ask the model to summarize it into the AGENTS.md file.
The more fully automated agents rely heavily on this approach internally. The best argument against it is that good harnesses will do something like this automatically, so you don’t need to explicitly do it.
Sending you the prompt wouldn’t help at all, because you’d have to reconstruct the context at the time the notes were written. Even just going back in version control history isn’t necessarily enough, if the features were developed with the help of an agent.
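To make the idea concrete, here is a minimal sketch of that kind of summarize-into-AGENTS.md step, not anyone's actual setup: it assumes the OpenAI Python SDK, an API key in the environment, and the idea of feeding the model recent git history; the model name, prompt wording, and file layout are all illustrative assumptions.

```python
# Hypothetical sketch: compact recent work into AGENTS.md so later
# agent sessions start from the summary instead of re-deriving context.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment.
import subprocess
from openai import OpenAI

# Gather raw context: the last 20 commits with per-file stats.
recent_work = subprocess.run(
    ["git", "log", "--stat", "-n", "20"],
    capture_output=True, text=True, check=True,
).stdout

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            "Summarize the following recent changes as short notes for "
            "AGENTS.md, so a coding agent can pick up where this work "
            "left off:\n\n" + recent_work
        ),
    }],
)

# Append the compacted summary so future sessions read it as context.
with open("AGENTS.md", "a", encoding="utf-8") as f:
    f.write("\n## Recent work summary\n")
    f.write(response.choices[0].message.content + "\n")
```

The summary, not the full history, is what becomes shared context, which is why handing someone the prompt alone wouldn't reproduce it.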
But I also have access to an AI that can summarize content. So why not just send me the content and the prompt you used? Or just the content, so I can summarize it however I want?
The obvious better solution is to either a) not write those release notes, or b) figure out a release-notes format and process that actually leads to useful release notes. Once it is useful, you can decide whether to automate it, and measure whether the automation is still achieving the goal.
What OP did was "we lacked communication, then created an ineffective process that achieved nothing, so we automated the ineffective process and now pay a third party to do it".
If you pay tokens for release notes that nobody reads, then you may just ... not pay tokens.
> I do not believe that there is anything magical about humans that prevents us from eventually reverse engineering ourselves.
Nothing except a possibly unmanageable level of complexity. We don’t even really understand how LLMs do what they do.
Perhaps we can build an AI model that has an understanding of humans down to the level of detail being contemplated here, but that won't mean that we ourselves will understand it.
And even with that understanding, it doesn’t mean it’ll be possible to build a fully functioning human body without the equivalent of a brain. It’s likely to be more like a person in a vegetative state - they have a brain and measurable brain function, but no higher cognitive functions that we can detect.
The model made more sense before containers existed. Basically, Java tried to become a complete platform for application deployment, at a time when there weren’t many other good solutions to that.
However, the problem with that is that it requires writing everything in Java - heterogeneity breaks the model. A language-agnostic solution like containers was bound to win out, it’s just that nothing remotely close existed at the time.
Keep in mind a lot of this was developed even before VMs were commonplace. The first true, usable VMs for x86 were released in 1999, four years after Java’s debut.
The desire for free stuff is one of the most effective psychological hacks there is.
The large majority of the dystopian web (Gmail, Facebook, etc.) depends on that.
People who avoid e.g. Github, Gmail, Facebook, Xitter, etc. out of concern for broader principles will always be minor outliers.
Xitter is one of the best examples. Everyone knows it's compromised, owned by a dangerously antisocial person who's actively working at multiple levels to make the lives of everyone else on Earth worse, yet very few have stopped using it.
The saying "There's no ethical consumption under capitalism" is far too weak. It should be more like: there are no ethics under capitalism.
> Looks like MS thinks it's a "tip" rather than an ad.
No, they don't.
> edit: I think it's an ad too. Everyone would think so, except for MS.
You think a company with a $2.65 trillion market cap and an army of marketing professionals doesn't realize that what they're doing here is an ad, and didn't implement it intentionally as such?
That's not even remotely plausible. In the quantum multiverse which contains all physically realizable possibilities, that isn't one of them.
Well, at least their PM thinks (or argues) it's a tip [0]. Also, it's pretty obvious I was just being sarcastic about MS's behavior. I don't know why you are being so mean, but please don't be. Have a nice day.
The correct word would be "claims": the PM claims it's a tip. Now ask yourself whether a PM who realizes he or his team has made a terrible mistake and is doing damage control in public is likely to make only true claims.
Correcting your mistakes is not mean. If you didn’t mean what you wrote, well hey, that’s a good example of the difference between what you think and what you say. See how that works?
> Getting your gmail account hacked does not reflect on you as a professional.
Why not? Most professionals at larger organizations have to do security training. These kinds of attacks are far less likely to succeed on anyone who follows the basic precautions taught in such training. E.g., if he had MFA enabled on his account - as he certainly should have had - they would not have been able to compromise it externally, i.e. it would have had to be much more than his email that was hacked.
I don’t get the propensity some people seem to have for defending this shameful collection of incompetent criminals, bullies, and clowns.
Smooth shiny white walls, beakers and test tubes filled with brightly colored liquids on shiny metal tables… Science!