Hacker News | ericdykstra's comments

I got 12/18 on faces as an American-born Caucasian living in Japan for over 10 years. Since the subjects were photographed in New York City (and from the other comments, at least a decade ago), cues from fashion and makeup only helped me get about 4 of them, another 6 had pretty strong ethnic features. Of the remaining 8, it was a bit of a tossup and I did worse than guessing, getting only 2 correct.

13/18 on food. Even with a lot of the same general types of food, the presentation and specific ingredients made a lot of them somewhat simple. I got tripped up on a few, though, where I overthought it ("a Japanese X is usually not like this") or ones where it was really a tossup for me between Chinese and Korean since I'm less familiar with those foods.


I won't ever put my name on something written by an LLM, and I will blacklist any site or person I see doing it. If I want to read LLM output I can prompt it myself, subjecting me to it and passing it off as your own is disrespectful.

As the author says, there will certainly be a number of people who decide to play with LLM games or whatever, and content farms will get even more generic while making fewer writing errors, but I don't think the age of communicating thought, person to person, through text is "over".


It's easy to output LLM junk, but I and my colleagues are doing a lot of incredible work that simply isn't possible without LLMs involved. I'm not talking about a 10-turn chat to whip out some junk. I'm talking about deep research and thinking with Opus to develop ideas. Chats where you've pressure-tested every angle, backed it up with data pulled in from a dozen different places, and intentionally guided it towards an outcome. Opus can take these wildly complex ideas and distill them down into tangible, organized artifacts. It can tune all of that writing to your audience, so they read it in terms they're familiar with.

Reading it isn't the most fun, but let's face it - most professional reading isn't the most fun. You're probably skimming most of the content anyways.

Our customers don't care how we communicate internally. They don't care if we waste a bunch of our time rewriting perfectly suitable AI content. They care that we move quickly on solving their problems - AI lets us do that.


> Reading it isn't the most fun, but let's face it - most professional reading isn't the most fun. You're probably skimming most of the content anyways.

I find it difficult to skim AI writing. It's persuasive even when there's minimal data. It'll infer or connect things that flow nicely but simply don't make sense.


I hear stories like this a lot (on here anyway) but I haven't seen any output that backs it up. Any day now I guess.

I don't really understand this retort. I assume most of us work in a professional environment where it's difficult, if not impossible, to share our work.

We've been discussing these types of anecdotes with code patterns, management practices, communication styles, pretty much anything professionally for years. Why are the LLM conversations held to this standard?


Well, because I've worked in different places, and with different organizations, and can see for myself how different approaches to professional conduct manifest in the finished product, or the flexibility of the team, effectiveness of communication, etc.

Especially with things like code and writing, I assess the artifacts: software and prose. These stories of the incredible facility of LLMs with code and writing are never accompanied by artifacts that back up the claims. The ones I can assess don't meet the bar being claimed. So everyone who has it working well is keeping it to themselves, and only those with bad-to-mediocre output are publishing it, I am meant to believe? I can't rule it out entirely, of course, but I am frustrated at the ongoing demands that I maintain credulity.

FWIW I have sat out many other trends in professional organization and software development because I wanted to wait and assess their benefits for myself, and those benefits then failed to materialize. That is why I hold LLMs to this standard; I hold all tools to this standard: be useful or be dismissed.


Because I have a proof of the Riemann hypothesis but I'm not showing it to you because I don't want you to steal my idea.

Pretty sure people are trying to prompt chatgpt to write Brandon Sanderson-like stories and we'll see their successful prints anytime now.

It's really interesting that I've only seen a few actual pieces of large-scale LLM output by people boasting about it, and most of them (e.g. the trash fire of a "web browser" by Anthropic) are bad.

To build what, though? I’m truly curious. You talk about researching and developing ideas — what are you doing with it?

> but I and my colleagues are doing a lot of incredible work that simply isn't possible without LLMs involved

...Which part is impossible? "Writing a bunch of ideas down" was definitely possible before.


I assume if someone used an LLM to write for them that they must not be comfortably familiar with their subject. Writing about something you know well tends to come easily and usually is enjoyable. Why would you use an LLM for that, and how could you be okay with its output?

Writing a first draft may come easy, but there's more to the process than that. An LLM can go from outline to "article" in one step. I can't.

I don't write often, so revising and rewriting is very slow for me. I'm not confident in my writing and it looks clunky to my eye.

I see the appeal, though I want to keep developing my own skills.


> An LLM can go from outline to "article" in one step. I can't.

But the point is that the results tend to be very grating.

> I'm not confident in my writing and it looks clunky to my eye.

AI writing is clunky!

> I don't write often, so revising and rewriting is very slow for me.

This is totally fair, but maybe consider editing the AI output once it's given you a second draft?


I agree entirely. Seeing all the LLM garbage being published made me realize how insecure people are about their writing.

Since realizing, I've been stubbornly improving my own writing and not touching LLMs. Takes a bit of work though.


"maybe consider editing the AI output once it's given you a second draft?"

I would completely rewrite the LLM output. Use it as a researcher or idea generator.


> I assume if someone used an LLM to write for them that they must not be comfortabley familiar with their subject.

This statement assumes the writer is a native speaker of the language in which they write.


If you're not a good enough speaker to write it, you're not good enough to proofread it, either.

some people might be better at prompting an LLM than you

just like going to a restaurant to have a chef cook for you even though you can cook yourself


a chef can only do so much with a frozen microwave meal

Most restaurants, by volume, these days churn out ultra processed, mass-marketed slop.

It’s true there is the occasional Michelin starred place or an amazing local farm to table place. There is also the occasional excellent use of LLMs. Most LLM output I have to read, though, is straight up spam.


I studied poker a lot in high school, and it still influences how I think today.

Probabilistic thinking about the EV of your decisions is a good framework, but "just keep making good decisions" is the hard part. As in poker, the hard part about making a decision in life isn't the middle-school-level probability math but making the right estimates about payoff potential and success rate based on incomplete information.

Analyzing things as they are rather than how you wish they were, being able to separate useful information from noise, and taking a step back to look at second-order effects are all useful skills that will help you make better decisions the more you develop them.
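The EV framework in that comment boils down to one piece of middle-school arithmetic: weight each outcome's payoff by its probability and sum. A minimal sketch, with made-up numbers that are purely illustrative:

```python
# Expected value of a decision: sum of probability-weighted payoffs.
# The probabilities and payoffs below are illustrative, not from the comment.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# A bet that wins 100 with probability 0.25 and loses 20 otherwise:
ev = expected_value([(0.25, 100), (0.75, -20)])
print(ev)  # 10.0 -> positive EV, so the bet is worth taking on average
```

As the comment notes, the math is the easy part; the hard part is that in real life the `0.25` and the `100` are themselves estimates made from incomplete information.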


There is a bit too much emphasis on the relationship between the manager and individual subordinates as the only thing a manager does. It's certainly the relationship that programmers have with their manager, but it ignores the reason why managers exist at all. In the end, managers are part of the translation layer between the company's top-level goal of acquiring customers and improving profitability and code that gets written and deployed.

The day-to-day responsibilities of a manager vary by company, but in essence can be boiled down to: Take priorities that are handed down from above -> apply those priorities as efficiently as possible to the team -> assist in execution.

The manager might be part of the discussion of priorities and clarify them before relaying them to their team, they may have quite a bit of freedom in interpreting the priorities, or they may literally just be a task-assigner-and-enforcer. The manager might also have technical leadership authority, architecture responsibility, or anything else, but these are all still in service of coordinating a team to produce the best output possible.

How a manager relates to their subordinates is important, of course, and the best managers treat their subordinates as individuals that have different needs. There's a responsibility to give them room to grow, keep them happy, and keep them productive as part of the job, but that alone isn't the job.


I've seen it work in a few ways; these are not mutually exclusive:

* You have someone whose job, or part of whose job, is to discover these kinds of internal organizational efficiencies and automate them. Something that organically comes up like this gets assigned to that person.

* Managers are not incentivized to stick to a rigid schedule or metrics based on an inflexible roadmap.

* Flexibility and autonomy are built into developers' schedules so they can work on things outside of just their rank-ordered task list.


These sound like good ideas. I guess I just don't work in such companies, and unfortunately I think that is the norm.

There are strict timelines that span months if not years, often optimised to a large extent. There is little room for spontaneity and organic projects to come up.


I've worked at companies where this sort of thing is encouraged, and others where I'd be afraid to even ask about the possibility of doing such a thing. Naturally it's a spectrum.

(Although, there is also the company that claims to encourage it, and then buries you in bureaucracy...)


RTS has always been my preferred competitive genre. Yes, basic build orders are pretty well mapped out. In Starcraft 2, for example, the first 1:30 or so for beginners, the first 3-4 minutes for intermediate players, and the first 8-9 minutes for pros have "standard" build orders.

But once you get past this, there are so many things to worry about - balancing tech versus units versus upgrades versus economy, micro, scouting, unit composition, harassing and defending harass... And then the meta layer, which is allocating your limited time and APM to those decisions and actions! Really challenging and rewarding to improve at, and the only "e-sport" I find interesting to watch.


This article, like Duke's book, has a solid premise, but fails to provide any actionable advice aside from a simple risk/reward framework. When I read the book, I was hoping there would be more information about how to properly handicap various situations, but there just wasn't.

Business and life decisions aren't as simple as calculating pot odds and outs. Anyone who has estimated a complex and unfamiliar programming task knows that the unknown-unknowns are the biggest part of any equation.


The largest fault in the practical (life) applications of probabilistic thinking is that estimating the odds in real-time is often impossible. Poker is a constrained environment where the odds are computable.

It's a useful framework for thinking in various situations, but it is almost never going to reduce to an equation that can tell you some objectively correct answer or decision.


If the main reason your effort estimations are off is that you don't allow for unknown unknowns, then that's easily fixable.

After all, although we don't know what the unknown unknowns are, the possibility of them is known. And in my experience, they tend to increase the required effort by, say, 1-30× depending on task complexity and familiarity.

So even in the most complex and unfamiliar of tasks, you can adjust the upper end of your estimate by 30× and there you go! Unknown unknowns accounted for in your effort estimation.

(Simpler or more familiar tasks require smaller adjustments to their upper end. Knowing how much adjustment is appropriate is a matter of deliberate practice.)
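The adjustment described above can be sketched in a few lines. The interpolation between 1× and 30× based on unfamiliarity is my own assumption for illustration; the comment only gives the 1-30× range:

```python
# Illustrative sketch: widen the upper end of an effort estimate by a
# multiplier between 1x and 30x, scaled by how unfamiliar the task is.
# The linear interpolation is an assumption, not from the comment.

def adjusted_estimate(base_days, unfamiliarity):
    """base_days: initial effort estimate.
    unfamiliarity: 0.0 (routine, familiar) to 1.0 (complex, novel).
    Returns (lower, upper) bounds in days."""
    multiplier = 1 + 29 * unfamiliarity  # 0.0 -> 1x, 1.0 -> 30x
    return base_days, base_days * multiplier

print(adjusted_estimate(5, 0.0))  # (5, 5.0)   routine task: no padding
print(adjusted_estimate(5, 1.0))  # (5, 150.0) novel task: 30x upper bound
```

The point is only that the unknown-unknowns padding becomes an explicit, tunable number rather than a vague feeling.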


There's a kind of vicious cycle here where people feel disconnected from their surroundings and communities, and attention-stealing apps make a business of providing a surrogate through parasocial relationships and infinite access to the spectacle and new ways to interact with it.

Parasocial relationships aren't real relationships, and since the spectacle replaces doing anything directly, there is no fulfilment and the cycle continues. There's an epidemic of people who can't hold a conversation or relate to anyone without mediating it through a common topic they've experienced via mass or social media. They're addicted to watching a simulation of having a life instead of actually living the one life they have.


This is the big one for me, even though I use no social broadcast apps (only 1:1 chats, news, Reddit, and Hacker News). Huge improvement, yet I still use it in the same parasocial way, to use your term. It sucks!


I like the feature of being able to select what equipment you have available. Lots of us lost access to a gym with COVID and don't have the ability to put a power rack in our abode. A simple concept, but great execution!

PS: Do you know where I can download more RAM for my computer? It's been a bit laggy recently.


You're looking for https://downloadmoreram.com.


Many companies implemented work-from-home policies very quickly, almost all public events (sports, standardized tests) were canceled, schools and sports gyms closed, and some companies that didn't go to remote work had employees stagger commute times so trains would be less crowded. This all started, iirc, while the number of cases was still in the tens (not counting the cruise ship).

It is also more accepted to wear face masks, as many people have pollen allergies so it's quite normal, and there is less direct physical contact between people overall.

I remember reading that the number of flu cases was about half that of a normal flu season because of the precautions everyone was taking, which points to a high percentage of people doing their part to help prevent the spread.


This is the big difference: people doing their part to help instead of, say, panic-hoarding or obstreperously insisting on going on as if nothing were happening.

