Using "literally" figuratively or, more precisely, as a hyperbolic intensifier [0], is a tradition employed by notable English writers who lived and died long before you were born.
Those are some bounding leaps you made without much context. Are you in sales?
Kidding aside, my first reaction was: perhaps the occasions they were aware of their own influence were ones in which they didn't much care for the outcome. Or maybe a conflict of interest, like trying to win over a hiring manager for a position you know you'll hate.
I don't think cajoling or persuading others is inherently manipulative, but I can think of a lot of examples where doing so feels grimy.
> I don't think cajoling or persuading others is inherently manipulative, but I can think of a lot of examples where doing so feels grimy.
What I am trying to do is understand why sema4hacker, and some others, feel that influencing people is manipulative. So if you pop into the conversation and say that you don’t feel the same way that sema4hacker does, that doesn’t really help me understand sema4hacker’s perspective.
That’s the bounding leap here and I want to pull it apart, dissect it. The bounding leap from “I influenced somebody” to “I manipulated them”. I think there aren’t just raw, random feelings here, but some kind of rational thought that I want to understand.
I'm one of those people. For clarity, I'm referring to influence in the active 'How to Win Friends and Influence People' sense. To me, "influence" and "manipulation" seem like forms of persuasion with positive and negative connotations, like "public relations" versus "propaganda".
None of them are necessarily bad on their own, but the choice of words seems to depend on the perspective of whoever is perceiving or describing the influencer/manipulator and their motives. Influence can be seen as manipulation, and manipulation as influence.
For example, someone who dislikes an "influencer" is probably more likely to think of them as manipulating their audience into buying products. Convincing someone to join a religious group could be seen as positive influence by current members and as manipulation by an outsider.
The persuader's own purpose is not necessarily clear either. There's likely a mixture of factors motivating them to persuade, including some to their own benefit. People, especially those who think they are doing good, will also generally grade themselves on a bit of a curve and rationalize their actions as positive, especially in moral and emotional contexts.
I’ll see your pedantry and raise you ... more pedantry. The sentence may be a bit clunky, but there’s nothing grammatically wrong with it. And you’re leaving out the first sentence, which frames the comparison:
> ASML gross revenue was 28B€ in 2024, and their net income was 7.5B€. While 1.3B€ (the amount ASML invested in this 1.7B€ fund raise) is not pocket change, it is also an amount that ASML can not afford to lose.
Worded another way:
> ASML had a healthy margin of 7.5B€ on 28B€ in gross revenue in 2024. 1.3B€ isn’t a huge chunk of this, relatively speaking, but *it’s also an amount that ASML can’t afford to lose.*
There was nothing in the comment you replied to suggesting that it was grammatically wrong: "The sentence is framed like a contrast but then instead it says the same thing twice." If anything, it suggests it's semantically wrong.
Language exists to convey shared concepts. Don’t you think the sentence means “it’s a lot of money for ASML to risk losing”? And wouldn’t it have gone unmentioned if the amount were inconsequential or small?
I’m Tim, a developer and writer with broad experience across the full stack, from design to Python-based backend systems, including asynchronous operations, concurrency, and integrated LLM workflows. My recent work involves orchestrating and managing processes with tools like Meilisearch and the OpenAI/Anthropic APIs, along with advanced metaclass and typing patterns in Python. I also have a strong front-end background, keen design sensibilities, and proven writing and technical documentation skills—my writing portfolio can be viewed on my LinkedIn profile. I’m currently looking for work that challenges me technically and allows me to communicate complex ideas through writing and design.
Location: Brazil (US Expat)
Remote: Preferably
Willing to relocate: Never say never
Technologies: Python, asyncio, Pydantic, LLM integrations (OpenAI, Anthropic), Meilisearch, WebSockets, Redis, Postgres, REST/HTTP APIs, Front-end web development, React, Next.js, HTML/CSS, Tailwind, UI/UX, design, technical & content writing
Résumé/CV: https://www.linkedin.com/in/kaechle/
Email: [email protected]
This is the way. Blending two or more styles also works well, especially if they're on opposite poles, e.g. "write like the imaginary lovechild of Cormac McCarthy and Ernest Hemingway."
Also, wouldn't angry Charles Bukowski just be ... Charles Bukowski?
For the past year and a half, my passion project has been developing a comprehensive Python framework designed for building modular, asynchronous, and event-driven LLM applications. It features a component-based architecture encompassing state and context management, AI-composable workflows, and code parsing and analysis. The goal is to create an intelligent system with a persistent context/memory layer built dynamically via search while providing a natural-language interface for workflow composition. I'm eager to explore AI engineering opportunities that align with this trajectory.
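The event-driven, component-based shape described above can be sketched roughly as follows. This is a minimal illustration with hypothetical names (`EventBus`, `remember`), not the framework's actual API: components subscribe to named events, handlers run concurrently via asyncio, and a shared context dict stands in for the persistent memory layer.

```python
import asyncio
from collections import defaultdict
from typing import Any, Awaitable, Callable

Handler = Callable[[Any], Awaitable[None]]

class EventBus:
    """Minimal async event bus: components register handlers per event name."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Handler]] = defaultdict(list)

    def subscribe(self, event: str, handler: Handler) -> None:
        self._handlers[event].append(handler)

    async def publish(self, event: str, payload: Any) -> None:
        # Run all subscribed handlers concurrently.
        await asyncio.gather(*(h(payload) for h in self._handlers[event]))

# Toy stand-in for a persistent context/memory layer.
context: dict[str, Any] = {}

async def remember(payload: Any) -> None:
    # A "component" that appends incoming events to shared memory.
    context.setdefault("memory", []).append(payload)

async def main() -> None:
    bus = EventBus()
    bus.subscribe("user_message", remember)
    await bus.publish("user_message", "hello")

asyncio.run(main())
print(context["memory"])  # ['hello']
```

A real framework of this kind would layer state management, workflow composition, and LLM calls on top of a dispatch core like this; the sketch only shows the decoupling that makes components independently composable.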
Great points. We're pattern-matching shortcut machines, without a doubt. In most contexts, not even good ones.
> When I go to the doctor, I don't study medicine first, I trust the doctor. Trust takes the place of genuine understanding.
The ultimate abstraction! Trust is highly irrational by definition. But we do it all day every day, lest we be classified as psychologically unfit for society. Which is to say, mental health is predicated on a not-insignificant amount of rationalizations and self-deceptions. Hallucinations, even.
That's just it. We're not unique. We've always been animals running on instinct in reaction to our environment. Our instincts are more complex than other animals but they are not special and they are replicable.