theevilsharpie's comments

A lot of professions have terms of art that can be interpreted incorrectly or be viewed as odd by laymen. "Individual contributor" is no different.

Maybe it sounds weird to you, but it's a well-understood term in the management profession.


It's a bullshit term, because a manager's contributions are also individual, and the kind of contributions (implied to come from non-individual contributors) also come from individual contributors.

ICs are given tasks for one person. Managers are given tasks to distribute to entire teams. By definition managers do not contribute individually because their output (from an org chart perspective) includes the output of everyone reporting to them.

See how the language led directly to you considering all contributions to belong to the manager? Yeah, newspeak is bad for the brain!

Your manager is on the hook for things you do. I can’t explain it more plainly.

Regardless of who is on the hook for what, one's contributions are their own. A manager makes contributions by performing actions, just like anybody else. This label, implying that a manager's contributions include those of their reports, is bogus: if a manager does absolutely nothing at all, should we still credit them with the contributions of their reports? Actual technical jargon adds precision to language; "IC" takes it away. It's nothing more than corporate newspeak à la 1984.

This is a pretty odd take, from my perspective.

If one of my direct reports came to me and said they were interested in working on, say... AI observability (replace with whatever interests you), and that was something I had any influence over (even if only indirectly), I'd be finding whatever way I could to connect my report with that kind of work.

It's all well and good to say that you're in control of your own career advancement, but that's not in conflict with working with your manager on supporting your career development. Even if they don't have anything to teach you, they will necessarily have some influence over your scope/area of work, so it only makes sense to work with them on aligning your work with your interests.


I believe everything you wrote about here is actually cooperation between two people, and, to the point of what I said, an example of you not actively getting in the way of your direct report's career progression.

> The manager's job is to find you impactful work that a) gets you promoted and b) challenges you in the ways you want or need to grow.

To me, the comment I responded to reads like a manager actively involved in the promotion of a direct report, and in finding a scope of work that the report might find challenging so that they grow. Your comment reads like a colleague helping out another colleague to the best of their ability. Which is exactly what I expect from a manager.


Puget Systems still sells a Serenity workstation[1], but it's essentially an off-the-shelf AMD Ryzen-based system, installed in a Fractal Design Define 7 Mini case, with a Noctua tower air cooler and case fans replacing the stock cooling. They have a variety of photos showing their customized fan setup in various configurations.[2]

It's a reasonably well-built system, but $3,500 USD is hard to justify for a basic system with an 8-core CPU, 32 GB of RAM, and no discrete GPU, especially given that it's using parts that you can just purchase and assemble yourself.

I know that prices of some components have increased significantly, but not by THAT much.

[1] https://www.pugetsystems.com/solutions/more-workstations/qui...

[2] https://www.pugetsystems.com/parts/photography/Additional-Co...


More recent revisit: https://www.phoronix.com/review/snapdragon-x-elite-linux-eoy...

TL;DR: It runs, but not well, and performance has regressed since the last published benchmark.


Tuxedo has so far been a German company relabeling Clevo laptops, which work out of the box pretty well (I might say perfectly in some cases) on Linux. They have done ZILCH, NADA, absolutely nothing for Linux besides promoting it as a brand. So now they've taken a Snapdragon laptop, installed Linux, and are disappointed by the performance... Great test, tremendous work! Asahi Linux showed that if you put in the work, you can have awesome performance.


Yes, but having to reverse-engineer an entire platform from scratch is a big ask, and even with Asahi it's taken many years and still isn't up to snuff. Not to say anything against the team; they're truly miracle workers considering what they've been given to work with.

But it's been the same story with ARM on Windows for at least a decade now. The manufacturers just... do not give a single fuck. ARM is not comparable to x86, and it never will be if ARM manufacturers continue to sabotage their own platform. It's not just Linux, either: these things are barely supported on Windows, run a fraction of the software, and don't run for very long. Ask anyone burned by ARM-on-Windows attempts 1 through 100.


> if you put in the work you can have awesome performance.

Then why would I pay money for a Qualcomm device just for more suffering? Unless I personally like tinkering or I am contributing to an open source project specifically for this, there is no way I would purchase a Qualcomm PC.

Which is what the original comment is about.


The original comment was "explicitly can't run Linux" which is explicitly not true. Not "it's not fully baked" or "it's not good", but a categorically unambiguously false claim of "explicitly can't run Linux" as if it was somehow firmware banned from doing so.


If you want to split hairs, sure. It does not help anyone who is considering buying a laptop.


I'm open to being wrong.

If someone wants to provide a link to a Linux ISO that works with the Snapdragon Plus laptops (these are cheaper, but the experimental Ubuntu ISO is only for the Elites), I'll go buy a Snapdragon Plus laptop next month. It would be awesome if the support were there.


I have used Terraform, Puppet, Helm, and Ansible (although that's not strictly declarative), and all of them ran into problems in real-world use cases that needed common imperative language features to solve.

Not only does grafting this functionality onto a language after-the-fact inevitably result in a usability nightmare, it also gets in the way of enabling developer self-service for these tools.

When a developer used to the features and functionality of a full-featured language sees something ridiculous like Terraform's `count` parameter being overloaded as a conditional (because Terraform's HCL wasn't designed with conditional logic support, even though every tool in this class has always needed it), they go JoePesciWhatTheFuckIsThisPieceOfShit.mp4 at it, and just kick it over to Ops (or whoever gets saddled with the grunt work) to deal with.
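For anyone who hasn't seen the idiom: since HCL has no if-statement for resources, you gate a resource's existence by setting its `count` to zero or one. A minimal sketch (the variable and resource names are illustrative, not from any real configuration):

```hcl
# Conditionally create a resource by abusing count as a boolean.
variable "enable_monitoring" {
  type    = bool
  default = false
}

resource "aws_cloudwatch_metric_alarm" "cpu" {
  # count = 0 means "don't create this resource at all"
  count = var.enable_monitoring ? 1 : 0

  # ...alarm arguments elided...
}
```

Note that this also changes how the resource is addressed elsewhere: it becomes a list, so references turn into `aws_cloudwatch_metric_alarm.cpu[0]`, which is exactly the kind of surprise that sends developers running.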

I'm seeing the team I'm working with going down that same road with Helm right now. It's just layers of YAML templating, and in addition to looking completely ugly and having no real support for introspection (in order to see what a Helm chart actually does, you essentially have to compile it first), it has such a steep learning curve that no one other than the person who came up with this approach wants to even touch it, even though enabling developer self-service was an explicit goal of our Kubernetes efforts. It's absolutely maddening.
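For illustration, here's a fragment in the typical chart style (names follow the usual `helm create` scaffolding; this is a sketch, not our actual chart). The "compile it first" step is rendering it with `helm template`:

```yaml
# templates/deployment.yaml -- Go templates generating YAML.
# To see the actual manifest: helm template my-release ./mychart
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount | default 1 }}
  template:
    spec:
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```

The whitespace-control dashes and `nindent` calls exist purely to keep the generated YAML's indentation valid, which tells you most of what you need to know about templating an indentation-sensitive format.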


> Sure, but lts often doesn't work for other use cases like gaming. For example the experience on lts with this year's AMD gpus will be extremely poor if it works at all.

I'm using Ubuntu 24.04 LTS with a Radeon RX 9070 XT (currently the most recent and highest-end discrete GPU that AMD makes), and it works fine, both functionally and in terms of performance.

> I run Arch and my 9070 xt experience was poor for several months after release. I can't imagine modern gaming on an lts release.

Maybe instead of imagining it, you should just try it?


> Arch being unstable is a myth.

Arch follows a rolling release model. It's inherently unstable, by design.


You are probably using some annoying pedantic definition of unstable. Most people mean it to mean “does stuff crash or break”. Packages hang out in arch testing repos for a long time. In fact, Fedora often gets the latest GNOME release before Arch does, sometimes by months.


> You are probably using some annoying pedantic definition of unstable. Most people mean it to mean “does stuff crash or break”.

English has a specific word for that: reliable.

Pedantry aside, having a complex system filled with hundreds (thousands?) of software packages whose versions are constantly changing, and whose updates may have breaking changes and/or regressions, is a quick way of ending up with software that crashes or breaks through no fault of the user (save for the decision to use a rolling release distro).


This isn't true in practice. It turns out that incrementally updating with small changes is more stable in the long run than doing a large number of significant upgrades all at once.

Have you ever had to maintain a software project with many dependencies? If you have, then surely you have had the experience where picking up the project after a long period of inactivity makes updating dependencies much harder. Whereas an actively maintained or developed project, where dependencies are updated regularly, is much easier. You know what is changing and what is probably responsible if something breaks, etc. And it's much easier to revert.


> Have you ever had to maintain a software project with many dependencies? If you have, then surely you have had the experience where picking up the project after a long period of inactivity makes updating dependencies much harder. Whereas an actively maintained or developed project, where dependencies are updated regularly, is much easier. You know what is changing and what is probably responsible if something breaks, etc. And it's much easier to revert.

Have you ever had situations where Foo has an urgent security or reliability update that you can't apply, because Bar only works with an earlier version of Foo, and updating or replacing Bar involves a significant amount of work because of breaking changes?

I won't deny that there's value in having the latest versions of software applications, especially for things like GPU drivers or compatibility layers like Proton where updates frequently have major performance or compatibility improvements.

But there's also value in having a stable base of software that you can depend on to be there when you wake up in the morning, and that has a dependable update schedule that you can plan around.


Debian -- probably not, but Ubuntu has numerous variants whose primary purpose is providing a different desktop experience, and a SteamOS-like variant would fit in perfectly with that.


That’d still come with the limits brought by the old kernels Ubuntu ships.

Which, as an aside, I think distros should advertise better. It must be awful to be sold on a distro only to find that it doesn't support your newish hardware. A simple list of supported hardware linked on the features and download pages would suffice, but a little executable tool that tells you whether your box's hardware is supported would be even better.


> the kernel might still be good but the userland is just awful in every way imaginable

The Windows kernel is also falling behind. Linux is considerably faster for a wide variety of workloads, so much so that if you're CPU limited at all, moving from Windows to Linux can net you an improvement similar to moving up a CPU generation.


Dial-up modems can transfer a 4K HDR video file, or any other arbitrary data.

It obviously wouldn't have the bandwidth to do so in a way that would make a real-time stream feasible, but it doesn't involve any leap of logic to conclude that a higher bandwidth link means being able to transfer more data within a given period of time, which would eventually enable use cases that weren't feasible before.
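The arithmetic makes the point concrete. Assuming a roughly 50 GB 4K HDR file (an illustrative size) over a 56 kbit/s dial-up line:

```python
# Back-of-the-envelope: dial-up transfer time for a 4K HDR video file.
# File size is an assumed example; 56 kbit/s is the V.90 modem line rate.
file_bits = 50 * 10**9 * 8   # 50 GB expressed in bits
modem_bps = 56_000           # bits per second
seconds = file_bits / modem_bps
days = seconds / 86_400      # seconds per day
print(round(days))           # prints 83
```

Nearly three months for one file, yet nothing about the transfer is impossible; it's purely a bandwidth problem, which is exactly the distinction being drawn here.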

In contrast, you could throw an essentially unlimited amount of hardware at LLMs, and that still wouldn't mean that they would be able to achieve AGI, because there's no clear mechanism for how they would do so.


From a modern perspective it's obvious that simply upping the bandwidth allows streaming high-quality video, but it's not strictly about "more bigger cable". Huge leaps in various technologies were needed for you to watch video in 4K:

- 4k consumer-grade cameras

- SSDs

- video codecs

- hardware-accelerated video encoding

- large-scale internet infrastructure

- OLED displays

What I'm trying to say is that I clearly remember reading an old article about sharing MP3s on P2P networks, and the person writing it was confident that video sharing, let alone video streaming, let alone high-quality video streaming, wouldn't happen in the foreseeable future because there were just too many problems with it.

If you went back in time just 10 years and told people about ChatGPT, they simply wouldn't believe you. They imagined that an AI that can do the things current LLMs do must be insanely complex, but once technology made that step, we realized "it's actually not that complicated". Sure, AGI won't surface from simply adding more GPUs to LLMs, just like LLMs didn't emerge from adding more GPUs to "cat vs. dog" AI. But if technology took us from "AI can tell a dog from a cat 80% of the time" to "AI is literally wiping out entire industry sectors like translation and creative work while turning people into dopamine addicts en masse" within ten years, then I assume that I'll see AGI within my lifetime.


There's nothing about 4K video that requires an SSD, an OLED display, or any particular video codec, and "large-scale internet infrastructure" is just a different way of saying "lots of high-bandwidth links". Hardware graphics acceleration was also around long before any form of 4K video, and a video decoding accelerator is such an obvious solution that dedicated accelerators were used for early full-motion video, before CPUs could reasonably decode it.

Your anecdote regarding P2P file sharing is ridiculous, and you've almost certainly misunderstood what the author was saying (or the author themselves was an idiot). The fact that there wasn't sufficient bandwidth or computing power to stream 4K video at consumer price points during the heyday of MP3 file sharing didn't mean that no one knew how to do it. It would be as ridiculous as me saying today that 16K stereoscopic streaming video can't happen. Just because something is infeasible today doesn't mean that it's impossible.

Regarding ChatGPT, setting aside the fact that the transformer model ChatGPT is built on was under active research 10 years ago: sure, breakthroughs happen. That doesn't mean you can linearly extrapolate future breakthroughs. That would be like claiming that if we develop faster and more powerful rockets, we will eventually be able to travel faster than light.

