Right, but if you say something essential in a meeting with 10 people and it has to percolate through five levels of management to reach the front lines, getting watered down along the way, the loss could be much greater, even millions.
Scale cuts both ways.
What matters isn't how big the meeting is, it's how important the material is, and how well presented it is.
I don't think I've ever heard a top leader say anything essential in such a meeting. The stuff they work on is not related to my job at all. It's all Gartner-level strategy stuff. In our company they do take time talking about it in large calls, but it's always boring and never relevant. And there's a lot of political spin you have to poke through to see the real message.
If I ever attend, I just put it on mute and look at the slides while I do some real work. That way my attendance gets registered and it doesn't stress me out later with too much stuff left hanging.
That percolation is also translation of what they say into things that are relevant at my level, like what we will be working on next year, or whether there's going to be a bonus or job losses.
I couldn't give a crap about the company's strategy as a whole and that's not my job anyway. Why should I. I'm not here because I believe in some holy mission. I just wanna do something I like and get paid.
Most of those meetings are pretty damn fluffy. No one goes back to their desk and does anything different because they've introduced new company values and the acronym is S.M.I.L.E.
But this meeting is a course correction for how they're using AI, which is a huge initiative. He'll be trying to sell the right balance of "keep using the technology, but don't fuck anything up."
Too cautious, everyone freezes and there's a slowdown[0]. Too soft, everyone thinks it's "another empty warning not to fuck up" and they go right back to fucking everything up because the real message was "don't you dare slow down." After the talk, people will have conversations about "what did they really mean?"
[0] If you hate AI, feel free to flip the direction of the effect.
Well, this is the main problem with AI right now, isn't it? How to use it successfully without having it fuck up.
How are they expecting some juniors to do this when the industry as a whole doesn't know where to begin yet?
Like that Meta AI expert who wiped her whole mailbox with openclaw. These are the people who should come up with the answers.
Ps I mostly hate AI but I do see some potential. Right now it feels like we're entering a fireworks bunker looking for a pot of gold and having only a box of matches for illumination.
What we need to know from management is exactly what you mention. Do we go all out and accept that shit will hit the fan once in a while (the old "move fast and break things"), or do we micromanage and basically work manually like in the old days? And they need to accept the risk either way. That kind of strategy is really business-leader work. Blaming it on your techs when it inevitably goes wrong is not.
Because the tech as it is right now is very non-deterministic. One day it works magic and the next day it blows up.
And yes that SMILE thing was a good example. Been in too many of those time wasters.
I think the most significant boundary is given by the question: "is there a plan to support new minor versions of Python?" It sounds like there is not.
There may be non-zero maintenance work happening, but a project that only maintains support for old versions and will never adopt new ones is functionally one that the ecosystem will eventually forget about. Maybe you call that "under active development" but my response is "ok, then I don't care whether it's under active development, I (and 99.9% of other people) should care about whether it's going to support new minor versions."
On the other hand, if you don't support new minor versions day one, but you eventually support them, that's quite different.
More specifically, the Scientific Python community through SPEC 0[0] recommends that support for Python versions is dropped three years after their release. Python 3.12 was released in October 2023[1], so that community is going to drop support for it in October 2026.
Considering that PyPy is only just now starting to seriously work on supporting 3.12, there's a pretty high chance that it won't even be ready for use before becoming obsolete. At that point it doesn't even matter whether you want to call it "in active development", it is simply too far behind to be relevant.
What's the point of a three-year window? It seems like a weird middle ground. Either you are in a position to choose/install your own interpreter and libraries or you are not.
If you can choose your own versions and care at all about new releases, you can track latest and greatest with at the very most a few months of lag. Six months of "support" is luxurious in this scenario.
If you can't choose your own versions, you are most likely stuck on some sort of LTS Linux and will need to make do with what they provide. In that case three years is a cruel joke, because almost everything will be more than three years old when it is first deployed in your environment.
I guess the point of a three-year window is so the ecosystem as a whole can eventually adopt new language features.
When you have some kind of ecosystem rule for that, you can make these upgrade decisions with a lot more confidence.
For example, in my project I have a dependency on zstandard. In 3.14, zstd support was added to the standard library (the compression.zstd module). With this ecosystem-wide three-year support cycle, I can confidently drop the dependency in three years and use the standard library from then on.
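In the meantime, a minimal sketch of how that transition might be bridged, assuming the project keeps the third-party package as a fallback on older interpreters (the `HAVE_STDLIB_ZSTD` flag name is made up for this sketch):

```python
# Prefer the stdlib module on Python 3.14+ (PEP 784); otherwise fall back
# to the third-party `zstandard` package. HAVE_STDLIB_ZSTD is a
# hypothetical flag name, not anything the libraries define.
try:
    from compression import zstd  # stdlib since Python 3.14
    HAVE_STDLIB_ZSTD = True
except ImportError:
    HAVE_STDLIB_ZSTD = False
    # import zstandard as zstd   # third-party fallback on older Pythons

print(HAVE_STDLIB_ZSTD)
```

Once the three-year window passes, the `except` branch (and the dependency) can simply be deleted.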
I feel like it just prevents the ecosystem from going stale because some important core library is still supporting a really old version, which in turn keeps smaller libraries from using new language features, lest they exclude a large user base still on that old version.
This is silly; there's no killer feature for scientific computing being added to Python that would make an existing PyPy codebase drop that dependency. Getting code validated takes a long time, and dropping something like PyPy would require re-validating the entire thing.
Unfortunately, Python does add features in a drip-drip kind of way that makes being behind an experience with a lot of niggles. This is particularly the case for the type annotation system, which was retrofitted onto a language that obviously didn't have one originally. So it's being added slowly in a very conservative way, and there are a lot of limitations and pain points that are gradually being improved (or at least progressed on). The upcoming lazy module loading will also immediately become a sticking point.
The phenomenon you're describing is why COBOL programmers still exist, and simultaneously, why it's increasingly irrelevant to most programmers.
The killer feature is the ecosystem: easily and reliably reusing other libraries and tools that work out of the box with other Python code written in the last few years. There are individually neato features motivating the efforts involved in upgrading a widely-used language and engine as well, but that kind of thinking misses the forest for the trees, unfortunately.
It's a bit surprising to me, in the age of AI coding, for this to be a problem. Most features seem friendly to bootstrapping with automation (ex: f-strings that support ' not just "), and it's interesting if any don't fall in that camp. The main discussion seems to still be framed by the 2024 comments, before Claude Code etc became widespread: https://github.com/orgs/pypy/discussions/5145 .
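For what it's worth, the f-string change alluded to is presumably PEP 701 (Python 3.12), which lets an f-string reuse the quote character that delimits it. It's purely syntactic, which is why it looks mechanical to rewrite automatically:

```python
# PEP 701 (Python 3.12+): an f-string may reuse its own quote character.
names = {"lang": "python"}

old_style = f"hello {names['lang']}"    # valid on every Python 3 version
# new_style = f"hello {names["lang"]}"  # SyntaxError before 3.12, so it's
#                                       # left commented out here

print(old_style)
```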
The alternative is when you run a script that you last used a few years ago and now need again for some reason (very common in research), and you end up spending way too much time making it work with your now-upgraded stack.
Sure, you could say you should have pinned dependencies, but that's a lot of overhead for a random script...
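One lower-overhead option is PEP 723 inline script metadata: a comment block at the top of the script that records the Python version and pinned dependencies, which tools like `uv run` or `pipx run` can use to rebuild a matching environment years later. A minimal sketch (the `requests` pin is just an illustration):

```python
# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "requests<3",  # illustrative pin; list whatever the script imports
# ]
# ///

# Everything above is plain comments, so the script still runs normally;
# PEP 723-aware tools parse the block and build a throwaway environment.
import sys

compatible = sys.version_info >= (3, 9)
print(compatible)
```

The nice part is that the metadata lives in the script itself, so there's no separate lockfile to lose track of.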
We can play that game - items like GIL-free interpreters and memory views are pretty relevant to folks on the more demanding side of scientific computing. But my point is this is a head-in-sand game when the community vastly outweighs any individual feature. My experience with the scientific computing community is that the non-pypy portion of it is much bigger.
I'm not a pypy maintainer, so my only horse in this race is believing cpython folks benefit from seeing the pypy community prove Things Can Be Better. Part of that means I rather pypy live on by avoiding unforced errors.
They appear to be talking about CPython releases, taking into account how long those versions continue to be supported (in the sense of security updates). That's irrelevant for PyPy, which obviously releases support for Python versions on a different schedule.
It's not irrelevant, because if SPEC 0 says that a particular Python version is no longer supported, then libraries that follow it won't avoid language or standard library features that that version doesn't have. And then those libraries won't work in the corresponding PyPy version. If there isn't a newer PyPy version to upgrade to, then they won't work in PyPy at all.
> I think the most significant boundary is given by the question: "is there a plan to support new minor versions of Python?" It sounds like there is not.
There is literally a Python 3.12 milestone in the bug tracker.
> my response is "ok, then I don't care whether it's under active development, I (and 99.9% of other people) should care about whether it's going to support new minor versions."
It sounds a lot more like your actual response is "I don't care about pypy".
Which is fine, most people don't to start with. You don't have to pretend just to concern-troll the project.
The health of the market is not a function of the total number of jobs alone, it's a function of the number of jobs and the number of people to fill them.
The number of total jobs going up year after year meant that there were increasing numbers of candidates, new people entering the field. If the job growth stops, there will still be candidates coming in. There will also be the new hires from the last decade moving into increasingly senior roles, and there won't be space for them (unless you devalue the meaning of "senior" even more).
So the year over year change matters a lot. If it plateaus, or even declines slightly, it's more than enough to make a terrible market.
YoY change in jobs is still probably not the best way to visualize overall market health. As you say, you also have to take into account the number of people to fill the jobs. To me, the least misleading statistic would be a graph showing unemployment and underemployment percentages over time. I'd probably also toss in graphs of the length of unemployment periods, as well as various wage percentiles (quintiles or deciles, maybe) over time.
I am a US citizen living in Portugal. I have the right to go to the US, live there, etc.
I recently went back for a funeral, and I had to spend a moment reminding myself that it would be fine for me.
For people who don't have my passport, I wouldn't feel comfortable telling them "it will be fine", though I would still tell a European "the odds of a problem are relatively low." But I couldn't in all honesty say "there's nothing to worry about."
This reminds me of an incident with a friend of mine. He flew to the US and entered through Texas. He is white with blond hair and he was wearing a t-shirt very reminiscent of the Confederate flag.
A security guard picked up his bag from the carousel, handed it to him, and very emphatically said "Welcome home, sir!".
Don't put words in my mouth, don't say silly things.
I'm well aware the color of your skin matters a lot, but your passport also matters, especially at the border.
You're better off with white skin and a US passport than with white skin and a British passport, but you're also better off with brown skin and a US passport than brown skin and a British passport and that's still better than brown skin and a third-world passport.
And yeah, even if you're a white man with a US passport, you still might end up shot by ICE if you're in Minneapolis (doesn't mean you're less likely to be targeted).
> I'm well aware the color your skin matters a lot, but your passport also matters, especially at the border.
The way things are currently operating, the border is probably the place you have to worry about the least, as it's staffed by CBP folks who have probably had training; it's the rest of the country, with ICE randos running around, that seems to be the worrisome area. Just ask the South Koreans.
> You're better off with white skin and a US passport than with white skin and a British passport, but you're also better off with brown skin and a US passport than brown skin and a British passport […]
Are we talking at the border or in the rest of the country? At the border, with CBP, a US passport would probably be best. In the rest of the country, with ICE, white skin and a British (or any) passport would probably be 'best'.
Good observation, I meant at the border. Your passport won't matter that much if you get stopped by a cop.
But also, look carefully at the comparisons I offered. I didn't include all the combinations, because I only was including comparisons that were obviously true without any room for ambiguity or nitpicking.
As you noted, a black citizen might be treated better at the border and worse during a traffic stop compared to a white foreigner.
>You're better off with white skin and a US passport than with white skin and a British passport, but you're also better off with brown skin and a US passport than brown skin and a British passport and that's still better than brown skin and a third-world passport.
Tell me you're not an American without telling me you're not an American.
I hate to say it, but to many (racist) Americans, brown skin < anything else ... and ICE has a disproportionate number of those people, because they deliberately hire them.
Probabilistically speaking, the entire thing is fine.
But seeing my engineer freak out about flying in a plane, despite passing Diff Eq and knowing the probability of a crash... Feelings/emotions do matter.
This is why populist demagogues win elections... ugh...
I have a US passport. I'm avoiding the US. ICE has already openly killed US passport holders. My Irish accent could get me in trouble or create a misunderstanding. Why risk anything like that?
Being a USC is no assurance. I've sat in immigration jail, cuffed and legs bound, where every other person but me was brown and spoke another language. It is rather bizarre when it happens, because none of them empathize with you: at the end of the day you know you have the right to enter and they are just fucking with you out of sadism, while the others are wondering if they'll be deported. Although generally after a shift or two they forget why they were fucking with you and you get released.
FedRAMP contracts require all inputs to be FedRAMP compliant, with a vetted BOM. Anthropic is no longer FedRAMP High, and because it has been declared a supply chain risk, all our FedRAMP contracts are now at risk, and any company with FedRAMP customers is at risk too.
Although, when I learned foundations of mathematics, every function was a set, and if you went looking, you'd get plenty of junk theorems from that fact.
Go is also winner take all. It's psychologically satisfying to have a big win, in the same way that it's psychologically satisfying to achieve a brilliant checkmate, but in any ordinary game or tournament (outside of certain gambling setups), a win by 1/2 point is the same as a win by 20+ points.
Yes and no. One could say this of any game with points where the margin of victory doesn't affect long-term outcomes (e.g. most ball games).
A win by 1/2 point or by 20 points suggests very different relative skill between the two players. Similarly, the custom of the stronger player taking white without komi suggests that the point differential matters.
I see what you're saying; this is true for any game scored win/loss. Even in gridiron football, if you're down by 4 points with time almost out, you won't kick a field goal (worth only 3 points).