It would be interesting to know how this plays out for other retailers, though, and how much of it is down to what Walmart sells.
I'm confused by the comment that it failed because it forced single item purchases. Most of my "ecommerce" use is researching and buying one item at a time.
I think in large part the average Walmart consumer does not shop like the average Amazon consumer. They load up a big cart over time rather than pull the trigger on lots of smaller, convenience-driven purchases. So Walmart is going to view a smaller cart size as a potential failure primarily because their operations are not optimized the same way that Amazon is.
It's a failure for e-commerce vendors because it's a spectacular success for shoppers, and the relationship between sellers and buyers is almost always adversarial.
I wonder how much of this is down to the massive amount of new repos and commits (of good or bad quality!) from the coding agents. I believe that the App Store is struggling to keep up with (mostly manual tbf) app reviews now, with sharp increases in review times.
I find it hard to believe that an Azure migration would be that detrimental to performance, especially with no doubt "unlimited credit" to play with?
You can provision Linux machines easily on Azure and... that's all you need? Or is the thinking that without bare-metal NVMe MySQL it can't cope (which is a bit of a different problem tbf)?
I think part of the issue is that Azure has been struggling to reliably provision Linux VMs. Whether that's due to increased load, poor operational execution, or a combination of them, it's hard for anyone on the outside to know.
It depends though. At least in London a lot of cycleways were made by removing bus lanes and replacing them with high quality segregated cycle lanes.
This has led to a big increase in percentage terms in cyclists in London, but a fairly significant decline in bus passengers.
I think roughly 300m/yr cycle journeys were added, but bus has lost 500m pax/yr (mainly because of increased congestion making them less and less attractive). Note this isn't all down to bus lane removal, but it's a significant part of it.
FWIW I recently switched full time to Linux and have had absolutely 0 problems with GNOME, Wayland and Fedora, though I am using an AMD GPU.
wl-copy works fine, askpass works, copy and paste works, screen sharing with Google Meet works, drag and drop works. Using an iPhone as a webcam works, as does recording my screen.
Most importantly, using multiple monitors with fractional scaling works perfectly. AFAIK this is not possible to do well (at all?) on X11, which is a complete show stopper for me.
If anyone's reading this and sitting on the fence, I would really give Fedora a go. I've found it so much more polished than Ubuntu, and loads of things which didn't work there work out of the box on Fedora (at least compared to 24.04 LTS).
Yes! Per-monitor fractional scaling on Fedora/Wayland finally allowed me to switch my default OS on my laptop from Windows 11 to Linux.
I had to give up on my previous attempt a couple years ago with Linux Mint/X11 because it was an exercise in futility trying to make my various apps look acceptable on my mixed DPI monitor setup.
Linux Mint with Wayland clearly was not getting a lot of attention at the time, and the general attitude when I looked up bugs seemed to be "just don't use Wayland", but maybe the situation has improved by now. It was also kinda off-putting reading Reddit/forum comments whose attitude towards per-monitor DPI scaling on Linux in general was basically "why would anyone need that" when it's been a basic Windows feature for a decade+.
Fedora on the other hand was literally just plug-and-play and has been very enjoyable to use as my daily driver.
What a pox that such an old, slow-moving distro as Mint is somehow people's first port of call. I don't know how this happened, how Mint rooted itself so well (in 2006 it was fresh!), but this perception that the slowest-moving, oldest, dustiest possible Linux is the best choice is exactly the belief Microsoft and Apple would most want to spread.
If you are going to jump into Linux, don't sell yourself the weird delusion that using ancient-ass systems is somehow going to be better for you.
In my experience Mint still has the smoothest process for Nvidia drivers, making it the first suggestion for gamers.
And Snap causes some embarrassing bugs in Firefox in the Ubuntu family, so people thinking "I want an Ubuntu-like OS but without Canonical's mistakes" still gravitate to Mint.
I've always been stuck on the deb/apt system because it seems to have the best support but I probably need to move on at this point. It just doesn't work that well.
They were one of the few distros at the time which had a sane out-of-the-box desktop experience for non-tech people, back when Ubuntu was pushing (the original) Unity and GNOME was still in the early days of 3.x. Drivers and codecs were easy to install as well, generally speaking, without having to hit the forums or ask your tech family member for help.
Sorry if I sold myself a delusion about the Linux distro I casually tried but I've been jumping on and off Linux for 20 years at this point and didn't get the memo it was outdated until later on. The significant change here was being able to daily drive it on my laptop instead of living in a VM or secondary dual boot.
In the past Ubuntu was always my go-to, but the snap thing was irritating, and I'd always used some kind of Debian variant, so after cycling through all the X-buntus I said hey, why not this Linux Mint I keep hearing about? Plus, Cinnamon looked decent in screenshots, but it turned out GNOME with a few tweaks ended up much closer to my ideal than even heavily customized Cinnamon.
That's basically what I heard ten years ago from individuals (and even universities) for why they switched to Mint... but even now, if you ask Perplexity for a "debian-based distro that's not ubuntu", Mint is the second option.
I did a bunch of distro hopping in the 90's but locked onto Debian (mainly testing, now largely unstable) not long after. I'm still just not sure what compels people elsewhere. Especially now: the Debian installer was vicious if you were a newbie, but I hear it's pretty ok now.
This is largely a me problem! I don't understand what the value-add of other offerings is. It's unclear what else would be good or why. Debian feels like it really has what's needed now. Things work. Hardware support is good. Especially in the systemd era, so much of what used to make distros unique is just no longer a factor; there's a common means for most of a Linux system's operation. My gut tells me we are forking and flavoring over not much at all. Aside from learning some new commands, learning Arch has been such a recent non-event for me. It feels like we are having weird popularity contests over nothing. And that amplifies my sense of: why not just use Debian?
But I also have two-and-a-half-plus decades of Linux, and my inability to differentiate and assess with beginner's eyes is absolutely key to all this. I try to ask folks, but it's still so unclear what the real motivations are, and more, what the real differences are.
The real differences are things that maintainers do. Like how... OBS I think? ...had a bunch of people come in with issues that only existed in the Debian version. Debian software has a bunch of patches, Arch software has far fewer and sticks closer to upstream, other distros will vary. Derivatives also made nonfree easier to set up, which was especially important when MP3 was still encumbered. Nowadays Debian still has the reputation of having old, outdated versions of software, which is going to be hard to shake, especially considering stability is meant to be their main draw.
It's really simple: then I'd have to use GNOME or KDE or some other thing that is on Wayland, which I don't use. AwesomeWM, Xmonad, Fluxbox, OpenBox and many other interfaces just aren't on Wayland and have no intention to be, because it just doesn't do what they want well and they don't feel like maintaining two versions.
The real issue with Wayland and “setting back” isn't what the article says; it's that something like 15 years were spent just getting Wayland to semi-decent feature parity with X11, during which time development on X11 came to a standstill. That time could've been used to improve X11, and it's still not real feature parity.
And part of it was just the devs refusing to believe that people needed those features. I talked with them around 2010-ish about some of the things they cut out, claiming that no one ever used them: things related to mouse acceleration that are pretty essential to video games and image editing, certain forms of screen capture, and various things with fonts and color management that are essential to many professionals. They actually believed that no one used those things. Eventually they came around and added many of them back in, in doing so basically voiding many of the initial security promises again; by then, so much time had been sunk into what isn't enough of an improvement to justify the time spent on it.
You're lamenting the use of time by people, but you are not those people's boss.
People work on what they want to work on. There is no rule that people who worked on Wayland (and I happen to think they did a great job) would have worked on Xorg instead, or that the original motivations for building Wayland are invalid.
Well, that's the issue with free software, isn't it? In proprietary software, people work on what their boss tells them to work on, which is decided by market research based on what people want.
Others said in this thread that Wayland in many ways was more so trying to solve issues for developers than for users and that's true.
Market research is far more often used to measure what will make or save the most money that they can get away with. Nobody cares about what the plebs want.
I go back and forth between Fedora and Ubuntu a lot, and once you get past the snap/flatpak and the apt/dnf differences everything feels the same.
I usually format my Fedora disk ext4, add flatpak to my Ubuntu installs, manually override the fonts, add dash-to-panel.. the resulting experience ends up identical.
Separate scaling fractions on separate monitors doesn't work under X. Well, I lie: it does work under zaphod mode, but no applications other than Emacs support that.
Heh. Just today I started fooling around with a new X11 setup on a barebones Ubuntu Server VM with just xorg, xinit, xterm, Emacs and i3.
It's pretty neat learning about IOMMU groups and doing NVMe passthrough with KVM/QEMU, and also messing around with the (new to me) SPICE/virgl 3D acceleration. I was impressed I was able to play YT videos in the Ubuntu Virtual Machine Manager with a hand-built mpv/ffmpeg + yt-dlp setup without dropping too many frames or serious glitches. Huzzah for libgl1-mesa-dri.
After that, I rebooted the host OS, jumped into the UEFI boot menu and booted the "guest" NVMe disk directly with my actual GPU, and it still worked. It's quite a trip down memory lane, typing 'startx' and having both :0.0 and :0.1 displays. That muscle memory from the 1990s is still going strong.
I miss the simplicity of how I remember XFree86 running on the alt-f7 terminal, and having alt-f1 through alt-f6 for my own needs... a second X on alt-f8 when I got 64MB of ram. ctrl-alt-backspace to quickly kill X and restart it (within a few seconds on a 486).
Then, gradually, these things disappeared from Linux for no good reason; you can still configure them, but someone decided in their infinite wisdom that some of the most compelling features just weren't really needed anymore, in favour of rewriting the XDM again and again until now there are too many of them and none of them are really any better than what we had in the 90s.
I had to put that in my .xinitrc, because like you I really missed that feature. I also made a .Xresources file and had to remember that xrdb was a thing. Good times, good memories. I also remember the jump to 64MiB of memory, it was a big deal! I think I got a Gravis UltraSound right around then too.
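For anyone else chasing the same nostalgia, a minimal sketch of what I mean (exact file locations vary by distro; the xorg.conf route works too):

```shell
# Re-enable Ctrl+Alt+Backspace to zap the X server (disabled by default in
# modern Xorg). One line in ~/.xinitrc, before the exec of your WM:
setxkbmap -option terminate:ctrl_alt_bksp
```

The equivalent persistent form is `Option "XkbOptions" "terminate:ctrl_alt_bksp"` in an InputClass section of xorg.conf.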
I stopped my nostalgia journey short of pimping out my console (sadly now only fbcon works, and the old vga modes are a legacy BIOS thing I think) with fonts and higher resolution, and enabling in the kernel the Alt+SysReq+g key for dropping into the kernel debugger, but there is always tomorrow!
Running X11 on Ubuntu 22.04 - I have a 2560x1600 main at 150% scale and a 1920x1080 secondary at 100% scale. Essentially they're the same virtual size side-by-side. This _only_ works on my nVidia GPU...
I moved away from desktop Linux a few years back after getting a new development laptop with a HiDPI screen and running into fractional scaling issues. Windows WSL2 was just getting really good at the time, so I moved over on my desktop and laptop.
Nice to hear the fractional scaling situation is better now. Tempted to try it out, but... man, Windows (Pro) is just such a nice desktop and host now, and I can still develop in "linux"...
Just gonna jump in with the alternate view: if you like the Windows desktop but not Windows, KDE is just amazing now. I didn’t enjoy it much in the KDE 3 and 4 days, but I’m loving Plasma 6.
My experience lately has been similar. Most things work well now.
But, I think the article has some valid points about how long it's taken to get even this far. And it just kinda sucks that some things are still broken or don't have alternatives (the #1 thing I miss right now is Barrier (Synergy) for using my macbook from my linux desktop). HDR gaming on linux is possible thanks to Valve but it's still nowhere near as simple as plugging in your HDR display and toggling one switch.
And it's been rough getting here, and it seems like there are still some things that are slow and hard to get right. I'm not a display protocol dev, so I don't really have educated opinions about the protocol. But I know it's been a rough transition relative to other projects I've adopted even when there was major pushback (systemd springs to mind).
No I do get that, it's definitely been a slow and painful migration. But just having a very insecure X11 "forever" with no fractional font scaling wasn't a long term plan either imo.
The amount of time it’s taken to get here I think is THE fair criticism.
They had an absolute ton of work to do to design it and get it all running. It was never going to be fast. And it’s not like they could order any of the desktop environments to do what they want.
There have always seemed to be commenters who were annoyed it didn’t come with practically every feature from X, plus 30 more, from the day it was announced.
> the #1 thing I miss right now is Barrier (Synergy) for using my macbook from my linux desktop
It's admittedly tough to keep up with all of the forks that have happened, but the current iteration, Input Leap, has worked for this for me for years now.
That's not even the most recent iteration, there's also Deskflow now which is maintained by the main Synergy developer and a very active independent dev. Works fine on Wayland afaik. Also has a wiki page with the history of all the forks!
I tried that recently and it didn't seem to work with my particular setup (Sway window manager). Or at least, the tray app won't open any windows to see if it's enabled/disabled/configured properly.
If the Python 2 to 3 migration took a decade, isn’t it reasonable for a display server migration to take even more time to stabilize?
Especially given:
(1) The (relatively) fragmented reality of Linux distros and desktop managers. I am sure that such a migration could have been executed faster had the Linux desktop world been more centralized like Windows or macOS.
The python 2 to 3 situation was a similar colossal mistake of honestly incompetent developers who really enjoy programming in their free time who don't understand that time is money for most people.
By comparison, Rust with its edition system understands this.
But this is the major issue. They don't understand that even if Wayland had feature parity with X11, the simple fact that it works differently means that if I were to migrate, I would have to rewrite a tonne of scripts that hook into X11, scripts that grew organically over time and that I've now become dependent on for my workflow. It has to be substantially better and have killer features for me to switch. Yes, fractional scaling per monitor is that killer feature for many, but not for me; and the simple fact that XMonad runs on X11 and not on Wayland is a killer feature for others.
Not to mention that Python 3 on its own was pretty much functional and Python 2 quite stable; the major issue was migrating/porting all the legacy code over to Python 3. Hence bridges like six and 2to3 that at least attempted to smooth the transition by letting both coexist for a time.
With Wayland they don't seem to be entertaining this optionality at all, with Wayland itself not yet feature-complete enough to stand alone, and the attempts to bridge, like XWayland, came well after the fact and push a one-way path with no coexistence.
As a result, a whole lot of friction and surprises in UI functionality have been introduced. So yeah, at a time when the presentation layer should be a boring afterthought, it is a too-time-consuming part of a Linux setup and daily usage.
> The python 2 to 3 situation was a similar colossal mistake of honestly incompetent developers who really enjoy programming in their free time who don't understand that time is money for most people.
It’s been years but even then, this sincerely cannot be repeated enough.
Indeed. And what many seem to fail to notice is that at its core it's exactly the same mistake being made all over again. A mistake that I've seen so many times over and over again, increasingly commonly in recent years, which can be summed up thusly:
"I want to make some incompatible changes in my thing that is being widely used by (say) thousands or millions of people. I could spend a bunch of time ensuring I'm backwards compatible as much as possible, or doing a compatibility layer which would make the transition seamless for most, but that's not sexy work, and it would be something I would have to maintain (also not sexy), and it would take me (let's say) 1000 hours to do. Instead, I'll just insist that each and every one of those thousands/millions of people put (say) 100 hours each into adapting to what I want to do"
It's disrespectful of your users. It devalues their time. It says that your (say) 1000 hours is more valuable than (say) a million people putting in (say) 100 hours each. And it's inefficient - wasting the time of many to save the time of a few.
It also undermines their trust in you: if you're willing to force them to spend a bunch of time re-writing something that already works just to suit your whims, what's to say you won't do it again next year when you have a newer and even shinier whim?
Now someone will jump in to argue about how "FOSS developers are volunteers, they do it for free, you can't expect them to do the boring stuff". Which is false, false, and false: You'll find that for a large number of these projects (like say gnome and wayland) the core developers are indeed professionals who are paid (by e.g redhat) to work on it, even if they started off as volunteers. And the boring stuff is part of the job, too, otherwise don't call yourself a software engineer.
If you're working on a widely-used piece of software, then the users should be your god.
> They don't understand that even if Wayland had feature-parity with X11
See, I don't think you're giving them enough credit. Or is it too much credit? These are not stupid people. I say they do understand this, they just don't care about your time enough to do anything about it.
> Indeed. And what many seem to fail to notice is that at its core it's exactly the same mistake being made all over again. A mistake that I've seen so many times over and over again, increasingly commonly in recent years, which can be summed up thusly:
Yes, just like the idea of “We will start anew because the codebase is a mess and this time we'll make it clean.” Ten years ago, whenever I saw something like that, I would've said that person has zero actual experience working as a programmer. I've seen teams go through this multiple times, but in the end, the new codebase, once all the features are added, is just as much of a mess as the old one, at best a slight improvement. People who say this just underestimate the scope. But these people have experience. They're just optimistic and full of wishful thinking maybe?
> See, I don't think you're giving them enough credit. Or is it too much credit? These are not stupid people. I say they do understand this, they just don't care about your time enough to do anything about it.
I disagree. I've talked with many of those people both online and in real life who don't understand that for most people time has value. They really just don't get it. They're not stupid; they just don't really think about it that way and don't have much to do in their lives aside from this one specific hobby.
> 10 years ago, whenever I saw something like that I would've said that person has zero actual experience working as a programmer
That, or maybe they've just never really tried the whole "I'll start from scratch and get it right this time" thing and discovered for themselves how misguided it is.
It's also really easy to tell yourself "but this time I'll get it right!". I'm still guilty of believing it sometimes.
> But these people have experience. They're just optimistic and full of wishful thinking maybe?
Or perhaps simple myopia and lack of long term planning? I don't know.
I feel like it's probably easy in a project like this to lose sight of what regular users want, or to feel like you know better and so should be able to dictate to them what they should want. And because you've had your head buried in the project for a decade, dealing only with team members who share all the same opinions, you're shocked when people don't find "but if you ever get a HDR monitor it might be marginally better" to be a compelling reason to have to re-write all the scripts they've been building and relying on for 20 years.
> I disagree. I've talked with many of those people both online and in real life who don't understand that for most people time has value. They really just don't get it. They're not stupid; they just don't really think about it that way and don't have much to do in their lives aside from this one specific hobby.
Yeah you might be right. I don't have anything to back up my opinion, I was really just trying not to assume they're stupid. And I feel like I have run into people like this.
I’ve heard reports of issues on Windows where you often have to switch between HDR and non-HDR modes to get the colors or brightness to appear correctly. Something about tone mapping, I think?
I don’t know if that’s fixed in newer versions or if it has to do with specific drivers or what. But it didn’t sound like it worked very well.
It's pretty funny to see "copy/paste works" and "drag and drop works" presented like some kind of win. That's the absolute baseline for a desktop OS.. since at least Windows 3.x.
Windows, bloated and ad-riddled as it is now, never had to be defended on the basis that basic GUI behavior still functioned.
But this year surely will be the year of the Linux desktop!
I am on the latest Fedora GNOME, and tab switching between windows randomly gets stuck. It's so annoying I had to go back to X11, even if it handles the high-DPI laptop badly; the alternative being to reboot randomly in the middle of work.
I already have stuff that works out of the box (based on 24.04 as it happens), and from what I've seen of GNOME Desktop I really just don't like the design — and its maintainers generally just impress me as insufferable people any time a story comes up.
Overall I think it's much better that options exist. I'm even willing to tolerate GUI inconsistency across the Linux ecosystem in exchange.
Yeah? Then try to drag a tab out of Firefox or GNOME Files upward, good luck. Then check how "awful" the Blender 5.1 titlebar and window frame integrate with GNOME. Have fun trying to make Deskflow/Synergy work on GDM.
Here it just works to the left or right; tried multiple distributions (Fedora, Arch, CachyOS, NixOS), no way. Perhaps an issue with NVIDIA drivers; running a 5090 here.
Decades of using Linux desktops and nothing has ever changed hahaha. Users still complain things don’t work. Fans still say “oh what a first world problem”.
Like a little 2004 era time loop. People still installing Dapper Drake. Haha.
In the time that people have been talking about the Wayland future to today where they’re still talking about it I have lived in 3 continents, met my wife and had a child, and experienced a few huge technology shifts. Truly amazing. I get this blast of nostalgia every time this discussion happens. Like looking through a bubble and seeing my teenage self.
Fully agree, same here. It's just sad to keep watching this, because now, after approx. 15 years, I started to evaluate the Linux desktop again and it failed again.
A lot of professional software like Maya, Houdini, Unreal, etc. that used to run great on Linux/X11 now sucks on Wayland. Some are hyping Linux for its subpar gaming compatibility, while for gamedev Windows is still required. In 15 years I'll try again, but by then I'm probably too old for this.
When there's people taking the complaints as attacks rather than feedback on how to improve, it's no wonder we keep seeing the same complaints.
I just don't get it myself. When users complain about the software I've released, I look to see if there's reasonable changes I can make to alleviate their issues.
I think it’s more like they gave up on Perl 6, admitted it was a mistake, and renamed all that work like it wasn’t related to Perl. Where it languishes in mostly obscurity.
It's just one of the many reasons why the "Year of the Linux Desktop" will never see the light. Linux is doomed to run mainly headless on hardware in a dark chamber. As always, just when the Linux desktop is starting to take off, somebody comes up with a great new self-destructive idea (Wayland); it has always been like that and probably will never change.
Wayland is why the Steam Deck is a product. Gamescope, the compositor it uses for all the features that make it compelling to buy, uses Wayland and its features heavily.
Desktop Linux was never going to go anywhere stuck on X. Wayland is happening; it's currently going through its trial by fire, and in the end (and for a lot of people, right now) it'll be better for it.
It's easy to say Wayland has been around forever and barely progressed, but for me it's pretty easy to see, based on the massive number of fixed issues and new features being added to Wayland, that we're no longer on the horizontal part of the curve. It seems a lot of people have become blind to its exponential growth. Also the growth of desktop Linux adoption, which is real and happening, in spite of 'Wayland setting the Linux desktop back by 10 years'.
Gamescope is custom software built by Valve, and all the games run under X (via XWayland). I'd suspect you could build similar functionality without Wayland (for example, a custom X server talking directly to the kernel DRM).
I'd wager that in an alternate universe where Wayland didn't have all the mindshare, the Steam Deck would still be a product (unless some butterfly effect nixed it).
Better than trying to make a point and failing to make it. And if I didn't, at least I tried to be funny, as that counts for something; your comment is just noise.
My comment is a fact: without the Windows games ecosystem, built by developers living and breathing on Windows with Windows development tools, Proton has nothing to play, even if many Windows games are developed on top of cross-platform engines.
Unfortunately, Valve failed to make native Linux gaming a reality; not even game studios targeting the Android NDK bother, and it has the same 3D and audio APIs as GNU/Linux.
> Unfortunately Valve failed to make native Linux gaming a reality
Who cares? What would that actually achieve, and how would they have practically achieved it anyway? Use their store platform to force or coerce developers? Hold a gun to developers' heads?
Valve don't owe anyone shit, and neither did the PC-compatible BIOS manufacturers, nor anyone else who creates a clean-room implementation of a pre-existing API. Getting Windows software working outside of Windows is a net good for consumers and developers.
Is anything around forever? What kind of argument is this?
Proton works by wrapping Windows calls to Linux equivalents, which have been improving and becoming more robust as a result of this work. If the Windows game ecosystem collapses (How? When? It's literally never been more popular) then those equivalent APIs can be targeted instead. Meanwhile, the absolutely massive PC back catalogue, the platform's greatest strength, remains playable.
I am skeptical of the "Year of the Linux Desktop" as well, but saying that it won't come because of problems like that is crazy. Windows has plenty of bugs of much higher severity, and they don't seem to stop people from using it. People just use what they're used to.
The goal is to produce a stable workstation OS, because that's who pays the bills. That means Linux 'enthusiasts' who want the latest and greatest stuff have signed themselves up to be eternal beta testers. That part will never change because it's largely intentional.
Nah, that’s irrelevant. The year of the GNU/Linux desktop won’t materialize because it’s not a platform for apps: it’s balkanized, has no backward compat save for Win32, and Flatpak/Snap are awful crutches. ChromeOS and Android will eat its lunch.
Nope, I stopped using Apple devices in early 2019. I can't accept their attitude anymore, of deciding what I'm allowed to install on my hardware. macOS is a bit more open than iOS, but is every year shifting more and more into the same direction.
Or just install Windows, install Deskflow, do my job, earn my money, pay my bills, go on vacation, take a sun bath, and stop using an OS developed by people wearing tin foil hats.
"One thing I really suspect we'll see a lot more of is much more generous rate limits at 'off peak' times - likely to be early morning UTC - as there is no doubt a lot of "idle" compute sitting there"
I strongly suspect this will end up with the opposite happening - where peak tokens are far more "expensive" (whether that be thru usage limits or API costs) than off-peak.
PS: Anthropic have managed to improve reliability but are absolutely shredding Opus tok/s at peak times. It absolutely crawls on the web (maybe 2-3 tok/s?) and I believe that on non-Max plans it's also incredibly slow in Claude Code.
“I strongly suspect this will end up with the opposite happening - where peak tokens are far more "expensive" (whether that be thru usage limits or API costs) than off-peak.”
This only happens once/if competition eases up. Until then, it’s a race to the bottom
It's interesting because I'm the same, in that I use Windows basically as a WSL2 host and not much else. I use macOS a lot.
_However_, I still find the Linux desktops that I've tried too buggy. While the hardware support is incredible (compared to Windows out of the box), I constantly hit bugs with fractional scaling on multiple monitors. I'm hopeful that Ubuntu 26.04 may finally iron out the last problems with this. The latest version of Fedora I installed did fix all this, but I'm far too used to Debian-based OSes.
Definitely. If you're doing regular queries with filters on jsonb columns, having the index directly on the JSON paths is really powerful. If I have a jsonb filter in the codebase at all, it probably needs an index, unless I know the result set is already very small.
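For the sake of illustration, a minimal sketch of the two usual options (table and column names here are made up):

```sql
-- Expression index on one JSON path, for equality/range filters on it:
CREATE INDEX idx_orders_status ON orders ((attrs->>'status'));

-- GIN index with jsonb_path_ops, for containment (@>) queries:
CREATE INDEX idx_orders_attrs ON orders USING GIN (attrs jsonb_path_ops);

-- Filters these indexes can serve:
SELECT count(*) FROM orders WHERE attrs->>'status' = 'shipped';
SELECT count(*) FROM orders WHERE attrs @> '{"status": "shipped"}';
```

The expression index is smaller and faster for a single known path; the GIN index covers arbitrary containment queries at the cost of size and write overhead.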
Yeah, the other problem is I've really struggled to get Postgres to use multiple threads/cores on one query. It often maxes out one CPU thread while dozens go unused. I constantly have to fight loads of defaults to change this, and even then I never feel like I can get it working quite right (probably operator error to some extent).
This compares to ClickHouse, which consistently uses the whole machine. Obviously it's easier to do that on a columnar database, but it seems that Postgres is actively designed to _not_ saturate multiple cores, which may have been a good assumption in the past but definitely isn't a good one now IMO.
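For reference, the defaults I end up fighting are roughly these (values below are illustrative, not recommendations):

```sql
-- Workers allowed per Gather node (default is only 2):
SET max_parallel_workers_per_gather = 8;
-- Global cap on parallel workers across the instance:
SET max_parallel_workers = 16;
-- Planner cost knobs that often suppress parallel plans:
SET parallel_setup_cost = 100;   -- default 1000
SET parallel_tuple_cost = 0.01;  -- default 0.1
```

`min_parallel_table_scan_size` also matters: tables smaller than it won't get a parallel scan at all, and some plan shapes (e.g. anything writing rows) can't be parallelized regardless.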
But there's a really good reason for this. In the app it can use NFC to read your passport data exactly. Until WebNFC supports reading passports, it is a much more efficient way.
It's not like they are getting some long-term benefit from having the app on your phone. It's just that WebNFC can't read passports.
The information in my passport is of comparatively little value compared to the information on my devices. Most states could get my passport information with little more than a friendly request to my government; access to my phone, however, is another matter.
Why give up more information than is strictly necessary, so you can tap your passport on your phone? Not convincing imo.
Because for many people with poor eyesight, poor English or computer literacy tapping a passport is far easier than typing the data in with no risk of transcription errors.
But this is just the nature of LLMs (so far). Every "conversation" involves sending the entire conversation history back.
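A toy sketch of why that adds up (no real API here, just counting tokens): because the full history is re-sent every turn, total tokens transmitted grow quadratically with conversation length.

```python
def tokens_sent(turn_sizes):
    """turn_sizes[i] = tokens added in the i-th exchange.

    Returns the total tokens transmitted over the whole conversation,
    given that each request re-sends the entire history so far.
    """
    history = []
    total = 0
    for size in turn_sizes:
        history.append(size)
        total += sum(history)  # the whole history goes back each turn
    return total

# 10 turns of 100 tokens each: 100 + 200 + ... + 1000 = 5500 tokens sent,
# versus only 1000 tokens of genuinely new content.
print(tokens_sent([100] * 10))  # → 5500
```

This is also why prompt caching and context compaction matter so much for long agent sessions.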
The article misses, imo, the main benefit of CLIs vs _current_ MCP implementations [1]: the fact that they can be chained together with some sort of scripting by the agent.
Imagine you want to sum the total of, say, 150 order IDs (and the API behind the scenes only allows one ID per API call).
With MCP the agent would have to do 150 tool calls and explode your context.
With CLIs the agent can write a for loop in whatever scripting language it needs, parse out the order value and sum, _in one tool call_. This would be maybe 500 tokens total, probably 1% of trying to do it with MCP.
[1] There is actually no reason that MCP couldn't be composed like this; the AI harnesses could provide a code execution environment with the MCPs exposed somehow. But no one does it ATM AFAIK. Sort of an MCP-to-"method" shim in a sandbox.
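To make the 150-orders example concrete, here's a toy model of the difference (all names and values are made up; `fetch_order_value` stands in for the one-ID-per-call API):

```python
# Fake backing data: 150 order IDs with arbitrary values.
ORDERS = {f"ORD-{i}": 10.0 + i for i in range(150)}

def fetch_order_value(order_id):
    """Stand-in for the underlying API: accepts exactly one ID per call."""
    return ORDERS[order_id]

# MCP-style: one tool call per order; each request AND response gets
# echoed back into the model's context, 150 times over.
mcp_tool_calls = 0
mcp_total = 0.0
for oid in ORDERS:
    mcp_total += fetch_order_value(oid)
    mcp_tool_calls += 1

# CLI-style: the agent writes one script; only the final sum (a few
# tokens) ever re-enters the context.
cli_tool_calls = 1
cli_total = sum(fetch_order_value(oid) for oid in ORDERS)

print(mcp_tool_calls, cli_tool_calls, mcp_total == cli_total)
```

Same answer either way; the difference is 150 context round trips versus one.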