It's literally been one month since the last outage. This is getting ridiculous.
While I'd hesitate to accuse the people behind the project of fraud, they are certainly great at obtaining funding, delivering negative value, and then presenting the experience in a positive light. The linked site promotes their use in Germany's "Aufstehen" party, which wasn't actually a party, but whatever, if it sounds better. I was intimately involved in that experience (brought in to consult on recovery), and it was a disaster that ended that NGO before it had really begun. The Pol.is platform was supposed to be the central decision-making interface of that distributed movement, which was supported by figures from various left-wing, social-democratic and ecological parties. It wasn't just that they failed to scale; they were completely unwilling to deploy on our infrastructure, or even to open-source critical components, while publicly maintaining that their system was completely open source. The question of whether to replace it (remember, its use was the key innovation of this movement), and if so with what, ended up not only splitting the NGO but leading one sub-group to found a new party that is "neither left nor right wing" and that votes with the right-wing AfD in (state) parliaments, helping them secure anti-migration majorities.
So now we know what that was all about: nothing. Obviously POTUS orders people in his circle to contact foreign leaders and foreign intel all the time. Whether it's Mossad or the SVR, there are people in the Trump orbit who talk to them, and Trump approves of this in his unusual style of diplomacy. Maybe they even think they have leverage and that it's going on behind Trump's back. Now, thanks to this whistleblower, they know that their contacts happened with the knowledge of the WH. It seems like a legitimate discretionary decision by DNI Gabbard to disclose this only to the WH CoS and not along the usual lines, particularly since the press apparently gets briefed immediately by members of the congressional oversight committee.
What? This is the most convoluted theory I think anyone could imagine. And it seems like it was only posted to distract from the most obvious one:
Someone in the President's circle is secretly communicating with (or compromised by) a foreign adversary's intelligence service. They don't want that story getting out, as it would make them look politically terrible, so they moved to suppress it.
Also...
The White House says she’s cleared, but: she was "exonerated" by Dennis Kirk, a Project 2025 co-founder she personally appointed to the Inspector General’s office just two weeks after a whistleblower blew the lid on her conduct.
Knew a mechanical engineer at a place where I interned. Asked him about it, and he joked that he didn't trust those transistors, before explaining that it's just muscle memory to him, and that while a calculator would be faster, he'd still earn the same per hour. Apparently I was the first to ask him in over a decade, as everyone had moved on to doing stuff in software and no one was pushing him to use a calculator anymore. Interns didn't inquire because they thought it must be some esoteric/religious practice. Last I heard he was still working there; management asked him to stay on past retirement age for his invaluable skillset. While it's probably some other skill, I just like to imagine the suits in a meeting where they decided to keep him on for this particular "skill" that no one else in the company had anymore.
No! No one in their right mind would even consider using them for guidance, and if they are used for OCR (not to my knowledge, but it could make sense in certain scenarios), then their output would be treated the way you'd treat any untrusted string.
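To make the last point concrete, here's a minimal sketch of what "treat it like any untrusted string" might look like in Python. The function, length limit, and example input are all made up for illustration; they're not from any real OCR pipeline:

```python
import html
import re

def handle_ocr_text(raw: str) -> str:
    """Treat OCR output like any untrusted input: never feed it to
    eval/SQL/shell, and validate and escape it before any further use."""
    # Cap the length so a garbled scan can't blow up downstream storage or logs
    text = raw[:1000]
    # Strip control characters (keeping \t, \n, \r) that could smuggle
    # terminal escape sequences into log files
    text = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)
    # Escape before embedding in any HTML context
    return html.escape(text)

safe = handle_ocr_text("<script>alert(1)</script>\x1b[31m plate: AB-123")
```

The same idea applies to any sink: parameterized queries for SQL, argument lists (not shell strings) for subprocesses, and so on.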
> Powered by Gemini, a multimodal large language model developed by Google, EMMA employs a unified, end-to-end trained model to generate future trajectories for autonomous vehicles directly from sensor data. Trained and fine-tuned specifically for autonomous driving, EMMA leverages Gemini’s extensive world knowledge to better understand complex scenarios on the road.
You were confidently wrong for judging them to be confidently wrong
> While EMMA shows great promise, we recognize several of its challenges. EMMA's current limitations in processing long-term video sequences restricts its ability to reason about real-time driving scenarios — long-term memory would be crucial in enabling EMMA to anticipate and respond in complex evolving situations...
They're still in the process of researching it; nothing in that post implies VLMs are actively being used by those companies for anything in production.
I should have taken more care to link an article, but I was trying to link something clearer.
But mind you, everything Waymo does is under research.
So let's look at something newer to see if it's been incorporated
> We will unpack our holistic AI approach, centered around the Waymo Foundation Model, which powers a unified demonstrably safe AI ecosystem that, in turn, drives accelerated, continuous learning and improvement.
> Driving VLM for complex semantic reasoning. This component of our foundation model uses rich camera data and is fine-tuned on Waymo’s driving data and tasks. Trained using Gemini, it leverages Gemini’s extensive world knowledge to better understand rare, novel, and complex semantic scenarios on the road.
> Both encoders feed into Waymo’s World Decoder, which uses these inputs to predict other road users’ behaviors, produce high-definition maps, generate trajectories for the vehicle, and produce signals for trajectory validation.
They also go on to explain model distillation. Read the whole thing, it's not long.
But you could also read the actual research paper... or any of their papers. All of them in the last year are focused on multimodality and a generalist model for a reason, which I think is not hard to figure out, since they spell it out.
This strikes me as a skunkworks project to investigate a technology that could be used for autonomous vehicles someday, as well as to score some points with Sundar and the Alphabet board, who've decreed the company is all-in on Gemini.
Production Waymos use a mix of machine-learning and computer vision (particularly on the perception side) and conventional algorithmic planning. They're not E2E machine-learning at all, they use it as a tool when appropriate. I know because I have a number of friends that have gone to work for Waymo, and some that did compiler/build infrastructure for the cars, and I've browsed through their internal Alphabet job postings as well.
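As a toy illustration of that split (learned models for perception, conventional hand-written logic for planning), here is a minimal sketch. Every class, function, and threshold below is invented for illustration and has nothing to do with Waymo's (or anyone's) actual stack:

```python
# Toy sketch of the "ML for perception, conventional code for planning" split.
# All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # e.g. "pedestrian", "vehicle"
    distance_m: float  # distance from the ego vehicle

def perceive(sensor_frame) -> list[Detection]:
    # Stand-in for a learned perception model: in a real stack this is where
    # the neural networks live. Here it just returns a fixed toy scene.
    return [Detection("pedestrian", 12.0), Detection("vehicle", 40.0)]

def plan(detections: list[Detection], cruise_speed: float) -> float:
    # Conventional, auditable planning logic: explicit hand-written rules
    # rather than an end-to-end learned trajectory.
    target = cruise_speed
    for d in detections:
        if d.distance_m < 10:
            target = 0.0                  # hard stop: something is too close
        elif d.kind == "pedestrian" and d.distance_m < 20:
            target = min(target, 5.0)     # crawl near pedestrians
    return target

target_speed = plan(perceive(None), cruise_speed=15.0)  # 5.0 in this toy scene
```

The point of the comment shows up in the shape of the code: the planner can be reviewed and unit-tested rule by rule, which is exactly what a fully end-to-end learned system gives up.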
The Register stooping this low is the only surprise here. I'm quite critical of Tesla's approach to level 3+ autonomy, but even I wouldn't dare suggest that their vision-based approach amounts to bolting GPT-4o or some other VLLM onto their cars to orient them in space and make navigation decisions. Fake news like this makes interacting with people who have no domain knowledge and consider The Register, UCLA and Johns Hopkins to be reputable institutions and credible sources more stressful for me, as I'll be put in the position of either telling people they have been misled or going along with their delusions...
> consider The Register, UCLA and Johns Hopkins to be reputable institutions
The Register is arguably misrepresenting the story by omission, but I don't understand why you're dragging UCLA and Johns Hopkins into this. The paper is clear that this is a new class of attacks against a new class of AI systems, not the ones on the road today.
> Tesla's approach to level 3+ autonomy
Tesla doesn't have an approach to L3+ autonomy; all of their systems are strictly L2, as they require human supervision and immediate action from the driver.
Sure, just like Aaron Swartz was persecuted for "recklessly damaging a protected computer" and "wire fraud" not for any other reasons at all and btw the State wasn't at all involved in murdering him, he did that to himself, probably because he felt guilty for having damaged so many computers....
When I was a kid, people told me I needed no chess computer: "You can play chess in your head, you know?" I really tried, no luck. Got a mediocre device for Christmas, couldn't beat it for a while, couldn't lose against it soon after. Won some tournaments in my age group and beyond. Thought there must be more interesting problems to solve, got degrees in math and law, and went into politics for a while. Friends from college call on your birthday, invite you to their weddings; they work on problems in medicine, economics, niches of math you've never heard of. You listen, and a couple of days later you wake up from a weird dream and wonder, ask Opus 4.5/Gemini 3.0 Deep Think some questions, call them back: "Did you try X?" They tell you that they always considered you a genius. You feel good about yourself for a moment, before you remember that von Neumann needed no LLMs, and that José Raúl Capablanca died over half a decade before Turing wrote down the first algorithm for a chess computer. An email from a client pops up; he isn't gonna pay your bill unless you make one more modification to that CRUD app. You want to eat and get back to work. Can't help but think about Eratosthenes, who needed neither glasses nor telescopes to figure out the earth's circumference. Would he have marvelled at the achievements of Newton and his successors at NASA, or made fun of those nerds who needed polished pieces of glass not only to figure out the mysteries of the universe but even for basic literacy?
As if the sight of this dystopian thread wasn't depressing enough, there is your one gold nugget of a comment, downvoted into oblivion, grayed out at the bottom of the comment section.
A hundred comments of people reverse-engineering vendor handshakes, writing Python daemons, and debating the finer points of CEC frame injection - and not one of them asking why this is necessary. The answer is in three letters: DRM.
Your PlayStation is a computer. Your Xbox is a computer. Your Apple TV is a computer. Your "smart TV" is a computer. You already own a computer. The reason you can't just... use it... is that the entertainment industry spent two decades making sure the bits know who owns them at every step of the pipeline: HDCP, HDMI licensing, CEC's vendor-specific "quirks". I see no interoperability failure; it's interoperability prevention.
Meanwhile, a $200 mini-PC running VLC, connected via DisplayPort to a monitor and 3.5mm to powered speakers, plays anything in any format at any bitrate with zero handshake failures. One "remote": a wireless keyboard. This solution has existed since before some commenters here were born.
What you're all debugging isn't technology. It's compliance.
You write that "The olfactory bulb can vary in size by up to 3x, depending on "age and olfactory experience", so perhaps (we're making this up) with more usage your olfactory bulb might actually get bigger", which certainly does not seem out of the question. What we can assume with even greater likelihood is that the sense of smell works better when regularly stimulated. Even if your method had no commercial applications in entertainment, it could well (at least if it scales beyond 4 distinct sensations) have therapeutic potential for people who suffer from blocked noses, chronic sinusitis, allergies or other conditions that physically block their sense of smell. It might even let sommeliers retain the capacity for their tradecraft while their actual nose is out of commission with a cold. Since there is a strong association between smell and memory, many other therapeutic and educational applications come to mind if this technology can be made safe for broader consumer use. Right now, regardless of the protocols used, you are somewhere on the spectrum between shining nascent lasers at your own eyes to determine whether they work and emit light (a test that doesn't scale with an increase in power) and the Nobel-Prize-worthy quadrant of Jonas Salk and Barry Marshall. While I do hope you succeed, and I'd hate for you to be overly cautious, I also hope your (olfactory) neurons survive!