Strong disagree — aggregate markers were super useful when browsing the map on mobile! Maybe we need a flag for mobile vs. desktop, but the experience would be a lot worse on mobile without them.
I tried it on mobile. The clustering reduces it to 6 points for all of North America. My phone has over 3 million pixels; surely there's room for more detail than that.
Considering the mention of AI in job searches and screening, I don't know if this is actually from 2016. Some fantastic advice in here though, particularly on navigating political/technical landscapes.
It raises the question for a noob like me... Where should they host the status page? Surely it shouldn't be on the same infra that it's supposed to be monitoring. Am I correct in thinking that?
Interesting: free AI coding agents, but "completely for free because we use anonymized usage data for model training and other purposes."
Not quite sure how I feel about that, but I guess the major AI players are already using chats for model training anyway. Reminds me a bit of OpenRouter with grok code fast 1, trading off hot models for usage stats.
I love everything that Kagi has put out. The Orion browser rocks (recently replaced Brave, good riddance) and my go-to chatbot today is the Kagi Assistant with Kimi K2 connected to the internet.
I tended towards Axios but lately it's gotten a bit paywalled and less informative. Can't wait to incorporate Kagi News into my daily workflow.
I find the radiologist use case an illuminating one for the adoption of AI across business today. My takeaway is that when the tools get better, radiologists aren't replaced; they take up other important tasks that are sometimes treated as secondary when unassisted reads are the primary goal.
In particular, doctors appear to defer excessively to assistive AI tools in clinical settings in a way that they do not in lab settings. They did this even with much more primitive tools than we have today... The gap was largest when computer aids failed to recognize the malignancy itself; many doctors seemed to treat an absence of prompts as reassurance that a film was clean
Reminds me of the "slop" discussions happening right now. When the tools seem good but aren't, we develop a reliance on false negatives, e.g. text that clearly "feels" written by a GPT model.