As a physician, I can give further insight. The blood pressure medication the commenter is referring to is almost certainly a beta blocker. The effect on blood sugar levels is generally modest [1]. (It is rare to advise someone with diabetes to stop taking beta blockers; with something like emphysema, by contrast, that advice is common.)
They can be used in isolation to treat high blood pressure, but they are also used for dual treatment of blood pressure and various heart issues (heart failure, stable angina, arrhythmias). If you have heart failure, beta blockers can reduce your relative annual mortality risk by about 25%.
I would not trust an LLM to weigh the pros and cons appropriately, knowing their sycophantic tendencies. I suspect they are going to be biased toward agreeing with whatever concerns the user initially expresses to them.
I don't know about Claude Code, but in GitHub Copilot, as far as I can tell, the subagents are always just the same model as the main one you are using. They also need to be started manually by the main agent in many cases, whereas maybe the parent comment was referring to calling them more deterministically?
Copilot is garbage; even the MSFT employees I know all use Claude Code. The only useful thing is that you can route Claude Code to the models in a Copilot subscription, which corps already get through their M365 deals.
Sub-agents are typically one of the major models, but with a specific, limited context and prompt. I'm talking about a small, fast model focused purely on curating the skills / MCPs / files to provide to the main model before it kicks off.
Basically: use a small model up front to efficiently trigger the big model. Sub-agents today are at best small models deployed by the bigger model (still largely manually triggered in most workflows).
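A minimal sketch of that two-stage shape, in TypeScript. `callModel` and the model names are placeholders I'm inventing here, not any real agent API:

```typescript
// Hypothetical model-calling interface; substitute your actual client.
type ModelCall = (model: string, prompt: string) => Promise<string>;

async function answer(callModel: ModelCall, task: string, catalog: string[]) {
  // Stage 1: a small, fast model curates the context, picking only the
  // skills / MCPs / files relevant to this task from the full catalog.
  const picked = await callModel(
    "small-fast-model",
    `Task: ${task}\nFrom this catalog, list only the items needed:\n` +
      catalog.join("\n"),
  );

  // Stage 2: the big model runs once with the curated context, instead of
  // being handed the entire catalog up front.
  return callModel(
    "large-capable-model",
    `Task: ${task}\nRelevant context:\n${picked}`,
  );
}
```

The inversion is the point: today's sub-agents are spawned by the big model, while here the small model does the cheap triage before the expensive model ever runs.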
Talk of market cap, especially meme-stock market cap, reminds me of that old XKCD comic on extrapolation. Market cap is what you get when you extrapolate the fair market value of the 1% of a company's shares currently on the market all the way out to 100%. But demand doesn't work that way; it doesn't scale linearly.
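A toy illustration of the gap (all numbers hypothetical, with a crude linear demand curve assumed purely for the sketch):

```typescript
const sharesOutstanding = 1_000_000_000;          // total shares
const lastPrice = 100;                            // $ per share at today's volume

// "Market cap" extrapolates the marginal price to every share:
const marketCap = lastPrice * sharesOutstanding;  // $100B

// But suppose each extra share sold pushes the clearing price down,
// hitting $0 once 50% of the shares flood the market:
const demandPrice = (sold: number) =>
  Math.max(0, lastPrice * (1 - sold / (0.5 * sharesOutstanding)));

// Total proceeds from actually selling everything:
let proceeds = 0;
const step = sharesOutstanding / 10_000;
for (let sold = 0; sold < sharesOutstanding; sold += step) {
  proceeds += demandPrice(sold) * step;
}
console.log({ marketCap, proceeds });             // proceeds ≈ $25B
```

Under even that simple curve, liquidating the whole company raises about a quarter of the headline market cap.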
This is a straw man, in my opinion. But regardless of that, your theory doesn't explain why conservative media isn't really covering this either; the Iran protests haven't exactly been front-page material on Fox News, OAN, or Breitbart.
The conservative media is covering it. Prince Reza Pahlavi (the leader of the revolution) has appeared on Fox several times. Mark Levin and Republican senators (Graham, etc.) constantly talk about how we should urgently help Iranians in their fight for freedom.
Reza Pahlavi has also had recent interviews with CBS [1], the Economist [2], and CNN [3] (all within the last 30 days). So how is the existence of Reza Pahlavi interviews on Fox evidence that conservative media is covering this issue more than liberal media?
On the topic of politicians, Democratic congressmen like Dave Min and Jim Himes have also spoken in favor of US intervention in Iran.
> All comments, complaints, corrections, and retraction requests? Unmoderated?

Einstein articles will be full of comments explaining why he is wrong, from racists to people who can't spell Minkowski to save their lives. In /newest there is something like one post per week from someone who has discovered a new physics theory with the help of ChatGPT. Sometimes it's the same guy, sometimes it's a new one.
Judging from PubPeer, which allows people to post all of the above anonymously and with minimal moderation, this is not an issue in practice.
They mentioned a famous work, which will naturally attract cranks to comment on it. I’d also expect to get weird comments on works with high political relevance.
I don't think any oil execs are interested in this, just like they weren't interested in investing in Venezuela after Maduro's ouster (at least if you believe the Financial Times).
Rather, these invasions appear to be the pet projects of neo-imperialist advisors in the government who see national growth as a zero-sum game, a Starcraft-esque race for a finite set of resources in which powerful countries can generate wealth only by using their power to steal from others. In Stephen Miller's own words: "[The world] is governed by force, [is] governed by power. These are the iron laws of the world since the beginning of time."
I think it's even simpler than that. Trump wants accolades next to his name - one of the few presidents to have won the Nobel Peace Prize and one of the few presidents that added land to the US.
Instead he will soon be remembered as the worst US president ever - even after his first term he was already third-worst in most rankings and his second term is orders of magnitude worse.
He will be remembered as the president that destroyed the constitution, destroyed America's formidable power projection, the president that destroyed 60-year-old alliances, the president that was unimaginably corrupt. I just hope that American schoolbooks will also contain that verdict.
> Lax is completely open source and uses the MIT license.
I see that there is an MIT license file in the repo, but the readme in the repo still says there is no license and that redistribution in source or binary form is explicitly forbidden. This is a discrepancy that should probably be addressed.
There are multiple Python 3 interpreters written in JavaScript that were very likely included in the training data. For example: [1] [2] [3]
I once gave Claude (Opus 3.5) a problem that I thought was for sure too difficult for an LLM, and much to my surprise it spat out a very convincing solution. The surprising part was I was already familiar with the solution - because it was almost a direct copy/paste (uncredited) from a blog post that I read only a few hours earlier. If I hadn't read that blog post, I would have been none the wiser that copy/pasting Claude's output would be potential IP theft. I would have to imagine that LLMs solve a lot of in-training-set problems this way and people never realize they are dealing with a copyright/licensing minefield.
A more interesting and convincing task would be to write a Python 3 interpreter in JavaScript that uses register-based bytecode instead of stack-based, supports optimizing the bytecode by inlining procedures and constant folding, and never allocates memory (all work is done in a single user-provided preallocated buffer). This would require integrating multiple disparate coding concepts rather than regurgitating prior art from the training data.
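For a flavor of the register-based, allocation-free part, here is a toy sketch in TypeScript (the opcodes and encoding are invented for illustration; this is nowhere near a Python interpreter):

```typescript
// Invented opcodes for this sketch; not real CPython bytecode.
const OP_LOAD = 0, OP_ADD = 1, OP_MUL = 2, OP_HALT = 3;

// Each instruction is four ints: [opcode, dst, a, b].
// All state lives in caller-provided buffers, so the loop never allocates.
function run(code: Int32Array, regs: Float64Array): number {
  for (let pc = 0; ; pc += 4) {
    const dst = code[pc + 1], a = code[pc + 2], b = code[pc + 3];
    switch (code[pc]) {
      case OP_LOAD: regs[dst] = a; break;                 // immediate -> register
      case OP_ADD:  regs[dst] = regs[a] + regs[b]; break;
      case OP_MUL:  regs[dst] = regs[a] * regs[b]; break;
      case OP_HALT: return regs[dst];                     // result register
    }
  }
}

// (2 + 3) * 4, with both buffers preallocated by the caller:
const code = Int32Array.of(
  OP_LOAD, 0, 2, 0,
  OP_LOAD, 1, 3, 0,
  OP_ADD,  2, 0, 1,
  OP_LOAD, 3, 4, 0,
  OP_MUL,  4, 2, 3,
  OP_HALT, 4, 0, 0,
);
console.log(run(code, new Float64Array(8))); // 20
```

Even this trivial core shows why the register encoding matters: operands are named directly instead of being pushed and popped, which is what makes optimizations like constant folding straightforward to apply to the bytecode itself.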