Agreed, I would really like to understand what this (setting the LLM up to assume a role to improve performance) is doing under the covers and why it works.
Why aren't the labs training models to pick a mantra appropriate to the task and do this themselves? "Huh, a database question. I am going to pretend I'm a database expert with lots of experience. OK, here we go!"
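For what it's worth, you can approximate this yourself today with a thin wrapper that picks a persona before sending the real question. A minimal sketch in Python, assuming an OpenAI-style chat client; the persona table, topic routing, and model name are all made up for illustration, not anything the labs actually ship:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Made-up persona table: map a coarse topic guess to a role-setting preamble.
    PERSONAS = {
        "database": "You are a database expert with decades of production experience.",
        "default": "You are a careful, experienced software engineer.",
    }

    def pick_persona(question: str) -> str:
        # Crude keyword routing, purely for illustration; a harness could just as
        # easily spend one cheap model call deciding which hat to wear.
        q = question.lower()
        return "database" if ("database" in q or "sql" in q) else "default"

    def ask_with_persona(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": PERSONAS[pick_persona(question)]},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

The interesting question is still the one above: why does prepending "you are a database expert" change the output quality at all, and why doesn't the model just do that routing internally.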
My read was roughly that agents require constraining scaffolding (CLAUDE.md) and careful phrasing (prompt engineering), which together feel vaguely like working in a DSL?
Many apps are missing many keyboard shortcuts that you may be used to if you’ve used the equivalent app on the desktop. You’ll still need to keep the iPad screen within reach to tap on UI elements. There’s also the issue that the shortcuts that do exist may be hard to discover, because there’s no menu bar to look in.
> Many apps are missing many keyboard shortcuts that you may be used to
This is true. To see the ones that are available, hold down the command ⌘ key to get a scrollable list of all of the shortcuts for the app you’re currently using, and use Fn-m or globe key-m to see a list of the system shortcuts.
I don't think hybrids use skateboard designs the way EVs do? The battery for a hybrid is so much smaller, they usually steal space under the rear seat and/or in the trunk afaik.
The point of the penny-farthing is that you drive the front wheel directly with the pedals, but this seems to have the pedals in a spot where they would drive a chain, although there is no chain?
You can see up-thread that the same model will produce different answers for different people or even from run to run.
That seems problematic for a very basic question.
Yes, models can be harnessed with structures that run queries 100x and take the "best" answer, and we can claim that if the best answer gets it right, models therefore "can solve" the problem. But for practical end-user AI use, high error rates are a problem and greatly undermine confidence.
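To make the "run it 100x" harness concrete, here's a minimal sketch of the majority-vote (self-consistency) version in Python. ask_model is a hypothetical stand-in for however you call the model; nothing here is a real benchmark harness:

    from collections import Counter

    def ask_model(question: str) -> str:
        # Hypothetical stand-in for a single call to whatever model/API you use.
        raise NotImplementedError

    def best_of_n(question: str, n: int = 100) -> tuple[str, float]:
        # Sample the model n times and keep the most common answer;
        # the vote share is a rough confidence signal, not a guarantee.
        answers = [ask_model(question) for _ in range(n)]
        answer, votes = Counter(answers).most_common(1)[0]
        return answer, votes / n

Even when the top answer is right, paying for 100 samples is a very different thing from the single-shot reliability an end user actually experiences.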
Most importantly, Slack limits the amount of message history you get to keep if you’re not paying. And the paid plans charge per-user fees, which quickly become non-viable for non-commercial use.
Ideally, ethical buyers would cause the market to line up behind ethical products. For that to be possible, we have to have choices available to us. Seems to me Anthropic is making such a choice available to see if buyers will line up behind it.
The WF store I frequent has lousy cell reception, so add the step “open Settings app and get on the store’s wifi” (and who knows what all that lets them track).