Wow, Voxtral is amazing. It will be great when someone stitches this up so an LLM starts thinking and researching for you before you actually finish talking.
Like, create a conversation partner with sub-0.5-second latency. For example, you ask it a multi-part question and, as soon as you finish talking, it gives you the answer to the first part while it looks up the rest of the answer, then stitches it together so that there's no break.
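The "start working before the speaker finishes" idea can be sketched with a toy asyncio pipeline. Everything here is hypothetical: `transcribe` is a stand-in for a streaming ASR model, and `research` is a stand-in for a slow lookup that gets launched speculatively the moment a question fragment is heard, rather than after end-of-utterance.

```python
import asyncio

async def transcribe(chunks, queue):
    # Toy stand-in for streaming ASR: emits words as they "arrive".
    for word in chunks:
        await queue.put(word)
        await asyncio.sleep(0)  # yield so downstream tasks can run
    await queue.put(None)  # end of utterance

async def research(topic):
    # Hypothetical slow lookup, kicked off before the speaker finishes.
    await asyncio.sleep(0.01)
    return f"notes on {topic}"

async def converse(words):
    queue = asyncio.Queue()
    lookups = []
    asr = asyncio.create_task(transcribe(words, queue))
    heard = []
    while True:
        word = await queue.get()
        if word is None:
            break
        heard.append(word)
        # Speculatively start research as soon as a question fragment
        # ends, instead of waiting for the whole utterance.
        if word.endswith("?"):
            lookups.append(asyncio.create_task(research(word.rstrip("?"))))
    await asr
    notes = await asyncio.gather(*lookups)
    return heard, list(notes)

heard, notes = asyncio.run(converse(["what", "is", "voxtral?", "and", "latency?"]))
```

By the time the last word lands, the lookups for the earlier question parts are already in flight, which is the whole latency trick.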
The 2-3 second latency of existing voice chatbots is a non-starter for most humans.
I noticed that with both models, voxtral-mini-transcribe-realtime-2602 and voxtral-mini-2602, filler words are ignored. I'd like to be able to count words/sounds, specifically "um" or "uh", for improvement purposes. Any good models that handle that?
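Once you have a transcription model that keeps fillers verbatim (an assumption; the models named above apparently strip them), the counting itself is trivial. A minimal sketch, with the filler list as a configurable parameter:

```python
import re

def count_fillers(transcript, fillers=("um", "uh", "erm")):
    # Assumes the transcript preserves filler words verbatim.
    # Counts each filler as a whole word, case-insensitively, so
    # "um" does not match inside words like "maximum".
    counts = {}
    for f in fillers:
        pattern = rf"\b{re.escape(f)}\b"
        counts[f] = len(re.findall(pattern, transcript, re.IGNORECASE))
    return counts

result = count_fillers("Um, I think, uh, the maximum is, um, three.")
```

The hard part is entirely in getting a verbatim transcript, not in the counting.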
This will destroy a lot of trust between Apple and me.
When I buy Apple, I pay a premium price for a premium product.
I would rather you simply charge me more for the product.
The way ads work is that the ad revenue shows up in the company’s quarterly earnings, and very quickly the company has to increase ad load across every surface to meet earnings expectations. The user experience deteriorates, getting worse each year.
When I buy Apple, it’s based on my expectation that Apple will remain a premium, mostly ad-free experience for the life of the device. When, after my purchase, Apple makes the product worse, like with the recent iOS update that rolled back usability, it destroys a lot of trust.
The author’s math assumes the mines were freshly distributed among the remaining empty squares after reaching that board state.
This is wrong.
This is the classic Monty Hall problem. The author is doing the equivalent of saying “there are two doors left, so the odds are 50/50 that the prize is behind either door.”
It invalidates all of the numbers after this point.
The difference is that Monty knows which of the doors the car is behind and deliberately avoids it, thereby giving away information about where the car is.
Whereas in Minesweeper we've just blindly stumbled across a situation where we have to guess.
It would be as if, on every show, Monty revealed a random door. Sometimes he would reveal the car and the game would end immediately. In the cases when he didn't reveal the car, it really would be 50/50 between the remaining two doors.
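The distinction drawn above is easy to check with a Monte Carlo sketch: one host who knowingly avoids the car (classic Monty Hall), and one who opens a random unpicked door, with car-revealing rounds discarded.

```python
import random

def trial(host_knows, rng):
    car = rng.randrange(3)
    pick = rng.randrange(3)
    others = [d for d in range(3) if d != pick]
    if host_knows:
        # Classic Monty: deliberately opens a goat door.
        opened = next(d for d in others if d != car)
    else:
        # Random host: may accidentally reveal the car.
        opened = rng.choice(others)
        if opened == car:
            return None  # game spoiled; round discarded
    return pick == car  # does staying win?

def stay_win_rate(host_knows, n=100_000, seed=0):
    rng = random.Random(seed)
    results = [trial(host_knows, rng) for _ in range(n)]
    valid = [r for r in results if r is not None]
    return sum(valid) / len(valid)

# Knowing host: staying wins ~1/3, so switching wins ~2/3.
# Random host, conditioned on no car revealed: staying wins ~1/2.
knowing = stay_win_rate(True)
blind = stay_win_rate(False)
```

Conditioning on "the host happened not to reveal the car" is exactly what changes the answer, which is the Minesweeper situation.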
Curious; I have the very opposite reaction. I tolerate Python, but only for its massive selection of libraries and huge community. As a language? Meh.
What makes you so reliant on significant whitespace that any language without it is an automatic dismissal?
Playing around with ML and being forced to use Python, I find that having no block-termination character, and selecting which block a line of code belongs to by how much you un-indent it, is the worst.
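The complaint can be made concrete with a toy example: with no closing brace, a trailing line's indentation alone decides which block it lives in, and the two readings silently compute different things.

```python
def total_inside(items):
    total = 0
    for x in items:
        total += x
        total *= 2   # indented into the loop: doubles after every element
    return total

def total_outside(items):
    total = 0
    for x in items:
        total += x
    total *= 2       # same line un-indented once: doubles only at the end
    return total
```

For `[1, 2]` the first returns 8 and the second returns 6; in a brace-delimited language the difference would be anchored by an explicit `}` rather than by a single level of indentation.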
Give me braces and rip a formatter across the whole codebase and be done with it.
Furthermore, I think the trend of compilers/interpreters caring about whitespace formatting rules should really end. You can't ever force everyone in the world to write aesthetically pleasing code in your language; someone out there will always be able to write a complete trainwreck, and everyone disagrees on what is or is not aesthetically pleasing anyway. Leave the problem to configurable code linters, where it belongs. Let people make "mistakes" (in your eyes) if they want to.