I might be missing something, but it would be great if the charts showed inference speed, model size (required VRAM), and quality (benchmark results) in one view. It might be that the same quality, speed, and size can be attained by quantizing alone, perhaps with added fine-tuning, without the sparseness. The post seems to imply that their method is better, but if that's the case, they could show it.
You are right that it is unlikely that one candidate gets a number of votes that exactly matches a percentage rounded to one decimal (odds of about 1:10,000, as per the source article).
But it's even more unlikely, and astonishing, that the second candidate also gets a number of votes corresponding to a one-decimal percentage!
This is highly suspicious if the vote counts are presented as the official result.
But as mentioned in the comments, we cannot rule out that someone was given the total vote count and the percentages rounded to one decimal, and thought it would be helpful to recalculate how many votes each candidate must have gotten.
The results we're discussing were read live by the president of the Venezuelan electoral authority. It is possible that they... simply read the wrong results? Like an internal estimate rather than the real numbers? But that is a wild mistake for the electoral authority to make.
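The back-calculation scenario is easy to sketch. A minimal example with hypothetical numbers (not the actual Venezuelan figures):

```python
# Hypothetical illustration: an official who has only the total turnout and
# percentages rounded to one decimal could "reconstruct" vote counts like this.
total = 10_000_000                               # assumed total valid votes
reported = {"A": 51.2, "B": 44.2, "C": 4.6}      # rounded percentages

counts = {name: round(total * pct / 100) for name, pct in reported.items()}
print(counts)  # {'A': 5120000, 'B': 4420000, 'C': 460000}
```

Counts produced this way match a one-decimal percentage by construction, which is exactly the pattern that looks suspicious in genuine tallies.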
You pay X $8/month and are then eligible for a payout based on some metric of how much your tweet is interacted with and which ads appeared near it.
Using tiktokenizer, these are only two tokens: quote-colon is token 498, space-quote is token 330 (as per https://tiktokenizer.vercel.app/ ). But I agree with the general argument.
I think what matters even more when you use the API is that you do not have fine-grained control over the generation process. If you follow the MS guidance approach, you fill in the structured text yourself and let the model generate only the value parts, e.g. up to the next quote. To do that more or less word by word, you need multiple API calls and have to be very smart about providing the right stop tokens.
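That fill-the-scaffolding loop can be sketched roughly as below; `generate` is a stand-in for a per-field completion API call with `stop='"'`, not a real client, and the canned values are made up:

```python
def generate(prompt: str, stop: str) -> str:
    """Stand-in for an LLM completion call that halts at `stop`."""
    canned = {"name": "Alice", "city": "Berlin"}
    key = prompt.rsplit('"', 3)[-3]   # last quoted key in the template so far
    return canned[key]

def fill_template(keys):
    """Emit the JSON scaffolding ourselves; the 'model' fills in only values."""
    out = "{"
    for i, key in enumerate(keys):
        out += f'"{key}": "'
        out += generate(out, stop='"')          # one API call per value
        out += '"' + (", " if i < len(keys) - 1 else "")
    return out + "}"

print(fill_template(["name", "city"]))
# {"name": "Alice", "city": "Berlin"}
```

With a real API you would pass the accumulated prompt each time, so one object with N fields costs N round trips, which is the overhead the comment is pointing at.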
"After that, participants solved mentally active programming tasks (coding) and monotonous ones (debugging)" ... this is a surprising take. Debugging, as the saying goes, is often like a murder mystery. Edit: I dont' think the authors are wrong about that, since they have observed the participants, I assume they chose a monotonous debugging task.
Maybe it’s a personal preference after all but my ADHD brain for sure prefers debugging.
Debugging is sometimes more challenging, but the boundaries are clearly defined: you know how the program should behave and you know when you've got it. You know what doesn't work, so it's easy to go TDD: when the test is green, you're good to go!
Whereas writing new code is a pain for my dopamine system because I never know when it’s done.
Getting the boring feature to work is easy but finishing is horrible.
Figuring out whether you handled every edge case, wrote enough tests, and respected the team's defined architecture: that's hard.
> Maybe it’s a personal preference after all but my ADHD brain for sure prefers debugging.
> Debugging is sometimes more challenging, but the boundaries are clearly defined: you know how the program should behave and you know when you've got it. You know what doesn't work, so it's easy to go TDD: when the test is green, you're good to go!
> Whereas writing new code is a pain for my dopamine system because I never know when it’s done.
Oooooh, that fits with my drifting towards debugging others' code (even when I have zero experience with the language) and debugging infrastructure configuration.
I sometimes think I may have some ADHD traits, but I don't believe I have it. Though I recently decided to adopt some ADHD strategies to organize my home and life, and it has had some benefits.
Maybe I should pivot harder away from coding and drift towards sysadmin.
As I was writing in another comment, I have a very clear deficit of attention and I love debugging! Especially when it's urgent, and I'm basically livecoding in front of other devs or clients, jumping about in our codebase, hacking through breakpoints and the interactive console.
It helps that JVM debugging is very good (it can be done remotely over the network, and you can insert code, add conditional breakpoints, suspend single threads, etc.). It's an almost Lisp-like experience!
And a bug report is the most tightly scoped task I ever encounter, frankly. You have a clear success condition and often a pretty clear deadline ("now!" or "before the release window!"). In fact, if anything, I tend to take on support tasks far too often, to the detriment of my feature-building work.
When I get an email with a title like "Intermittent null pointer exception in running prod application", I just know I'll have a really good few days!
Maybe I am an optimist, but I'd say Vaadin is mostly Open Source (Apache License). You can build complete web applications with the open source version. Only some advanced components (e.g. an Excel-like grid, a WYSIWYG editor, Highcharts components) are proprietary and require a subscription for development, while the builds can be freely distributed.
BMW sales do not seem to be shrinking to me: they boast 35 consecutive quarters of growth [1] and sold more cars year over year, at least in 2014-2017 [2]. I don't know about 2018, as it's not yet in the statistics.
This is a fallacy: for all we know, SF might be spending that money to help 20,000 people find homes, while 7,500 people remain homeless or are newly homeless.
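The point is a stock-versus-flow distinction; a toy model with made-up numbers shows how large spending and a flat headcount can coexist:

```python
# Toy stock/flow model (all numbers hypothetical): money that houses many
# people can leave the homeless headcount unchanged if inflow is comparable.
start = 7_500             # homeless count at the start of the period
housed = 20_000           # people helped into homes during the period
newly_homeless = 20_000   # people who became homeless in the same period

end = start - housed + newly_homeless
print(end)  # 7500 -- same headcount, despite 20,000 people housed
```

So a point-in-time count alone can't tell you whether the spending is working; you'd need the in/out flows too.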