Do you really with your mind and with your heart believe that:
- LLMs are fundamentally fit for this type of comprehension
- Misjudgements posted in this thread are "bugs", "errors"
- Agents who choose to act in bad faith will be affected in any way
- It is desirable to a majority of the group whose opinion you would even consider (is there such a group?) that everyone should have this kind of thing shoved in their face
- Promotion of this kind of thing does not also promote (and help build) harsher censorship mechanisms
Do you think that every single thing you will ever say publicly from now on will be considered constructive by all future filters, with all of their different biases and "bugs"? Do you think this new "constructive speak" will not make you want to blow your brains out at some point? Do you not see it everywhere already and get nauseous from it? I would prefer trash talk to that - at least it's occasionally honest and true. If you don't like the message - hide it, time out the poster, block them, or whatever - with your own agency. If you think they'd welcome education from you - DM them a book.
Or perhaps you imagine yourselves as above that kind of filtering? Then there is no question.
Also, nothing new under the sun. I can't remember exactly, but not long ago I saw a review filtering system on a medical platform. It "isn't" censorship per se, of course, same as your idea. Only, you can't post the review you want - only a much milder (and therefore useless) version, with transformations like: "This thing doesn't work" -> "I felt like this thing didn't work for me in this instance, but there were such and such positives". Way to go - turning everything into "we are sorry you feel that way".
I feel a sort of disappointment at how easily language got swindled. There is seemingly no winning angle this time. This is the most doomed I've ever felt.
You've built an interesting statistic by gathering data across the project. The real answer: AI models and agentic apps make building spam tools simpler than ever. All you actually need is some trivial API automation code.
I bet every single AI-startup dude who does this thinks they've stumbled on a brilliant, original gold mine of an idea: using AI to shill their product/service on internet forums, or to astroturf against "AI haters".
Do all the models have this style of talking? Every now and then I try posing a question to lmarena, which gives you responses from two different models so you can judge which is better. I feel like transitions like "The real answer...", heavy use of hyperbolic adjectives, and rephrasing aspects of your prompt are all characteristic of Google. Most other models are much more to the point.
At my current job I am deep in net LOC negative despite all the new features... Somebody is getting fired and sued for stealing all these LOCs from the company...
You people clearly don't understand how important lines of code are. Three million is a lot of lines of code even if it's broken, and you can't even appreciate that number. Clearly you are weak software developers who write very few lines of code, and can't even steal others' lines of code to keep up. I am very glad we are back to reporting results in lines of code, which is a very informative metric, hence now I can get my many lines of code appreciated.
+ // Intentionally use raw character count instead of HTML-converted length
+ const validateCommentLength = (text: string) => {
+ // This will only check raw character count, not HTML-converted length
+ return text.length <= CONST.MAX_COMMENT_LENGTH;
+ };
Also, the patch is supposedly applied over commit da2e6688c3f16e8db76d2bcf4b098be5990e8968 - much later than the original fix, but still a year ago. Not sure why; it might be something to do with cutoff dates.
3. Here is the actual merged solution at the time: https://github.com/Expensify/App/pull/15501/files#diff-63222... - as you can see, the diff is quite different... Not only that, but the point at which the "bug" was reapplied is so far in the future that the repo had even migrated to TypeScript.
---
And they still had to add a whole other level of bullshit with "management" tasks on top of that - guess why =)