Hacker News | ferroman's comments

Using a prompt as the source of truth is not reliable, because outputs for the same prompt may vary.


Of course. But if this is the direction we’re going in, we need to try and do something, no? Did you read the post I linked?


> Figuring out how to get others to see what you see

But this is exactly the point of the article: even if you make them see, they just pretend they don't, because it's not in their personal immediate interest to admit that you are right (or, later, that you were right).


Clinical psychologists like to invent imaginary scenarios and add things that aren't in the text, don't they?


As I wrote in the opening, that’s exactly what we do all the time. It’s called case formulation. It’s called hypothesis testing. In this case it’s also common sense about human nature.


It's called the "strawman fallacy": you're replacing the thesis and adding things that weren't there in order to draw plausible conclusions, instead of trying to get more information when there isn't enough. Calling it a "hypothesis" doesn't change anything.


I think you just heard that word and used it because it makes you sound like a logical person. It doesn’t fit here at all. After all, a straw man would be me taking a general claim and creating the weakest version of that argument.

If anything, you should argue that it’s overgeneralization, over-extrapolation, or an argument from authority. Hell, if you involved the concept of non sequitur, it would be better.

It’s like you’re cobbling together words related to scientific rigor without understanding the concepts. A hypothesis is, by definition, based on incomplete data. If it wasn’t, it would just be called an observation. So you make a hypothesis, see how it fits the data, and maybe even see how well it predicts the future.


"occurs when someone misrepresents, exaggerates, or fabricates an opponent's argument to make it easier to attack."

You literally imagine something that isn't in the text and start theorizing based on it.


This is exactly my experience (and I've been doing this for 20 years now). I've seen it at every job I've had. The usual BS is "we are developing a new performance and growth framework", "promotions will happen next cycle", "we are reorganizing now", "we want to add more transparency", etc. But somehow they always know who to call when shit hits the fan. Don't fall for it. Look for another job once you see this. Looking for a new job nowadays takes a while, and it's better to be employed during the process.


well, that whole post convinced me to not even try any of this lol. gl with community tho.


I’m working on putting together a sort of personal framework for writing technical docs for software projects. This is stuff I'm really struggling with, so any tips are welcome.


That's cool


You can generalize it, but the point was to make the list more specific. And yes, the world is a very different place today. The number of software engineers doubles every year, so at least half of them are new. We are engineers: we should try to measure our competency, and we should try to systematize the things that we use. Of course, this list isn't absolutely universal, but we should at least try to think about the standards that we want to meet.


Unfortunately, it's much more complex than that. Does your software work? Does your software work correctly? How quickly can you add to it? How do you avoid bugs when you introduce a complex change? How do you share knowledge with new people on the project?


What do you mean? Is it too low or too high?


It is less a question of whether the percentage is correct than whether the tests are useful. I've seen plenty of useless tests (testing getters and setters in Java) that assert nothing related to the code's functionality but exist solely to boost coverage. That's why asserting a strict coverage percentage is dangerous.

Better to just do real TDD in the first place.
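The difference can be sketched in a few lines. This is a hypothetical example (Python here, with a made-up `Account` class standing in for the Java getter/setter case), not anyone's actual test suite:

```python
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

# Coverage-padding test: it executes the attribute access (so the lines
# count as "covered") but asserts nothing about behavior.
def test_balance_attribute():
    acc = Account(100)
    acc.balance  # touched, never checked

# Useful test: it pins down behavior a caller actually relies on.
def test_withdraw_rejects_overdraft():
    acc = Account(100)
    try:
        acc.withdraw(150)
        assert False, "expected ValueError"
    except ValueError:
        pass
    assert acc.balance == 100  # balance unchanged after failed withdrawal
```

A coverage tool scores both tests the same; only the second one would ever catch a regression.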


This. Slavish fetishization of a specific code coverage target is indicative of an underlying problem, IMO, and that problem is far greater than having relatively low code coverage.

It is far better, IMO, to go in with an understanding of where your potential hot spots are than to simply add a test to everything. Sure, in an ideal world we'd have 100% coverage of everything, but this field is about tradeoffs, and sometimes writing tests simply isn't worth the time it takes to write them in the long run.


In my experience it's too high. I see a lot of unit test code that doesn't do anything except add complexity. But again, I guess this depends on the nature of the project.

