Hacker News | rfjimen's comments

Your point is valid if you believe LLM/generative AI is deterministic; it is not. It is inference-based, and so it can give different answers to the same input.

The question then becomes, "How wrong can it be and still be useful?" That depends on the use case: it matters a great deal for applications that require deterministic output, and much less for those that don't. So yes, it does produce wrong outputs, but whether that matters depends on what the output is and your tolerance for variation. In a question-and-answer context with exactly one right answer, variation may look like error, yet the model might simply phrase the right answer three different ways. Understanding your tolerance for variation is what matters most, in my humble opinion.


> Your point is valid if you believe LLM/generative AI is deterministic; it is not. It is inference-based, and so it can give different answers to the same input.

Inference is no excuse for inconsistency. Inference can be deterministic and so deliver consistency.
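A minimal sketch of why: greedy (argmax) decoding is a pure function of the model's output scores, while temperature sampling injects randomness. The logit values below are made up purely for illustration.

```python
import math
import random

def greedy_pick(logits):
    # Deterministic: always returns the index of the highest logit.
    return max(range(len(logits)), key=lambda i: logits[i])

def sampled_pick(logits, temperature, rng):
    # Stochastic: draws from the softmax distribution; repeated calls may differ.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 1.5, 0.3]

# Greedy decoding yields the same token every time for the same input.
greedy = {greedy_pick(logits) for _ in range(100)}

# Temperature sampling generally does not.
rng = random.Random()
sampled = {sampled_pick(logits, temperature=1.0, rng=rng) for _ in range(100)}
```

In practice this is why serving stacks expose settings like a zero temperature or greedy/deterministic decoding modes: the inconsistency comes from the sampling strategy, not from inference itself.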


Yeah. Almost all of the "killer apps" for LLMs revolve around generating content, images, or videos. My question is always the same: "Is there really such a massive market for mediocre content?"


Visualize Docker images and the layers that compose them. See how each command in the Dockerfile contributes to the final image, and discover which layers are shared by multiple images.
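The core of detecting shared layers can be sketched as a simple inverted index over layer digests. This is a hypothetical illustration, not the tool's actual implementation; the image names and digest strings are placeholders (real digests come from something like `docker inspect --format '{{json .RootFS.Layers}}' <image>`).

```python
from collections import defaultdict

# Placeholder data: each image maps to its ordered list of layer digests.
images = {
    "app:web": ["sha256:base", "sha256:deps", "sha256:web"],
    "app:worker": ["sha256:base", "sha256:deps", "sha256:worker"],
}

# Invert the mapping: which images contain each layer?
layer_to_images = defaultdict(list)
for name, layers in images.items():
    for digest in layers:
        layer_to_images[digest].append(name)

# Layers referenced by more than one image are shared (and stored only once).
shared_layers = {d: n for d, n in layer_to_images.items() if len(n) > 1}
```

Layers shared across images are stored once by the Docker daemon, which is why a common base image keeps total disk usage well below the sum of the per-image sizes.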

