
I have a few thoughts after reading this:

- I have started to see LLMs as a kind of search engine. I cannot say they are better than traditional search engines: on one hand, they are better at personalizing the answer; on the other hand, they hallucinate a lot.

- There is a view that new scientific knowledge is made by connecting existing dots. Maybe LLMs can assist with this task by helping scientists discover relevant dots to connect. But as the author suggests, this is only part of the job. To find the correct ways to connect the dots, you need to ask the right questions, examine the space of counterfactuals, etc. LLMs can be a useful tool, but they are not autonomous scientists (yet).

- As someone developing software on top of LLMs, I am slowly coming to the conclusion that human-in-the-loop approaches work better than fully autonomous agents.
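To make the last point concrete, here is a minimal sketch of what I mean by human-in-the-loop: the model proposes an action, and a person approves or rejects it before anything runs. `propose_action` is a hypothetical stand-in for whatever LLM call you actually use; this is an illustration of the pattern, not anyone's production code.

  # Hypothetical stand-in for an LLM call that suggests the next action.
  def propose_action(task: str) -> str:
      return f"draft_reply({task!r})"

  def run_with_human_in_the_loop(task: str) -> None:
      proposal = propose_action(task)
      # The human gate: nothing executes without explicit approval.
      answer = input(f"Model proposes: {proposal}\nApprove? [y/N] ").strip().lower()
      if answer == "y":
          print(f"Executing: {proposal}")
      else:
          print("Rejected; ask the model to revise, or stop here.")

  if __name__ == "__main__":
      run_with_human_in_the_loop("answer the customer email about refunds")

The fully autonomous version just drops the input() gate, which is exactly where things tend to go wrong in my experience.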



Instead of connecting language with physical existence, or entities, it's connecting tokens. An LLM may be able to describe scenes in a video, but a world model would tell you that said video is a deep fake, based on some principle like conservation of energy and mass, informed by experience, assumptions, inference rules, etc.



