> I'm seeing the same thing with LLMs. All people are focused on is "prove to me AI isn't evil." People can see a valuable use case in a demo, but it doesn't matter; like blockchain, I think some are beyond convincing. They just aren't into technology anymore.
You might be shadowboxing a bit with a point I didn't make (or maybe your comment was intentionally orthogonal to what I raised, not sure). I work with this technology every day in a professional, commercial context. Not just LLMs, but many other ML/DL implementations that run the gamut of downstream tasks, from anomaly detection to time-series forecasting. I think it's useful enough to be building real things with it to improve the way my business functions. In the course of building those inference and training stacks from scratch, I've also seen how spectacularly they can fail, and how often.
I don't think AI is evil. I think autoregressive token prediction is stochastic enough to be considered unreliable in its current state. That doesn't mean I am going to stop building things with it, it just means that I've seen these systems implode regularly enough, even with grounding via RAG, that I tend to push caution first and foremost (as I did in my original message).
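To make the "stochastic" point concrete, here's a toy sketch of temperature sampling over next-token logits (the tokens and numbers are invented for illustration, not from any real model): even when the model heavily favors the right token, sampling leaves nonzero probability mass on every alternative, so over enough generations the wrong ones will occasionally be emitted.

```python
import math
import random

# Toy next-token logits: the model strongly prefers "Paris",
# but the other tokens still get nonzero probability.
logits = {"Paris": 6.0, "London": 2.0, "Berlin": 1.0}

def sample(logits, temperature=1.0, seed=None):
    """Softmax with temperature, then draw one token."""
    rng = random.Random(seed)
    scaled = {t: v / temperature for t, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / z for t, v in scaled.items()}
    # Inverse-CDF draw: any token can come out, just not equally often.
    r, acc = rng.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok, probs
    return tok, probs

tok, probs = sample(logits, temperature=1.0, seed=0)
# probs["Paris"] is ~0.97, but ~3% of draws will be something else.
```

Raising the temperature flattens the distribution and makes those off-path tokens more likely, which is one reason the same prompt can succeed one day and implode the next.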
Sorry, straw-manning in internet comments is so bad I shouldn't even have gone there with the crypto analogy. I couldn't help it because I see parallels in the general reception.
I agree with what you said here 100%.
Working with it daily, I can't help but be slightly more optimistic though. I see LLMs as a major component of future apps. You have servers, databases, game engines, and now there's this generative token thing you can use for quite a lot, without an internet connection no less. It will only get better.
IME, the fact that RAG isolates specific document data in a DB and rests on regular database querying solves most of the problem with plain LLM accuracy. But yeah, of course there could still be some errors, like with anything.
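The retrieval half of that pipeline is roughly the following sketch. The toy word-overlap score stands in for a real embedding similarity, and the doc store and function names are invented for illustration; the point is that the model only sees the top-scoring documents, which constrains (but doesn't eliminate) what it can get wrong.

```python
def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words present in the doc.
    A real system would use embedding similarity instead."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "Invoice 1042 was paid on 2024-03-01.",
    "The warehouse moved to Building 7 in June.",
    "Invoice 1042 covers the Q1 consulting work.",
]

context = retrieve("when was invoice 1042 paid", docs)
# The prompt grounds the model in the retrieved documents only.
prompt = (
    "Answer ONLY from the context below.\n\n"
    "Context:\n" + "\n".join(context) +
    "\n\nQuestion: when was invoice 1042 paid?"
)
```

The failure mode the thread describes lives in the step after this one: even with the right documents in the context window, the generation step is still sampled, so the model can misread or ignore them.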