If you look back just 2 years, we had people grinding to build specialized models for QA, NER, sentiment, classification, etc., and all that deep investment was rug-pulled by GPT-3 and then GPT-4.
You say that training datasets will win, but this is where OpenAI currently has a big leg up: everyone is dumping tons of real data into their models, while the LocalLLM crowd is using GPT-4 output to try to keep up.
We will see who is faster.