@cleder@hachyderm.io
I cringe when LLMs are referred to as "thinking" or "reasoning".
They merely simulate these concepts through advanced pattern matching. Admittedly, the models are quite successful and convincing at doing so, but that is still a long way from intelligence.
> "All models are wrong, but some are useful." — George E. P. Box