

Are all outputs hallucinations? It's just that some happen to be correct and some aren't. It doesn't know and can't tell unless it's specifically told (hence the guard rails).
But if I've gotta build so many guard rails (instructions), then is it really "AI"?

I refuse to call it AI.
It's an LM… pure and simple. Anyway, none of the LMs could come up with the theory of relativity (even if you gave them all of the known physics up to 1915).
Nor can they play paper scissors rock (they don’t realise it’s pointless).
As far as I can tell they're wrong more times than they're right, and the only use I have for them is as a glorified search engine (and even then they're still fricking wrong).
They're only useful if you already know the answer, because if you don't know the answer, you can't tell when they've given you a wrong one.