LLMs are so notoriously bad at telling truth from fiction that "AI hallucination" has become a household phrase, for better or for worse. But surely they fare even better when asked to rate the truthfulness of claims that were never in their training corpus to begin with.