• VonCesaw@lemmy.world
      1 year ago

      An LLM-based system cannot produce results it hasn't been trained on, and even its best approximation from the available data will never match results based on the real thing. That, and most of the stuff LLMs """censor""" is legal self-defense