@simon What I disagree with are people who say that LLMs are _designed_ to produce bullshit; I recall someone saying that, particularly in an educational context, the one lesson we need to be teaching is not to use LLMs, for that reason. I'm not so sure we should write them off like that. It's a new tool; we should look for the good things it can be used for as well as acknowledge the bad.
@matt I'm struggling with this: on the one hand, based on my own experience I think LLMs are one of the most powerful tools for self-learning I've ever encountered; on the other, I don't know how to teach people to use them productively for that, thanks to the hallucination problem
@simon @matt when I say that LLMs are designed to emit bullshit, I don't mean it pejoratively. I realize it might sound that way, because I do also deeply dislike them. But how else would you describe the training criteria for the outputs they produce? "Plausible-sounding, grammatically correct English prose" is inherently bullshit. It's not fiction, because it isn't evaluated for truth or falsity one way or the other, only for relevance. Beyond "text", what else would you call it?