@simon I agree but would go one further. It's not just a technical fix; we've got to educate people out of the mindset that it's possible to type a question in a box and get back "truth" in a neat package, easy to read, and less than 300 words...
@simon on the technical side though, I can't help but wonder: if we want accuracy and accountability, why have an LLM generate novel language at all? Why not do an embedding search over sources and just return excerpts? (though maybe an LLM summary could be useful here)
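The embedding-search-then-return-excerpts idea can be sketched roughly like this. This is a toy illustration, not anyone's actual system: a bag-of-words frequency vector stands in for a real embedding model, which in practice would be a learned dense embedding.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words frequency vector.
    # A real system would use a learned embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, sources, k=2):
    # Rank source excerpts by similarity to the query and return
    # them verbatim: no generated text, so every result can be
    # traced back to an original document.
    q = embed(query)
    ranked = sorted(sources, key=lambda s: cosine(q, embed(s)), reverse=True)
    return ranked[:k]

sources = [
    "Embeddings map text to vectors so similar passages land close together.",
    "LLMs generate novel language token by token.",
    "Retrieval systems return verbatim excerpts from indexed documents.",
]
print(search("how do embeddings find similar text", sources, k=1))
```

The point of the sketch is the shape of the pipeline: every returned string is a verbatim excerpt from the source list, so accountability is built in by construction.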
@afamiglietti79 there are all kinds of ways I'm finding LLMs useful that are dependent on their ability to generate completely new output - here's a recent example https://til.simonwillison.net/gpt3/gpt4-api-design
@simon oh, I agree, there are uses for LLMs! Lots of them! I just wonder if the search-engine-like function people are eager to use them for (and are encouraged to use them for) is in fact their best use
@afamiglietti79 it's definitely not! I think that's the most important (and hardest) lesson for people to learn: this thing is a terrible search engine for many things - but identifying which things it's good at searching for and which it's useless at is highly unintuitive