What does feel realistic is training these models to be MUCH better at providing useful indications of their confidence levels.
The impact of these problems could be greatly reduced if we could somehow counteract the incredibly convincing way these confabulations are presented.
I also think there's a lot of room for improvement in how the UI presents these answers, independent of the models themselves.