The thing that strikes me about LLMs is that they were created to chat. To converse. They’re partly shaped by the Turing test, where the objective is to convince someone you’re human by keeping up a conversation. They weren’t designed to create meaningful or factual content.
People still seem to want to use ChatGPT to create something, and fix the accuracy as a second step. I say go back to the drawing board and create a tool that analyses statements and tries to generate information from trusted linked open data sources.
Discuss :)
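To make the idea concrete, here’s a rough sketch of what “analyse a statement against linked open data” might look like. Everything here is hypothetical: the triple store is a tiny in-memory stand-in for a real endpoint like Wikidata, and `check_statement` is an invented helper, not any existing API.

```python
# Hypothetical in-memory triple store, standing in for a real
# linked-open-data source such as a Wikidata SPARQL endpoint.
TRIPLES = {
    ("Paris", "capital_of", "France"),
    ("Ada Lovelace", "occupation", "mathematician"),
}

def check_statement(subject: str, predicate: str, obj: str) -> str:
    """Return a verdict for a (subject, predicate, object) claim."""
    if (subject, predicate, obj) in TRIPLES:
        return "supported"
    # A known triple with the same subject and predicate, but a
    # different object, contradicts the claim.
    if any(s == subject and p == predicate for (s, p, o) in TRIPLES):
        return "contradicted"
    return "unknown"

print(check_statement("Paris", "capital_of", "France"))   # supported
print(check_statement("Paris", "capital_of", "Germany"))  # contradicted
print(check_statement("Lyon", "capital_of", "France"))    # unknown
```

The point of the sketch: instead of generating text and checking it afterwards, every emitted statement starts from (or is verified against) trusted structured data, with an honest “unknown” when the source has nothing to say.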
Moderation in all things.
To avoid negative thinking, challenge the thought that a problem is personal, pervasive, or permanent (Martin Seligman).
Parenting: Set a good example. Don’t punish. Teach. Tell them what TO do, not what not to do.
Having ideas about the way things ought to be is great, but you can only respond to what is.
Be excellent to each other. Do as you would be done by.
Thank you for replying. This is the level of info I used to love on Reddit and now love on Lemmy.