How you ask is more important than what you ask! A lawyer used ChatGPT for a legal filing. The chatbot cited nonexistent cases it just made up. | Mashable
What was the issue? The user asked for case citations but did not follow best practice, which, as anyone who has attended our events knows, is to:
1) establish a persona
2) set context
3) frame the query around your expected output
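The three steps above map naturally onto chat-style messages: the persona becomes a system message, and the context plus the query form the user prompt. Below is a minimal sketch; the legal-research wording and the helper function name are illustrative assumptions, not part of any official API.

```python
# Sketch: mapping persona / context / query onto chat-style messages.
# The legal-research scenario and function name are illustrative assumptions.

def build_messages(persona: str, context: str, query: str) -> list:
    """Return chat messages that follow the persona/context/query pattern."""
    return [
        # 1) establish a persona (system message)
        {"role": "system", "content": persona},
        # 2) set context and 3) frame the query around the expected output
        {"role": "user", "content": context + "\n\n" + query},
    ]

messages = build_messages(
    persona=("You are a legal research assistant. Cite only cases you can "
             "verify; if you cannot verify a citation, say so explicitly."),
    context="I am drafting a federal court filing on airline liability.",
    query="List relevant precedents, flagging any you are not sure exist.",
)
```

Note how the persona explicitly instructs the model to admit uncertainty; that single line addresses exactly the failure in the article, where fabricated citations were presented as real.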
More from the article:
According to Schwartz, he was "unaware of the possibility that its content could be false." The lawyer even provided screenshots to the judge of his interactions with ChatGPT, asking the AI chatbot if one of the cases was real. ChatGPT responded that it was. It even confirmed that the cases could be found in "reputable legal databases." Again, none of them could be found because the cases were all created by the chatbot.
It's important to note that ChatGPT, like all AI chatbots, is a language model trained to follow instructions and provide a user with a response to their prompt. That means, if a user asks ChatGPT for information, it could give that user exactly what they're looking for, even if it's not factual.
ChatGPT's output is only as good as how the chat session is set up. Had this user followed best practice, he could have avoided this disaster.
Perhaps even more damning: while you can use ChatGPT for work-related output, you should not!
You can use the OpenAI LLM via the API, with the appropriate engine/model, to generate results that are worthy of work-related products.
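A hedged sketch of what that API route might look like: rather than the consumer chat interface, you assemble an explicit request with a grounding system message and a low temperature to discourage fabricated output. The model name and temperature here are assumptions to verify against your own account; the payload shape follows the OpenAI chat-completions convention.

```python
# Sketch: an explicit API request instead of an ad-hoc chatbot session.
# Model name and temperature are assumptions; check your account's options.

def build_request(persona: str, user_prompt: str) -> dict:
    """Assemble a chat-completions payload with a grounding system message."""
    return {
        "model": "gpt-4",     # assumed model name; choose per your needs
        "temperature": 0,     # lower temperature curbs inventive output
        "messages": [
            {"role": "system", "content": persona},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_request(
    persona=("You are a careful assistant. If you are not certain a fact or "
             "citation is real, say so instead of inventing one."),
    user_prompt="Summarize the legal standard for airline liability.",
)
# To send (requires the openai package and an API key):
#   from openai import OpenAI
#   response = OpenAI().chat.completions.create(**payload)
```

The point of the explicit payload is repeatability: the persona and settings travel with every request, instead of depending on how one particular chat session happened to be set up.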