Very interesting article. Thanks for providing an excellent test case.
ChatGPT's refusal to offer citations was a red flag for me. Interesting to see that Bard similarly refuses to offer up its sources. For me that means engaging an AI chatbot is a lot like talking to a smart person who also has a good line in BS: it's often useful to ask it a question, but you shouldn't trust the answer when it matters.
Thanks for sharing these observations, Nicolas. Right now I'm researching the use of AI in public school education, and I'm running into the same phenomenon: there's no way to discern whether the content is fact or fiction.
Is it too late to take Bard’s advice and ask for sources on the answers it gave you?
Bard has started putting a little disclaimer under each dialogue box that essentially warns users that the answer they see may be false.