Today, we look at hallucinations - and how, when it comes to making up the past (and compiling the "evidence" to prove it), humans and robots have much in common...
Specifically, the signal we look at is that of the professor who discovered an (entirely fabricated) sexual misconduct complaint against him after ChatGPT invented both the allegations and the "primary evidence" to support them: a seemingly credible Washington Post article (attributed to a real columnist, no less) that the artificial "intelligence" also made up all by itself.
At the same time, as horrific as it is to be "proven" guilty of a crime you did not commit (and to suffer the social consequences nonetheless), it pays to "remember" that many (most?) of our human "memories" are also made up - or at least not a faithful reflection of what actually happened...
(Recall the "Mandela effect"? If not, look it up ;))
Do you trust the machines?
What safety measures have you put in place (personally and professionally) to avoid believing made-up hallucinations (human or otherwise)?
How well do you trust your own recollections?
Read more:
The Mandela Effect : https://en.wikipedia.org/wiki/False_memory
We’re all making it up as we go along : https://www.sciencealert.com/your-brain-can-create-a-false-memory-quicker-than-you-think
The truth machine (do you trust the robots?) : https://whatthefuturenow.com/2021/12/01/the-truth-machine/
Guilty by hallucination : https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/
The (very) state we’re in : https://www.fluxtrends.com/the-state-were-in-2023-six-key-trend-pillars-shaping-2023/