
Guilty until proven innocent

#DailySignals - Your 2 minute preview of the future

Today, we look at hallucinations, and at how, when it comes to making up the past (and compiling the "evidence" to prove it), humans and robots have much in common.

Specifically, the signal we look at is that of the professor who found out about an (entirely made up) sexual misconduct complaint against him after ChatGPT invented both the allegation and the "primary evidence" to support it: a seemingly credible Washington Post article, attributed to a real columnist, that the artificial "intelligence" had also fabricated all by itself.

At the same time, as horrific as it is to be "proven" guilty of a crime you did not commit, and to suffer the social consequences nonetheless, it pays to "remember" that many (most?) of our human "memories" are also made up, or at least not a true reflection of what really happened...

(Recall the "Mandela effect"? If not, look it up ;))

Do you trust the machines?

What safety measures have you put in place (personally and professionally) to prevent yourself from believing in made-up hallucinations (human or otherwise)?

How well do you trust your own recollections?


Thoughts
Authors
Bronwyn Williams