How much deception and inauthenticity is AI creating?

Ever since ChatGPT came into play, we have stopped receiving requests written in Malaysian-style broken English. Requests were now written in grammatically perfect English, but in a very robotic voice. Still, the requests were genuine.

In the last few weeks, we have received two ridiculous requests.

The first was written using a lot of cut-and-paste information from the web. I don't know how much of it was real, so we decided to end the communication. There was no photograph of the dog, the number of dogs did not tally, and most of the information was clearly cut-and-paste work. It was sent by someone with a Chinese name.

The second was received just yesterday. This time the sender claimed to be a student at one of the local private universities. There was a photograph of a cat drooling thick saliva. The sender said they (I'm using a gender-free pronoun here because the person signed off with the name of the cat, a Chinese name) wanted medical aid for the drooling cat. The email was written in perfect English, in a robotic voice.

As always, I responded; I have never ignored an email unless it was completely rude. This one wasn't rude at all. In fact, it was very polite (as ChatGPT or DeepSeek would be) and totally robotic in voice.

So I replied to offer our medical aid and sent our policies with some explanation. I asked the alleged student to take the cat to the vet and get a proper diagnosis, a report, and a quotation for the cost of medical treatment. I also mentioned that our medical aid comes with our neutering aid, so it's best to find out whether the cat has already been neutered, etc.

By midnight, believe it or not, the alleged student had replied. I saw the email come into my mailbox but decided to read it this morning.

What greeted me this morning was a reply, also in a totally robotic voice. It said the cat had already been neutered and an ECG had been done on the cat.

Er…what?? What has drooling from the mouth got to do with the cat's heart? An ECG report was attached. I looked at it and it was dated October 2023. It also appeared to be AI-generated. The feline's name was a Chinese name, and under "Neutered" it said "Intact". Obviously, the writer did not know what "Intact" means (it means not neutered).

The report contained some graphs and looked like a cut-and-paste of two different reports. It said there was nothing wrong with the feline's heart.

What’s going on here?

I sent the photos and report to my friend, who is still very much in touch with the modern-day corporate world. She told me to just ignore the whole thing: the entire thing is fake. Even the photos of the drooling cat could have been generated by AI.

So, it looks like everything is fake.

To give this sender the benefit of the doubt, just in case there really is a drooling cat in need of help, I still replied, stating clearly that he/she/it needs to take the alleged drooling cat to a vet and get the cat properly diagnosed.

I hope we don't hear from this sender again, and that the whole thing is fake and there is no real drooling cat in need of help.

But isn’t this extremely worrying?

What is the world coming to?

Will the day come when we have to change our claims procedures? All we ask for are photographs and documentation, and with AI, everything can be faked.

In the past, we have caught dishonest applicants cheating. But now cheating has become so sophisticated that we may have to change our procedures.

How I miss the days of receiving requests in broken English!
